
TEST AUTOMATION

BODY OF KNOWLEDGE
(TABOK)
GUIDEBOOK
Version 1.1
© 2011

Automated Testing Institute | © 2011


TABLE OF CONTENTS

ATI’S TEST AUTOMATION BODY OF KNOWLEDGE (TABOK) INTRODUCTION ..................................1


The Critical Skills Categories ............................................................................................................... 2
Automated Testing Defined and Contrasted With Manual Testing ...................................................... 6
Roles and Responsibilities ................................................................................................................... 9
Organization of this Manual ................................................................................................................ 10
A Word about Expectations ................................................................................................................ 11
MACROSCOPIC PROCESS SKILLS .................................................................................................................... 13
SKILL CATEGORY 1: DETERMINING THE ROLE OF TEST AUTOMATION IN THE SOFTWARE LIFECYCLE ............. 15
1.1 Test Automation Impact on Software Development Lifecycle (SDLC).................................. 16
1.2 Test Tool Acquisition & Integration........................................................................................ 30
1.3 Automation Return-on-Investment (ROI) .............................................................................. 36
1.4 Resource References for Skill Category 1 ............................................................................ 43
SKILL CATEGORY 2: TEST AUTOMATION TYPES AND INTERFACES ............................................................... 46
2.1 Test Automation Types ......................................................................................................... 46
2.2 Application Interfaces ............................................................................................................ 50
2.3 Resource References for Skill Category 2 ............................................................................ 52
SKILL CATEGORY 3: AUTOMATION TOOLS .................................................................................................. 53
3.1 Tool Types ............................................................................................................................. 53
3.2 Tool License Categories ........................................................................................................ 64
3.3 Resource References for Skill Category 3 ............................................................................ 65
SKILL CATEGORY 4: TEST AUTOMATION FRAMEWORKS .............................................................................. 67
4.1 Automation Scope ................................................................................................................. 68
4.2 Framework Levels ................................................................................................................. 72
4.3 Resource References for Skill Category 4 ............................................................................ 85
SKILL CATEGORY 5: AUTOMATED TEST FRAMEWORK DESIGN .................................................................... 87
5.1 Select a Framework Type ..................................................................................................... 87
5.2 Identify Framework Components .......................................................................................... 88
5.3 Identify Framework Directory Structure ................................................................................. 95
5.4 Develop Implementation Standards ...................................................................................... 96
5.5 Resource References for Skill Category 5 .......................................................................... 101
SKILL CATEGORY 6: AUTOMATED TEST SCRIPT CONCEPTS ...................................................................... 102
6.1 Test Selection ...................................................................................................................... 102
6.2 Automated Test Design and Development ......................................................................... 105
6.3 Automated Test Execution, Analysis, and Reporting .......................................................... 113
6.4 Resource References for Skill Category 6 .......................................................................... 118
SKILL CATEGORY 7: QUALITY ATTRIBUTE OPTIMIZATION .......................................................................... 120
7.1 Typical Quality Attributes ..................................................................................................... 120
7.2 Ranking Frameworks in Terms of Quality Attributes ........................................................... 131
7.3 Identifying Key Organizational Quality Attributes and Selecting a Framework Type .......... 136
7.4 Resource References for Skill Category 7 .......................................................................... 138
MICROSCOPIC PROCESS SKILLS ................................................................................................................... 139
SKILL CATEGORY 8: PROGRAMMING CONCEPTS ...................................................................................... 141
8.1 Developing Algorithms ........................................................................................................ 141
8.2 Scripting vs. Compiled Languages ...................................................................................... 143
8.3 Variables .............................................................................................................................. 143
8.4 Object-oriented Concepts .................................................................................................... 145
8.5 Control Flow Functions ........................................................................................................ 147
8.6 Custom Function Development ........................................................................................... 150
8.7 Resource References for Skill Category 8 .......................................................................... 152
SKILL CATEGORY 9: AUTOMATION OBJECTS ............................................................................................ 153
9.1 Recognizing Application Objects ......................................................................................... 153
9.2 Object Maps ........................................................................................................................ 156
9.3 Object Models...................................................................................................................... 158
9.4 Dynamic Object Behavior .................................................................................................... 162
9.5 Resource References for Skill Category 9 .......................................................................... 166
SKILL CATEGORY 10: DEBUGGING TECHNIQUES ...................................................................................... 167
10.1 Types of Errors .................................................................................................................... 167
10.2 Debugging Techniques ....................................................................................................... 170
10.3 Resource References for Skill Category 10 ........................................................................ 177
SKILL CATEGORY 11: ERROR HANDLING ................................................................................................. 178
11.1 Error Handling Implementations .......................................................................................... 179
11.2 Error Handling Development ............................................................................................... 181
11.3 Common Approaches for Handling Errors .......................................................................... 187
11.4 Resource References for Skill Category 11 ........................................................................ 187
SKILL CATEGORY 12: AUTOMATED TEST REPORTING & ANALYSIS ............................................................ 188
12.1 Automation Development Metrics ....................................................................................... 188
12.2 Execution Reports and Analysis.......................................................................................... 191
12.3 Resource References for Skill Category 12 ........................................................ 196
APPENDICES.......................................................................................................................................................... 197
APPENDIX A: SAMPLE EVALUATION CRITERIA........................................................................................... 199
APPENDIX B: SAMPLE CODING STANDARDS CHECKLIST ........................................................................... 204
APPENDIX C: SAMPLE MANUAL-TO-AUTOMATION TRANSITION................................................................... 208
APPENDIX D: SAMPLE ROI CALCULATIONS .............................................................................................. 210
APPENDIX E: THE TABOK, SDLC AND THE AUTOMATED TESTING LIFECYCLE METHODOLOGY................... 216
APPENDIX F: TEST AUTOMATION IMPLEMENTATION PLAN TEMPLATE ......................................................... 219
APPENDIX G: SAMPLE KEYWORD DRIVER SCRIPT .................................................................................... 221
APPENDIX H: AUTOMATED TEST ROLES ................................................................................................... 222
APPENDIX I: GLOSSARY .......................................................................................................................... 225

ATI's Test Automation Body of Knowledge (TABOK) Manual Page iv


FIGURES

Figure 1-1: SDLC and the Automated Testing Lifecycle Methodology (ATLM) .......................................... 16
Figure 1-2: Cumulative Coverage ............................................................................................................... 19
Figure 1-3: NIST software quality study results .......................................................................................... 22
Figure 1-4: Communication Breakdown (unknown author) ........................................................................ 24
Figure 1-5: Quality benefits of discovering defects earlier in the development cycle ................................. 38
Figure 1-6: Escalating costs to repair defects ............................................................................................. 38
Figure 1-7: ROI Formula ............................................................................................................................. 39
Figure 4-1: Automation Cost ....................................................................................................................... 68
Figure 4-2: Data-driven Construct Example................................................................................................ 76
Figure 4-3: Functional Decomposition ........................................................................................................ 80
Figure 5-1: Generic Framework Component Diagram ................................................................................ 89
Figure 5-2: Execution Level File Example .................................................................................................. 90
Figure 5-3: Driver Script Example ............................................................................................................... 90
Figure 5-4: Initialization Script Example ...................................................................................................... 92
Figure 5-5: Example Test Environment Configuration and Iterations ......................................................... 93
Figure 5-6: Configuration Script Example ................................................................................................... 94
Figure 5-7: Automated Framework Directory Structure Example ............................................................... 96
Figure 6-1: Automation Criteria Checklist ................................................................................................. 104
Figure 6-2: Code-based Interface ............................................................................................................. 105
Figure 6-3: Functional Decomposition Interface ....................................................................................... 106
Figure 6-4: Keyword Driven Interface ....................................................................................................... 106
Figure 6-5: VNC Illustration ....................................................................................................................... 113
Figure 6-6: Test Execution Domino Effect ................................................................................................ 114
Figure 6-7: Parallel Test Execution ........................................................................................................... 116
Figure 7-1: Quality Attribute Optimization Examples ................................................................................ 136
Figure 8-1: Sample Pseudocode for Data-Driven Invalid Login Test ....................................................... 142
Figure 8-2: Class, Object, Properties, Methods, Collections Illustration ................................................... 146
Figure 9-1: Object Map Example .............................................................................................................. 157
Figure 9-2: Document Object Model ......................................................................................................... 160
Figure 9-3: Dynamic Object Illustration ..................................................................................................... 162
Figure 9-4: Dynamic Object Map .............................................................................................................. 163
Figure 10-1: Script Error Types ................................................................................................................. 168
Figure 10-2: Debugging With Breakpoints Example ................................................................................. 172
Figure 10-3: Application Error Scenario .................................................................................................... 173
Figure 10-4: Error Simplification ............................................................................................................... 174
Figure 10-5: Wait Statement ..................................................................................................................... 176
Figure 10-6: Synchronization Statement ................................................................................................... 177
Figure 11-1: Popup Error Message ........................................................................................................... 180
Figure 11-2: Error Handling Development Process .................................................................................. 182
Figure 11-3: In-Script Error Handling Example ......................................................................................... 185
Figure 11-4: Passing to Error Handler ...................................................................................................... 186
Figure 11-5: Error Handler Example ......................................................................................................... 186



Figure 12-1: Automation Development Sample Metrics ........................................................................... 189
Figure 12-2: Automation Development Sample Chart .............................................................................. 190
Figure 12-3: Types of Execution Reports ................................................................................................. 191
Figure 12-4: Automated Test Results Analysis ......................................................................................... 193
Figure 12-5: Automated Test Execution Log Sample ............................................................................... 194
Figure 12-6: Sample Deferred Error ......................................................................................................... 195



TABLES

Table 3-1: Testing Tool Types .................................................................................................................... 53


Table 3-2: SCM Tool Features .................................................................................................................... 54
Table 3-3: Modeling Tool Features ............................................................................................................. 55
Table 3-4: Requirements Management Tool Features ............................................................................... 57
Table 3-5: Unit Testing Framework Features ............................................................................................. 58
Table 3-6: Test Management Tool Features............................................................................................... 59
Table 3-7: Data Generation Tool Features ................................................................................................. 60
Table 3-8: Defect Tracking Tool Features .................................................................................................. 60
Table 3-9: Code Coverage Analyzer Features ........................................................................................... 61
Table 3-10: Functional System Test Automation Tool Features ................................................................. 62
Table 3-11: Performance Test Automation Tool Features .......................................................................... 63
Table 4-1: Sample Data Table .................................................................................................................... 77
Table 4-2: Navigation Function Commonalities .......................................................................................... 78
Table 7-1: Measuring and Improving Maintainability ................................................................................ 121
Table 7-2: Measuring and Improving Portability ....................................................................................... 123
Table 7-3: Measuring and Improving Flexibility ........................................................................................ 125
Table 7-4: Measuring and Improving Robustness .................................................................................... 125
Table 7-5: Measuring and Improving Scalability ....................................................................................... 127
Table 7-6: Measuring and Improving Reliability ........................................................................................ 128
Table 7-7: Measuring and Improving Usability.......................................................................................... 129
Table 7-8: Measuring and Improving Performance .................................................................................. 130
Table 7-9: Framework Quality Attribute Rankings .................................................................................... 131
Table 9-1: Good vs. Bad Object Properties Combinations ....................................................................... 154






TABOK Segment 1

ATI’s Test Automation Body of


Knowledge (TABOK)
Introduction

The Test Automation Body of Knowledge (TABOK) is a tool-neutral skill set designed to
help software test automation professionals address automated software testing
challenges. Although the two disciplines share similar aims, automated software testing
is separate from manual software testing and must be treated as such. For this reason,
the TABOK gives engineers a way to assess, improve, and market their automated
testing skills more effectively than tool-specific benchmarks can alone.
The body of knowledge may also be used by organizations as specific criteria for
assessing resources and establishing career development tracks more effectively. Not
every test automation engineer is required to be an expert in each skill category, but
knowledge of the different skill categories is essential for professional improvement,
growth, and development. That said, we recommend that each automated test effort
include a team of professionals who collectively possess all of the skills, regardless of
how many people make up the team.
This TABOK Guidebook provides guidance in understanding the Test Automation Body
of Knowledge, and may also be used by engineers as a self-study guide for the
Functional Test Automation Professional (F-TAP) Certification. Test
automation is a broad discipline that no single resource can cover exhaustively, so this
manual addresses test automation concepts in a broad sense while providing a deeper
focus on System Functional Test Automation. Although this manual includes key
concepts and workflows relative to the creation and implementation of automated



TABOK Segment 1: ATI's Test Automation Body of Knowledge (TABOK) Introduction

testing, readers should plan on using several supplemental resources to complement the
concepts relayed in this guidebook. Each major section of this guidebook includes a
sample list of references that may be used to gain greater insight and information on the
topics covered in the respective section. In addition, the References section at the end
of the manual provides a list of useful and comprehensive references. Readers remain
responsible, however, for finding additional relevant references as needed. Every
automated test tool has its own characteristics, advantages,
disadvantages, and scope, but there are common approaches and concepts that may
be applied in order for any tool to be used reliably and to best advantage; the TABOK
guidebook offers guidance in understanding those approaches and concepts.
The TABOK guidebook assumes that the reader has some background in software or
systems testing and is comfortable with the concepts of testing, its terminology, and
methodologies. If used as a self-study guide for the F-TAP certification, this guidebook
should not be construed as a single resource for all topics that will be addressed on the
certification exam. It is instead a non-mandatory tool to aid in exam preparation that
assumes the certification candidate also has the prerequisite experience and education
in the discipline of software test automation that is necessary for passing the exam.

The Critical Skills Categories


Skill categories 1 through 7 (macroscopic) cover skill concentrations for automated
test leads, whereas skill categories 8 through 12 (microscopic) cover skill
concentrations for automated test engineers. On many software projects, however, the
Test Lead depends on the Automation Engineer for answers regarding automation
planning, so the Automation Engineer may also want to consider a concentration in
skill categories 1 through 7. (More information on automated test team roles and
responsibilities appears later in this introduction.) In addition, Appendix H: Automated Test Roles
provides a list of automated testing roles and the recommended skill categories for that
role. Keep in mind that one member of a team may play several roles. Also, to better
assure accuracy and test coverage (as well as to share expertise), team members
should be well-acquainted with the spectrum of skills.
The twelve critical skill categories in which the members of an automated test team
should be knowledgeable are:

1. Automation’s Role in the Software Development and Testing Lifecycle

The ability to understand the role of automated testing in the software development
and testing lifecycle is perhaps the most critical of all since that knowledge guides




decisions about design, implementation, tools, organizational resources (including
people, environment, funding, stakeholders, and support infrastructure), mission, and
strategic goals, with implications long after the application is deployed. Inherent in
this skill is the ability to craft a solid business case, with recommendations for tool
acquisition and integration and an automation plan and schedule, that makes a
convincing, defensible case for automation to the extent appropriate for the
organization, as well as the ability to determine the return-on-investment (ROI) of
implementing test automation.
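The ROI determination described above can be sketched as a simple calculation. The formula below is one common simplified form, and the dollar figures are illustrative assumptions rather than values from this manual:

```python
def automation_roi(manual_cost_per_cycle, automated_cost_per_cycle,
                   development_cost, cycles):
    """ROI = (benefits - investment) / investment, where benefits are the
    manual execution costs avoided and investment is the one-time build
    cost plus cumulative automated execution cost."""
    benefits = manual_cost_per_cycle * cycles
    investment = development_cost + automated_cost_per_cycle * cycles
    return (benefits - investment) / investment

# Illustrative figures: $2,000 per manual cycle, $200 per automated cycle,
# $10,000 to develop the automation, evaluated over 10 regression cycles.
roi = automation_roi(2000, 200, 10000, 10)
```

Note that with these figures the ROI is negative over only a few cycles and turns positive as cycles accumulate, which is why the number of planned regression cycles matters so much to the business case.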
This skill touches every point in the software development lifecycle, from
requirements gathering and validation through code development, builds and integration,
deployment, and maintenance. It is also integral to ancillary tasks such as
configuration and change management and other "behind the scenes" activities.
This skill therefore includes the ability to objectively weigh the benefits and
disadvantages of automating testing at each point in the application's lifecycle, in
order to understand which tool (if any) can provide the best support and service to
the effort within that organization's culture, goals, resources, and constraints.
2. Automation Types

For simplicity, the different types of automated testing – functional (regression), unit,
integration, performance, etc. – are often discussed together, but they perform
different functions at different phases of the software development lifecycle. While
you are not expected to be an expert in all test automation types, it is important to
understand the basic concepts of each type in order to give management confidence
in your test automation organization.
3. Automation Tools

Test automation professionals must be able to advise stakeholders on tools that


support all aspects of the testing lifecycle. This requires an understanding of the
different types and categories of tools that support the testing lifecycle, how they
interact with and support each other, and their strengths and limitations. Some of the
most common tool types include those for configuration management, business and
system modeling, requirements management, unit testing, test management, data
generation, defect tracking, code coverage analysis, and functional (regression) and
performance test automation. Fortunately, many of these tools include graphical,
command-line, or application programming interfaces (APIs) to ease their
integration and use in the development lifecycle.




4. Automation Frameworks

Total automation cost includes both development and maintenance costs. As the
automation framework becomes more defined, scripting grows in complexity (and cost),
but over time maintenance effort (and cost) decreases. Post-release, maintenance
becomes increasingly important, which requires that the automation framework
likewise mature in order to reduce total automation cost. Therefore, before defining
the framework to support test automation, you must evaluate the implied and/or
stated scope of the organization's automation effort and implement the framework in
concert with the requirements and design phases of the software development
lifecycle.
Automation frameworks range from simple (but unreliable) Record & Playback
processes, through functional decomposition that marries requirements with test
scripts which may be run independently or in defined combinations or sequences in a
given test bed, to highly abstracted scripts that may be reused by tests within the
same or different applications.
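One step up that ladder, the data-driven construct, can be sketched as a single script body driven by rows of external test data. The field names and the login_test stand-in below are hypothetical; a real script would call the automation tool's API to exercise the application:

```python
# One reusable script body driven by rows of external test data.
# Field names and the login_test stand-in are hypothetical.
test_data = [
    {"username": "jsmith", "password": "correct1", "expect": "pass"},
    {"username": "jsmith", "password": "wrong",    "expect": "fail"},
]

def login_test(username, password):
    # Stand-in for driving the application under test; a real script
    # would use the tool's API to operate the login screen.
    return "pass" if (username, password) == ("jsmith", "correct1") else "fail"

# Each data row becomes one test iteration of the same script logic.
results = [login_test(row["username"], row["password"]) == row["expect"]
           for row in test_data]
```

Adding a new test case then means adding a data row, not writing a new script, which is the maintenance advantage this framework level buys.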
5. Automation Framework Design Process

Designing an automated test framework is not an exact science. You can definitively
identify the different types of frameworks, but the process for selecting and implementing a
particular framework is more difficult to pinpoint. The most important point,
however, is to base your well-considered approach on common, successfully
implemented industry practices and tailor it to your organization. This skill category
addresses developing and executing critical activities including selecting a
framework type, identifying framework components, identifying the framework
directory structure, developing implementation standards, and developing automated
tests.
6. Automated Test Script Concepts

Choosing what and when to automate involves determining appropriate candidates
for automation. This skill category addresses how those candidates may be
identified, along with the design, development, execution, and analysis of the
selected automated test cases.
7. Quality Attribute Optimization

This skill category defines different quality attributes of the automated test suite and
identifies ways of addressing these attributes based on priorities and constraints
surrounding each. Some quality attributes include maintainability, portability,
flexibility, robustness, scalability, reliability, usability, and performance.




8. Programming Concepts

Whether you use a tool with a scripting language, tree structure, and/or keywords,
fundamental programming concepts are necessary to effectively automate tests and
increase testing flexibility to include application-to-system coverage.
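As a small sketch of these fundamentals (a custom function plus basic control flow), consider the following. The verify_equals helper is hypothetical, but the pattern of recording a result and continuing, rather than aborting, is common in automated test code:

```python
def verify_equals(expected, actual, results):
    """Hypothetical custom verification function: record the outcome
    and keep going, instead of aborting on the first mismatch."""
    status = "PASS" if expected == actual else "FAIL"
    results.append((status, expected, actual))
    return status == "PASS"

results = []
# Control flow: loop over the checks; branching lives inside the function.
for expected, actual in [(5, 5), ("open", "closed")]:
    verify_equals(expected, actual, results)
```

The same loop-plus-function structure underlies most scripted tests, whether the tool exposes a full scripting language or only keywords.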
9. Automation Objects

Historically, automated testing relied on specific coordinate locations within a
screen or application window to perform the mouse and keyboard operations that
simulated user activities. This approach was usually unreliable, because object
locations shifted slightly each time the test was run, and it was difficult to maintain.
The modern approach locates objects or images on the screen based on the
properties that define them, then performs the desired operations once the object is
found. This critical skill includes developing object maps that define objects and
assign variable (a.k.a. logical) names to them for explicit testing, modeling abstract
object behaviors and interactions with other components to help develop more
powerful scripts, and reconciling test scripts with the unreliability of dynamic object
behaviors.
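The property-based recognition described above can be sketched as follows. The logical names and property keys are hypothetical, since every tool defines its own recognition scheme:

```python
# Hypothetical object map: logical names mapped to recognition properties.
object_map = {
    "LoginButton": {"type": "button", "id": "btnLogin"},
    "UserField":   {"type": "edit",   "name": "username"},
}

def find_object(logical_name, screen_objects):
    """Return the first screen object whose properties match the mapped
    entry, regardless of where the object happens to sit on screen."""
    wanted = object_map[logical_name]
    for obj in screen_objects:
        if all(obj.get(key) == value for key, value in wanted.items()):
            return obj
    return None

# The x/y coordinates play no part in recognition, which is the point.
screen = [{"type": "button", "id": "btnLogin", "x": 310, "y": 220}]
button = find_object("LoginButton", screen)
```

Because the script addresses "LoginButton" rather than a coordinate, the test survives layout changes that would break a coordinate-based script.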
10. Debugging Techniques

Regardless of how well automated tests are planned, designed, and created, bugs
will occur. It can be difficult, in fact, to determine whether the problem is due to the
application under test (AUT), the test script itself, the test environment, or a
combination of these and other factors. Skillful debugging identifies syntax, runtime,
logical, and application errors that may be the root cause(s) of failure so they can be
repaired, re-tested, and ultimately deployed. Successful debugging also benefits the
entire project by helping to avoid schedule delays caused by unexpected crashes of
the automation framework, unreliable test results, and, at worst, the release of a
poor-quality product that costs significant time, money, and credibility to repair.
11. Error Handling

Error handling dictates how the test script responds and reports when the application
under test deviates from anticipated behaviors and outputs. This makes it a critical
component in pinpointing bugs and their remediation while allowing testing to
continue. Well-developed error handling helps to diagnose potential errors, log error
data, identify and report points of failure, and trap critical diagnostic outputs. Error
handling is implemented in a variety of ways within automated test scripts, generally
step-by-step, by component, or at run time.
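
A minimal sketch of the step-by-step style of error handling, in Python. The step names and the simulated failure are hypothetical stand-ins for real interactions with an AUT; the point is that a failing step is trapped, logged, and recorded so the suite can continue.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("suite")
results = []

def run_step(name, action):
    """Run one scripted step; on deviation, log diagnostics and keep testing."""
    try:
        action()
        results.append((name, "PASS", None))
    except Exception as exc:
        # Trap the point of failure and its diagnostic output for the report.
        log.error("Step %r failed: %s", name, exc)
        results.append((name, "FAIL", str(exc)))

def submit_bad_password():
    raise AssertionError("unexpected redirect")  # simulated AUT deviation

run_step("open login page", lambda: None)
run_step("submit bad password", submit_bad_password)
run_step("open home page", lambda: None)  # the suite continues past the failure
```

The same wrapper idea scales up to component-level handlers or run-time (suite-wide) handlers by changing where the try/except boundary sits.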


12. Automated Test Reporting

Test reporting and analysis is an extremely repetitive and time-consuming process
that produces the critical data driving how application (or component) bugs are
addressed and when the application passes to the next round of testing, staging, or
deployment. There are a number of different types of generated reports, each with a
unique purpose, that can support solid, objective analysis and help the test
professionals and development team create and maintain an application that meets
the full spectrum of requirements.

Automated Testing Defined and Contrasted With Manual Testing

Automated testing is often assumed to involve the same processes, skills, and protocols
as manual testing, with the addition of a few "out of the box" tools to help out. Indeed,
the two share a number of common processes and skills, and both are designed to carry
out a similar mission.
In both cases, test engineers analyze the entire project environment (hardware and
software, development and deployment) to determine the universe of possible tests and
then narrow that universe to a finite test bed designed to discover the largest number of
high-risk defects. In addition, both disciplines require skills in quality test design,
efficient test execution, defect tracking, analysis of test results, and maintenance. Both
disciplines usually result in similar outputs, such as scripts, plans, reports, schedules,
and the like. Above all, both rely on the skill, knowledge, creativity, intelligence, and
talent of the test team's collective mind.
Manual testing assumes that the test logic – test plans, test scripts, test report analysis,
and defect tracking – is designed to be implemented by a human. Scripts, for example,
are written in documents or spreadsheets, perhaps referencing specific requirements or
outlining specific steps to execute and the expected results of each. A human physically
works through scripts and manually notes (also often as documents or spreadsheets)
the results to provide debugging guidance to developers and requirements analysts.
Scripts and test plans are maintained manually and must be versioned and replicated
manually should they be needed for other projects, including updated releases of the
application. Manual testing may be more economical for a small or short-term
application; the time and resource costs of automating test scripts are generally
prohibitive for a suite of scripts that will be run only once or twice.


Manual testing, however, is able to leverage one characteristic of the human mind that
is devilishly difficult and expensive to automate: the power of learning and judgment.
Consider that a computer is, in essence, a machine with a small number of processors,
while the human brain is composed of about 100 billion interconnected neurons that
function as an organic whole. In a sense, this makes the human mind a massively
parallel system of processors that have been preprogrammed through years of learning
(and, as some might assert, through thousands of years of evolutionary programming)
and that work together.
fraction of a second, the human brain can recognize patterns that would take hours for a
computer to learn or a test engineer to script, and can then learn from, and make
adjustments, based on those patterns. Also, humans visually inspect applications and
perform numerous verification points whether or not these verification points are
documented as part of a test procedure. While computers are faster at accomplishing
routine, simple tasks, they make adjustments (a form of judgment) only in the ways
they are programmed to adjust. Automated test tools and scripts check exactly what
they are programmed to check, and nothing more.
Consider this example: A certain AUT mysteriously shuts down during a test; a manual
tester knows instinctively that to continue the test, the application must be re-launched.
The manual test procedure need not be written to include this type of exception
handling instruction. An automated test, however, would fail miserably in this situation
unless it was specifically programmed to handle that situation or state at the exact point
that it occurred.
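
The relaunch-and-continue recovery described above can be sketched in Python. The FlakyApp class is a toy stand-in for an AUT, invented purely for illustration; it crashes once and then behaves, which lets the script demonstrate the exception handling a manual tester performs instinctively.

```python
class AppCrashed(Exception):
    """Signals that the simulated AUT shut down mid-test."""

class FlakyApp:
    """Toy stand-in for an AUT that crashes on its first action, then behaves."""
    def __init__(self):
        self.crashes_left = 1
    def relaunch(self):
        pass  # a real script would restart the application process here
    def do_action(self):
        if self.crashes_left:
            self.crashes_left -= 1
            raise AppCrashed("application shut down unexpectedly")
        return "step completed"

def run_with_recovery(app, action, max_relaunches=1):
    """Scripted version of the manual tester's instinct: relaunch and continue."""
    for _ in range(max_relaunches + 1):
        try:
            return action()
        except AppCrashed:
            app.relaunch()  # recover at the exact point of failure
    raise RuntimeError("AUT kept crashing; giving up")

app = FlakyApp()
outcome = run_with_recovery(app, app.do_action)
```

Without the explicit except clause the run would simply abort, which is exactly the behavior the paragraph above warns about.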
Automated testing is the use of software to test or support the testing of other software
known as an AUT. This includes the use of software to dynamically setup test
preconditions, produce test data, execute test procedures against the AUT, and
dynamically compare actual results to expected results of a test. Also, this may involve
the use of software to dynamically collate test results and produce meaningful test
charts, graphs and reports that effectively relay those results to stakeholders.
Additionally, it entails general support for and implementation of software tools that
integrate with and help to facilitate the testing process. The 'general support' view of
test automation – supported by Agile Test Automation Principles1, as well as many
testing organizations – calls for awareness of all phases of the testing lifecycle, the tools
that support the lifecycle, and the approaches for effectively using these tools.

1 http://www.satisfice.com/articles/agileauto-paper.pdf


A key difference between manual and automated testing is that a single test engineer
can run many test scripts simultaneously or in sequence, as the test plan demands.
Automated scripts can be executed by a manual command, or launched on a timer or
from a software trigger. Not only can this potentially reduce load on the test team‘s
system resources, it allows more flexibility in the timing and staffing in the test cycle.
Beyond running scripts, automated testing can store results for regression test results
comparison.
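
The stored-results comparison mentioned above might be sketched as follows; the test names and outcomes are hypothetical, and a real framework would persist the snapshot between test cycles rather than hold it in memory.

```python
import json

# Hypothetical outcomes from a stored baseline run and the current run.
baseline = {"test_login": "PASS", "test_search": "PASS", "test_export": "FAIL"}
current = {"test_login": "PASS", "test_search": "FAIL", "test_export": "FAIL"}

def compare_runs(baseline, current):
    """Report tests whose outcome changed since the stored baseline run."""
    return {
        name: (baseline.get(name), result)
        for name, result in current.items()
        if baseline.get(name) != result
    }

regressions = compare_runs(baseline, current)

# Serializing the current run lets it serve as the next cycle's baseline.
snapshot = json.dumps(current, sort_keys=True)
```

Here only test_search changed outcome, so the comparison surfaces it as the one result worth a closer look.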
The protocols in test planning, design, development, and implementation are, therefore,
very different for each testing discipline. So what does this mean for an organization
considering a test automation approach?
• Automated test staff must have a deep knowledge of the full hardware,
development, and software environments, as well as all phases of the testing
lifecycle and, thus, the test tools that support each phase.
• The human element is still paramount in automated testing: humans must
question the results of tests to assure that they are reliable and actionable.
• Automated test staff must have proficiency in the test tools in use, as well as in
the AUT.
• Good tools do not replace the ingenuity, skill, and judgment of test professionals.
Nor do they fix poor test processes or decision-making.
• In general, implementing test automation will likely result in higher up-front costs
due to:
→ The costs of tools, including their evaluation, purchase, licensing,
installation, and maintenance
→ Increased pre-planning and planning time for the appropriate integration of
automated testing into all phases of the application's lifecycle
→ The ability to develop more sophisticated test scripts that evaluate the
application itself in addition to its integration into the deployment hardware
and software environment
→ Testing the test scripts to assure they fully address requirements
→ The skills of the test professionals
→ An expanded test environment
→ Richer test results to analyze and use


Armed with this information, it is easy to see how developing automated tests to be
implemented by a computer program requires a different set of skills than developing
manual tests to be implemented by a manual tester. See Roles and Responsibilities to
learn more about specific skills required for each role.

Roles and Responsibilities


Test automation touches every aspect of the software development lifecycle (SDLC),
from requirements development through deployment and subsequent maintenance.
Therefore, test professionals responsible for test automation implementation must be
very well-versed in each phase in addition to having specific skills in the test
methodology and the tools used. This requires a diverse set of roles within the test
automation team which may include the following:
• Team Lead
• Test Engineer
• Lead Automation Architect
• Cross Coverage Coordinator
• Automation Engineer (Automator)
Team Lead
The Team Lead's responsibilities mainly involve administrative tasks, such as creating
an Automation Implementation Plan, a document that details the development and
implementation of the automated test framework. In addition, the Team Lead is
responsible for allocating automation personnel to appropriate tasks, and for
communicating with and providing reports to management.
This role may coincide with the overall test team lead role or even the program
management role, yet the automation team often has a Team Lead that functions
independently of these other two roles.
Test Engineer
The Test Engineer is normally not directly involved with automation tasks but is rather
responsible for the manual testing tasks that are leveraged by Automation Engineers.
The Test Engineer is the subject matter expert for the application or feature of the
application that is being automated. Often responsible for the manual test design and
execution, the Test Engineer works directly with the Automation Engineer to decide
what should be automated and how manual procedures may be modified to better
facilitate automation. The Test Engineer often also doubles as the Automation Engineer.


Lead Automation Architect


Framework-specific responsibilities fall to the Lead Automation Architect. This role is
typically held by a test tool subject matter expert who is also an automation framework
subject matter expert. The Lead Automation Architect is responsible for framework
maintenance, for configuration management activities relative to the framework, and
for serving as "on-call" support, answering questions about the framework.
Cross Coverage Coordinator
The Cross Coverage Coordinator is responsible for ensuring all automation efforts that
utilize a single framework within a given organization are in sync with one another.
Staying abreast of all application-specific automation efforts while keeping Automation
Engineers abreast of framework concerns is important for this role. The Cross
Coverage Coordinator helps to identify maintenance procedures and ensures that they
are followed, including the proper use of versioning software and the suggested use of
reusable components. This role works with the Lead Automation Architect by
suggesting necessary framework changes and works with Automation Engineers by
suggesting automation techniques that facilitate flexibility, reusability, maintainability
and other quality attributes. Very often the Lead Automation Architect functions as the
Cross Coverage Coordinator.
Automation Engineer
The Automation Engineer, or Test Automator, is responsible for application-specific
automation tasks. While some degree of test tool expertise is important, the primary
concern for this role is the automation of assigned application functionality or tasks
within the framework being implemented by the organization. The
Test Automator is the automated test suite subject matter expert and is responsible for
coordinating with the Test Engineer to better understand the application and manual
tests, as well as for making decisions regarding what should be automated and how
manual procedures may be modified to better facilitate automation.

Organization of this Manual


After this introduction to automated testing and the roles that make it successful, each
skill is discussed in its own chapter, including identification of the relevant roles and
abilities and a section of additional resources to supplement the reader's
understanding of that skill. Appendices provide further resources and references.


A Word about Expectations


Please note that language in this manual is rarely presented as being definitive.
Phrases such as "may result in efficiencies," "can benefit," and "might occur" are
deliberate. When automated testing is very well planned for methodology, resources,
and execution, the likelihood is high that automation will benefit the application's
development and deployment. That said, changes in leadership and decision-makers,
cutting corners by rushing and omitting critical tests, insufficient expertise among the
test team members, poor communication across teams, lack of objectivity in reporting
the "state of the testing effort," and simply untested test scripts and plans can derail the
effort. While decision-makers and practitioners may wrongly opine that failure occurred
due to the automation itself, usually a poorly designed or poorly executed test plan is
the culprit.



TABOK Segment 2

Macroscopic Process Skills


This section discusses in detail the high-level process knowledge required for
successfully leading and/or managing a test automation effort.




Skill Category 1: Determining the Role of Test Automation in the Software Lifecycle


Primary role(s) involved: Test Lead, Lead Automation Architect


Primary skills needed: Business case development, understanding of the role of
testing, understanding of the SDLC, liaison building between the test team, the
development team, user representatives, application support team, and management
decision-makers

Success in automated testing requires a solid understanding of what test automation is
and how it fits into the overall software development lifecycle (SDLC). Without this
knowledge, it is impossible to develop a test strategy that verifies that the developers
built the application right, that is, that it meets its functional requirements. Without a
tight relationship between requirements and test scripts and protocols to assure that
both solidly reflect the desired result, no development project can be considered
reliable, with or without automated testing.
Keep in mind that the SDLC involves all activities from requirements and design
development to coding to build to test to deployment and finally, to maintenance.
Automated testing may verify the integrity of each of these iterative phases by providing
predictable quality assurance protocols that assess the changes introduced by each
improvement or remediation in the application or its environments.
This critical skill addresses planning-related items that are often overlooked or
shortchanged yet are extremely important for test automation success, such as
developing a business case, assessing tools, and calculating automation ROI. For
additional information on how test automation fits into the SDLC, see Appendix E: The
TABOK, SDLC and the Automated Testing Lifecycle Methodology.


1.1 Test Automation Impact on Software Development Lifecycle (SDLC)
Short Answer
Test automation affects every phase of the software development lifecycle (SDLC).2
With that said, automated tools are not processes but rather vehicles that support the
existing processes. If strong processes are in place, tools may enhance those
processes and provide great benefit, as long as they are implemented with a deliberate,
realistic, focused approach. Attempting to automate in an environment fraught with
confusion, misconceptions, and poor or ignored processes will simply create faster and
more profound (also more expensive) confusion.

Figure 1-1: SDLC and the Automated Testing Lifecycle Methodology (ATLM)3

2 Several methodologies can be classified as the software (or system) development lifecycle. Regardless
of whether the organization follows a waterfall, agile, or extreme programming approach, testing touches
every point in the application's lifespan.
3 Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing: Introduction, Management,
and Performance. Boston, MA: Addison-Wesley, 1999. For more information see Appendix E: The
TABOK, SDLC and the Automated Testing Lifecycle Methodology.

Behind every decision to introduce test automation is a host of explicit or implied goals
and expectations for what the automation will accomplish. Without a realistic
understanding of the full impact of test automation on a project, the automation effort
will be nothing more than a disappointing and expensive experiment that provides no
benefit to the project or organization and will eventually be discarded. Worse yet, it
may be replaced with something "new and improved" which, without planning and
understanding, will likewise fail.
Understanding the impact of test automation means having a firm grasp on the benefits
of test automation and an equal understanding of its common misconceptions.
Equipped with this knowledge, an automation test team may more effectively "sell" the
use of automation to management stakeholders, the development team, and possibly
customers and compliance boards by presenting a credible business case, and may
manage effort-ending misconceptions and unrealistic expectations held by the
organization's decision makers. When these hurdles are crossed, the team can also
more effectively implement a test automation effort.4

1.1.1 Benefits of Test Automation


Test automation, like any job, requires more than doing good work and assuming that
the work speaks for itself. You must be able to effectively communicate (with
verifiable objectivity, candor, and integrity) the positives of the job and its outputs. This
begins with a firm understanding of the benefits that may be reaped from a properly
implemented test automation effort and a solid, realistic plan to achieve them. Some of
the customary benefits include:
• Increased repeatability
• Implementation of manually unfeasible tests
• Increased confidence in reliability
• Freeing test engineers to focus on other, more complex and challenging tasks
• Helping to prevent the introduction of defects into the system
• Improving metrics collection

4 Your work is not done. Part of implementing the test automation effort is regularly managing those
expectations and the occasional nay-saying derisions.


• Improving intra-team and inter-team collaboration among project personnel
• Possible automated integration between test scripts and functional and system
requirements
• Testing across software components, hardware, the development and
deployment environments, and specific time constraints (e.g., running a process
every 3 minutes)

In a well-developed test automation effort, the net effect of these benefits can result in:
• Cost Savings – long-term savings through repeatable and reusable tests,
reduced staff load, earlier identification and repair of defects, and reduced rework
• Increased Efficiency – savings through faster test execution time and schedule
reduction
• Increased Software Quality – increased and deeper test coverage throughout
software and hardware components, reducing the risk and cost of a potential
failure reaching production
These categories are inextricably interrelated: increased efficiency and increased
software quality ultimately lead to cost savings (when failure costs are considered),
and increased efficiency may lead to increased software quality. Please see Section
1.3, Automation Return-on-Investment (ROI), for different ways of quantifying these
benefits.

1.1.1.1 Increase repeatability


One of the major advantages of test automation is the fact that the tests are repeatable
– they can be re-executed without re-writing (which buys consistency) both before and
after defects are repaired. In fact, in most situations, a positive return on your
investment (see Section 1.3 for more information on ROI) is not achieved until after the
tests have been executed multiple times.
Repeatability results in cost savings because tests can be reused many times (rather
than manually developing many new tests for each application or test cycle). They can
also be executed while unattended and often in parallel, saving significant staff time. In
addition to running unattended, the tests can be executed during off-hours when system
load is generally lower so they run more quickly to increase test efficiency. With careful
planning, this may result in a decrease in the test schedule and a faster time to
deployment.

"Cumulative Coverage" is a term that represents the level of test coverage that can be
conducted across multiple builds and releases of an application. Figure 1-2 offers an
introductory illustration.

                          Build 1   Build 2   Build 3   Build 4   Build 5
Regression                   0        25        40        52        70
New Functionality Tests     25        15        12        18        11
Total Tests                 25        40        52        70        81

Figure 1-2: Cumulative Coverage

The figure depicts a hypothetical software release with 5 builds:

• Build 1 has 25 new tests,
• Build 2 has 15 new tests,
• Build 3 has 12 new tests,
• Build 4 has 18 new tests, and
• Build 5 has 11 new tests.
As is common practice for a given release, test engineers test not only the new
functionality developed and integrated (depending on the scope of the build under test)
but also the existing functionality, to verify that the new changes haven't introduced any
defects.
Therefore, some subset of the tests developed for the previous builds will also need to
be executed. Figure 1-2 assumes all of the tests are to be executed. Given this
scenario, it is evident that with each subsequent build, more tests need to be executed


(from 25 during the first build to 81 by the final build, for an ideal Cumulative Coverage
of 268 tests).
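
The build-by-build arithmetic behind Figure 1-2 can be reproduced in a few lines, assuming (as the figure does) that every carried-forward test is re-run in each build:

```python
new_tests = [25, 15, 12, 18, 11]  # new tests introduced in builds 1 through 5

totals, carried = [], 0
for added in new_tests:
    # Each build re-runs everything carried forward plus its new tests.
    totals.append(carried + added)
    carried += added

cumulative_coverage = sum(totals)  # total test executions across all 5 builds
```

The per-build totals come out to 25, 40, 52, 70, and 81, for a cumulative coverage of 268 test executions, a workload that grows with every build and is far better suited to automation than to manual re-execution.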
When the allotted testing time is short, some regression testing is often sacrificed,
especially if a substantial amount of new functionality has been included in the build.
Neglecting to regression test the existing functionality along with the new, however,
often poses a great risk to the application's quality and reliability.
Cutting corners in full regression occurs for many reasons. Sometimes an organization
considers this type of redundancy to be a wasteful use of time and resources, without
understanding that the best designed and developed functionality can still introduce
defects upon integration. In the current commercial market, defects in established
functionality deployed with the release of a new version can greatly – and adversely –
affect the system developer's (and organization's) reputation and, by effect, profits. The
time and cost to repair the defects and redeploy, as well as the damage to the
organization's place in the market, may far outweigh the initial cost of responsible testing.
By running repeatable test scripts, unpredictable defects can be identified and repaired
more quickly. This can increase test coverage over multiple generations of builds,
which can result in a more reliable, stable product that meets requirements while
reducing the risk of failure.

1.1.1.2 Implement manually unfeasible tests


Some functionality simply cannot feasibly be tested manually; examples include system
load testing, testing distributed transactions, and testing new technologies in
development and deployment environments (e.g., cloud environments) for which
traditional testing may not be appropriate.
Automating performance tests is an excellent example of manually unfeasible tests due
to the logistical challenges involved. Suppose requirements demand that the application
accommodate 5,000 simultaneous users accessing the same data files to perform
complex routines. It is not realistic to gather 5,000 people in a room with a stopwatch to
test the performance of an application! Therefore, if an organization wants to conduct a
significant performance test, there is little choice other than to use automation. Another
example of a manually unfeasible test is a scenario in which there are 100 different
types of users for an application, each user requiring a different set of permissions.
Testing each user and user permission may not be feasible for manual execution due to
time constraints placed on the project. But since automated tests typically execute faster
than manual tests, this coverage may be achievable via test automation.
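
A parameterized loop of the sort just described might look like the following sketch. The roles, actions, and the allowed helper are all hypothetical; in a real suite, allowed would drive the AUT while the expected permissions would come from the requirements, whereas here the helper simply echoes the same matrix it is checked against, to keep the example self-contained.

```python
# Hypothetical role/permission matrix; a real suite would derive the expected
# permissions from the application's requirements.
ROLES = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def allowed(role, action):
    """Stand-in for exercising the AUT while logged in as the given user type."""
    return action in ROLES[role]

def run_permission_suite(actions=("read", "write", "delete")):
    """One loop covers every role/action pair, a grind to execute by hand."""
    failures = []
    for role, granted in ROLES.items():
        for action in actions:
            expected = action in granted
            if allowed(role, action) != expected:
                failures.append((role, action))
    return failures

failures = run_permission_suite()
```

With 100 user types instead of 3, only the ROLES data grows; the loop itself, and the execution time advantage over manual testing, stay essentially the same.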

1.1.1.3 Increase confidence in reliability
Monotony breeds errors, and executing the same tests manually can be very
monotonous and time-consuming indeed. The variation introduced by human testing
(e.g., not following a test script consistently, not testing in a consistent environment, and
several testers interpreting a test script differently) can result in a system evaluation that
is unreliable and insufficient. Sometimes manual testing is most appropriate; human
factors testing and tests executed only once or twice on small data sets or limited
functionality are good examples. But in some cases, sheer robotic consistency is of the
utmost importance. In addition, when test scripts are executed multiple times manually,
human sensibilities begin to dull, assumptions are made, and defects may be missed.
Automated testing provides greater assurance that test scripts will be executed the
same way each time a test is run, or that anomalies will be reported for debugging;
automated tests are not subject to human fatigue and inattention.

1.1.1.4 Free test engineers to focus on complex quality assurance tasks


When application test time is minimal and numerous tests must be executed, human
time must be used carefully. It is tempting to judge the effectiveness of a test effort by
the number of tests written and run rather than by how closely the application meets
requirements, but the result is a "numbers game" of quantity over quality. If executing
a large number of simple tests that can be run quickly takes precedence over a few
complex, time-consuming tests that require substantial resources but may uncover the
higher-severity defects, neither the application's users nor the cost of testing in hours
is served with integrity.
Automating these simpler tests makes it possible to maintain high coverage while still
allowing test engineers to focus on more complex issues that require an exploratory
testing approach. This can boost the quality of the system.

1.1.1.5 Reduce the introduction of defects into the system


About 70 percent of software defects5 are introduced in the early phases of
requirements development. A number of tools exist that help improve requirements
generation, helping to make them more testable and reliable. Assuring the quality of
requirements will invariably help to prevent many defects from ever occurring.

5 These figures are reported in the thorough study by RTI for the National Institute of Standards and
Technology. See Gregory Tassey's The Economic Impacts of Inadequate Infrastructure for Software
Testing. Planning Report 02-3. May 2002. Available at
http://www.nist.gov/director/planning/upload/report02-3.pdf.


Figure 1-3: NIST software quality study results6

1.1.1.6 Integrate all areas of development for full test coverage


Many commercial and open source tools are available in the current market that
automate every phase of the SDLC. A number of these tools integrate with one another
to provide comprehensive tracking throughout the application's lifecycle. These tool
suites can identify (and version) changes in requirements, which, in turn, generate
warning flags for the code base, test scripts, test plans, project schedule, defect
tracking, reporting, configuration management, and change management. Thus, changes
(including repaired defects as well as new or updated functionality) are synchronized
throughout the application, assuring that automated testing consistently tests against
current requirements and that accurate results are reported. This greatly increases the
efficiency of these processes if the use of these tools is carefully planned and integrated
as part of the application project. It also supports reliable application maintenance by
pre-identifying all aspects of the application that are affected by proposed changes
before they are implemented.
Well-integrated tool suites also identify aspects of the application that are not included
in test scripts. This helps test engineers address omissions as appropriate to assure
comprehensive coverage.

6 Hewlett-Packard. Reducing risk through requirements-driven quality management: An end-to-end
approach. 2007. Available at
http://viewer.media.bitpipe.com/1000733242_857/1181846419_126/74536mg.pdf.

1.1.1.7 Improve metrics collection
One of the most tedious and time-consuming tasks during project implementation is
collecting and presenting metrics. Critical metrics illustrate the current state of ratios of
test coverage vs. non-coverage, implementation passed vs. implementation that requires
rework, hours spent in each test activity, status on meeting thresholds of acceptance,
adherence to schedule, and a number of other critical data points. These data points
help determine where the majority of bugs reside (in functionality, component,
environment, code base, and other loci), the return-on-investment of all phases of the
SDLC, how closely the application meets requirements, and how soon the application
will be ready to deploy. Yet for all the effort it demands, metrics collection is one of the
most fundamental tasks that will take place on the project. Quantitative analysis is the
only way that a project team (including the test team, customer stakeholders,
developers, and business decision makers) can truly understand the state of the
application, and use that understanding to drive critical strategic and tactical decisions
and planning.
Coupled with a well-defined process, automated tools are excellent for gathering,
processing, and presenting metrics that may be analyzed for the purposes of reporting
status and making the appropriate project adjustments. Many tools include default
metrics collection functionality, and most provide functionality for test engineers to define
the measures, metrics, and output presentation that best suit the application team's
needs. Collecting metrics that do not have a specific, identified purpose wastes time
and resources, and provides no value. Therefore, the test team must work with the
entire application team to identify those metrics that are helpful to the project, the inputs
and logic that create them, and how they should be presented for actionable decision-
making.
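A minimal sketch of the kind of metrics computation described above. The result
records, acceptance threshold, and metric definitions are invented for illustration;
real tools derive these from their own repositories:

```python
# Illustrative only: computing a few of the metrics named above from raw test
# results. The records, threshold, and metric definitions are invented.

results = [
    {"test": "TC-01", "requirement": "REQ-1", "status": "pass"},
    {"test": "TC-02", "requirement": "REQ-1", "status": "fail"},
    {"test": "TC-03", "requirement": "REQ-2", "status": "pass"},
]
all_requirements = {"REQ-1", "REQ-2", "REQ-3"}
acceptance_threshold = 0.90  # assumed project-specific pass-rate threshold

covered = {r["requirement"] for r in results}
coverage = len(covered) / len(all_requirements)  # coverage vs. non-coverage
pass_rate = sum(r["status"] == "pass" for r in results) / len(results)

print(f"coverage:  {coverage:.0%}")   # 67%
print(f"pass rate: {pass_rate:.0%}")  # 67%
print("meets acceptance threshold:", pass_rate >= acceptance_threshold)  # False
```

The value of each number depends entirely on the purpose identified for it up front;
the threshold comparison is only actionable because the team agreed on the
threshold beforehand.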

1.1.1.8 Improve collaboration across all stakeholder teams


Breakdowns in communication are at the heart of many issues and defects that get
injected into a system. Illustrated in Figure 1-4 is a popular diagram that reveals what
can happen when communication fails on a project. These breakdowns often result from
the fact that people on different teams – users, analysts, developers, testers –
communicate in the language of their particular domains, with different terminology
and different objectives. Tools help to bridge the gap by providing a common
environment that represents the needs and objectives of the different stakeholders. This
common environment can improve communication and collaboration while mitigating
opportunities for misunderstanding and confusion. For example, a requirements
management tool helps to bridge the gap between the user and analyst, developers and
analyst, and testers and analyst, especially if the requirements management tool is
integrated with the test tool suite. This bridge results in a neat mapping of user goals to
functional requirements, and those functional requirements may be mapped to the code
specifications, code base, and test plans and scripts.

Figure 1-4: Communication Breakdown (unknown author)

Examination of automated test tools used by testers for automated test development
and implementation provides another example of how tools can bridge communication
gaps. Because these tools require coding, and many of the same skills commonly used
by developers, testers have an opportunity for greater collaboration with developers,
from which a mutual respect can be formed. Tearing down communication walls
ultimately builds efficiency in the development process and team workings, saves
valuable time and resources, and improves product quality.

1.1.2 Misconceptions
Many of the challenges involved in the implementation of a successful automated test
effort have nothing to do with the technical skills needed to get the job done. The
challenge is in getting the job done while effectively managing unrealistic expectations
and misconceptions held by stakeholders, specifically when it comes to functional test
automation. Leaving these misconceptions unchecked means operating a test
automation effort in a perpetual state of underachievement in the eyes of stakeholders
whose expectations will never be met, and this is a sure prescription for shelfware.
Therefore, a successful test automation effort requires recognition of common
misconceptions about test automation, an understanding of the truths that may
successfully debunk those misconceptions, and open, pre-emptive planning and
communication to assure that more time is available for testing and less is needed for
repeatedly level-setting expectations. Common misconceptions include:
Automated tools fix broken processes
Automation success means achieving 100% automation
Automated tests find lots of new defects
Automated testing replaces manual testing
Record & Playback is an effective automation approach
One tool can integrate every phase of testing
Automated tools will immediately decrease the testing schedule
Automation costs = tool software costs + script development costs
The result of these unrealistic expectations is that the automator is forced into a
continuous cycle of defending the automation effort. Certainly, it is important to
continuously show value, but being forced into regularly justifying the need for
automation is a distraction from the work at hand. Sometimes it seems that stakeholders
are looking for a reason to suspend automation rather than a way to make it useful, are
generally suspicious or leery of change, or truly do not understand the scope of the
effort. A successful pilot on a small application or component can help establish
expectations and show proof of concept.

1.1.2.1 Automated tools fix broken processes


All too often, organizations realize their application development projects are impeded
by lack of efficiency, effectiveness, and sufficient system quality, and immediately
conclude that automated tools will fix the problems. These organizations develop and
implement elaborate plans for tool acquisition and integration only to find that the same
problems persist. The organization then assumes that the tool is to blame and so exerts
pressure to scrap the existing tool in favor of a "new-and-improved" tool with "better"
features. The reality, however, is that the problem is probably not with the tool but rather
with the organization's processes, which may be non-existent, inconsistent, insufficient,
or simply not followed. The tool is not a process, nor can it fix broken processes,
regardless of how much it costs or the sophistication of its features. The automation
tool is merely an instrument
that can help better implement the activities and efforts that are already working. In
some situations, if the processes are strong enough, a tool may not even be necessary.
This misconception also implies another consideration when implementing a new tool:
planning its implementation should include coordination with the organization's business
processes in addition to its application design and development processes.

1.1.2.2 Automation success means achieving 100% automation


Some development teams (including some automated test engineers) believe that test
automation success means achieving 100% automation; that is, 100% of all existing
manual test cases are automated. This is normally unrealistic, since test automation is
an expensive undertaking, and some tests are either too complex to automate or don't
need to be executed very often. In addition, organizations typically lack adequate time
or personnel to support the development or maintenance of such a volume of tests. In
most cases, 100% automation is possible only when it is defined to mean 100% of
some subset of the entire test suite, or when the test bed itself is very concise and
composed of only a small number of high-level tests.

1.1.2.3 Automated tests find lots of new defects


Much of an automated testing effort involves creating scripts to be executed for
regression test purposes, meaning the test has already been executed at least once
and will be executed again to ensure that implementation and integration of new system
functionality or components do not affect existing system functionality/components.
Since tests usually uncover defects the first time they are executed, re-executing the
same test – either manually or via an automated script – is much less likely to uncover a
new defect. Moreover, the same skill and care required to design solid test cases and
scripts for manual execution are required for automated scripts. Good automation does
not replace poor or insufficient test scripts; it just runs them faster.
Regression testing is mostly meant to mitigate the risk of existing functionality
acquiring defects as a result of some new change to a different part of the system. If
the test team finds that regression tests are uncovering a relatively large number of
defects, this may be an indication that there is a significant problem somewhere in
the SDLC processes that needs to be quickly addressed. It will be much cheaper to
address the underlying problem in the SDLC processes than it will be to find and fix
defects that result from that problem.

1.1.2.4 Automated testing replaces manual testing


Much to the disappointment of some project managers and excitement of test
engineers, automated testing typically does not replace manual testing. As pointed out
in the section that describes the differences between manual and automated testing, a
handful of scripts cannot compete with 100 billion neurons. While it is an enhancement
to manual testing, automated testing cannot replace the analytical skills required to fully
conduct testing. Through manual testing, information may be gathered about the
application, and spur-of-the-moment decisions may be made about how to enhance a
test that would be too difficult to program into an automated script. In addition, manual
effort is still necessary for analyzing and maintaining the automated scripts after they
have been run.
The only time that automated testing could in any sense replace manual testing is when
management decides that the risk of errors being introduced into the system is so low
that the automated test suite is sufficient to mitigate that risk. An example scenario is
one in which minor patches are applied to the operating system but no functionality is
changed. Even then, depending on the nature of the patches and the application, some
spot testing may be advised.

1.1.2.5 Record & Playback is an effective automation approach


Record & Playback (also known as Capture/Replay) is a feature made available by
many automated test tools that captures actions performed manually on an application
and replicates those actions in the form of script code that can be replayed against the
application. The replayed code would then ideally execute the steps in the application
just as they were executed manually. In theory this sounds like a solid approach, but in
reality it doesn't work that smoothly.
When scripts are simply recorded with no forethought or design, they typically won't
play back exactly as recorded, prompting the need for time-consuming patchwork
solutions to get the script to behave as desired. Patchwork solutions do not work
reliably and lack the durability to perform over time and across generations of the
application, making script maintenance difficult. Redundancy also results from the
Record & Playback approach: when the same actions appear in multiple scripts, and
those actions change in the application under test (AUT), keeping those scripts in sync
is difficult and error-prone. This also results in excessive rework and reactive function
creation.
Record & Playback can be an effective mechanism for capturing basic automation
activities and object information. In addition, it is also useful for creating an audit trail by
recording manual test activities. So while it has some redeeming qualities, Record &
Playback should not drive the automated test effort as a whole. Automated testing
should be treated like a miniature software development lifecycle, complete with its own
planning and implementation phases. Section 4.2.2: Level 1 Automation carries a more
detailed discussion of the advantages and disadvantages of Record & Playback.
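The redundancy problem can be sketched in a few lines. The `FakeUI` class below is
a stand-in for a GUI driver, not any real tool's API; it simply records the actions a
script takes. Two versions of the same test show why a shared `login` function beats
inlined recorded steps:

```python
# A language-neutral sketch of the redundancy problem described above, using a
# stand-in `ui` object instead of any real tool's API. The recorded script
# inlines the login steps; if the login screen changes, every such script must
# be edited. Factoring the steps into one function fixes them in a single place.

class FakeUI:
    """Minimal stand-in for a GUI driver; just records the actions taken."""
    def __init__(self):
        self.actions = []
    def type_into(self, field, value):
        self.actions.append(f"type {value!r} into {field}")
    def click(self, button):
        self.actions.append(f"click {button}")

# --- Record & Playback style: login steps duplicated in every script ---
def recorded_script_view_profile(ui):
    ui.type_into("username", "tester")
    ui.type_into("password", "secret")
    ui.click("login")
    ui.click("profile")

# --- Designed style: shared action, maintained once ---
def login(ui, user, password):
    ui.type_into("username", user)
    ui.type_into("password", password)
    ui.click("login")

def designed_script_view_profile(ui):
    login(ui, "tester", "secret")
    ui.click("profile")

a, b = FakeUI(), FakeUI()
recorded_script_view_profile(a)
designed_script_view_profile(b)
print(a.actions == b.actions)  # True: same behavior, one maintenance point
```

The shared function is the first step away from raw recording toward the designed,
lifecycle-driven approach described above.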

1.1.2.6 One tool can integrate every phase of testing


The commonly held "one-tool-fits-all" assumption perpetuates the misconception that
one tool will be able to handle every aspect of the testing lifecycle with little to no
manual intervention. In reality, it is almost impossible to create and implement a single
test tool that is compatible with all processes and all software (both application and
environment) used in the testing environment for the application under test (AUT).
Multiple tools are often required to handle various processes and/or various aspects of
the system. In addition, more intrusive measures might be required, such as the
insertion of special code into the system in order for it to integrate with the
automated testing tool. While tool suites can support more comprehensive testing,
careful configuration and monitoring of their use and results are still necessary.

1.1.2.7 Automated tools will immediately decrease the testing schedule


One of the more heart-wrenching misconceptions is that a positive return-on-investment
(ROI) should be expected immediately, during the current application release,
particularly in the form of a shortened testing schedule. The reality, however, is that
automation performed in the current release often does not yield significant benefits
until subsequent releases; the initial test schedule is rarely shortened. One reason is
that the scripts need to be developed, verified, and debugged, all of which requires time
and effort.
Also, shifting testers to other tasks will influence the application's development and
release schedule by reducing the number of hands available to support the automation
effort. With all of this, one may reasonably ask, "Will automated testing ultimately save
me some valuable time?" Another way to look at the question is this: "What tedious,
routine test tasks can be run automatically to free the team to carry out more complex,
analytical quality assurance work?" This is the question that will help assess the
efficiencies that automated testing can bring. Instead of seeing a change in schedule,
the automation benefits may include increased coverage and therefore decreased risk.
Consider also that when a tool is first introduced, there is a learning curve that absorbs
time. It is not unrealistic to see an initial increase in the level of effort required for
testing. In fact, as with any skill, learning the tool's capabilities, limitations, and
configurations should begin and be practiced before the tool is used in a live test
phase.
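A toy break-even model (all figures invented) makes the schedule point concrete:
script development cost lands in the first release, while execution savings accrue per
release, so the net benefit turns positive only later:

```python
# Toy break-even model; every figure here is invented. Script development cost
# is paid once in release 1, maintenance recurs, and savings accrue per release.

script_development = 400              # hours, paid once in release 1
maintenance_per_release = 60          # hours, every release
manual_hours_saved_per_release = 150  # hours of manual execution avoided

cumulative, history = 0, []
for release in range(1, 6):
    cost = maintenance_per_release + (script_development if release == 1 else 0)
    cumulative += manual_hours_saved_per_release - cost
    history.append(cumulative)
    print(f"release {release}: cumulative net hours = {cumulative}")
# history == [-310, -220, -130, -40, 50]: the effort pays back only in release 5
```

Under these assumptions the first release actually costs more hours than it saves,
which is exactly why the initial test schedule is rarely shortened.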

1.1.2.8 Automation costs = tool software costs + script development costs
This misconception arguably poses one of the greatest dangers to launching and
sustaining an automation effort, because this false equation results in underestimating
the automation investment and therefore overestimating the automation return-on-
investment. It is often believed that once a test is automated, the only work required of
the automator is clicking a button to run the script. Automated tests require much more
than clicking a button.
In addition to the tool software (tool and license) costs and script development costs,
there are several other costs that must be considered. These costs include but are not
limited to:
Training costs – The cost of providing training to personnel on how to use the
tool.
Script maintenance costs – As time progresses and the system under test
changes, the automated scripts will need to undergo maintenance. Even without
functional changes to the system under test, script execution may experience
sporadic failures due to unforeseen glitches in the test environment. Such
glitches may require script modifications by way of changed script code or
exception handling routines.
Tool maintenance costs – Automated test tools are themselves software
products that may produce unpredictable behavior and yield defects that require
patches and fixes. These issues require time and effort for troubleshooting and
resolving.
Infrastructure costs – The tool implementation may require the acquisition of
extra, dedicated machines for hosting license servers and databases, and for
automation execution without impacting manual testing efforts. Test data
acquisition and management may also be included among the infrastructure
costs.
Organization costs – Organizational concerns may relate to the allocation of time
for the acquisition and integration of the tool. Additions or modifications to
existing standards, processes, and procedures may also be required. Further,
people challenges must be faced, such as handling resistance to change and
ensuring people are not so consumed with the act of automation that they lose
sight of the overall testing goals.
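The danger of the naive equation can be shown with a back-of-the-envelope sum over
the categories above. All figures below are invented for illustration:

```python
# Back-of-the-envelope comparison of the naive cost equation with a fuller
# estimate covering the categories above; every figure here is invented.

costs = {
    "tool_software_and_licenses": 20_000,
    "script_development":         35_000,
    "training":                    8_000,
    "script_maintenance":         12_000,
    "tool_maintenance":            4_000,
    "infrastructure":             10_000,
    "organizational":              6_000,
}

naive = costs["tool_software_and_licenses"] + costs["script_development"]
full = sum(costs.values())

print(f"naive estimate: {naive}")  # 55000
print(f"full estimate:  {full}")   # 95000
print(f"naive equation understates investment by "
      f"{100 * (full - naive) / full:.0f}%")  # 42%
```

Whatever the real proportions in a given organization, an ROI calculation built on the
naive figure will overstate the return by roughly the same margin.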


1.2 Test Tool Acquisition & Integration


Short Answer
Test tool acquisition is the process of identifying the tool goals and objectives, making
an effective business case, and narrowing down a group of candidate tools to the
selection(s) that best meet(s) the organizational needs. Test tool integration is, in turn,
the process of gradually addressing tool considerations and widening the tool's
acceptance, use, and benefits.
Various sources provide extensive direction on how to conduct the tool selection and
acquisition process. Most sources agree, however, that this process should, at a
minimum, include the following:
1. Deciding Whether to Automate
2. Conduct a Test Tool Evaluation
3. Consider Tool Implementation
4. Pilot the Tool

1.2.1 Deciding Whether to Automate


Embarking on test automation is not an effort to launch without careful and objective
planning. Introducing any tool (not just a test tool) for the wrong reason, or without
understanding its impacts, will likely decrease the efficiency and effectiveness of your
current environment. A decision to automate should ideally be accompanied by a well-
reasoned assessment of how, if at all, automation could be most useful in your
environment. In fact, the process of writing a business case to elicit organizational
support is an excellent way to help clarify the arguments for and against automation.
This process involves:
Reviewing alternatives to automation
Identifying tool requirements
Assessing Automated Testability
Acquiring management support

1.2.1.1 Reviewing alternatives to automation


An important first step in making the decision to automate is identifying the existence,
maturity, and outputs of the organization's processes or procedures. Generally, this step
is difficult in that few organizations have documented (and thus, accountable)
procedures that are sufficiently mature, woven into the organization's culture, and
assessed for compliance to map easily to tool functionality. If this is the case, consider
delaying the introduction of test automation. For example, if the project schedule lacks
adequate time for regression testing, rather than assuming that automation should be
implemented as the solution to this problem, the test team should assess whether tests
in the current regression test suite are actually necessary, effective, and efficient.
Perhaps there is not enough time allocated for executing them because they are
redundant, too lengthy, or simply no longer germane to functionality that poses a high
risk to the system. By streamlining the tests in the regression test suite, the time issues
may be resolved without the introduction of a costly automation effort.
The issue of poor communication among stakeholders provides another example.
Before concluding that tools will inevitably streamline team communication, the team
must articulate the causes of the poor communication. Are critical hand-offs of
information and tasks dropped, or could automatic notifications enhance an otherwise
sufficient communication plan? Is the communication plan simply not implemented, or
non-existent? If there is no plan or the plan is not being implemented, it is unlikely that
automation will yield any positive results. However, improving how the processes and
procedures are implemented and documented may sufficiently enhance the
organization's efficiency and accountability without investing in test automation. In other
words, upfront evaluation of processes and procedures will either help to maximize the
effectiveness of the automation implementation or eliminate the need for automation
altogether.
Once the stakeholder team determines that automation has sufficient potential for
success and has identified the thresholds by which implementation and use can be
judged productive, the various automation alternatives must be explored. When
exploring tool alternatives for the purpose of functional test automation, the use of
Application Interfaces (refer to Section 2.2) should be considered. In addition, both
commercial and non-commercial (open source, freeware, etc.) tools should be
considered. Note that the Criteria Checklist illustrated in Appendix A: Sample Evaluation
Criteria has columns for multiple tools to be compared.

1.2.1.2 Identifying tool requirements


Identifying tool requirements begins with an analysis of the processes or procedures
that you wish to automate. During this analysis, outline the specific issues that should
be resolved by test automation and map them against the candidate tool options. For
example, perhaps there is insufficient time to execute manual regression tests for a
build, so automated test execution may help. Or there may be a need to introduce
automation to conduct tests that can't feasibly be implemented manually. Perhaps poor
team communication is fostering the introduction of defects into the system, so a tool to
help streamline collaboration and communication is desired.
A requirements matrix (such as the Criteria Checklist illustrated in Appendix A: Sample
Evaluation Criteria) is a tool to help the application team (which includes project
managers, requirements analysts, developers, and test engineers) capture the
requirements the tool must meet and identify which tools fit the bill. This matrix also
supplements a convincing business case by describing the benefits the proposed tool
solution will provide and by demonstrating that due diligence and proper analysis have
driven the decision. See Skill Category 3: Automation Tools for information on specific
features of various tools that support the SDLC.

1.2.1.3 Assessing Automated Testability


Upon identifying tool requirements, the next step is to determine whether the system is
automatable (testable via automation). This is often a major concern for automation of
test execution. One potential requirement for application automatability is that the
application allows tools to communicate with its components. Another automatability
factor may be the consistency of component properties and methods. If an automated
tool accesses an object's properties in order to uniquely identify that object for
automation, those properties must be consistent and/or predictable. If the object
properties are constantly changing in an unpredictable manner, then the automated
tests won't be able to consistently communicate with the application, rendering the
automated tests useless (refer to Section 9.1 for more information on objects).
At times, it may be necessary to sit down with the application developers and work with
them to make modifications that render the system more automatable.
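The property-consistency point can be illustrated with a contrived lookup. The two
"builds" below are plain dictionaries standing in for the AUT; `auto_name` mimics an
auto-generated property that changes with every build:

```python
# Contrived example of property consistency: a script locates objects by a
# property value, so an auto-generated property that changes every build
# breaks the lookup. The "builds" are dictionaries standing in for the AUT.

def find_object(build, prop, value):
    """Return the widget whose property matches, or None if lookup fails."""
    for widget in build:
        if widget.get(prop) == value:
            return widget
    return None

build_1 = [{"id": "btn_submit", "auto_name": "Button_38271", "type": "button"}]
build_2 = [{"id": "btn_submit", "auto_name": "Button_91054", "type": "button"}]

# Stable property: the same lookup works in every build.
print(find_object(build_2, "id", "btn_submit") is not None)           # True
# Auto-generated property captured from build 1: fails against build 2.
print(find_object(build_2, "auto_name", "Button_38271") is not None)  # False
```

Working with developers to expose a stable `id`-like property is often the cheapest
way to make an application automatable.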

1.2.1.4 Acquiring management support


Ultimately, acquiring management support requires presenting a persuasive yet realistic
business case to management. This business case should include:
The benefits that may be expected through the introduction of test automation
(see Section 1.1.1)
The scope of implementation, such as full integration from requirements through
deployment, specific test activities, integration between code builds, testing, and
configuration management, etc.
Known risks or deficits in introducing automated testing, such as candid
identification of the issues that test automation will not resolve and factors that
inhibit its success, and how they will be addressed

A cost/benefit analysis of implementing automation with defensible figures and
how they were derived. It is often wise to address the cost/benefits in terms of
return-on-investment (ROI) (see Section 1.3).7
While presenting the analysis to management typically requires a well-constructed,
credible business case, a business case alone is often not enough. Business cases
help to formally present information to managers who are already receptive to the
idea of investing in test automation, but often do not make the case to unreceptive
decision-makers (such as those who do not fully understand the role of testing and
quality assurance in an application's lifecycle, or who have been involved with business
cases that supported failed projects).
If decision-makers are thoroughly opposed to the introduction of test automation, no
amount of business cases and ROI calculations will change their minds, but a pilot
study that provides a credible proof-of-concept often does. This "free sample" approach
is one in which management gets a glimpse of actual results without a large investment.
Armed with this understanding, decision-makers have a stronger framework to assess
the business case in terms of the scope, investment, and clear expectations it
describes. Depending on the tool to pilot, the "free sample" approach may be
accomplished by taking a few hours to download a free trial of a commercial tool, an
open source tool, or a scripting language engine, then implementing it on a very small
scale in the actual test environment. This may be enough to provide decision-makers
with tangible results that will make them more comfortable with the promises of test
automation.

1.2.2 Conduct a Test Tool Evaluation


In order to make a well-informed decision to move forward with test automation, it is
necessary to make a well-informed decision about the tool that will be used for test
automation implementation.8 Tool evaluations – which may or may not be included in
the business case – are a more technical expansion of the business case for decision-
makers. The evaluation typically involves the following steps:
Identifying Evaluation Criteria
Identifying Evaluation Candidates
Performing the Evaluation

7. Be sure to be conservative in ROI estimates in an effort to prevent the early inception of
unrealistic expectations, and to allow for escalation in costs.
8. In reality, the tool selection often pre-dates and influences the decision to automate. If this is
not the case, however, an evaluation must take place.
1.2.2.1 Identifying Evaluation Criteria


Numerous tools are available that could potentially meet the needs of the organization's
application project. The development and test teams must therefore identify the criteria
(that is, technical requirements) the tool must support; this will help to compare and
contrast candidate tool options for their viability in the organization's particular
application development environment.
The first step in identifying the appropriate evaluation criteria is recognizing the tool
constraints that inherently exist. Much of this information may already be available from
application design documents, interface control documentation, development and
deployment environment specifications, hardware specifications, and the like. There are
probably several constraints that will help to narrow the tool selection playing field,
including but not limited to:
Environmental Constraints – Environmental constraints scope the hardware and
software restrictions that will need to be placed on the tool. For example, if you
are running a Linux operating system, then a restriction may be that you need a
tool that runs on Linux. In another scenario, the organization may be considering
migrating to emerging technologies and architectures (such as cloud computing),
which require their own plans and tools. A final example is the case in which the
organization requires a new tool to integrate with existing project tools.
Cost Constraints – Cost constraints are those imposed by the project budget
and restrictions on how much of the budget can be allocated to a testing tool
(refer to Section 1.3).
Quality Constraints – Quality constraints are based on standard quality attributes
used for assessing the quality of software products. Test engineers often consider
quality attributes only when evaluating the AUT, but quality attributes should also
be reviewed when evaluating tools, and when determining the level at which the
automated test tool will be implemented (refer to Skill Category 7: Quality
Attribute Optimization). These attributes include, but are not limited to,
maintainability, reliability, usability, functionality, efficiency, and portability.
Functional Constraints – Functional constraints are derived from knowledge of
the process that will be automated. For example, an organization may implement
a defect management process that requires development notification upon
changing the state of a defect. This organization may require all potential tools to
have the ability to assign a state to a defect, and to have an automatic email
notification component that is triggered upon each pertinent state change. Ease
of use and customizable report generation may likewise be required functional
constraints.
Once the constraints have been identified, other standard criteria may also be added
and used in the evaluation in an effort to compare and contrast tool candidates. Criteria
may also be obtained simply by becoming familiar with the commercial automated
tools on the market. The features that come across as the most important for your
organization should be included in the criteria list (See Appendix A: Sample Evaluation
Criteria).

1.2.2.2 Identifying Evaluation Candidates


Identify a large, comprehensive list of automated test tools that may reasonably meet
the needs of your automation effort. If that list contains more than 4 tools, begin using
the identified constraints to narrow the list down to 2 to 4 tools (See Appendix A:
Sample Evaluation Criteria). If, after applying the constraints, no tools are left on the
list, it may be appropriate either to make the constraints a little less restrictive or to
identify different candidate tools.

1.2.2.3 Performing the Evaluation


Evaluating candidate tools begins with the research that supported identifying
evaluation candidate tools (see Section 1.2.2.2). Presumably this research would have
included independent research and vendor demonstrations, as well as testimony from
independent tool evaluators.
A trial implementation serves as a closely-scoped study to determine how each
candidate tool performs within the application environments. To perform this, it may be
necessary to procure a trial license from the vendor (if a commercial product is
evaluated), which should also include technical support. Keep in mind that whether
open source or commercial tools are selected, the trial should include input and
review by the appropriate stakeholders, not just test and automation engineers.
Upon completing the evaluation, a tool is selected, and either a commercial tool
acquisition is made, an open source solution is selected, or the custom tool building
begins.

1.2.3 Consider Tool Implementation


Prior to acquisition, a lot of planning must be carried out that takes into account the
considerations that relate to automation implementation, such as technical
considerations (such as loading the test tool and configuring it for the specific
environment), staffing requirements, tool training, processes, roll-out schedule,
evaluation schedule, and the like. Once the tool is acquired, these plans (vetted and
communicated to all appropriate stakeholders) drive the order and tracking of the
activities of its initial implementation.
Planning should also include:
An effective "public relations" campaign within the team and with external
stakeholders (as appropriate) to manage the expectations of this new tool's
position in the application environment.
Training for appropriate personnel.
Modifying existing processes to accommodate the new tool.
Updating the system under test to accommodate the tool implementation, such
as adding stubs to the code to make the application automatable.

1.2.4 Pilot the Tool


The pilot effort involves selecting a small-scale project/system for implementing the
tool, assessing the impact that the tool and the associated processes have on the test
process, and evaluating the results of the pilot effort.

1.3 Automation Return-on-Investment (ROI)


Short Answer
Test automation professionals must do more than just develop automated test scripts.
To keep decision-making stakeholders apprised of the value of automated testing in
the application's lifecycle, the test team must regularly identify and report automation's
potential return-on-investment (ROI), a ratio of benefits to costs. The communication of
quantifiable benefits may be requested before, during, and/or after automation
implementation. But ROI reporting is always necessary throughout the application's
lifecycle as a critical metric to assure that the effort is on track and to, right or wrong,
influence the survival of the automation effort.

Automation test engineers are often tasked with reporting ROI to decision-makers, and
without regularly and candidly reporting the unadulterated ROI, the benefits of testing
may go unnoticed and misconceptions about test automation may begin to creep in.
The test professionals must manage expectations for successful automation by
communicating the incremental status of ROI in terms of realistic benefits, including:
cost savings, increased efficiency, and increased software quality. To further help
manage expectations, automation benefits should be quantified via ROI calculations
before, during, and after automation implementation whether such calculations are
requested or not.
A compelling case can be made that identifying and repairing defects earlier in the
development cycle influences the cost of developing the application. As many defects
are logical in nature, testing requirements prior to actual coding reveals several defects
when they are relatively inexpensive to fix. This also enhances the quality of the product
by reducing the need to jettison functionality in which bugs are found closer to
deployment.
Figure 1-5 illustrates an estimated improvement in an application‘s quality based on the
point at which defects are found. Defects discovered in the requirements phase are
estimated to improve the product by 40 to 100 times. Discovering and attempting to
repair bugs during acceptance testing, however, yields substantially less benefit, 9 and
the cost to repair is much higher. According to a study by Hewlett Packard:
"If it costs $1 to fix a defect found in the requirements phase,
it costs $2.40 in the design phase, $5.70 in coding, $18.20 in
testing, and $109 if the requirements defect was not found
until the product was released into operation."10

This is well-illustrated in Figure 1-6.

9 Borland Corporation. Successful Projects Begin with Well-Defined Requirements. E-Project
Management Advisory Service Executive Update 2 (7), 2001. Available at
http://www.borland.com/resources/en/pdf/solutions/rdm-success-projects-defined-req.pdf.
10 W. Charles Slavin. Software Peer Reviews: An Executive Overview. Cincinnati SPIN. January 9, 2007.
Available at www.cincinnatispin.com/1_2008.ppt.
From Grady, Robert B. 1999. "An Economic Release Decision Model: Insights into Software Project
Management." In Proceedings of the Applications of Software Measurement Conference, 227–239.
Orange Park, FL: Software Quality Engineering.


Figure 1-5: Quality benefits of discovering defects earlier in the development cycle

Figure 1-6: Escalating costs to repair defects 11

11 Karl E. Wiegers and Sandra McKinsey. Accelerate Development by Getting Requirements Right.
Serena Corporation. Available at
http://www.serena.com/docs/repository/products/dimensions/accelerate-developme.pdf.

The general formula for ROI is the "return" or benefit from an action divided by the cost
of that action, where the "return" is equal to the cost subtracted from the gain:

ROI = (Gain – Investment Costs) / Investment Costs

Figure 1-7: ROI Formula

Using this formula, several methods for calculating ROI include:


Simple ROI Calculation – Quantifies benefits in terms of cost savings (Testing
lifecycle costs)
Efficiency ROI Calculation – Quantifies benefits in terms of increased efficiency
Risk Reduction ROI Calculation – Quantifies benefits in terms of increased
quality (Failure Costs)
No single measure is best for every environment, so it is important to be aware of all
measures that factor into the application design and development environment. If
possible, multiple ROI calculation methods should be used in order to provide a more
complete picture of test automation progress.

1.3.1 ROI Factors


Each of the methods noted above relies on a combination of general factors, manual
testing factors, and automated testing factors for ROI calculation. Although these
methods also apply to manual testing, this discussion focuses on calculating ROI for
automated test design and implementation.
General Factors
General factors relate to costs that may be associated with both manual and automated
test implementation.
# of Total Test Cases – Total number of tests that exist in the test bed
# of Tests Slated for Automation – Total number of tests that have been identified
as automation candidates
Tester Hourly Rate – Average hourly rate of the test engineers responsible for
test execution
Manual Factors
Manual factors relate to costs that may be associated with manual test implementation.

Manual Test Execution/Analysis Time – Average time that it takes to execute a
single test manually and analyze the results.
Automation Factors
Automation factors relate to costs that may be associated with automated test
implementation.
Automated Tool and License Cost – Cost of purchasing the automated test tool
software, and licenses for the use of that software
Automated Tool Training Cost – Cost of training test engineers to effectively
implement the tool
Automated Test Machine Cost – Cost of computer hardware purchased for the
sole purpose of implementing the automation solution
Automated Test Development/Debugging Time – Average cost of developing and
debugging an automated test
Automated Test Execution Time – Average time that it takes to execute an
automated test
Automated Test Analysis Time – Average time that it takes to analyze a suite of
automated tests
Automated Test Maintenance Time – Average time that it takes to maintain
automated tests over some specified interval of time

1.3.2 Simple ROI Method


The Simple ROI calculation focuses on the monetary savings achieved through
automated test execution. This calculation is useful in that it describes the net benefits
of test automation in terms of the change (reduced or increased) of costs associated
with test execution. We know from the general ROI equation (see Figure 1-7) that we
must calculate the investment and gain.
The investment cost includes such automation costs as hardware, software
licenses, training, and automated script development, maintenance, execution,
and analysis.
The gain is set equal to the cost of continuing to manually execute the tests.

Because the Simple ROI calculation is expressed in dollars, it is an easy metric to
calculate and communicate, especially with the management decision-makers who are
primarily interested in the bottom line of any effort.
Simple ROI calculation presents several challenges, however. First, it is not always
easy to get management to reveal tester hourly rates to non-management staff, and
normally the ROI is calculated by a test engineer or lower-level manager. Second, this
calculation tends to oversimplify a more nuanced set of circumstances by assuming that
automated tests completely replace their manual counterparts, but that may not always
be the case. In most automated test efforts, some elements still require manual
execution. Third, some automated tests contain elements that would not normally be
executed manually, and therefore provide expanded coverage that this ROI calculation
ignores. For example, to save time, manual execution may only verify a sampling of 10
data values for a field that can hold 200 values. Given that these values may be easily
automated, all 200 values may be added to the automated test. Fourth, this calculation
can be a little misleading, because it gives the impression that the project budget will
decrease, but in reality the project budget seldom decreases due to automation. Funds
are usually redistributed for greater testing on other, higher risk parts of the system, or
newer system functionality. See Appendix D: Sample ROI Calculations for sample ROI
calculations.
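To make the mechanics concrete, the Simple ROI calculation can be sketched in a few lines of Python. The factor names mirror those listed in Section 1.3.1, and every dollar figure below is purely illustrative, not a prescribed value:

```python
# Simple ROI sketch: the gain is the avoided cost of manual execution,
# the investment is the total cost of automating. Figures are illustrative.

def simple_roi(num_tests, manual_hours_per_test, hourly_rate, runs,
               tool_cost, training_cost, machine_cost,
               dev_hours_per_test, exec_analysis_hours, maintenance_hours):
    # Gain: what it would cost to keep executing these tests manually
    gain = num_tests * manual_hours_per_test * runs * hourly_rate
    # Investment: tool, training, hardware, plus automation labor
    labor_hours = (num_tests * dev_hours_per_test
                   + exec_analysis_hours + maintenance_hours)
    investment = (tool_cost + training_cost + machine_cost
                  + labor_hours * hourly_rate)
    return (gain - investment) / investment

roi = simple_roi(num_tests=100, manual_hours_per_test=0.5, hourly_rate=50,
                 runs=20, tool_cost=5000, training_cost=2000,
                 machine_cost=3000, dev_hours_per_test=2,
                 exec_analysis_hours=20, maintenance_hours=40)
print(f"Simple ROI: {roi:.2f}")  # a positive value means a net gain
```

With these sample numbers the manual-execution gain is $50,000 against a $23,000 investment, an ROI of about 1.17; a negative result would indicate that automation costs more than it saves over the interval considered.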

1.3.3 Efficiency ROI Method


This calculation is based on the Simple ROI calculation but only considers the time
investment and gains for assessing testing efficiency. In that way, it does not consider
monetary values as factors. This calculation is often better suited for test engineers
because the factors used are more readily available to them.
As discussed with the Simple ROI, project dollar figures are seldom reduced due to
automation so using the Efficiency ROI calculation allows focus to be removed from
potentially misleading dollar amounts. This calculation provides a simple ROI formula
that doesn‘t require sensitive information such as Testers‘ Hourly Rates. This makes it
easier for test engineers and lower-level management to present benefits of test
automation when asked to do so. Many of the other disadvantages noted with the
Simple ROI method still exist, in particular, the assumption that the automated tests
completely replace their manual counterparts. In addition, it assumes that full regression
testing is done during every build even without test automation, which may not be the
case. Full regression may not be performed without test automation, so introducing test
automation actually increases coverage and reduces project risks as opposed to
increasing efficiency. See Appendix D: Sample ROI Calculations for sample ROI
calculations.
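Following the same shape as the general ROI formula, an Efficiency ROI calculation might be expressed purely in hours, so no salary data is needed; all figures below are illustrative:

```python
# Efficiency ROI sketch: both gain and investment are measured in hours,
# removing the need for sensitive data such as tester hourly rates.

def efficiency_roi(num_tests, manual_hours_per_test, runs,
                   dev_hours_per_test, auto_exec_hours_per_run,
                   analysis_hours, maintenance_hours):
    # Gain: hours that manual execution of the same tests would consume
    gain_hours = num_tests * manual_hours_per_test * runs
    # Investment: hours spent developing, executing, analyzing, maintaining
    invested_hours = (num_tests * dev_hours_per_test
                      + auto_exec_hours_per_run * runs
                      + analysis_hours + maintenance_hours)
    return (gain_hours - invested_hours) / invested_hours

roi = efficiency_roi(num_tests=100, manual_hours_per_test=0.5, runs=20,
                     dev_hours_per_test=2, auto_exec_hours_per_run=1,
                     analysis_hours=20, maintenance_hours=40)
print(f"Efficiency ROI: {roi:.2f}")
```

Here 1,000 manual-execution hours are weighed against 280 automation hours, an ROI of roughly 2.57; note that the calculation still embeds the assumption that the automated tests fully replace their manual counterparts.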

ATI‘s Test Automation Body of Knowledge (TABOK) Manual Page 41


TABOK Segment 2: Macroscopic Process Skills

1.3.4 Risk Reduction ROI Method


The Risk Reduction ROI calculation addresses ROI assessments not covered by the
Simple or Efficiency methods. The Risk Reduction ROI method looks at automation
benefits independently of manual testing. Test automation saves time in test execution
which provides testing engineers with more time for increased analysis, test design,
development, and execution of new tests. It also provides more time for ad hoc and
exploratory testing. When well planned, this equates to increased coverage which
reduces the risk of production failures. By assessing the risk of not performing
automation (relative to potential production failures) and calculating the cost to the
project if the risk turns into a loss, this calculation addresses the ROI relative to an
increased quality of testing. Given the need for testers' rates, risk analysis, and cost
calculations, this formula may be best suited for calculation by upper-level management
when there is a need to understand test automation benefits from an organizational
viewpoint.
Again, this method eliminates the comparison between automated and manual test
procedures, and focuses on the effects of increased test coverage. The Risk Reduction
ROI method has its own challenges, however. First, it relies heavily on subjective
information. Loss calculations are not absolutes because it is difficult, if not impossible,
to accurately estimate how much money could be lost. Second, it requires a high
degree of risk analysis and loss calculations. In real-world projects, it is often difficult to
get anyone to identify and acknowledge risks, much less rank risks and calculate
potential loss. Third, this method, without the manual-to-automation comparison, still
leaves the door open for management to ask, ―why not just allocate more resources for
manual testing instead of automated testing?‖ See Appendix D: Sample ROI
Calculations for sample ROI calculations.
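A Risk Reduction ROI calculation might be sketched as follows; the risk probabilities and loss estimates are exactly the kind of subjective inputs discussed above, and all figures here are illustrative:

```python
# Risk Reduction ROI sketch: the gain is the expected production loss that
# the added automated coverage is believed to avert; the investment is the
# automation cost. Probabilities and losses are subjective estimates.

def risk_reduction_roi(risks, automation_cost):
    # risks: (probability the failure reaches production, estimated loss)
    expected_loss_averted = sum(prob * loss for prob, loss in risks)
    return (expected_loss_averted - automation_cost) / automation_cost

roi = risk_reduction_roi(
    risks=[(0.30, 100_000),   # e.g., outage in a billing function
           (0.10, 250_000)],  # e.g., data-corruption defect
    automation_cost=20_000)
print(f"Risk Reduction ROI: {roi:.2f}")
```

Because the expected-loss figures are estimates, the output is best treated as a relative indicator for comparing options, not a precise dollar prediction.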

1.3.5 Approaches for Increasing ROI


There are several common factors that affect each of the ROI calculations, so
improvements (in the form of reductions) to any of those factors will certainly improve
ROI, regardless of how it is calculated. These factors are:
Automated Test Development/Debugging Time – can be decreased by
increasing the initial time investment in creating a framework and process in
which the automated test development will be conducted. This will have a big
eventual payoff because it imposes standards that will provide for faster
automation.
Automated Test Execution Time – can be decreased by better exception
handling routines. Failed test steps and failed tests drastically prolong test
execution, because the automated test tool spends time looking for what does
not exist, and spends extra time verifying that it really does not exist. In addition,
automated test failures often compound upon one another, meaning that one
failure may cause another even though subsequent failures are not related to
defective application functionality. Another way to reduce test execution time is
by balancing the automated testing load across multiple machines. For example,
if it takes 10 hours to execute a suite of automated tests in serial, then the time
may be cut in half if the tests are split between two machines.12
Automated Test Analysis Time – can be decreased by building better reporting
mechanisms into the tests. This may be accomplished by adding special logging
statements to the automated tests that will print to the report in the event of a
failure. In addition, you may have the test trigger a screen capture in the event of
a failure, so that it is easy to know what state the application was in at the time of
failure.
Automated Test Maintenance Time – may also be decreased by increasing the
initial time investment in creating a framework and process in which the
automated test development will be conducted. In addition, increasing modularity
and adding comments to automated tests is another way to make maintenance
easier and faster.

1.4 Resource References for Skill Category 1


Introducing Automation
Elfriede Dustin, Jeff Rashka and John Paul. Introduction of Automated Test to a
Project. Available at
http://www.stickyminds.com/getfile.asp?ot=XML&id=3180&fn=XDD3180filelistfile
name1%2Epdf

12 This approach generally works well with processes – automated testing activities – when those
processes can be run in parallel without concern about dependencies between test activities. It does not
work well, however, when dividing work across multiple test engineers without regard to the
dependencies and order of test protocols. This scenario is referred to as the "mythical man month" and is
analogous to assuming that 9 women can be pregnant for 1 month each and produce a baby.


Tom Wimsatt. How to Get Into Automation. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedSoftwareTestingMagazine_June2010.pdf

Business Case and ROI


Joshua Barnes. Creating an ROI assessment for implementing IBM Rational
solutions. Available at
http://www.ibm.com/developerworks/rational/library/content/RationalEdge/oct04/b
arnes/
Dean Leffingwell. Calculating your return on investment from more effective
requirements management. Available at
http://www.ibm.com/developerworks/rational/library/347.html
Dion Johnson. Test Automation ROI. Available at
http://www.automatedtestinginstitute.com/home/articleFiles/articleAndPapers/Test
_Automation_ROI.pdf
Elfriede Dustin. The Business Case for Automated Software Testing. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2009/AutomatedS
oftwareTestingMagazine_May2009.pdf
Dion Johnson. A-Commerce Marketing. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedS
oftwareTestingMagazine_March2010.pdf
Cem Kaner. Improving the Maintainability of Automated Test Suites. Available at
http://www.kaner.com/pdfs/autosqa.pdf
Douglas Hoffman. Cost Benefit Analysis of Test Automation. Available at
http://www.softwarequalitymethods.com/Papers/Star99%20model%20Paper.pdf

Challenges and Considerations


Randall W. Rice. Surviving the Top Ten Challenges of Software Test Automation.
Available at
http://www.stickyminds.com/getfile.asp?ot=XML&id=6506&fn=XDD6506filelistfilen
ame1%2Epdf
Mark Fewster. Common Mistakes in Test Automation. Available at
http://www.stickyminds.com/getfile.asp?ot=XML&id=2901&fn=XDD2901filelistfilen
ame1%2Epdf

Cem Kaner. "Avoiding shelfware: A manager's view of automated GUI testing".
Available at http://www.kaner.com/pdfs/AvoidShelfware.pdf

Skill Category 2: Test Automation Types and Interfaces

Primary role(s) involved: Test Lead, Lead Automation Architect


Primary skills needed: Understanding test types and application interfaces

Although Test Automation is often lumped into a single category, it is an extremely
diverse discipline with many facets for an automated test professional to master.
Regardless of specialty, however, an automator must understand some of the basic
differences among the different types of test automation that form the full "test
automation" spectrum. This understanding complements the skills that have been
mastered and brokers better communication and understanding within the development
and test team as well as with other stakeholders in the organization. It also supports
more candid information sharing and disambiguates the different test professionals‘
roles within the team. This understanding will also give management the
necessary confidence to support a new or growing automation effort.

2.1 Test Automation Types


Short Answer
Generally, test automation may be divided into the following categories: unit, integration,
functional, and performance.

The Software Testing profession has many distinct and widely accepted testing types,
including unit testing, integration testing, functional testing, and performance testing.
Based on this, it would be reasonable to assert that the automation of a test that fits into
one of those categories would be an automated test by the same name (i.e. a
compatibility test that is automated would fall into the ‗automated compatibility test‘
category). While this assertion would not be incorrect, we suggest that there is another
way to segment automated tests in a manner that more closely reflects how automated
testing is implicitly segmented within the world of testing. Such segmentation produces
the following types:
Unit Test Automation

Integration Test Automation
Functional System Test Automation
Performance Test Automation

2.1.1 Unit Test Automation


The objective in unit testing is to isolate a unit of code and validate its functionality
before that code is deployed as part of a larger segment of the application.
A unit is the smallest amount of code that can be tested. The term, however, means
different things to different sources. To some, a unit is defined as a simple control-flow
construct (e.g., if...then...else, for...next); others define it a little more broadly as a single
class (see Skill Category 8: Programming Concepts for more on this). In the event that a
unit of code is dependent on some other component, library, or block of code, stubs are
typically created to take the place of those dependencies so that the unit remains
isolated from the rest of the application. Unit tests reduce the risk inherent in making
application source code modifications in the development process by providing a means
of quickly identifying some high-level problems resulting from the code change; this
advantage is maximized when unit tests are run as part of the normal application build
process.
Unit testing may be performed by simple visual inspection of a code block to verify that
possible inputs will result in the appropriate and expected output. As the application
increases in complexity, however, this approach alone becomes extremely risky and
error-prone. In larger and more complex applications, it becomes progressively more
difficult to traverse all important boundary conditions and expressions; doing so manually
and visually is also extremely tedious. This level of complexity and monotony lends itself
well to automated testing. Automating the unit tests – particularly before or in
conjunction with writing the code – provides the ability to focus more attention on the
quality of the system as a whole.
Automated unit tests are typically written in the same language as the source code
being tested, and it is normal practice to manage and version control the unit test code
as a configuration item in conjunction with the application source code. This makes it
simple to check-in and check-out unit tests along with the source code and to make
updates to both in concert, especially since unit tests are often written and maintained
by software developers responsible for the application source code under test. Although
developers often write the unit tests, some methodologies couple developers with
software test engineers during unit test development to help in ensuring adequate test
scenarios are being addressed.


Applications typically have multiple units that need to be tested at a single time, so unit
tests are often grouped into suites that can be executed with a single command or
trigger. Test harnesses – a framework that calls and executes a unit of application
source code outside of its natural environment or calling context for which it was
originally created – are often used for accumulating tests into test suites so that they
can be easily executed as a batch.
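As a minimal sketch of these ideas, the following uses Python's unittest module: the unit under test depends on an external rate lookup, a stub isolates it from that dependency, and a loader groups the tests into a suite that a harness can run with a single trigger. All names are illustrative:

```python
import unittest

def price_with_tax(amount, rate_lookup):
    """Unit under test: applies the tax rate supplied by a dependency."""
    return round(amount * (1 + rate_lookup()), 2)

def stub_rate_lookup():
    """Stub replacing a real dependency (e.g., a service or database call)."""
    return 0.10

class PriceWithTaxTests(unittest.TestCase):
    def test_applies_rate_from_dependency(self):
        self.assertEqual(price_with_tax(100.0, stub_rate_lookup), 110.0)

    def test_zero_amount_yields_zero(self):
        self.assertEqual(price_with_tax(0.0, stub_rate_lookup), 0.0)

# Group the unit tests into a suite and execute them as a batch
suite = unittest.TestLoader().loadTestsFromTestCase(PriceWithTaxTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice such a suite would be triggered from the build process, so that every code change is immediately checked against the unit tests.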

2.1.2 Integration Test Automation


Logically following unit testing, integration testing involves combining individual units of
code and verifying that they properly interface with one another without damaging
previously developed functionality. In other words, it verifies the data that is passed between code
units. Integration testing also examines parameters passed among units, along with
global parameters used by all units, ensuring the data integrity (data types and validity)
of those parameters.
Also known as interface testing (not to be confused with User Interface testing) and
string testing, integration testing often incrementally "strings" units together such that
the group constitutes a software function. The two major techniques for conducting
integration testing are the top-down and bottom-up techniques.
The top-down technique functions as its name implies. Broad, top-level units are
tested and integrated first. High-level logic and communication is tested early
with this technique, and the need for drivers is minimized. Stubs for simulation of
lower-level units are used while actual lower-level units are tested relatively late
in the development cycle.
The bottom-up approach also functions as its name implies. Lower-level units are
tested and integrated first. By using this approach, lower-level units are tested
early in the development process and the need for stubs is minimized. The need
for drivers to drive the lower-level units, in the absence of top-level units,
increases, however.
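A top-down pass might be sketched as follows: the high-level unit is tested first, with a stub simulating a lower-level unit that has not yet been integrated; a bottom-up pass would instead test the lower-level unit first, using a small driver in place of the caller. All names are illustrative:

```python
def checkout(total, fetch_discount):
    # High-level unit: applies whatever discount the lower-level unit reports
    return round(total - total * fetch_discount(total), 2)

def stub_fetch_discount(total):
    # Stub simulating the lower-level pricing unit during top-down testing
    return 0.05 if total >= 100 else 0.0

# Integration focus: verify the data passed across the unit interface
assert checkout(200.0, stub_fetch_discount) == 190.0  # discount applied
assert checkout(50.0, stub_fetch_discount) == 50.0    # no discount
print("top-down interface checks passed")
```

Once the real lower-level unit is available, it replaces the stub and the same checks re-verify the parameters passed across the interface.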

2.1.3 Functional System Test Automation


Functional system test automation is what a large majority of the software industry
thinks of when the term "software test automation" is used. Often simply called
functional test automation, system test automation, or regression test automation, it is
important to use both the words 'Functional' and 'System' for clarification. Keep in mind
that functions can be tested during integration testing, while the term system testing can
be used to cover any black-box test conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. Therefore, system
testing may include functional testing, load/performance testing, etc. In addition,
functional system test automation differs from the functional test automation that may
occur during integration testing. During integration testing, functions are addressed from
a white-box perspective, and the focus is on verifying assumptions made about the
communication among units. Conversely, functional system testing validates functions
from a black-box perspective and the focus is placed on investigating the system as a
whole. This investigation assesses whether the system conforms to the documented
requirements as well as undocumented expectations of the system. This requires a
spotlight on positive and negative testing.
Functional system test automation is typically used for application regression testing
and is often performed on an application‘s graphical user interface (GUI). Keep in mind
that the GUI is not the only interface on which automation may occur (refer to Section
2.2 for more information on interfaces). For simplicity, functional system test automation
will be referred to as simply functional test automation throughout this document.

2.1.4 Performance Test Automation


Performance test automation focuses not on finding defects in system functionality but
on identifying bottlenecks that impact application response times and throughput, i.e.,
the volume of data that the application should effectively process. In an ideal testing world,
the system should be fully stable from a functional perspective before performance
testing occurs.
When conducting performance testing manually, a human controller with a stopwatch
coordinates the actions performed by multiple users and monitors the application
processing speeds. Besides being terribly impractical, the timing results are generally
imprecise at best. In performance testing tools, the controller is a component of the tool
that is used to set up and coordinate the actions performed by "virtual users," in addition
to recording and presenting data trends relative to the processing speeds being
experienced in the application.
Since applications are composed of multiple layers, performance testing not only
identifies bottlenecks but also identifies in which layers the bottlenecks occur. Basic
layers in an application include:
Presentation Layer – Layer that contains the application‘s feature-specific source
code or web services. Poorly coded algorithms in this layer may result in poor
application performance.
Database Layer – Layer in which the application database exists. Poorly written
queries in this layer may result in poor application performance.


Operating System Layer – Layer in which the operating system functions. Issues
related to disk space, CPU usage, memory, disk I/O, etc., may result in poor
application performance.
Network Level – Layer in which network communication occurs. Low network
bandwidth may be one cause of poor application performance.
Performance test automation typically starts out measuring response times across the
collection of all layers. Once a bottleneck is identified, layers are stripped out in order to
identify the source of the bottleneck. In addition, the performance test engineer may
employ various monitors at the various levels to help identify specific problem areas.
Performance test automation is often considered synonymous with load test
automation (a.k.a. volume testing) and stress test automation, but there are subtle
differences. Load test automation is similar to performance testing except that load
testing gradually increases the load on an application up to the application's stated or
implied maximum load to ensure the system performs at acceptable levels. In this
respect, load testing is often a part of the larger performance test strategy. Stress test
automation takes this a step further by determining what load will actually "break" the
application. This is accomplished by gradually increasing the load on the application
beyond the stated or implied maximum load to the point where the system no longer
functions effectively.
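The gradual load increase described above can be sketched in a few lines of Python. This is a toy illustration only: `transaction()` is a hypothetical placeholder for a real request to the application under test, and a real performance tool would replace it with protocol-level traffic, think times, and monitors.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Placeholder for one user action against the system under test."""
    start = time.perf_counter()
    time.sleep(0.01)  # stands in for a real request/response round trip
    return time.perf_counter() - start

def run_load_step(virtual_users):
    """Execute one load step: 'virtual_users' concurrent transactions."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        timings = list(pool.map(lambda _: transaction(), range(virtual_users)))
    return statistics.mean(timings)

# Gradually increase the load and record the mean response time at each step.
results = {users: run_load_step(users) for users in (1, 5, 10)}
for users, mean_t in results.items():
    print(f"{users:>3} virtual users -> mean response {mean_t:.4f}s")
```

In a stress test, the step sequence would simply continue past the stated or implied maximum load until transactions begin to fail.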

2.2 Application Interfaces


Short Answer
The main application interfaces appropriate for automated testing are the Command
Line Interface, Application Programming Interface, Graphical User Interface, and Web
Service Interface. The Graphical User Interface is probably the most popular for
automation implementation because it most closely simulates how a user might use the
application. The Command Line and Application Programming Interfaces, however, are
advantageous given that they make test automation simpler to implement.

Software applications often provide multiple interfaces for executing application
functionality, including:
Command Line Interface (CLI) – provides access to an application via a
command line interpreter. This provides a text-based mechanism for delivering
commands in a very efficient, direct way. Most scripting languages provide a
mechanism for implementing command line interpreters for test automation.

Application Programming Interface (API) – is a set of rules, routines, protocols,
and tools for building or exercising software applications by writing function or
subroutine calls that access functions or methods in a library. In other words, it
provides access to the basic feature building blocks that make up an application.
Scripting languages are often able to connect to an application's API, which
allows the application features to be exercised from a foreign script, thus
automating the testing of that application.
Graphical User Interface (GUI) – provides users with the ability to interact with
application features via graphical icons, indicators, and controls (e.g., textboxes,
pushbuttons, etc.). The underlying goal of a GUI is to increase the usability of the
application while decreasing the steep learning curve associated with some of
the application's more complex operations by providing a more user-friendly,
visual layer on top of application features. GUIs are often more complicated to
automate due to the technical challenge of getting automated test tools to
communicate properly with the ever-changing GUI. For this reason, many test
engineers turn to CLIs and APIs. The changing GUI design also poses a
maintenance challenge for GUI automation, as the GUI will typically be
redesigned more often than the underlying features of an application.
Despite the challenges, GUIs are arguably still the favorite for functional system
test automation because there is a strong desire to exercise the application in the
way that most closely mirrors the way the user will be exercising the application,
which will most likely be via the GUI.
Web Service – The World Wide Web Consortium (W3C) defines a Web Service
as a software system designed to support interoperable machine-to-machine
interaction over a network.13 Web services provide message-based interfaces
among applications based on accepted standards that enable automated
discovery of available services and implementation without requiring code
changes for sender or receiver.

13 W3C. Accessible at http://www.w3.org/TR/ws-arch/#whatis.
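As a concrete illustration of driving a Command Line Interface from a script, the sketch below uses Python's standard subprocess module. The "application" here is a hypothetical stand-in (the Python interpreter printing a string); a real automated test would invoke the product's own command-line executable and verify its output the same way.

```python
import subprocess
import sys

# Issue a command through the command line interpreter and capture its output.
# sys.executable with a trivial print stands in for the application's CLI.
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True, text=True, timeout=30,
)

# Verify the exit code and the text the command wrote to standard output.
assert result.returncode == 0, "CLI command failed"
assert result.stdout.strip() == "hello", f"unexpected output: {result.stdout!r}"
print("CLI check passed")
```

Because the interaction is pure text in and text out, CLI checks like this are typically far less fragile than their GUI equivalents.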

2.3 Resource References for Skill Category 2


Testing Types
Danny R. Faught. Unit vs. System Testing—It‘s OK to be Different. Available at
www.stickyminds.com
Interfaces
Greg Afinogenov. GUI vs. CLI: A Qualitative comparison. Available at
http://www.osnews.com/story/4418
W3C. What is a Web Service? Available at http://www.w3.org/TR/ws-arch/#whatis
Bj Rollison. UI Automation Beneath the Presentation Layer. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedS
oftwareTestingMagazine_June2010.pdf
Bret Pettichord. Seven Steps to Test Automation Success. Available at
http://www.io.com/~wazmo/papers/seven_steps.html

Skill Category 3: Automation Tools

Primary role(s) involved: Test Lead, Lead Automation Architect


Primary skills needed: Awareness of SDLC tool types and features

A key skill in test automation is understanding, using, and supporting the test
automation tools that support all aspects of the testing lifecycle. An automated test
professional is expected to have a basic understanding of that wide range of tools,
especially tools that automate processes. One must therefore be able to assess tools
for their appropriateness, and know them well enough to implement and customize
them, develop and generate reports, manipulate files, monitor the system, and prepare
data.

3.1 Tool Types


Short Answer
The types of tools that support the testing lifecycle are many and varied. They include
tools that are used in various phases of the software development lifecycle, and are
often controlled by personnel other than testers. Each of the tool types addressed,
however, directly support the testing lifecycle in one way or another so it is important for
testers to be aware of them.

Table 3-1 provides a listing of some of the basic types of tools that support the
testing lifecycle. In addition, the table provides the lifecycle phase(s) in which each
tool primarily operates.

Table 3-1: Testing Tool Types

Tool Type Primary SDLC Phase(s)


Software Configuration Management All Phases
Business and System Modeling All Phases
Requirements Management Requirements
Unit Testing Development

Test Management Testing
Test Data Generation Testing
Defect Tracking All Phases
Code Coverage Analyzer Testing
Functional System Test Automation Testing
Performance Test Automation Testing

3.1.1 Software Configuration Management


Software Configuration Management (SCM) tools facilitate identification, tracking,
modifying, versioning, and baselining of the collection of project software artifacts,
including application code, documentation, design specifications, test plans and scripts,
project schedules, all test automation components, and the like. SCM is introduced
during the early phases of the SDLC and is used in all subsequent phases to keep all
versions of the application and its supporting documents in sync. This directly supports
automated testing by helping to ensure that the appropriate versions of test scripts are
used to validate and verify their respective code bases and builds. Without SCM, an
automation effort is subject to duplication of effort (lack of sufficient reuse) due to the
fact that different people may each be building similar test scripts. In addition, false
errors may be generated if the wrong version of a test script is used to validate code
against its requirements.
Below is a list of some fundamental SCM tool features.
Table 3-2: SCM Tool Features

SCM Tool Feature Description


Build Auditing Support Records details regarding the components used in
building the software; supports recreating the build
environment.
Integration Interfaces and integrates with other SDLC tools
(e.g., SCM tools, modeling tools, test tools, etc).
Process Management Support Enforces constraints established in the SCM
process defined by an organization via tool
customization. This includes the ability to control

who can make changes and the types of changes
they can make, as well as the ability to define and
govern transitions related to process phases.
Reporting Can generate reports relative to releases, builds,
and individual configuration items. Supports
creation of customized reports.
Security Allows set up and control of access rights to the
SCM tool itself, projects, and artifacts within the
tool.
Traceability Establishes links between related artifacts.
Versioning Manages the evolutionary development of a single
configuration item along multiple (and possibly
simultaneous) development paths.

3.1.2 Business and System Modeling


Business and System Modeling tools support the creation of charts and diagrams that
represent processes, logical system designs, and physical system specifications that
are inherent in all phases of the SDLC. It frequently occurs, however, that the early
phases do not sufficiently model the system that is ultimately developed; shortchanging
modeling at this point generally results in large gaps between the application team and
external stakeholders in how the system is both understood and developed. It is
therefore imperative that software testers, the quality gatekeepers of a project, be
familiar with how modeling and modeling tools work so that they may support and
sometimes lead creation of the models that help ensure the timely production of a
quality application that meets requirements.
Below is a list of some fundamental Business and System Modeling tool features.
Table 3-3: Modeling Tool Features

Business / System
Modeling Tool Description
Feature
Diagram Support Provides templates and symbols for a vast collection of diagram
types (e.g., block diagram, organizational chart, class diagrams,
flow charts, state diagrams, use cases, physical data models,

etc.)
Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Model-driven Supports the capture, development, and use of patterns to
development support automatic code generation.
Reporting Can generate reports relative to developed diagrams. Supports
creation of customized reports.
Security Allows set up and control of access rights to the modeling tool
itself, projects, and artifacts within the tool.
Support for Various Supports development of models using standard rules and
Modeling syntax defined within a specific modeling language (e.g., UML).
Languages
Traceability Establishes links between related artifacts. Allows user to link
diagrams with actual code components

3.1.3 Requirements Management


Requirements Management tools support the collection, analysis, and management of
system requirements, whether the requirements are user, business, technical,
functional, non-functional, standards, or other types of requirements used within a
product development lifecycle.
Because requirements are the critical blueprint for the application, and at times the
"contract" that defines agreement on application functionality across the stakeholders,
they are often developed iteratively by diverse teams. This means that as they change,
their corresponding test scripts must also be updated so that the code is validated
against the associated requirement set.
Many of the defects that are identified during software testing and beyond are
attributable to requirements issues (see Figure 1-5). Finding these issues late in the
SDLC results in a ballooning cost of quality due to drastic increase in failure costs and
rework (see Figure 1-6). Requirements management tools automate, and thus
streamline, the requirements collection, and management processes. When integrated

with the automated test tools, changes in requirements can be more easily reflected in
their particular test scripts. This helps ensure a better-quality system is deployed.
Below is a list of some fundamental Requirements Management tool features.
Table 3-4: Requirements Management Tool Features

Requirements
Management Tool Description
Feature
Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Import/Export Supports import of requirements from external source; exports
requirements to source documentation.
Source Can store and/or link source documentation to one or more
Documentation requirements stored in the tool.
Management
Customizability Can add existing fields and components to the tool that make
sense for the current organization or project.
History Maintains requirements change history.
Reporting Can generate reports and documentation relative to the
existing requirements, requirements prioritization, requirements
history, or relation to outside entities such as test scripts.

3.1.4 Unit Testing


Unit Testing tools are typically called Unit Testing Frameworks. For example, the first
statement on the JUnit (a popular unit test tool for Java) website reads, "JUnit is a
simple framework to write repeatable tests. It is an instance of the xUnit architecture for
unit testing frameworks."14 Another example is found on Wikipedia, where the Unit
Testing entry states, "Unit testing frameworks are most often third-party products that
are not distributed as part of the compiler suite."15

14 http://junit.sourceforge.net/
15 http://en.wikipedia.org/wiki/Unit_test

Automated unit testing may be performed without a framework by simply writing code
within the same development tool that is used to develop the code that needs to be
tested. This test code calls and tests the system's code units, using assertions and
exceptions to dynamically identify failures. Unit test frameworks allow unit testers to
perform the same activities but offer more advanced features that facilitate the creation,
execution, and results reporting of unit tests.
Below is a list of some fundamental Unit Testing tool features.
Table 3-5: Unit Testing Framework Features

Unit Test Tool


Description
Feature
Assertions Provides a large number of assertions for validating a code
unit and generating error messages in the event of an
error.
Suites Allows the user to add a unit test to an existing suite of unit tests
and have the results collectively reported.
Test Fixtures Also known as the "test context," test fixtures represent the
necessary preconditions or the state used as a baseline for
running tests. Test fixtures ensure that there is a controlled,
stable environment in which tests are run so that results are
repeatable. Tools support test fixtures by offering events such as
the following:
setup() – creates the desired state for the test prior to
execution
teardown() – cleans up following the test execution
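These fixture and assertion features can be seen in any xUnit-family tool. The sketch below uses Python's built-in unittest framework; the "unit under test" (a plain list used as a stack) is invented purely for illustration.

```python
import unittest

class StackTest(unittest.TestCase):
    """xUnit-style test case: fixture via setUp/tearDown, checks via assertions."""

    def setUp(self):
        # Test fixture: create the controlled baseline state before each test.
        self.stack = []

    def tearDown(self):
        # Clean up following each test execution.
        self.stack.clear()

    def test_push_then_pop_returns_last_item(self):
        self.stack.append(42)
        self.assertEqual(self.stack.pop(), 42)

    def test_pop_on_empty_stack_raises(self):
        with self.assertRaises(IndexError):
            self.stack.pop()

# Collect the tests into a suite and run them with collective reporting.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(StackTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The setUp and tearDown methods run around every individual test, which is what guarantees the repeatable, controlled state the table describes.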

3.1.5 Test Management


Among the most basic approaches to storing and maintaining test cases is to use a
word processor or spreadsheet program for test development while using the file
system for storage. While this approach is basic and uses commonly available tools, it
is not very efficient and provides little-to-no integration with the application, configuration
management, or requirements. For this reason, Test Management tools are often
introduced. Such tools typically provide a user-friendly GUI in which to create test cases
while providing a backend database that greatly streamlines the process of managing
those test cases.

Below is a list of some fundamental Test Management tool features.
Table 3-6: Test Management Tool Features

Test Management
Description
Tool Feature
Manual Test Supports authoring manual test cases and procedures within
Maintenance the tool.
Automated Test Supports integration with and storage of automated tests.
Maintenance
Distributed Test Allows setup and administering test assignments to remote
Environment machines.
Test Control Supports execution of both manual and automated tests from
the test management tool and stores the results. Tests may be
executed at the current time or scheduled to run at a later time.
Reporting Supports development of customized reports that allow for the
extraction of information relative to the tests and/or results of
test executions.
Import/Export Supports import of tests from, and export of tests to, source
documentation.
Test Organization Allows the test engineer to arrange tests in a folder structure
within the tool similar to how the tests may be arranged within
an operating system file system.
Traceability Allows establishment of links and dependencies between
related entities (e.g., requirements, defects, etc.).
Automatic Email Allows automatic email of test results to identified application
Notification team members.
Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Defect Management The ability to report and manage defects.

3.1.6 Test Data Generation


Test Data Generators are tools that create data for testing a system. These tools
typically accept data format constraints and then produce both valid and invalid data for
positive and negative test cases.
Table 3-7: Data Generation Tool Features

Data Generation
Description
Tool Feature
Multiple Formats Offers the ability to generate data in multiple formats (e.g.,
flat files).
Data Repository Supports storage of generated data in a specified database.
Real Data Seeding Offers the ability to generate data based on real application
sample data.
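The positive/negative generation idea can be sketched as follows. The format constraint used here (usernames of 3-12 alphanumeric characters) is a made-up example for illustration only.

```python
import random
import string

def generate_username(valid=True, length_range=(3, 12)):
    """Generate test data against a hypothetical format constraint:
    usernames must be 3-12 alphanumeric characters."""
    lo, hi = length_range
    if valid:
        n = random.randint(lo, hi)
        return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))
    # Invalid data for negative test cases: too short, too long, illegal characters.
    bad_values = [
        "".join(random.choices(string.ascii_lowercase, k=lo - 1)),  # too short
        "".join(random.choices(string.ascii_lowercase, k=hi + 5)),  # too long
        "user name!",                                               # illegal chars
    ]
    return random.choice(bad_values)

valid_sample = [generate_username(valid=True) for _ in range(5)]
invalid_sample = [generate_username(valid=False) for _ in range(5)]
print("valid:", valid_sample)
print("invalid:", invalid_sample)
```

A real data generation tool works the same way at scale: it accepts the constraints and emits valid values for positive cases and boundary-violating values for negative cases.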

3.1.7 Defect Tracking


Defect Tracking tools are essentially databases of defect reports with an interface that
facilitates actions such as creating new defect reports, changing status (state) of a
defect report to reflect the progress of that defect throughout its life, and the generation
of reports and trends relative to defects within a project and/or organization. Ideally,
defect tracking tools should be initiated during the early stages of the SDLC, most
usefully during requirements gathering and definition, and used in all subsequent
phases. Frequently, however, these tools are mostly used during the testing phase, to
log problems found with developed code. Many test management tools have defect
tracking capabilities but very often such capabilities are housed independently.
Below is a list of some fundamental defect tracking tool features.
Table 3-8: Defect Tracking Tool Features

Defect Tracking
Description
Tool Feature
Automatic Email Allows automatic email of test results to identified application
Notification team members based on specified defect report modification
(e.g. status changes).
Data Storage Supports storing multiple pieces of information about the defect,
such as time/date of creation, severity, priority, status, summary
of problem, problem details, attachments, ticket creator, name

or ID of the developer assigned to fix the issue, and estimated
time to fix the problem.
Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Process Supports enforcing constraints established in the Defect
Management Tracking process defined by an organization via tool
Support customization. This includes the ability to control who can make
changes and the types of changes they can make, as well as
the ability to define and govern transitions related to the state
currently held by the defect.
Reporting Provides the ability to generate reports relative to defect
statuses and trends.
Security Allows set up and control of access rights to the tool itself,
projects, and artifacts within the tool.
Traceability Allows establishment of links and dependencies between
related entities (e.g., requirements, defects, etc.).
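The "process management support" feature above amounts to a workflow state machine: a defect report may only move along transitions defined by the organization. The states and transitions below are illustrative assumptions, not a standard defect lifecycle.

```python
# Allowed workflow transitions for a defect report (illustrative only).
ALLOWED_TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Fixed", "Rejected"},
    "Fixed": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
}

class Defect:
    """A defect report whose status may only change along defined transitions."""

    def __init__(self, summary):
        self.summary = summary
        self.status = "New"
        self.history = ["New"]  # change history, as a real tool would keep

    def transition(self, new_status):
        if new_status not in ALLOWED_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition: {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

bug = Defect("Login button unresponsive")
for state in ("Assigned", "Fixed", "Verified", "Closed"):
    bug.transition(state)
print(bug.status, bug.history)
```

Attempting an out-of-order move, such as closing a brand-new defect, would raise an error, which is exactly the constraint-enforcement behavior the table describes.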

3.1.8 Code Coverage Analyzers


There are two types of test coverage: requirements coverage and code coverage.
Requirements coverage is typically addressed with tools such as requirements
management tools and test management tools. Code coverage, however, is typically
addressed by a separate set of tools known as Code Coverage Analyzers. Code
coverage analyzers reveal how much of the code is executed while the application
is being functionally exercised, mostly during the Testing Phase's test execution.
Below is a list of some fundamental Code Coverage Analyzer features.
Table 3-9: Code Coverage Analyzer Features

Code Coverage
Analyzers Feature Description
Flexible Coverage The ability to measure code coverage in various ways
Measurements including: Statement Coverage, Condition Coverage, Path
Coverage, Decision Coverage, Module Coverage, Class

Coverage, Method Coverage, etc.
Reporting Provides the ability to generate reports relative to code
coverage.
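A statement coverage measurement can be sketched with Python's standard sys.settrace hook, which is the kind of line-level instrumentation such analyzers rely on. The function absolute_value is a made-up unit under test; the point is only that coverage grows as tests exercise more branches.

```python
import sys

def absolute_value(x):
    """Made-up unit under test with two branches."""
    if x < 0:
        return -x
    return x

def lines_executed(test_inputs):
    """Record which lines of absolute_value run while the tests execute."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code.co_name == "absolute_value":
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)
    for args in test_inputs:
        absolute_value(*args)
    sys.settrace(None)
    return executed

one_branch = lines_executed([(5,)])            # only the non-negative path
both_branches = lines_executed([(5,), (-3,)])  # both paths
print(f"lines hit: {len(one_branch)} vs {len(both_branches)}")
```

Real analyzers report this as a percentage of all statements and extend the same idea to the condition, path, decision, and class/method measurements listed above.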

3.1.9 Functional System Test Automation


Functional System Test Automation tools are used for automating the functional
testing of an application. These tools may be what are considered GUI drivers; the tool
interacts with the application‘s graphical icons, indicators, and controls (e.g., textboxes,
pushbuttons, etc.) thereby communicating with the application in a way that closely
resembles actual user interactions. The tools may also have the capability of driving the
application without the GUI (see Section 2.2 for more information).
Below is a list of some fundamental Functional System Test Automation tool features.
Table 3-10: Functional System Test Automation Tool Features

Functional System
Description
Test Tool Feature
Customization Provides easy customization of the tool‘s features.
Cross Platform Allows the tool to function across different platforms or
Support browsers.
Test Language Supports ability to script tests in a standard programming
language.
Record & Playback Supports ability to dynamically capture actions that are
performed manually on an application, and replicating those
actions in the form of code that can be replayed on the
application.
Test Control Allows the user to manage when and by what trigger
automated test(s) are run and the results stored.
Distributed Test Allows setup and administering test assignments to remote
Execution machines.
Test Suite Recovery Indicates the ability of the tool to recover from unexpected
errors.

Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Reporting Provides the ability to generate reports relative to defect
statuses and trends.
Vendor Support Indicates the availability of technical support from the tool‘s
vendor.
Licensing The vendor provides different licensing solutions that meet the
needs of the customer.
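The Record & Playback feature can be illustrated with a small dispatch loop: actions captured during a manual session are stored as script steps and later replayed against the application. LoginForm is a hypothetical stand-in for a real GUI, and the (action, target, value) step format is invented for illustration.

```python
class LoginForm:
    """Hypothetical stand-in for an application GUI."""

    def __init__(self):
        self.fields = {"username": "", "password": ""}
        self.logged_in = False

    def set_text(self, field, value):
        self.fields[field] = value

    def click(self, button):
        if button == "login" and all(self.fields.values()):
            self.logged_in = True

# A "recorded" session: each step is (action, target, value).
recorded_script = [
    ("set_text", "username", "tester"),
    ("set_text", "password", "s3cret"),
    ("click", "login", None),
]

def playback(app, script):
    """Replay recorded steps by dispatching each action onto the application."""
    for action, target, value in script:
        method = getattr(app, action)
        if value is None:
            method(target)
        else:
            method(target, value)

form = LoginForm()
playback(form, recorded_script)
print("logged in:", form.logged_in)
```

A commercial tool records real GUI events rather than method calls, but the replay principle, a stored script driven step by step against the application, is the same.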

3.1.10 Performance Test Automation


Performance Test Automation tools are used for automating the performance testing
of an application. These tools are typically used for load and stress testing as opposed
to single person performance (which can be accomplished via a Function Test
Automation tool). These tools help to identify bottlenecks relative to application
processing times, as well as the volume of data an application can effectively process.
Below is a list of some fundamental performance test automation tool features.
Table 3-11: Performance Test Automation Tool Features

Performance Test
Automation Tool Description
Feature
Customization Provides easy customization of the tool‘s features.
Dynamic Virtual User Indicates the ability to dynamically adjust the number of virtual
Adjustments users accessing some component or the entire system at any
given time.
Cross Platform Allows the tool to function across different platforms or
Support browsers.
Standard Test Supports ability to script tests in one or more standard
Protocols languages.
Record & Playback Supports ability to dynamically capture actions that are
performed manually on an application, and replicating those
actions in the form of code that can be replayed on the

application.
Test Control Allows the user to manage when and by what trigger
automated test(s) are run and the results stored.
Configurable Think Designates the amount of time each virtual user "waits" before
Times performing the next action.
Distributed Test Allows setup and administering test assignments to remote
Execution machines.
Session Handling Tracks performance of session handling techniques such as
cookies and URL rewriting.
Test Suite Recovery Indicates the ability of the tool to recover from unexpected
errors.
Integration Interfaces and integrates with other SDLC tools (e.g., SCM
tools, modeling tools, test tools, etc).
Reporting Provides the ability to generate reports relative to defect
statuses and trends.
Vendor Support Indicates the availability of technical support from the tool‘s
vendor.
Licensing The vendor provides different licensing solutions that meet the
needs of the customer.

3.2 Tool License Categories


Tools may be grouped into many different license categories, but the most common
categories include:
Freeware
Shareware
Open Source
Commercial

Freeware, shareware, and open source are all, at some level, available for free usage
but each has distinct characteristics. Freeware is software which can be downloaded,
used, and copied at no cost, with no time restrictions, but is typically closed source.
Shareware is similar to freeware in that the user can download and try the software for
free. The difference, however, is that the free use is normally accompanied by
restrictions. The restriction may be a time restriction which makes the software available
for a set amount of time before requesting payment for a registration number. The
registration number typically unlocks unlimited use of the tool's full functionality. Without
obtaining a registration number, however, restricted or partial functionality is often still
available.
Open source software is computer software for which the source code is made
available under a copyright license that typically allows users to view and/or change the
software. Many open source licenses meet the requirements of the Open Source
Definition16 defined by the Open Source Initiative. Users are free to use, modify, and
re-distribute the source code.
Commercial software is sold as a retail product under restricted licenses. It normally
comes with retail software, documentation, and installation instructions. The user must
be aware of schedules for updates, restrictions on distribution, the maximum number of
concurrent users, privacy provisions, and other constraints. Reading the fine print in the
license may be difficult and confounding, but it is necessary.

3.3 Resource References for Skill Category 3


Test Tools and Features
Elisabeth Hendrickson. Making the Right Choice. Available at
http://testobsessed.com/wp-content/uploads/2007/01/mtrc.pdf
Randall W. Rice. Free and Cheap Test Automation. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedS
oftwareTestingMagazine_June2010.pdf
ATI. Automation Honors Awards. Available at
http://atihonors.automatedtestinginstitute.com

16 http://opensource.org/docs/osd


ATI. Automated Test Tool Index. Available at
http://tools.automatedtestinginstitute.com
Test Tool License Types
AST Magazine Open Sourcery Department. Ajar Source: It‘s Free…For a Fee.
Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedS
oftwareTestingMagazine_December2010.pdf

Skill Category 4: Test Automation Frameworks

Primary role(s) involved: Lead Automation Architect, Cross Coverage Coordinator,


Automation Engineer
Primary skills needed: Assessing the project environment, setting automation scope,
understanding automation frameworks

Automated testing is most effective when implemented within a framework. Frameworks
are often thought of as the collective structures that compose unit testing tools. This
section focuses on frameworks that may be defined as the physical structures used for
test creation and implementation, as well as the logical interactions among those
structures. This definition also includes the set of standards and constraints that are
established to govern the framework's use.
Over the years, automated testing (and thus automated test frameworks) has evolved,
becoming increasingly defined and sophisticated with each new phase. Each phase
offers a new level of structure and includes framework types with individual advantages
and challenges; thus each remains relevant and should be assessed based on the
environment in which it is to be implemented. These levels generally apply to
Functional Test Automation but in some instances may extend to Unit Test
Automation.
The reason for the growth in automation frameworks lies in the fact that the total cost of
an automation effort can greatly increase if the framework in which automation occurs is
not appropriately defined. Figure 4-1 shows that total automation cost is composed
of both development costs and maintenance costs. As the automation framework
becomes more defined, scripting increases in complexity. While this causes
development costs to increase, it also causes maintenance costs to decrease, provided
the personnel responsible for maintaining and supporting the framework are sufficiently
versed in that framework.

[Figure 4-1: Automation Cost17 — development cost rises and maintenance cost falls
as framework definition increases; their sum, the total automation cost, reaches its
minimum (the equilibrium cost Ce) at framework definition level AF.]

This skill category addresses the three framework levels, the different types of
frameworks associated with each level, and how your choice of framework may be
affected by the scope that is defined for automation in a given organization. The level of
expertise required for implementing an automation framework may vary depending
upon whether the framework is built in-house or acquired (commercial or open source),
but the same basic concepts and understanding remain important.

4.1 Automation Scope


Figure 4-1 illustrates that as maintenance becomes increasingly important, the
automation framework will need to be more advanced (refer to Section 4.2 for more
information on making a framework more advanced) in order to reduce total automation
costs. The equilibrium point, Ce, in the figure is the point at which the perfect balance of
development and maintenance costs is reached, thereby producing the lowest possible
total automation cost. AF is the corresponding framework definition equilibrium point; more
discussion on this follows later in this section.

17
Adapted from a diagram in Fewster, Mark, and Dorothy Graham's Software Test Automation: Effective
Use of Test Execution Tools, p. 68.

Skill Category 4: Test Automation Frameworks
Very often, organizations choose the framework that requires the least investment in
terms of initial development, regardless of the scope and long-term goals the
organization has for test automation. With only development costs being considered,
decision-makers tend to equate the lowest-cost framework with the
simplest, least complex automation framework: linear frameworks often driven by
Record & Playback. The logic in this decision is flawed, however. The primary decision
point should be: which framework is most appropriate, effective, and productive in the
organization's application development environment and processes? Once the decision-
makers and the test team understand that maintenance costs must also be factored into
the mix, the framework selection process becomes more involved.
The level of framework definition at which Ce occurs is identified as AF on the
framework definition axis of the figure. AF is the framework definition
equilibrium point: the point that identifies how defined and structured the automation
framework should be in order to reach the point of minimum cost, that is, the point where
investment is less than benefit. This equilibrium point can be defined in terms of the
scope of the automation effort, because the scope is what dictates how important
maintenance is to the automation effort. The automation scope may be defined in terms
of several basic factors, and therefore AF may also be expressed by the factors shown in
the following equation:

AF = AN + VN + BN + TN + CN + EN + Ti + P + R

AN = Number of applications expected to be tested by your organization
VN = Number of versions/releases that each application is expected to have
BN = Number and nature of builds/application changes that each application
version is expected to have
TN = Number of tests that you're expecting to automate for each application
CN = Number of configurations in which an application may need to be tested
EN = Number of environments in which tests will be executed
Ti = Time period over which the tests will need to be maintained
P = Organizational process maturity
R = Test team's technical level


Do not assume the factors are numbers to be plugged in; this equation illustrates the
relationship of the framework choice to automation scope. The greater the scope items
– number of applications, versions, tests to be automated, configurations, environments,
the number and nature of builds, the time period over which automated tests will need
to be supported, the organizational maturity, and the test team's technical level – the more
precisely defined the framework may need to be. It is up to the test team to analyze this
information and then make a determination about the type of architecture needed in
order to minimize automation costs. This choice should be supported by addressing
many, if not most, of these factors.
Let us take a moment and examine each of the factors.
Number of applications under test (AUT) (AN)
A software project may consist of more than one AUT. When responsible for automating
tests for several applications, it is important to take into consideration the characteristics
– functionality, environment, users, etc. – that the applications have in common and
how they are related, and design the automated test framework accordingly.
Number of AUT releases and versions (VN)
Automated testing is often introduced into the testing life cycle to verify application
functionality across multiple application versions. It is therefore important to make the
automated tests scalable, flexible, and modular enough to minimize the impact of
changes in application functionality.
Number and nature of AUT builds (BN)
Just as it is important to make the automated test scalable, flexible, and modular
enough to withstand code and environment changes introduced in new application
versions, it is important to make sure the tests can also withstand changes introduced
by multiple builds within a single version.
Number of tests to automate (TN)
The larger the number of tests that will be automated and maintained, the more robust
and flexible the automated test framework has to be. The framework may need to be
flexible enough to quickly group and execute tests by functional area or priority. In
addition, the framework will need to be robust enough to prevent a failure in one test
from negatively impacting the entire suite of tests being executed.
Number of application test configurations (CN)
Theoretically, if an application is required to be compatible with several different
browser/operating system configurations, it should behave the same on all of those
configurations. In reality, however, there may be some subtle

and not-so-subtle differences among various configurations that the automated test
environment should be equipped to handle.
Number of test environments (EN)
A software project may require testing in different application development and
deployment environments. The most common include development, functional testing,
staging, and production environments, each of which has its particular
dependencies and configurations.
Each environment may be on a different server, connect to a different database, and
limit the types of testing that can be executed. Setting up tests for each environment
manually is inefficient and is likely to inject errors into the test protocols; the time and
energy spent reconciling such inconsistencies is a significant example of cost that
automation can avoid. The automation environment should therefore account for these
variations in an efficient way.
Time period over which tests are maintained (Ti)
Time is often one of the biggest enemies of test automation. Time causes test
engineers to forget how scripts work as well as the details of the applications they
validate. In addition to developing the proper documentation, it is important to design
the automated test framework with some basic standards (see Skill Category 5:
Automated Test Framework Design for more information).
Organizational process maturity (P)
The maturity of the organization's processes, including communication and
documentation processes, is extremely important for making decisions about the
overall scope for test automation. All of the other elements of the scope may lean
towards a very defined framework, but if the organization is not disciplined
and mature enough to work in a very defined, complex framework, then it may be
advisable to limit the scope of the automation effort and use a simpler framework.
Test engineer's technical level (R)
The test engineers' technical level carries the same weight in developing and executing
the automated test processes and tasks. Despite what the other scope elements
predict, if the test engineers do not have an adequate technical background to support
the framework of choice, then it is wise to reduce the scope and implement a less
defined framework.


4.2 Framework Levels


Short Answer
Unit test frameworks often follow a basic structure used for executing tests against
small blocks of code. This section focuses, however, on the levels of frameworks
that are largely associated with functional test automation.
Level 1 automation is best known for having no strong framework. Driven by a "linear"
approach to script development, the Record & Playback feature is often the hallmark of
this level. As an approach, it is highly unreliable and, due to the amount of rework
involved, not very efficient.
Level 2 learns from the problems of the first level. It involves creating a framework in
which to automate tests. It also incorporates the concepts of functional decomposition,
and separates test data from automation code through greater use of parameterization.
Level 3 of automation builds upon the concepts of the second level. It strongly depends
on functional decomposition but evaluates the functions at a much higher level of
abstraction. This is accomplished by creating highly technical automated scripts and
components that interpret less technical scripts and components.

4.2.1 Unit Test Frameworks


Unit test frameworks typically support the unit testing of application code, although they
are also used for the automation of functional tests. Popular incarnations of unit test
frameworks are those collectively known as xUnit frameworks, which are
available for many programming languages and testing platforms. xUnit frameworks
typically have the following components:
Test Fixtures – The set of preconditions that must be met prior to execution of
the body of the test (see Section 3.1.4 Unit Testing for more information).
Test Suites – A set of tests that share the same fixtures.
Test Body – The portion of the test that executes the main test steps.
Assertions – Functions that verify the behavior of the unit under test.
Unit test frameworks that follow these xUnit basics are typically composed of classes
that are structured as follows:

setup();
<body of the test>
teardown();
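As a sketch of this structure, Python's unittest (an xUnit-family framework) expresses fixtures, bodies, assertions, and suites as follows; the class, test, and variable names here are illustrative, not part of any specific application:

```python
import unittest

class BalanceTests(unittest.TestCase):
    def setUp(self):
        # Test fixture: preconditions established before each test body runs
        self.balance = 100

    def test_deposit(self):
        # Test body: the main test steps
        self.balance += 50
        # Assertion: verifies the behavior of the unit under test
        self.assertEqual(self.balance, 150)

    def tearDown(self):
        # Fixture cleanup performed after each test body completes
        self.balance = 0

# Test suite: a set of tests that share the same fixtures
suite = unittest.defaultTestLoader.loadTestsFromTestCase(BalanceTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test method is wrapped by the same setUp/tearDown fixture pair, mirroring the setup(); &lt;body of the test&gt;; teardown(); structure shown above.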

4.2.2 Level 1 Automation


The first level of test automation simply uses the linear framework for automated test
development. The linear framework typically yields a one-dimensional set of automated
tests in which each automated test is treated simply as an extension of its manual
counterpart. Driven mostly by the use of Record & Playback, all components that
are executed by a linear script largely exist within the body of that script. Linear scripts
include little or no modularity, reusability, or any other quality attribute. Record &
Playback should, in most cases, not be treated as a framework or serious approach for
an automation effort meant to span any real length of time. Linear scripts not solely
driven by Record & Playback, however, may be useful in environments with a very small
scope.
While Record & Playback is not reliable as an approach to building a framework, it may
be useful as a technique. For example, by doing some simple recordings in the
application, test engineers can automatically capture objects and evaluate the object
map (see Section 9.2 Object Maps for discussion on this) to understand the properties
that are used to uniquely identify each object. By playing back those recordings under
various conditions, much may be ascertained about the various dynamics associated
with those objects. A test automator may find out that certain properties require the use
of regular expressions, or that different properties need to be used. Based on how
objects are recognized, there may be a need to alter the automation approach in order
to establish proper verification points. In this situation, using Record & Playback helps
the test automator quickly understand the object dynamics so that many of the object-
related problems encountered during test automation may be avoided.
Thus, Record & Playback may be used within a proper framework to capture most of
the basic actions that will take place in the application. By automatically capturing these
actions and placing them in reusable procedures (as opposed to just capturing the
information directly to a linear script), the test automator has a much greater ability to
implement good coding standards, exception handling, and modularity. The automated
tests may then be built in a robust manner by combining the procedures to produce the
desired test sequence.


Linear Script Advantages


Quick development – Little planning and consideration of quality attributes goes
into creating linear scripts. Couple this with the fact that Record & Playback is
often the driving force behind linear script development, and it is easy to see that the
cost of development in time, staff, and money can be relatively low.
Smaller tool learning curve – Test automation requires knowledge of the
automation tool's features and language, as well as knowledge of how the tool
handles and interacts with the AUT. The use of Record & Playback in the
creation of linear scripts dynamically creates code that corresponds to actions
performed in the application. This code may then be studied to clarify and
understand the tool's language syntax as well as the tool's
interaction with the AUT.
Quickly gather information about application dynamics – This advantage is
closely related to the smaller tool learning curve advantage. The use of Record &
Playback provides information about the application dynamics, such as how
object properties (see Skill Category 8: Programming Concepts, particularly
Section 8.6 for more information on object properties) change as activity in the
application is altered.
Script Independence – Given that all the components executed by a script exist
within that script, there is little concern about making script modifications that
may unknowingly affect some other script.
Simplifies error location – The more advanced an automation framework
becomes, the more complex locating the source of an error within the code
becomes, particularly for those who are unfamiliar with the makeup of the
automation framework. Given that all components of a linear script exist
within that script, there is little question about where an error occurred within the
code.
Linear Script Disadvantages
Improper playback – When relying heavily on Record & Playback, recorded
scripts will often not play back properly. This is because the Record & Playback
feature is not analytical: it does not evaluate the application objects and their
behaviors, or make decisions on how best to interpret and manage them. Nor
does it effectively handle timing and synchronization concerns. Test engineers
usually introduce a series of patchwork solutions to fix the playback issues;
these rarely provide any level of reusability and usually require more effort to
implement and maintain than the value they provide.

Redundancy – Linear scripts do not take advantage of reuse. Therefore, if more
than one script performs the same or similar functionality, the functionality will be
duplicated in each respective script. When the application changes over
subsequent releases, excessive maintenance will be the result.
One-dimensional scripts – Little flexibility is offered in altering the way linear
scripts may be executed. Scripts can only be run at one level, in one
location, and in one way. If a test manager wants the test automator to execute a
subset of the total test suite based on defined priorities and risks, or to execute the
tests in a different environment or in a different sequence, the automator would
have to perform a time-consuming, run-time analysis of the scripts to make it
happen. This necessarily decreases the amount of time available for script
execution and analysis.
Script is difficult to read – In the absence of reusable components, linear
scripts are replete with code or automation statements. This makes the script
difficult to read and therefore tedious to analyze and maintain. If a statement in
the script fails, it is often difficult to determine what application functionality was
affected by the failure. Likewise, when it becomes necessary to make changes to the
script, it is more tedious to determine where the changes need to be
made.
Requires higher degree of application knowledge – In order to maintain linear
scripts, a higher degree of application knowledge is required, because
linear scripts include little modularity. Modularity helps to clarify the overarching
functionality of blocks of statements; the sequential, unblocked statements in
linear scripts require more careful study to use and manage. So when it
comes to analyzing and debugging a script, the test automator must first invest a
substantial amount of time understanding what a given block of
statements does in order to effectively maintain the script.

4.2.3 Level 2 Automation


This is the middle ground for test automation frameworks, and may be simple or quite
well-defined. This framework level should be considered whenever maintenance is a
factor. It is important to have a strong understanding of this level since level 3
frameworks are also based on level 2 concepts. The two frameworks that fit into this
level are:
Data-driven
Functional Decomposition


Most Level 2 frameworks are a hybrid of both Data-driven and Functional
Decomposition, but because they can serve independently of each other, they are
discussed independently.

4.2.3.1 Data-driven Frameworks


Frameworks built on data-driven scripts are similar to those built on linear scripts in that
most of the components executed by a data-driven script largely exist within the
body of that script. The difference is seen in how the data is handled. The data used in
data-driven scripts is typically stored in a file that is external to the script. By
parameterizing application input fields and associating the external data with its respective
parameter, the run-time data is no longer hard-coded in the script but rather obtained
from the external file. This allows the same blocks of code to be used with different
data. For example, Figure 4-2 illustrates a data-driven construct that reads data from the data
table in Table 4-1. The construct executes three times – once for every row in the
data table – using different data each time. The first execution inputs "John" at line 3
and "johnPassword" at line 4; the second execution inputs "Lee" at line 3 and
"leePassword" at line 4; the third execution inputs "Mattie" at line 3 and
"mattiePassword" at line 4.

1 Open Data File
2 Execute Data rows to end of file (EOF)
3   Input <Username> table data into the Login application textbox
4   Input <Password> table data into the Password application textbox
5   Click Login button
6   Next Data File row
7 End Loop

Figure 4-2: Data-driven Construct Example
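As a sketch only, the construct in Figure 4-2 can be expressed in Python. The three callables stand in for whatever input and click operations an automation tool actually provides, and the csv/io modules simulate the external data file of Table 4-1:

```python
import csv
import io

# Simulated external data file; in practice this would be a real file,
# e.g. opened with open("login_data.csv")
data_file = io.StringIO(
    "Username,Password\n"
    "John,johnPassword\n"
    "Lee,leePassword\n"
    "Mattie,mattiePassword\n"
)

def run_login_tests(input_username, input_password, click_login):
    """Execute the Figure 4-2 construct once per data row."""
    executions = 0
    for row in csv.DictReader(data_file):   # 2: execute rows to EOF
        input_username(row["Username"])     # 3: input <Username>
        input_password(row["Password"])     # 4: input <Password>
        click_login()                       # 5: click Login button
        executions += 1                     # 6: next data file row
    return executions                       # 7: end loop

# Demonstration with print stubs in place of real tool calls:
count = run_login_tests(
    lambda u: print("username:", u),
    lambda p: print("password:", p),
    lambda: print("Login clicked"),
)
```

The loop body runs three times, once per data row, without any of the test data being hard-coded in the script.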

Data-driven automation is often considered a technique as opposed to a framework.
When used as a technique, test engineers may include several data-driven constructs
within a single test script. Using the data-driven approach as a framework doesn't fully
address most of the challenges that exist in linear scripts, however. Reusability is
introduced and is useful under certain circumstances, but the reuse is restricted to a
single script whose functionality will be implemented the same way, only with different
data.


Table 4-1: Sample Data Table

Username Password
John johnPassword
Lee leePassword
Mattie mattiePassword

Data-driven Script Advantages


Reusability – Data-driven scripts offer a relatively simple way to introduce
reusability to automated tests. The same block of code can be reused with
different data, making it possible for many test cases to be executed with minimal
scripting.
Simple to implement – Data-driven scripts require minimal updates to existing
scripts. Implementation often involves no more than substituting parameters for
hard-coded values and placing looping structures around the code.
Data-driven Script Disadvantages
Redundancy – While data-driven scripts allow for some degree of reusability,
there is still a large degree of redundancy across multiple scripts that perform
similar actions. As with linear scripts, functionality may be duplicated in each
respective script, making the scripts susceptible to excessive maintenance.
One-dimensional scripts / Script is difficult to read / Requires higher degree of
application knowledge – Because most of the components executed by a
data-driven script still exist within the body of that script, these disadvantages of
linear scripts (see Section 4.2.2) apply equally to data-driven scripts: the scripts
can only be run at one level, in one location, and in one way; the scripts remain
replete with automation statements that are tedious to analyze and maintain; and
maintaining them still requires a higher degree of application knowledge.

4.2.3.2 Functional Decomposition Frameworks


Functional decomposition refers broadly to the process of producing modular
components (i.e., user-defined functions) in such a way that automated test scripts can
be constructed to achieve a testing objective by combining these existing components.
The modular components are often created to correspond with application functionality,
but many different types of user-defined functions can be created. These function
types include:
Class-level functions – Class-level functions perform pointed activities on a
specific type (or class) of object. For example, a function called "Editbox_Input"
may be created simply to enter data into an application textbox. These functions
can be identified by examining the application's object class list.
Utility functions – These are functions that represent fundamental framework
actions or the basic functionality of a specific application under test. For example,
a function that is responsible for logging into the application would be considered
a utility function. These functions can be identified from the requirements or from
the manual test cases related to the groups of tests that are slated for automation.
Navigation functions – Most applications have several main areas to which the
test engineer navigates many times during testing. For example, an application
may have a Main (or home) page, a FAQ (frequently asked questions) page, an
Account Services page, and several Orders pages. Navigation for these areas
follows a set of expected paths. Of course, these paths may also be traversed in
reverse, that is, navigated backwards:
Table 4-2: Navigation Function Commonalities


Page               Navigation
Main               –
FAQ                Main → FAQ; Main → Account Services → FAQ; Main → Orders → FAQ
Account Services   Main → Account Services
Orders 1           Main → Orders → Orders 1
Orders 2           Main → Orders → Orders 2

This example reveals two navigation paths that are used multiple times:

Main → Account Services
Main → Orders

These paths are fairly basic and could very easily have been identified at the beginning
of the automation process. Building a similar table and using it to point out the
most redundant navigation paths identifies the basic navigation paths in the
environment for which functions need to be created.
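The table-building exercise described above can also be sketched programmatically. Assuming the paths of Table 4-2 are captured as lists of page names, counting every multi-step prefix surfaces the most redundant navigation paths:

```python
from collections import Counter

# Expected navigation paths from Table 4-2
paths = [
    ["Main", "FAQ"],
    ["Main", "Account Services", "FAQ"],
    ["Main", "Orders", "FAQ"],
    ["Main", "Account Services"],
    ["Main", "Orders", "Orders 1"],
    ["Main", "Orders", "Orders 2"],
]

# Count every prefix of two or more pages; prefixes that repeat are
# candidates for dedicated navigation functions.
prefix_counts = Counter(
    tuple(path[:i]) for path in paths for i in range(2, len(path) + 1)
)
redundant_paths = sorted(p for p, n in prefix_counts.items() if n > 1)
```

For this data, redundant_paths contains ("Main", "Account Services") and ("Main", "Orders"), matching the two repeated paths identified above.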
Error-handling functions – Error-handling functions are created to
inform the test script (and test engineer) how to respond when certain
unexpected events occur during testing.
Miscellaneous functions – These are functions that don't fall under any of the
other categories.
The Functional Decomposition framework can vary from relatively low to high
complexity based on the level at which functions are created. Functions may be created
for simple tasks such as menu clicks, or may be created for complex functional
activities, complex error handling routines, and complex reporting mechanisms. (See
section 6.2.1 for more information.)
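To make the idea concrete, here is a minimal sketch (not any particular tool's API) in which class-level functions and a utility function are combined into a test; the ui dictionary is a stand-in for the application under test:

```python
# Stand-in for the application under test; in a real framework these
# functions would wrap the automation tool's object-interaction API.
ui = {}

def editbox_input(field, value):
    """Class-level function: enter data into any textbox-type object."""
    ui[field] = value

def button_click(name):
    """Class-level function: click any button-type object."""
    ui["last_clicked"] = name

def login(username, password):
    """Utility function: the application's basic login activity,
    composed from class-level functions."""
    editbox_input("Username", username)
    editbox_input("Password", password)
    button_click("Login")

# An automated test is then just a combination of existing components:
login("Mattie", "mattiePassword")
```

The test script itself contains no low-level automation statements; those live in the reusable components, which is what makes the decomposed scripts easier to read and maintain.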


Figure 4-3: Functional Decomposition

Functional Decomposition Script Advantages


Increased reusability – Reusability is greatly increased with the introduction of
functional decomposition, because reusable functions may be created and made
available for use by multiple tests. Redundancy may be decreased for all tests
within a given application, and may also be reduced across multiple applications,
depending on the level of abstraction at which the functions are created. With
increased reusability, maintainability is greatly improved.
Script independence – Although the Functional Decomposition framework may
use multiple external components, the tests themselves are not being reused.
This supports reusability while maintaining script independence.
Earlier script development – Functional decomposition makes it possible, in
some instances, to begin developing automated tests even before the application is
delivered. Using information gathered from requirements or other
documentation, placeholder components can be created and used for developing
automated tests in a top-down development approach (see Section 8.6 for more
information). Once the application becomes available, the components can be
appropriately updated.

The script is easier to read – Automated tests are easier to maintain when
broken into small components because it is simpler to visually determine what
the script is trying to accomplish.
Increased standardization – An increased number of reusable framework
components yields increased standardization. Standardization aids script
development because it removes much of the guesswork involved in creating
scripts. In addition, standardization helps with script maintenance because it
decreases the guesswork involved in figuring out what a script does and how best to
fix it.
Decreases technical nature of application automation – Subject matter experts,
business analysts and expert users are able to develop useful automated tests
without acquiring a high degree of technical coding or tool skills.
Error handling is easier to introduce – Patchwork error-handling solutions on a
script-by-script basis are difficult to introduce and maintain. With reusable
components, error handling that reaches multiple scripts can be introduced. This
will ultimately improve the effectiveness of unattended execution of the test
scripts.
Functional Decomposition Script Disadvantages
Required technical expertise – While the technical nature of creating automated
tests within the framework is decreased, the technical skills required to create
and maintain the framework itself must be stronger than those required for less
defined frameworks. There are numerous dependencies and relationships that
must be understood and maintained in addition to the advanced tool components
and structures.
Intuition is less useful – In order to effectively implement an advanced
framework, reliance on intuition must be reduced while reliance on standards
must be increased. While standardization is a positive by-product of Functional
Decomposition, it also poses a challenge in ensuring personnel are aware of
standards, understand them, and are able to effectively implement them.
Increased documentation will probably be required to identify framework
features, particularly documentation that chronicles the functions that exist as
part of the framework.
Maintenance becomes more complex – As the complexity of the framework
increases, so does the complexity of the maintenance that is required. With linear
scripts, maintenance is confined solely to the script that breaks. While this may
result in excess maintenance for linear scripts (because a single application
change can make multiple scripts susceptible to failures), it also keeps
maintenance a little less complex. With Functional Decomposition, maintenance
is often required for both the framework and specific scripts. While this may
reduce the amount of maintenance required, it also makes maintenance a little
more complex.

4.2.4 Level 3 Automation


A Level 3 framework is the most defined framework, and should be considered when
the automation scope is relatively high, and when at least one of the automation
personnel has strong technical and logical skills, and a high degree of proficiency in the
automation tool being used to develop the framework. The two frameworks that fit into
this level are:
Keyword Driven
Model-based

4.2.4.1 Keyword-Driven Frameworks


Often called "table-driven," the keyword-driven framework tends to be more application-
independent than other frameworks. This framework processes automated tests that
are developed in data tables with a vocabulary of keywords that are independent of the
automated test tool used to execute them. The keywords are associated with
application-independent functions and scripts that interpret the keyword data tables
along with their application-specific data parameters. The automated scripts then execute
the interpreted statements against the application under test. Keyword-driven frameworks rely
heavily on functional decomposition and data-driven framework concepts. (See section
6.2.1 for more information.)
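A minimal interpreter sketch illustrates the idea; the keyword vocabulary, the table layout, and the screen dictionary standing in for the application are all illustrative assumptions, not features of any specific tool:

```python
screen = {}    # stand-in for the application under test
failures = []  # verification results collected during the run

# Application-independent functions that the keywords map to
def enter_text(field, value):
    screen[field] = value

def click(control):
    # Simulated application behavior: logging in updates the status bar
    if control == "Login":
        screen["StatusBar"] = "Welcome " + screen.get("Username", "")

def verify_text(field, expected):
    if screen.get(field) != expected:
        failures.append((field, expected, screen.get(field)))

KEYWORDS = {"enter_text": enter_text, "click": click, "verify_text": verify_text}

# Keyword data table: each row mirrors a manual test procedure step
test_table = [
    ("enter_text", ("Username", "Lee")),
    ("enter_text", ("Password", "leePassword")),
    ("click", ("Login",)),
    ("verify_text", ("StatusBar", "Welcome Lee")),
]

def run(table):
    """Interpret a keyword data table row by row."""
    for keyword, args in table:
        KEYWORDS[keyword](*args)

run(test_table)
```

Only the functions behind the keywords are tool- and application-specific; the table itself reads like the manual test procedure it mirrors.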
Keyword Driven Advantages
Increased reusability – Reusability is further increased with a keyword-driven
framework because most of the functions are created in such a way as to be
reusable not only for multiple tests within a single application but also for tests across
multiple applications. Redundancy may therefore be decreased across all
applications that an organization is responsible for automating.
Earlier script development – This framework increases the ability for automation
to begin before the application is delivered. Using information gathered from
requirements or other documentation, keyword data tables can be created that
mirror corresponding manual test procedures.

The script is easier to read – Keyword data tables are often easier to read than
regular test scripts because the keyword data tables mirror manual test
procedures. The keywords are typically verb-like phrases that make reading a
keyword data table similar to reading a collection of sentences, which is easier
than reading code statements that don't mirror natural language.
Increased standardization – Increased reuse of framework components brings
increased standardization. The added advantage that the Keyword
Framework has over Functional Decomposition, however, is that standards are
imposed by default through the implementation of a keyword framework.
Provided that the framework components are created with appropriate standards,
they will be invoked in the keyword data tables with every use of the framework's
keywords. Standardization helps with script maintenance because it decreases
the guesswork involved in figuring out what a script does and how best to fix it.
Error handling easier to introduce – Patchwork error-handling solutions implemented
on a script-by-script basis are difficult to introduce and maintain. With reusable
Keyword framework components, error handling can be introduced that reaches
multiple scripts across multiple applications. This will ultimately improve the
effectiveness of unattended execution of the test scripts.
Decreases technical nature of application automation – Working with keyword
data tables for everyday application automation is substantially less technical than
working with code statements. Therefore, individuals with less technical skill
(coding skill, in particular) can be brought onto the team to help create automated
tests.
Greater traceability to manual test cases – Given the fact that keyword data
tables so closely resemble a manual test procedure, it becomes simpler to trace
actions in automated tests to actions in manual tests. In addition, there will be
greater reuse of manual test procedures.
Keyword Driven Framework Disadvantages
Required technical expertise – While the technical nature of creating automated
tests within the framework is decreased, the technical skills required to create
and maintain the framework itself must be stronger than those required for
functional decomposition frameworks. There are numerous dependencies and
relationships that must be understood and maintained in addition to the
advanced tool components and structures.
Intuition is less useful – In order to effectively implement a keyword framework,
reliance on intuition must be reduced while reliance on standards developed by

the team to support the use of the framework must be increased. While some
standards are automatically imposed with this type of framework, many
standards are not; standards such as how and when certain keywords are
created and used, and how objects are named and identified. This then requires
that the test team commit to ensuring that its members (as well as the
development team) are aware of standards and their sources, understand them,
and are able to effectively implement them. Increased documentation will
probably be required to identify framework features, particularly documentation
that catalogs the keywords available as part of the framework.
Increased management support – Management support is a challenge for any
automation effort, but it is particularly difficult with keyword-driven frameworks.
For this framework, committed management support is imperative to assure that
the time and staffing necessary for creating and maintaining the structures,
documentation, and personnel (both technical and non-technical) are available.
Restrictive for technical staff – For technically adept test team members who are
tasked with day-to-day automation of a software application, keyword-driven
frameworks may be overly restrictive. They may be perceived as an entity that
"ties their hands" into automating in a "standard" way at the expense of
automating in the most efficient way for a particular application, or a particular
feature within an application. Keyword frameworks typically require increased
"public relations" work to sell the approach to both stakeholders and
management.

4.2.4.2 Model-based Framework


Model-based frameworks go beyond creating automated tests that are executed by the
tool. These frameworks are typically "given" information about the application, and the
framework "creates" and executes tests. Test automators describe the features of an
application, typically through state models which depict the basic actions that may be
performed on the application, as well as the broad expected reactions. Armed with this
information, the framework dynamically implements tests on the application.
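The "describe a model, let the framework derive the tests" idea can be sketched with a toy state model. The states, actions, and walk length here are invented for illustration; a real model-based tool would drive the actual application under test and check its reactions at each step.

```python
# Toy model-based testing sketch: a state model of a hypothetical login feature
# is described as data, and the "framework" derives a fresh test sequence from
# it by random walk. States and actions are invented for illustration.
import random

# model[state] -> list of (action_name, expected_next_state)
MODEL = {
    "LoggedOut":  [("enter_valid_credentials", "LoggedIn"),
                   ("enter_bad_credentials", "LoggedOut")],
    "LoggedIn":   [("open_orders", "OrdersPage"),
                   ("log_out", "LoggedOut")],
    "OrdersPage": [("go_home", "LoggedIn")],
}

def generate_test(model, start, steps, seed=None):
    """Walk the model randomly, yielding a different action sequence per run."""
    rng = random.Random(seed)
    state, path = start, []
    for _ in range(steps):
        action, nxt = rng.choice(model[state])
        path.append((state, action, nxt))  # (from, action, expected state)
        state = nxt
    return path

if __name__ == "__main__":
    for frm, action, to in generate_test(MODEL, "LoggedOut", 6, seed=1):
        print(f"{frm} --{action}--> {to}")
```

Because each unseeded run chooses a different path, coverage accumulates over repeated executions, which is exactly the "increased coverage over time" property discussed below.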
Model-based Advantages
Increased coverage over time – With a minimum of scripting, much of the
application may be tested. The coverage may not be extremely high per
execution, but given the random nature of the testing, the application test
coverage will increase over time. The testing is random because scenarios are
created anew, and therefore change, each time a test is executed.

Increased application exploration – Test automation is typically created to
perform a specific set of test sequences with each test execution. Model-based
test automation frameworks contain a certain degree of artificial intelligence
which allows them to perform a quasi-exploratory type of testing, similar to how a
manual tester might explore the application.
Increased potential for defect discovery – Test automation often doesn't
uncover many new defects, given that it is typically used to test parts of
the system that have already been shown to be functional. The exploratory
nature of model-based frameworks, however, increases the chances of new
defects being uncovered, given that different scenarios may be covered with
each automated test run.
Model-based Disadvantages
Requires a higher degree of application knowledge – In order to maintain model-
based frameworks, a higher degree of application knowledge is required. This is
directly related to the need for less maintenance of automated tests (which is
largely a technical activity) and more maintenance of application models that
define application behavior.
Required technical expertise – The technical skills required to create and
maintain Model-based frameworks are relatively high. Numerous dependencies
and relationships exist between the test framework and its parameters and the
application and its environments that must be understood and maintained, in
addition to the advanced tool components and structures.
Increased management support – Management support is probably the most
challenging with model-based frameworks, mainly because it is simpler to calculate
and communicate ROI with other frameworks. Model-based ROI calculations are
more than likely based on risks rather than on a comparison of the cost of the
same tests being performed manually. Risk ROI calculations alone are often very
difficult to convey to management as a justification for the time and resources
necessary for creating and maintaining the structures, documentation, and
personnel (both technical and non-technical).

4.3 Resource References for Skill Category 4


Automation Scope and Frameworks


Doug Hoffman. Advanced Test Automation Architectures: Beyond Regression Testing. Available at http://www.softwarequalitymethods.com/Slides/TestAutoBeyondX2-CAST07.pdf
Dion Johnson. "5 Steps To Framework Development". Available at http://www.automatedtestinginstitute.com/home/ASTMagazine/2009/AutomatedSoftwareTestingMagazine_August2009.pdf
Harry Robinson. Intelligent Test Automation. Available at http://www.harryrobinson.net/intelligent.pdf?attredirects=0
John Kent. Generations of Automated Test Programs Using System Models. Available at http://www.simplytesting.com/Downloads/Test-Automation-Generators.pdf
Cem Kaner. Improving the Maintainability of Automated Test Suites. Available at http://www.kaner.com/pdfs/autosqa.pdf
Cem Kaner. "Avoiding shelfware: A manager's view of automated GUI testing". Available at http://www.kaner.com/pdfs/AvoidShelfware.pdf
Choosing a Framework
Michael Kelly. Choosing a Test Automation Framework. Available at http://www.michaeldkelly.com/pdfs/Choosing_a_Test_Automation_Framework.PDF
John Kent. Generations of Automated Test Programs Using System Models. Available at http://www.simplytesting.com/Downloads/Test-Automation-Generators.pdf


Skill Category 5: Automated Test Framework Design

Primary role(s) involved: Lead Automation Architect

Primary skills needed: Framework design, process and standards creation,
parameterization, configuration management

The process of designing an automated test framework is not an exact science, and is
therefore difficult to define in a way on which most industry experts agree. The
most important thing, however, at this point in automation history is to ensure that a well-
thought-out approach, based on common, successfully implemented industry practices, is
used and honed within a given organization. This skill category identifies an approach
from a level high enough to fit where the IT industry currently is relative to
test automation, while still remaining low-level enough to be useful for implementation.
This approach involves the following steps:
1. Select a Framework Type
2. Identify Framework Components
3. Identify Framework Directory Structure
4. Develop Implementation Standards
5. Develop Automated Tests (refer to Skill Category 6: Automated Test Script
Concepts)
Prior to the actual design of the automated test framework, it is a good practice to
develop a Test Automation Implementation Plan. This plan will provide guidance for the
creation of the framework (See Appendix F: Test Automation Implementation Plan
Template for an outline of what goes into an implementation plan).

5.1 Select a Framework Type


Short Answer
When in doubt, the easiest choice to make is a Functional Decomposition Framework.
This type of framework can be as simple or as complex as desired.

Selecting an automated test framework largely depends on two factors:


Automation scope
Desired Automation Quality Attributes
An informal approach to making a decision relies on satisfactory
answers to these questions:
Is the scope large enough to justify the level of investment that creating and
maintaining a specific type of framework will require?
Does the framework address the desired Quality Attributes in a satisfactory
manner?
Does the organization have the level of technical expertise required to maintain
the framework and the tests that will be developed with the framework?
Is the organization providing sufficient time and resources necessary for effective
maintenance of the framework?
Are the organizational processes (communication, documentation, configuration
management (CM), hand-off procedures, etc.) strong enough for the framework
to thrive?
A structured approach to obtaining satisfactory answers to these questions may involve
the following steps:
1. Revisit each framework type in terms of Quality Attributes
2. Identify key organizational Quality Attributes based on scope
3. Select a framework type based on Quality Attributes
Skill Category 7: Quality Attribute Optimization provides more information on these
activities as they are directly related to Quality Attributes.

5.2 Identify Framework Components


Short Answer
The components within a framework get more defined as the frameworks get more
mature.

Upon making a determination about the framework that will be used, identifying the
specific components that will compose that framework is the next logical step. Figure

5-1 reveals a generic set of components that are typical of most automation
frameworks, and provides some insight into the interactions among those components.
The framework type and level of complexity will dictate which components will actually
be used and at what level they will be developed. A Linear framework will likely only
have the Driver Segment and the Test Segment. Likewise, the Data-driven Framework
will likely have the Driver Segment, Test Segment, and Data File elements as part of its
framework. All other framework types will likely have some version of all of the
components.

Figure 5-1: Generic Framework Component Diagram

5.2.1 Driver Script


A driver script coordinates test activities by calling a sequence of automated test scripts.
It controls everything that takes place during the test run, which allows the unattended
execution of several scripts. It may run initialization scripts, set up configuration
parameters, verify the starting and ending state of each test, and call error recovery
when necessary. Many organizations may use an Execution Level File that
parameterizes the calling of scripts, as opposed to hard-coding each script call
individually. For increased flexibility and test execution control, a multi-leveled Execution
Level File may be used.
For example, an organization may determine that the tests need to be separated into
three different priority levels: Tier 1 (highest priority), Tier 2, and Tier 3 (lowest priority),
and that the organization needs to be able to efficiently execute tests based on priority
level. An Execution Level File for such a set up may appear as shown in Figure 5-2.

ATI‘s Test Automation Body of Knowledge (TABOK) Manual Page 89


TABOK Segment 2: Macroscopic Process Skills

TestScriptName Tier
SystemLogin 1
PlaceAnOrder 1
ViewOrderStatus 3
CancelOrder 3
UpdateOrder 2

Figure 5-2: Execution Level File Example

A parameterized driver script that reads the Execution Level File described in Figure 5-2
may be structured as illustrated in Figure 5-3.

TestRunLevel = 1
Open Execution Level File
Loop through Data rows until end of file (EOF)
    If <Tier> == TestRunLevel Then
        Set Initialization and Configuration
        Call <TestScriptName>
        Evaluate Pass/Fail Status of <TestScriptName>
        Implement Error Handling Routine as necessary
    End If
Next Data File row
Call Reporting Utility to generate logs and reports

Figure 5-3: Driver Script Example

At the top of the script, a variable by the name of TestRunLevel may be used to set the
desired execution level for the given test run. The driver script then opens the Execution
Level File, reads a row and gets values for the <Tier> and <TestScriptName>
parameters. All scripts that have a Tier (priority) equal to that set at the top of the driver
script in the TestRunLevel variable will be called and executed. All other scripts will
remain unexecuted.
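The logic of Figures 5-2 and 5-3 can be sketched in Python as follows. The Execution Level File contents match the example above; the dispatch mechanism is an assumption, with plain functions standing in for automated test scripts.

```python
# Sketch of the parameterized driver from Figure 5-3. The CSV layout matches
# the Execution Level File example in Figure 5-2; how test scripts are invoked
# is an assumption (stand-in functions play the role of automated scripts).
import csv
import io

def system_login():
    return "pass"

def place_an_order():
    return "pass"

# Registry mapping Execution Level File names to runnable test scripts.
TEST_SCRIPTS = {"SystemLogin": system_login, "PlaceAnOrder": place_an_order}

EXECUTION_LEVEL_FILE = """TestScriptName,Tier
SystemLogin,1
PlaceAnOrder,1
ViewOrderStatus,3
CancelOrder,3
UpdateOrder,2
"""

def run(test_run_level=1):
    """Execute only the scripts whose Tier matches the requested run level."""
    results = {}
    for row in csv.DictReader(io.StringIO(EXECUTION_LEVEL_FILE)):
        if int(row["Tier"]) == test_run_level:
            script = TEST_SCRIPTS.get(row["TestScriptName"])
            # Placeholder error handling: a missing script is logged, not fatal.
            results[row["TestScriptName"]] = script() if script else "error"
    return results

if __name__ == "__main__":
    print(run(1))   # only Tier 1 scripts execute
```

Changing the single `test_run_level` parameter switches the whole run between priority tiers, which is the point of parameterizing script calls instead of hard-coding them.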
Note: Many organizations use a test management tool to execute test sequences in lieu
of a driver script.

Note: The keyword-driven framework may use additional driver scripts for executing
sequences of steps in a keyword file, in addition to a driver script that executes
sequences of scripts (See Section 7.2).

5.2.2 Initialization Scripts and Parameters


An initialization script sets parameters that are used throughout the test run and brings
the test environment to a controlled, stable state. This initialized environment greatly
improves the chances of test scripts running properly and without false negatives by
reducing unpredictable values during tests. When the environment is not initialized,
problems that could have been averted often produce test failures, ultimately
compromising the integrity of the test results. In a real sense, initialization assures the test
engineer that efforts are not directed towards "a moving target."
Parameters in initialization scripts remain fairly constant throughout the project life
cycle. Generally, there are two basic types of initialization scripts:
Automation Tool Initialization Scripts – Set up the automated testing tool and
framework before a test run by doing several tasks, including
─ Setting directory path variables
─ Setting search paths
─ Customizing the object map configuration
─ Loading compiled modules, such as function libraries.
AUT Initialization Scripts – Bring the AUT to a controlled,
stable point, and are therefore application-specific. Rather than just defining
parameters, these scripts will perform actions in the application just as a test
script would. For example, in a web-based application, some of the most
common initialization tasks include
─ Clearing the cache
─ Turning off auto-complete
─ Turning off browser animation effects
─ Setting cookies
─ Setting frame format
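The two initialization script types above can be sketched as functions a driver calls before a test run. The parameter names and preparation steps are illustrative placeholders for a hypothetical web AUT, not any specific tool's API.

```python
# Initialization sketch: tool setup and AUT setup gathered into functions that
# a driver can call before a test run. All settings keys and the browser-prep
# steps are invented placeholders, not a specific automation tool's API.
import os

def init_tool(settings):
    """Automation tool initialization: paths and framework-wide parameters."""
    settings["framework_root"] = os.path.abspath(".")
    settings["search_paths"] = [os.path.join(settings["framework_root"], d)
                                for d in ("FunctionLibrary", "TestScripts")]
    settings["object_map"] = "default.map"   # placeholder object map config
    return settings

def init_aut(settings):
    """AUT initialization: bring a (hypothetical) web AUT to a known state."""
    settings["cache_cleared"] = True    # stands in for clearing the cache
    settings["autocomplete"] = False    # stands in for turning off auto-complete
    settings["animations"] = False      # stands in for disabling animations
    return settings

if __name__ == "__main__":
    env = init_aut(init_tool({}))
    print(sorted(env))
```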


Figure 5-4: Initialization Script Example

5.2.3 Configuration Scripts and Parameters


There is a fine line between what are considered initialization and configuration
parameters. The main difference between the two is that configuration parameters are
those parameters that are changed relatively frequently. This is done through a
configuration script. Typical configuration parameters include:
User IDs and passwords
Pointers to databases
Public variables and constants
Web URLs


Figure 5-5: Example Test Environment Configuration and Iterations

To help better understand configuration parameters, let's examine a scenario
represented by Figure 5-5. The automated tests in this environment may be executed
on several different machines, and by several different people. Sue may log in on one
machine to run the tests while her colleague John may log into another. Individuals may
have their own set of input data files while each machine may require different settings.
In addition, each person has to choose at run time which front-end server to connect to,
and then which back-end data store to connect to. Having to make such determinations
every time one sits down to run an automated test suite may be extremely tedious.
Configuration parameters may be set up to maintain these configurations.
For example,
Sue logs into server "s30998" with the "SueUser" ID, and connects to the back-
end data server "db1" and uses data from the "Sue_Data.xls" data file. All of
these parameters can be set under an environment variable called "Sue_Env1".
John logs into server "s30901" with the "JohnUser" ID. He also connects to the
back-end data server "db1" but uses data from the "John_Data.xls" data file.
John's parameters can be set under an environment variable called
"John_Env1".

At the beginning of a test run, simply specifying the desired environment variable sets
all of the necessary parameters, which greatly simplifies automated test execution. This
environment variable may be declared and set in the driver script.
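The scenario above can be sketched as named configuration profiles. The profile names and parameter values are taken from the example; representing them as a dictionary keyed by profile name is an assumption about implementation.

```python
# Configuration sketch: named environment profiles such as "Sue_Env1" bundle
# the per-user, per-machine parameters from the scenario above, so a driver
# only needs to name a profile. Values come from the text's example.
CONFIGS = {
    "Sue_Env1":  {"server": "s30998", "user": "SueUser",
                  "datastore": "db1", "data_file": "Sue_Data.xls"},
    "John_Env1": {"server": "s30901", "user": "JohnUser",
                  "datastore": "db1", "data_file": "John_Data.xls"},
}

def load_config(env_name):
    """Return a copy of the parameter set for the requested profile."""
    return dict(CONFIGS[env_name])

if __name__ == "__main__":
    cfg = load_config("Sue_Env1")   # one name sets every parameter
    print(cfg["server"], cfg["data_file"])
```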

Figure 5-6: Configuration Script Example

5.2.4 Test Scripts


Most frameworks include test scripts that contain the logic that verifies specific
application functionality. The functional decomposition framework calls reusable
functions to help compose the scripts, though the types and complexity of those
functions depend on the complexity of the framework. The keyword-driven framework
uses data tables for test scripts; these data tables are executed by a Scenario Driver
Script. The model-based framework may not actually include test scripts, but may
instead have drivers that interpret the model and execute test scenarios based on
the model.


5.2.5 User-defined Functions


User-defined functions are custom, modular components created for reuse and ease of
maintenance, and may be created at a level that they are reusable for multiple
applications, or may be created in such a way that they are reusable for just tests within
a single application. Typical types of functions include:
Utility Functions
Navigation Functions
Error-handling Functions
Miscellaneous Functions
See Section 4.2.3.2 Functional Decomposition Frameworks for more information on
these frameworks. See Section 8.6 Custom Function Development for more information on
function development.

5.2.6 Error Handler


The Error Handler is responsible for handling unexpected occurrences during the test
execution, such that the test run continues and/or concludes in the most graceful
manner possible. See Skill Category 11: Error Handling for more information.

5.2.7 Reporting Utility


The Reporting utility is responsible for collating the results from the run and presenting
them as reports and logs that are designated parts of the framework. See Skill Category
12: Automated Test Reporting & Analysis for more information.

5.3 Identify Framework Directory Structure


Short Answer
A framework needs a common location for storing framework components. Ideally, the
structure will exist in a way that will make the framework easily portable.

Before actually constructing the framework it is important to have a pre-planned
directory structure, for several reasons. First, one obviously needs to know where
things are going to go. Second, the framework may need to be ported to another
machine, environment, folder, or drive. Third, a standard structure makes it easier to
place the automated framework under configuration management. In short, a
well-planned directory structure makes the automation environment portable.

Framework Root Directory
    Init Scripts
    Config Scripts
    Function Library
    Driver Scripts
    Test Scripts

Figure 5-7: Automated Framework Directory Structure Example

Figure 5-7 illustrates how components may be arranged in the framework. This diagram
shows how the components are physically stored while Figure 5-1: Generic Framework
Component Diagram reveals how the framework components may interact with one
another.

5.4 Develop Implementation Standards


Short Answer
Standards should be created for the development of automated tests that create a
common environment in which test automation may become very efficient.

Standards should be created by the automated test team to govern the implementation
of an automated test framework in order to help ensure the success of that framework.
The framework structure is itself a standard, as are the defined component interactions,
but additional supporting standards must still be defined, communicated, respected, and
followed. Without standards, the automated test development will undoubtedly be badly
fragmented, difficult to manage, and unreliable in its effectiveness. Each test will
potentially have different conventions, which means that each test will need to be
maintained differently. Without supporting standards to function as the glue that keeps
the framework together, the framework structure will not be properly enforced, and will
ultimately collapse.
Some of the standards that may need to be considered include:

Coding Standards
Configuration Management Standards
Manual-to-Automation Transition Standards
Results Logging Standards
All standards, as well as the framework as a whole, should be documented as part of a
larger Automation Implementation Plan (See Appendix F: Test Automation
Implementation Plan Template).
Adopting, implementing, and following standards carries significance in the area of
application compliance. In some customer bases, it is not enough to develop and deploy
an excellent, reliable product. In some domains (e.g., public, security, or defense, in
particular), the processes, tools, standards, and the qualifications of the staff and
contractors who designed and developed the application are just as important. In
circumstances such as these, a perfectly acceptable application or system may be
restricted from use at an agency not because it fails to perform a needed task or is
not cost-effective, but because the manufacturer did not document its forensics.
For example, if the target audience is primarily a United States federal government
agency, then adherence to open standards is consistent with federal mandates and
helps to ease interoperability across federal agencies, and with non-government
clients.18 Thus, building compliance to the targeted standard (e.g., International
Organization for Standardization – ISO, Federal Information Security Management Act –
FISMA, and the like) has implications for the manufacturer, not just the application. 19
Even for wholly commercially- and non-profit-oriented applications, a look at the
standards followed by similar or competing applications may open opportunities for
further integration and partnership work.

18 Tiemann, Michael (2006). An objective definition of open standards. Computer Standards and
Interfaces (20): 495-507. Also OMB Circular A-119 (1998) and NTTAA.
19 It is beyond the scope of this Manual to create a comprehensive discussion of application and system
compliance standards; that is its own course of study. But standards are a form of requirements that the
test plan must address as diligently as functional, performance, and non-functional requirements.
See http://www.whitehouse.gov/omb/assets/memoranda_2010/m10-15.pdf,
http://csrc.nist.gov/groups/SMA/fisma/compliance.html, and
http://csrc.nist.gov/drivers/documents/FISMA-final.pdf for some examples.

5.4.1 Coding Standards


Coding standards dictate the principles involved in visually displaying an automated
test. This involves conventions for naming scripts and variables, as well as conventions
for how automated test statements must be structured and how comments are to be
used. See Appendix B: Sample Coding Standards Checklist for additional standards.

Script Naming Conventions


Scripts that are automated companions to a manual test procedure should ideally have
the same name as the associated manual test case. This will help to maintain a one-to-
one mapping from each manual test case to its automated test counterpart. This helps
to ensure that the traceability links between requirements and tests are not lost with the
introduction of test automation. An exact name may not always be possible, however,
particularly if there's no one-to-one mapping of manual-to-automated tests. In this
situation, a similar name may be sufficient. The key is to establish some rule for naming
scripts that makes sense in your organization so that scripts may be effectively managed.

Variable Naming Conventions


Naming standards for variables increase script maintainability by making scripts
easier to read, thus helping automators make sense of the code that
exists within each script. They also make the framework more cohesive by
ensuring variables are consistent within and across that framework.
Variable names should be descriptive and have a consistent notation. A notation is the
physical structure of the variable name that helps to provide insight into the variable's
type or intended use. Examples of unhelpful variable names include Var1 and counter1.
Why are these poor choices? First, they are not descriptive, and therefore cannot be
deciphered upon visual inspection. The second variable seems to be a counter of some
sort, but what is it counting? Second, these variables do not follow any standard
notation.
Examples of good variable names include sCustomerName and nCustomerCounter.
These are superior choices for several reasons. First, they are simple and descriptive:
one may easily infer that the first variable holds a customer name while the second
holds a value that counts customers. Second, the notation used provides additional
information about the variable. The 's' prefix denotes a string value while 'n' denotes a
numeric value. In addition, the notation calls for the first character to be lowercase while
each subsequent word begins with a capital letter. This orthography also helps make the
variable easier to read; following this declared standard, it is apparent at a quick script
inspection what the variable is and what kind of value it holds.
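The notation above can be illustrated in a few lines; the variable names and the small checking helper are invented for the example and are not part of any mandated standard.

```python
# Illustration of the naming notation described above: an 's' prefix for
# string values, 'n' for numeric values, lowercase prefix followed by
# capitalized words. The helper function is a hypothetical example.
sCustomerName = "Ada Lopez"     # descriptive string variable
nCustomerCounter = 0            # descriptive numeric counter

# Poor choices, for contrast: not descriptive and no standard notation.
Var1 = "???"
counter1 = 0

def check_prefix(name, value):
    """Does the name's prefix match the value's type under this notation?"""
    expected = "s" if isinstance(value, str) else "n"
    return name.startswith(expected)

if __name__ == "__main__":
    print(check_prefix("sCustomerName", sCustomerName))   # True
    print(check_prefix("counter1", counter1))             # False
```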
Script Structural Standards
Script structural standards determine the formation of the code in the script. Often script
structural standards begin with a header, and then dictate the format used within the
body of the script.
The script header is usually composed of several comment lines that appear at the top
of the script and provide useful information about the script. Information such as:
Test Script Name – The name of the script
Test Script ID – A relatively short script identifier
Test Script Author – The original script developer
Test Script Creation Date – The date on which the script was developed
Test Script Modification Date – The date the script was last modified
Test Script Description – A few sentences that describe what the script does
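Following the list above, a script header might look like this in a comment block (all field values here are hypothetical, and the comment syntax would follow whatever language the automation tool uses):

```python
# ---------------------------------------------------------------------------
# Test Script Name:        PlaceAnOrder
# Test Script ID:          TS-042
# Test Script Author:      J. Smith
# Test Script Creation:    2011-03-01
# Test Script Modified:    2011-04-15
# Test Script Description: Logs in, places a standard order, and verifies
#                          that a confirmation number appears on the
#                          receipt page.
# ---------------------------------------------------------------------------
```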
The body of the script may dictate such structural standards as indentation for
branching and looping constructs (see Appendix B: Sample Coding Standards
Checklist for more information).
Comments
A comment character in a programming language is a character placed in front of a line
in a script to signify that the line is a note to be read by the developer rather than
code to be executed. Comments provide much useful
information about the script‘s purpose, what is being processed at specific points in the
script, the source and development date of the script, and limitations. Standards may
also need to be identified for when and where comments need to be used.

5.4.2 Configuration Management Standards


A Software Configuration Management (SCM) system is important to help prevent
duplication of effort among test team members and developers, and to ensure proper
version control and implementation of all framework components. This system could be
as simple or as involved as the test team deems necessary, but the complexity of this
SCM system should ultimately be dictated by the level of sophistication and definition of
the framework. Ideally, the SCM system will accommodate all artifacts, such as code,
documentation, requirements, test plans, test scripts, and reports.

ATI's Test Automation Body of Knowledge (TABOK) Manual Page 99



5.4.3 Manual-to-Automation Transition Standards


Manual-to-Automation Transition refers to the process used to migrate a test from
manual to automated implementation and vice versa. This procedure is important to
ensure the automation process is smooth, and to ensure that the overall testing effort is
properly conducted with the introduction of test automation. This process should include
procedures for:
Determining whether or not a manual test case is ready for automation – Defining
such a procedure up front helps to ensure that time is not wasted by recreating a
new set of criteria every time automation begins.
Flagging a manual test case as ready for automation – This is particularly
important when the effort involves multiple test procedures and multiple test team
members. Without flagging the tests, it may be difficult to remember what tests
have been identified as automatable.
Identifying that an issue with a manual test case has been found during
automation – This helps to properly audit the automation effort because when
issues impede automation, the test team must understand that the elapsed
automation time is not solely due to the scripting portion of the automation
process. If this is not understood, misperception may negatively reflect on the
automation process and return-on-investment calculation. In addition, identifying
issues will help to improve overall testing processes and help during the lessons
learned phase of the project. During discussions of changes to implement to
improve automation return-on-investment, early detection of specific manual test
case issues may be pointed to as a possible solution.
Identifying that a manual test case is automated – This is important to prevent
the duplication of effort involved in execution of tests. When a test is automated,
it normally does not need to be executed manually.
Identifying that an automated test needs repair – When a faulty automated test is
not clearly identified for repair, it may be overlooked or not executed manually.
This is because the assumption exists that once a test is automated, it does not
need to be executed manually again. When an automated test is flagged for
repair, the team better understands that it needs to be executed manually until
the repair is completed.
See Appendix C for a sample Manual-to-Automation process.
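The transition states described above can be tracked with explicit status flags. The following sketch is illustrative – the status names and helper are inventions of this example, not TABOK-prescribed:

```python
from enum import Enum

# Illustrative status flags for the manual-to-automation transition states
# described above; the names are invented for this sketch.
class AutomationStatus(Enum):
    MANUAL = "manual"                  # not yet considered for automation
    READY = "ready for automation"     # flagged as an automation candidate
    ISSUE_FOUND = "issue found"        # a manual test case issue impedes automation
    AUTOMATED = "automated"            # runs automatically; no manual execution needed
    NEEDS_REPAIR = "needs repair"      # faulty automated test; run manually until fixed

def requires_manual_run(status):
    """A test is executed manually unless it is (currently) automated."""
    return status is not AutomationStatus.AUTOMATED
```

Note that a test flagged NEEDS_REPAIR still requires manual execution, which is exactly the point made above about faulty automated tests.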


5.4.4 Results Logging Standards


A common and accepted format for logging results is fundamental to making results
analysis an efficient process for all stakeholders on the application team. Standards
should identify what information is logged (e.g., only failures, the pass/fail status of
every check, or the result of every line of code executed), the report layout, the file type
in which the log is stored, where the log files are stored, and so on. In addition, a
determination must be made about using separate log files for debugging and results
reporting, or a single file equipped with filters. All of this information needs to be planned
in advance (see Skill Category 12: Automated Test Reporting & Analysis). As with all
other standards, the structure of the results logging will largely be dictated by the type of
framework. For example, very structured functional decomposition frameworks (see
Section 4.2.3.2) as well as the level 3 frameworks (see Section 4.2.4) will typically have
advanced results logging utility functions that handle most aspects of error logging and
results reporting.
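As a minimal sketch of the single-file-with-filters option mentioned above, a results log can record every check and expose filtered report views. The function and field names here are illustrative assumptions, not part of any framework:

```python
# Minimal sketch of a results log supporting filtered views: every check is
# recorded, and a report view filters by status. Names are illustrative.
def log_check(log, test_name, check, status):
    """Record the pass/fail status of a single check."""
    log.append({"test": test_name, "check": check, "status": status})

def failures_only(log):
    """A 'failures only' report view over the full log."""
    return [entry for entry in log if entry["status"] == "FAIL"]
```

The alternative discussed above – separate debug and results files – trades this filtering logic for simpler, purpose-specific files.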

5.5 Resource References for Skill Category 5


Framework Types, Components and Structure
Doug Hoffman. Advanced Test Automation Architectures: Beyond Regression
Testing. Available at
http://www.softwarequalitymethods.com/Slides/TestAutoBeyondX2-CAST07.pdf
Dion Johnson. "5 Steps To Framework Development". Available at http://www.automatedtestinginstitute.com/home/ASTMagazine/2009/AutomatedSoftwareTestingMagazine_August2009.pdf
Harry Robinson. Intelligent Test Automation. Available at
http://www.harryrobinson.net/intelligent.pdf?attredirects=0
Choosing a Framework
Michael Kelly. Choosing a Test Automation Framework. Available at http://www.michaeldkelly.com/pdfs/Choosing_a_Test_Automation_Framework.PDF
John Kent. Generations of Automated Test Programs Using System Models. Available at http://www.simplytesting.com/Downloads/Test-Automation-Generators.pdf


Skill Category 6: Automated Test Script Concepts

Primary role involved: Test Engineer, Lead Automation Architect, Cross Coverage
Coordinator, Automation Engineer
Primary skills needed: Selecting automation candidates, understanding automated
script elements, constructing an automated test

Upon deciding on a framework (or on no framework), the business of automating tests


must commence. This normally involves the following activities:
Test Selection
Automated Test Design and Development
Automated Test Execution, Analysis, and Reporting

6.1 Test Selection


Short Answer
Select tests to automate based on (among other criteria) how tedious they are to
execute manually, how easy they are to automate, how frequently they need to be
executed, whether automation will increase coverage beyond what manual testing can
realistically cover, and, most importantly, whether the automation supports the goals of the
organization.

Selecting tests for test automation involves determining which manual tests should be
automated, and when those tests should be automated.
A more detailed explanation of what should be automated is directly tied to the goals of
the organization. The organizational goals fit into the same categories discussed in the
Section 1.3 Automation Return-on-Investment (ROI): risks, costs and efficiency.
Reducing risks often involves increasing coverage. Several common areas for
automation include:
Automating some subset of regression tests. This will usually free manual testers
to test other parts of the project or even other projects.

Automating regression tests using additional data items that may not be efficiently
executed manually (e.g., testing all 50 states in a state dropdown)
Randomizing data entries and selections in automated tests
Automating new functional scenarios (e.g., following a model-based approach)
Cost and efficiency are closely associated with each other because automating tests
should increase efficiency, which, at times, should reduce costs. Typically, when
decision-makers state that their goal is cost reduction, this indicates a reduction of
costs associated with the testing cycle, or the project as a whole, as automation is
introduced, compared to what the cost would be if specific tests were done manually. In
addition, cost reduction may refer to the "bottom line" cost of the pre-production
implementation phase. The bottom line is different, however, because instead of
comparing the cost of automating tests to the cost of executing them manually, it
actually compares what the project cost was previously to what it is now. As discussed
in Section 1.3, this is challenging because the project budget "bottom line" is seldom
reduced. Very often, test staff is reallocated rather than off-loaded, which results in
no effect on the project budget.
If cost reduction is the goal, it can be typically achieved by:
Automating primarily some subset of regression tests that are simple to maintain,
being sure to automate exactly what is executed, not necessarily what is
documented in test cases
Automating data manipulation, setup, and comparison
Automating sanity tests – also known as smoke tests – that are executed as part
of the build cycle
Note that cost/efficiency should be considered at least in part because it is much easier
to show tangible results in the form of ROI. Being able to show results is important for
sustaining an automation effort over time.
Since each organization is slightly different, however, it is important to establish a
checklist that contains important criteria for making the automation determination.
Figure 6-1 reveals a sample checklist that may be used.

Criteria
The test is executed multiple times.
The test is executed on multiple machines, configurations and/or environments.

The test is not feasible or is extremely tedious to perform manually.
The test is not negatively impacted by no longer being executed manually (it does
not require human analysis during run-time).
The test is able to be executed using a consistent procedure.
The test covers relatively stable application functionality.
The test covers a portion of the application that has been deemed automatable.
The estimated ROI from automating the test is positive and within a desirable
range.
The test inputs and outputs are predictable.
The test is non-distributed or the tool is able to handle distributed testing.

Figure 6-1: Automation Criteria Checklist

The checklist in Figure 6-1 does not present an all-or-nothing proposition, so not all
items need to be checked for test automation to commence. Instead, this checklist is
meant to guide the decision to automate. If one or more of the items in the Automation
Criteria Checklist is checked, then this signals that the test may be a good candidate for
automation.
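One way to operationalize the checklist is to score candidates by the fraction of criteria they meet, reflecting the guidance above that this is not all-or-nothing. The criterion keys below paraphrase Figure 6-1; the scoring helper itself is an invention of this sketch:

```python
# Hypothetical helper turning the Figure 6-1 checklist into a simple score.
CRITERIA = (
    "executed multiple times",
    "executed on multiple machines/configurations/environments",
    "infeasible or extremely tedious manually",
    "no run-time human analysis required",
    "consistent procedure",
    "stable functionality",
    "area deemed automatable",
    "positive estimated ROI",
    "predictable inputs and outputs",
    "non-distributed or tool handles distribution",
)

def automation_score(checked):
    """Fraction of checklist criteria met by a candidate test."""
    return sum(1 for c in CRITERIA if c in checked) / len(CRITERIA)
```

How high a score warrants automation is an organizational decision, per the criteria discussion above.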


6.2 Automated Test Design and Development


Short Answer
Automated test design and development are tied directly to the automated test interface
in which the tests will be created, the way quality attributes are applied at the test level,
the elements that are common to automated tests, and the standards used for creation
of tests.

Automated test design and development are typically most effective when approached
methodically. One way to begin the design process is by developing an algorithm, then
using quality attributes to help determine the level of detail and structure the test should
have. Next, the algorithm can be translated into actual automated test syntax (based on
the automated test interface used), following the team's development standards and
using the framework's defined automation modes.

6.2.1 Automated Test Interface


The automated test interface is largely a function of the type of automation framework
and automated tool that is being used. For linear scripts and data-driven frameworks,
the tests are normally code-based, as illustrated in Figure 6-2, and therefore require
some technical skills. These tests are composed of code that is very specific to the test
in which it exists. Some tools offer a tree structure and/or "keyword" interface that adds
a more graphical and icon-based view of the code. But when these tools are used in a
linear script or data-driven framework, the tests still follow the basic precepts of a code-
based test.

1 Input “John” into Username textbox


2 Input “Jpass” into Password textbox
3 Click Login button
4 If “Welcome Screen” exists then
5 Pass the test
6 Else
7 Fail the test
8 End If

Figure 6-2: Code-based Interface


Tests in functional decomposition frameworks are still code-based but as the framework
becomes more defined, the tests become slightly less technical. This is because the
tests are largely created by stacking reusable components.

1 Login(“John”, “Jpass”)
2 Verify_Screen(“Welcome”)

Figure 6-3: Functional Decomposition Interface

Figure 6-3 reveals how the statements in Figure 6-2 might be written in a functional
decomposition framework. Statements 1 through 3 in Figure 6-2 have been
parameterized and placed in a function called "Login," while steps 4 through 8 have
been parameterized and placed in a function called "Verify_Screen." The functional
decomposition framework test is designed to exist in the format shown in Figure 6-3.
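The reusable components behind Figure 6-3 might be sketched as follows, where `ui` stands in for whatever tool-specific driver the framework wraps; the driver methods (`input`, `click`, `exists`) are assumptions of this example:

```python
def login(ui, username, password):
    """Reusable component wrapping steps 1-3 of Figure 6-2."""
    ui.input("Username", username)
    ui.input("Password", password)
    ui.click("Login")

def verify_screen(ui, screen_name):
    """Reusable component wrapping steps 4-8 of Figure 6-2."""
    return "PASS" if ui.exists(screen_name) else "FAIL"
```

Stacking such components is what makes functional decomposition tests less technical to author than linear scripts.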
The level 3 frameworks typically have tests designed in a less technical format, such as
a table. For example, the keyword equivalent of the statements illustrated in Figure 6-2
might appear as illustrated in Figure 6-4.

Screen        Object     Action          Value     Recovery     Comment
LoginScreen   Username   Input           “John”
LoginScreen   Password   Input           “Jpass”
LoginScreen   Login      Click                     Abort_Test
Welcome                  Verify_Screen             Abort_Test

Figure 6-4: Keyword Driven Interface

The columns in this illustration represent different portions of the test:


Screen – The screen on which the automated test step occurs
Object – The object on which the automated test step operates
Action – The step's primary keyword, which is tied to a reusable function and
identifies what activity will take place on the test step
Value – The values that may be entered into the application

Recovery – The error handling actions for this step (refer to Skill Category 11:
Error Handling for more information on error handling)
Comment – A note about the step's main purpose, used to provide helpful
information to anyone reading the keyword file
The keyword file is interpreted and executed using a keyword driver script (see
Appendix G: Sample Keyword Driver Script for more information).
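A keyword driver can be sketched as a dispatcher over the rows of Figure 6-4. This minimal example assumes a `ui` driver object and represents each row as a dictionary keyed by the column names above; the Abort_Test handling is a simplified stand-in for real recovery logic:

```python
def run_keyword_row(ui, row):
    """Dispatch one keyword-file row to its reusable function."""
    action = row.get("Action")
    if action == "Input":
        ui.input(row["Screen"], row["Object"], row["Value"])
        return "PASS"
    if action == "Click":
        ui.click(row["Screen"], row["Object"])
        return "PASS"
    if action == "Verify_Screen":
        return "PASS" if ui.exists(row["Screen"]) else "FAIL"
    return "UNKNOWN ACTION"

def run_keyword_file(ui, rows):
    """Interpret a whole keyword file, honoring an Abort_Test recovery action."""
    results = []
    for row in rows:
        status = run_keyword_row(ui, row)
        results.append(status)
        if status == "FAIL" and row.get("Recovery") == "Abort_Test":
            break  # recovery says this failure should end the run
    return results
```

A production driver would add error handling, logging, and object-map lookups, but the dispatch structure is the same.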

6.2.2 Quality Attributes Applied to Tests


It was previously pointed out that quality attributes should be evaluated for developing
the automated test framework. Quality attributes must also be evaluated at the
automated test script design level in order to make real-time decisions on how the tests
should be created in light of changing environment constraints such as schedules,
budgets, and staffing resources. For more information on quality attributes, refer to
Section 7.1.

6.2.3 Test Data


Test data is a major concern not just for test automation, but for testing in general. A
specific concern for automated test execution with respect to data, however, is whether
the script will create new data each time it runs or whether existing data will be
accessed and used. If existing data is used, then the initialization and cleanup steps
will have to focus on ensuring the data is in the required initial state and that it has
been sufficiently cleaned up following the completion of the run.

6.2.4 Automated Test Elements (Anatomy)


Most automated tests are composed of the following broad elements:
1. Initial Condition Setup
2. Test Actions
3. Verification Points
4. Cleanup Steps
Error handling is also included in each of the above elements and will be discussed in
Skill Category 11: Error Handling.

6.2.4.1 Initial Conditions and Cleanup Steps


When automating tests, it is critical to design them to include initial conditions and
cleanup steps. Initial conditions and cleanup steps essentially initialize the starting and


ending state, respectively, of the AUT and automation framework at runtime. This helps
to ensure that the test is able to successfully run through multiple iterations, and that –
when run within a batch – one test doesn't adversely affect the execution of subsequent
tests. Initialization scripts and parameters (described in Section 5.2.2) typically bring the
environment or overall test run to a controlled stable point. Initialization conditions at the
script level are normally more specific to the test, such as focusing on initializing test
specific data.
Cleanup steps are responsible for activities that bring the AUT and framework back to
an initialized state to ensure subsequent tests are not adversely affected and to ensure
the AUT is back to a required state. Cleanup may perform activities such as closing an
AUT and disposing of variables and objects used by the test.
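The guarantee that cleanup runs even when a test fails can be sketched with a try/finally structure. Representing the four elements as callables is an assumption of this example, not a prescribed design:

```python
def run_test(setup, actions, verify, cleanup):
    """Execute one test through setup, actions, verification, and cleanup."""
    setup()                                # bring data/AUT to the initial state
    try:
        actions()                          # exercise the AUT functionality
        return "PASS" if verify() else "FAIL"
    finally:
        cleanup()                          # always restore state for subsequent tests
```

Because cleanup sits in the `finally` clause, a crash during the actions cannot leave the AUT in a corrupted state for the next test in the batch.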

6.2.4.2 Test Actions


The test actions are the steps that automated tests use to exercise specific AUT
functionality. In many cases, the tested functionality maps to the application's business
and functional requirements. This portion of the automated test design may use the
design of the manual test procedure that is being automated (assuming that this exists)
as a guide. Lines 1 through 3 in Figure 6-2: Code-based Interface provide an example
of what constitutes test actions.

6.2.4.3 Verification Points


Automating tests is not simply a matter of exercising application functionality via a
script. The correctness of that functionality must also be considered. Verification points
provide a means of doing this. A verification point is a statement or set of statements
that work to indicate whether some expected result matches some actual result.
Verification points for functional test automation may be divided into the following
verification categories:
Text Verification – Checks the correctness of text in an application page
Object Verification – Verifies the correctness of specific properties of an element
within an application
Field Verification – Ensures the correctness of values contained within a field
(often a specific form of object verification)
Screen Verification – Verifies that the application has displayed the correct
screen
Data Verification – Verifies that the front-end correctly reflects what is in the
database.

Bitmap Verification – Verifies that an image on a screen compares to a
previously captured image.
Many automated test tools provide built-in verification points; notwithstanding that, it
is often necessary to create custom verification points. Assertions are an excellent way
to implement verification points. Assertions are constructs that provide a mechanism for
concisely identifying a condition that must be checked and presenting an error message
if the condition fails. At their simplest, assertions may be created using simple branching
constructs (refer to Section 8.5.1 for more information on branching constructs). Some
of the most basic and most useful assertions include:
Condition Assertions
Equality Assertions
Inequality Assertions
Condition Assertion
A condition assertion checks to ensure that some Boolean condition is true, and sends
an error message in the event that it is false.
The structure of a condition assertion may be as follows:
If (condition = “True”) Then
Generate ‘Pass’ message
Else
Generate ‘Fail’ message
End If
The condition may, for example, be a statement that checks the existence of a screen. If
it is true that the screen exists, the assertion passes, otherwise, the assertion fails.

Equality Assertions
An equality assertion checks to ensure that some expected result matches an
associated actual result. If the expected result does not match the actual result an error
message is generated.
The structure of an equality assertion may be as follows:
If (expected == actual) Then
Generate ‘Pass’ message


Else
Generate ‘Fail’ message
End If
This may, for example, be used to check that a data element that actually exists in a
text field in the application matches what is expected to be in the text field.

Inequality Assertions
An inequality assertion checks to ensure that some expected result does not match an
associated actual result. If the expected result does match the actual result an error
message is generated.
The structure of an inequality assertion may be as follows:
If (expected Not = actual) Then
Generate ‘Pass’ message
Else
Generate ‘Fail’ message
End If
This may, for example, be used to verify that an updated AUT data element does not
still maintain its old value.

Assertion Functions
Since the basic structure of an assertion is reused, it is often useful to place the
assertion in a reusable function. For example, a condition assertion function may
appear as follows:

Function assert(condition, passMsg, failMsg, abort)
If (condition = “True”) Then
Generate passMsg message
Else
Generate failMsg message
If (abort = “True”) Then
Abort the test
End If
End If
End Function
See Section 5.2.5 for more information on functions.
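A runnable Python counterpart to the assertion function above might look like the following. Returning or raising in place of "generating" messages is a choice of this sketch, and the equality wrapper illustrates how assertion types build on one another:

```python
class TestAborted(Exception):
    """Raised when a failed assertion is configured to abort the test."""

def assert_condition(condition, pass_msg, fail_msg, abort=False):
    """Condition assertion: report pass_msg on success, fail_msg on failure."""
    if condition:
        return ("PASS", pass_msg)
    if abort:
        raise TestAborted(fail_msg)   # abort the test, per the abort parameter
    return ("FAIL", fail_msg)

def assert_equal(expected, actual, abort=False):
    """Equality assertion built on the condition assertion."""
    return assert_condition(expected == actual,
                            f"'{actual}' matched expected value",
                            f"expected '{expected}' but found '{actual}'",
                            abort)
```

An inequality assertion would simply invert the comparison passed to `assert_condition`.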

6.2.5 Automated Test Modes


The modes used for test automation greatly depend on the type of application interface
that is being automated (refer to Section 2.2). When testing a GUI interface, however,
three major automation modes occur frequently:
Content Sensitive (Analog)
Context Sensitive (Object Oriented Testing)
Image-based
Content Sensitive (Analog)
The Content Sensitive test mode (also referred to as analog design and development)
refers to the creation of automated test steps based on screen coordinates. Automated
tests created in analog mode dictate mouse clicks, keyboard input, and screen
coordinates to be traveled by the mouse, as illustrated below.

Invoke_Application(“C:/testapp.exe”)
Mouse_click(33,25,Left)
Type(“John”)
Mouse_click(45,23,Left)
Keyboard_input(<ENTER>)
Bitmap_check(expImage,actualImage)


It also performs much of its verification via bitmap image comparisons. This mode,
which does not take into account objects and their properties (refer to 8.6 for more
information on objects), is normally more volatile than the Context Sensitive approach,
due to the fact that a slight change in location or screen size may cause an analog test
to fail.

Context Sensitive (Object Oriented Testing)


Context Sensitive is based on objects and their properties without regard to the
positions of those objects on the screen. This mode tends to be less volatile and is
therefore more often used. Analog statements are often used in conjunction with
Context Sensitive statements in the event that some objects can't be uniquely
identified based on their properties (refer to 8.6 for more information on objects).

Image-based
Image-based automation, often based on Virtual Network Computing (VNC), relies on
image recognition as opposed to object recognition (Context Sensitive) or coordinate
recognition (Content Sensitive). VNC is a platform-independent graphical desktop
sharing client-server system that uses the RFB protocol to allow a computer to be
remotely controlled by another computer. The controlling computer is the client or
viewer, while the computer being controlled is the server, and the server transmits
images to the client.
Automated test tools that use an image-based automation approach typically rely on
VNC and thus follow the two-computer system. The automated tool resides on the client
machine and functions as the VNC client. The server machine on which the AUT is
installed will run a VNC server that communicates with and transmits images to the
VNC client. These tools, therefore, recognize application objects based on analysis of
the transmitted images.
This mode is more closely related to Context Sensitive automation than Content
Sensitive, because it is not completely coordinate-based; images may still be
located even if they are moved to a different screen position.


Figure 6-5: VNC Illustration

6.2.6 Automated Test Development Standards


Automated tests should be created with some degree of consistency guided by the use
of a set of development standards. For more information on development standards
refer to Section 5.4.

6.3 Automated Test Execution, Analysis, and Reporting

Short Answer
Automated execution should not be a time-consuming and tedious task. Tests should be
grouped and ordered in a way that facilitates easy execution and analysis.

Once an automated test has been developed, the next step is to organize and prioritize
it within the framework (see Skill Category 5: Automated Test Framework Design). Then
based on the test plan, execute the tests against the AUT and analyze the results that
are reported from the script runs (see Skill Category 12: Automated Test Reporting &
Analysis).

20. RealVNC. Available at http://www.realvnc.com/vnc/how.html


6.3.1 Automated Test Grouping


When the number of automated tests remains relatively small, so does the importance
of grouping those tests. As the number of tests grows, so does the need to better
categorize those tests, not only for the purposes of test management, but also for ease
of execution. While there may be times all tests will need to be executed in a single run,
it is likely that many runs will only call for the execution of a select group of tests.
Therefore, the tests should be grouped in a manner that streamlines this selection
process. Tests may be grouped in multiple ways, including by:
Priority (Tiers) – A ranking of importance
Functional Area – The main application subsystem the test covers
Test Purpose – The main objective of the test (sanity, regression, etc.)
The framework should accommodate maintaining and executing tests by groupings (see
Section 5.2.1), and provide reports based on those groups (see Skill Category 12:
Automated Test Reporting & Analysis).
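The three groupings above can be realized as tags carried by each test, with runs selecting by any combination of them. The test names and tag values below are purely illustrative:

```python
# Illustrative tagging scheme: each test carries the three groupings named
# above (priority, functional area, purpose), and a run selects by them.
TESTS = {
    "login_sanity":     {"priority": 1, "area": "auth",    "purpose": "sanity"},
    "login_regression": {"priority": 2, "area": "auth",    "purpose": "regression"},
    "report_totals":    {"priority": 2, "area": "reports", "purpose": "regression"},
}

def select_tests(tests, **criteria):
    """Return sorted names of tests matching every given grouping value."""
    return sorted(name for name, tags in tests.items()
                  if all(tags.get(key) == value for key, value in criteria.items()))
```

Selecting by `purpose="sanity"` for a build-verification run, or by `area` for a targeted regression pass, streamlines the selection process described above.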

6.3.2 Automated Test Ordering and Execution


Once the tests have been grouped, they are ready to be executed. To ensure the
execution is as successful as possible, it should be done in a deliberate fashion
so that execution and analysis times are minimized, and the highest-priority information
is yielded in the shortest amount of time.

Figure 6-6: Test Execution Domino Effect

The scenario depicted in Figure 6-6: Test Execution Domino Effect helps to convey the
importance of appropriately ordering tests. In this scenario, the automated test suite
consists of 100 tests. Tests 1 through 4 execute appropriately, and although Test 2
fails, its failure has no impact on the remaining tests in the suite. Test 5, on the other
hand, fails and corrupts the system, resulting in a domino effect that causes all
remaining tests to fail. While appropriate exception handling (see Skill Category 11:
Error Handling) sometimes helps mitigate this domino effect, it can't always prevent it;
thus a lot of time could be lost in analysis and re-running of these failed tests.
Some approaches that may help to make test execution more effective and efficient
include:
Running sanity tests first
Running shorter, less volatile tests first
Running tests in parallel on multiple machines
Running tests during low periods of system activity
Running tests as part of a Continuous Integration process

Running sanity tests first


Sanity tests are high-level tests meant to verify application stability prior to more rigorous
testing. Executing sanity tests first provides insight into whether additional
automated tests should be executed. If the additional tests were already executed (in the
event that all tests were scheduled to run unattended), then the sanity test results can
drive a decision about which, if any, results from the other tests will be
analyzed.

Running shorter, less volatile tests first


This is often a good strategy simply because long tests often perform many transactions
or invoke long processes, which may present more opportunities for catastrophic errors
to occur. By running shorter, less volatile tests first, there is an increased
probability of the run completing more of its tests without a major incident that could
negatively impact the run.
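The two ordering heuristics discussed so far can be combined in a single sort key: sanity tests lead, and the remaining tests run shortest-first. The duration field below is an illustrative assumption:

```python
# Sanity tests first, then remaining tests shortest-first. In Python, False
# sorts before True, so sanity tests (purpose == "sanity") lead the order.
def execution_order(tests):
    """Sort tests so sanity tests lead, followed by shorter tests."""
    return sorted(tests, key=lambda t: (t["purpose"] != "sanity", t["minutes"]))
```
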

Running tests in parallel on multiple machines


Dividing the load among several machines is a useful approach for reducing the overall
execution time of the automated test suite. For example, Figure 6-7: Parallel Test
Execution illustrates 12 tests that have been divided among 4 machines and executed
in parallel. If each of these tests takes 5 minutes to run, the total serial execution on a
single machine would take
12 x 5 = 60 minutes
Parallel execution as illustrated in the figure divides this 60 minutes across four
machines (3 tests on each machine), resulting in a total elapsed execution time across
all machines of
3 x 5 = 15 minutes

Figure 6-7: Parallel Test Execution

Figure 6-7: Parallel Test Execution illustrates how parallel test execution might be
accomplished. In this illustration, the Controller is responsible for orchestrating the test
execution on all machines. The Execution Machines have the automated test
tool/framework installed on them and are used by the Controller to execute a select list
of Automated Tests.
Three mechanisms for accomplishing this type of parallel test execution are:
Manually accessing multiple machines

Distributed testing
Server virtualization
For organizations that utilize computer labs with several physical machines, manually
accessing these machines may be a common approach for executing tests in parallel.
In this scenario, the Controller represents a person that accesses each of the physical
Execution Machines one-by-one, running a different list of tests on each.
Distributed testing, conversely, helps to streamline parallel execution slightly by
eliminating the need for a person to access each machine one-by-one. Distributed
testing is the execution of tests across remote locations from one central location. Many
automated test tools offer this feature, which not only offers the ability for the runs on
each machine to be started simultaneously, but also often provides a means for the
tests on separate machines to communicate with each other. In the distributed testing
scenario, the Controller represents the machine that remotely connects to the Execution
Machines and executes its respective list of tests.
Finally, server virtualization is another option. In reality, with server virtualization, you
can implement either the manual approach or the distributed approach; the difference is
that the Execution Machines are not physical machines. Server virtualization is the
provisioning and use of several servers on a single piece of hardware. So, in a parallel
automated test execution scenario that utilizes server virtualization, the Execution
Machines are actually virtual machines – software representations of machines – which
likely all reside on the same physical server. Since the virtual machines behave just like
physical machines, each one still has the automated test tool installed and may be used
to run a separate group of automated tests.
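The Controller/Execution Machine arrangement described above can be sketched as follows. This is an illustrative Python fragment, not any tool's actual API: the machine names are hypothetical, and the remote launch is simulated with a placeholder function, since the real mechanism depends on the distributed-testing feature of the tool in use.

```python
# Sketch of a Controller that starts test runs on several Execution
# Machines in parallel and gathers the results.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mapping of Execution Machines to their assigned test lists.
TEST_ASSIGNMENTS = {
    "exec-machine-1": ["login_tests", "search_tests"],
    "exec-machine-2": ["checkout_tests"],
    "exec-machine-3": ["report_tests", "admin_tests"],
}

def run_on_machine(machine, tests):
    """Placeholder for remotely launching 'tests' on 'machine'."""
    # A real Controller would call the tool's remote agent here,
    # e.g. via SSH or the tool's distributed-execution command.
    return (machine, [f"{t}: PASS" for t in tests])  # simulated results

def controller():
    # Launch all machines simultaneously, then collect every result.
    with ThreadPoolExecutor(max_workers=len(TEST_ASSIGNMENTS)) as pool:
        futures = [pool.submit(run_on_machine, m, ts)
                   for m, ts in TEST_ASSIGNMENTS.items()]
        return dict(f.result() for f in futures)

results = controller()
```

A real Controller would replace the body of run_on_machine with the tool's remote-agent or distributed-execution call; the orchestration pattern stays the same.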

Running tests during low periods of system activity


When the test system has many users on it or many processes running, everything on
that system can slow down, and the automated tests are no exception. For this reason,
it is often useful to schedule the automated tests for execution when most people are
out of the system. This is very often in the evening, when the work day has concluded
for most personnel.
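The scheduling idea can be sketched as a small helper that computes how long to wait until the next low-activity window. The 2:00 AM start time and the run_suite() call it would feed are illustrative assumptions, not part of any particular tool; in practice the same effect is usually achieved with the operating system's scheduler.

```python
# Minimal sketch of scheduling a suite for a nightly low-activity window.
from datetime import datetime, timedelta

def seconds_until_next_run(now, hour=2):
    """Seconds from 'now' until the next occurrence of 'hour':00."""
    target = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # today's window already passed
    return (target - now).total_seconds()

# A scheduler loop would sleep for this long, then launch the suite:
#   time.sleep(seconds_until_next_run(datetime.now()))
#   run_suite()   # hypothetical entry point to the automated suite
wait = seconds_until_next_run(datetime(2011, 5, 1, 23, 30))
```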

Running tests as part of a Continuous Integration cycle


Continuous Integration (CI) is a term used to describe a frequent, automated build
process that also integrates automated testing (typically, sanity tests). This build is often
set to trigger when new or updated code is checked into configuration management, or
is scheduled to execute at least once a day. This frequent and automated process helps
to identify bugs the moment they enter the system, and given that the number of
introduced code changes between builds is relatively small, the test results analysis
may be easier and more effective.
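The check-in-triggered behavior described above can be sketched as a simple trigger rule: a build-and-sanity-test cycle fires only when the repository revision changes. The revision strings and the build_and_sanity_test callable below are stand-ins for a real configuration management query and build step, not a real CI tool's API.

```python
# Sketch of a check-in-triggered CI cycle.
def should_trigger(last_built_revision, current_revision):
    """A new build is needed whenever new code has been checked in."""
    return current_revision != last_built_revision

def ci_cycle(state, current_revision, build_and_sanity_test):
    # Run the build and sanity tests only on a revision change.
    if should_trigger(state.get("built"), current_revision):
        state["result"] = build_and_sanity_test()
        state["built"] = current_revision
    return state

# Example: the first check-in triggers a build; an unchanged revision does not.
state = ci_cycle({}, "r101", lambda: "sanity tests PASSED")
state = ci_cycle(state, "r101", lambda: "should not run")
```

Most CI servers implement exactly this rule internally, usually alongside a once-a-day fallback schedule as the text describes.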

6.3.3 Automated Test Results Analysis


Test execution typically produces one or more of the following artifacts:

- Test execution logs
- Test execution reports
- Defect reports

Defect reports are often developed manually instead of automatically, but test
execution logs and reports are generally generated automatically. These items are
analyzed to gain an understanding of how the AUT is functioning. For more information
on reporting and results analysis, see Skill Category 12: Automated Test Reporting &
Analysis.

6.4 Resource References for Skill Category 6


What to Automate
- Brian Marick. When Should a Test Be Automated? Available at http://www.exampler.com/testing-com/writings/automate.pdf
- Bret Pettichord. Success with Test Automation. Available at http://www.io.com/~wazmo/succpap.htm

Image-based Tools, VNC and Image Comparison
- RealVNC. Available at http://www.realvnc.com/vnc/index.html
- Wikipedia. Eggplant (GUI testing tool). Available at http://en.wikipedia.org/wiki/Eggplant_(GUI_testing_tool)
- T-Plan Robot image comparison documentation. Available at http://www.t-plan.com/robot/docs/v2.2ee/gui/comparison.html

Virtualization
- Dan Downing and Dion Johnson. Six Critical Success Factors for Performance Testing Virtualized Systems webinar. Available at http://webinars.automatedtestinginstitute.com

Continuous Integration
- Martin Fowler. Continuous Integration. Available at http://www.martinfowler.com/articles/continuousIntegration.html
- Eric Pugh. Continuous Integration and Automation Go Together Like Peanut Butter and Jelly. Available at http://www.automatedtestinginstitute.com/home/ASTMagazine/2009/AutomatedSoftwareTestingMagazine_September2010.pdf

Skill Category 7: Quality Attribute Optimization

Primary roles involved: Test Lead, Lead Automation Architect


Primary skills needed: Understanding quality attributes, process improvement, making
automation decisions based on quality attributes

In automated testing, as with testing in general, there are a multitude of "best practices."
With that said, a "best practice" may not necessarily be a best practice for all
organizations. All best practices need to be evaluated in the context of the environment
in which they are to be applied. The ultimate goal for test automation is to have a quality
set of automated scripts that meets the testing needs of the organization while
realistically working within the constraints inherent in that organization.

Quality Attributes in test automation are those characteristics deemed important for a
particular test automation effort. At first glance, one might be tempted to judge all of the
quality attributes as equally important, but that is as unhelpful as it is unrealistic. Such
an approach will result in a cost-, time-, and resource-intensive automation effort that
will not guarantee increased quality; it will, however, almost guarantee a failed
automation approach. It is necessary to assess the environment and make informed,
thought-out decisions that result in a series of trade-offs determining which quality
attributes receive greater focus and which receive lesser focus.

7.1 Typical Quality Attributes


Short Answer
There are numerous quality attributes but among the most important in the context of
test automation are maintainability, portability, flexibility, robustness, scalability,
reliability, usability, and performance.

7.1.1 Maintainability
Maintainability represents the ease with which the automation framework or scripts can
be modified to correct errors, improve components, or adapt to changes in the AUT. It is
important to understand that maintainability is not just a property of the automated test
framework itself and the mechanisms that lend themselves to modifiability; it is also
relative to the skill level and experience of the test team members in the environment.
For example, for extremely technical test engineers, some scripts may be easier to
maintain than they would be for less technical test engineers. In addition, some scripts
may be easier to maintain if the automator has significant experience with the AUT.
These examples speak to the abstract concept of understandability. Framework
components in one test organization may be extremely understandable, and likewise
very maintainable. Placed in another test organization, that same framework may be
less understandable, and therefore not very maintainable.

Table 7-1: Measuring and Improving Maintainability

Measuring Maintainability:
Maintainability is typically measured by calculating the average amount of time it takes to update a test. In the event that the update is due to a failure, this may be called "mean-time-to-repair" (MTTR). This average may be an overall average for any type of change, or there may be several averages categorized by the type of change. If there is a change that is extremely infrequent and unlikely, it may make sense to exclude that change from the average.

Improving Maintainability:
- Increase understandability – Highly maintainable frameworks take into account less-skilled test engineers as well as those who are highly skilled and experienced. Techniques to increase understandability may include:
  ─ Introducing a formal automation framework training plan. More structured frameworks tend to require more training on how to implement the framework.
  ─ Producing detailed framework and script documentation. The more structured the framework, the greater the amount of documentation required to effectively maintain that framework.
- Increase modifiability – Some mechanisms that can make script and framework modifications easier include:
  ─ Decoupling data from code by using data-driven scripts – Separating data from code has a positive effect on modifiability, given that it is often easier to deal with data than with code. In addition, when data modifications are necessary, there is no need to search through the code to find the data that needs to be modified.
  ─ Using descriptive object map logical names – In the event that an object map is used, the logical names are easier to maintain when they are descriptive. Tools that provide an object map feature typically offer a default logical name for an object based on one of the physical properties of that object. Unfortunately, the default name is not always descriptive enough to provide an understanding of what object the logical name represents. Changing the default name to something more descriptive may increase modifiability. See Skill Category 9: Automation Objects for more information on object maps.
  ─ Increasing modularity – Modular programming is the term used to describe the creation of a single block of code (normally in the form of a function) that performs a commonly performed task. This promotes reusability, and thereby reduces the amount of code that needs to be maintained. Maintainability is also increased because it is typically easier to work with smaller blocks of code. Modularity can work against modifiability, however: if there are too many individual components that need to work together in order to conduct a single task, making modifications becomes increasingly difficult because other components may need to be modified in conjunction with the initially changed component.
  ─ Simplifying the code interface – More advanced frameworks tend to support simplified script interfaces by default, which normally improves maintainability. For example, the keyword framework makes scripts easier to read and easier to produce through a simplified interface that uses more natural language. Modularity also tends to simplify the language used for test development (see the examples in Section 6.2.1).
  ─ Increasing comments – Since comments are coupled with the actual code, they make it easier to find, understand, and modify specific blocks of code.
  ─ Adding detailed automation logs – Logs that provide details on automation actions and results can be very helpful in debugging and modifying scripts (see Skill Category 12: Automated Test Reporting & Analysis for more information on logs).
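The "decoupling data from code" mechanism in Table 7-1 can be sketched in a few lines: the test logic is written once, while the inputs and expected results live in an external table. The CSV rows and the login_check() stand-in for driving the AUT are illustrative assumptions.

```python
# Sketch of a data-driven script: one generic routine consumes every
# data row, so adding a test case means adding a row, not editing code.
import csv
import io

# In practice this table would be an external file maintained by testers.
TEST_DATA = io.StringIO(
    "username,password,expected\n"
    "alice,secret,success\n"
    "alice,wrong,failure\n"
)

def login_check(username, password):
    """Placeholder for driving the AUT; here a canned rule for illustration."""
    return "success" if password == "secret" else "failure"

results = []
for row in csv.DictReader(TEST_DATA):
    actual = login_check(row["username"], row["password"])
    results.append("PASS" if actual == row["expected"] else "FAIL")
```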

7.1.2 Portability
Porting is the process of adapting the automated test framework (or scripts) so that it
may be implemented in an environment that is different from the one for which it was
originally designed. This environment change may be a change in application
technology, servers, automation script programming language, automated test tools,
and the like. Decisions about portability typically result from weighing the cost of porting
against the cost of redevelopment.

Table 7-2: Measuring and Improving Portability

Measuring Portability:
- Average amount of time to port – Portability may be measured by calculating the average amount of time it takes to successfully implement the framework (or automated scripts) in a different environment. This average may be an overall average for any type of environment, or there may be several averages categorized by the type of environment.
- Porting vs. redevelopment – Portability may also be discussed in comparison to the cost of redeveloping the automation framework. This may be calculated as a percentage via the following equation:

  ((Redevelopment Cost – Porting Cost) / Redevelopment Cost) x 100%

  Example: If redevelopment cost is $10 while porting cost is $9, porting is 10% more cost effective.

Improving Portability:
- Separate test logic from test scripts – Highly defined functional decomposition and keyword frameworks do this very well. In these frameworks, automated tests are mostly created by stacking together several modular components, and these highly reusable modular components are responsible for executing the test steps. This improves portability by improving the chances of a seamless porting of the test logic (the bulk of the framework), accompanied by minimal updates to a handful of reusable components.
- Greater use of initialization and configuration parameters – Porting is often a simple matter of altering the way an application is called and/or the paths used to reference the application or framework components. By increasing the use of initialization and configuration parameters to reference items such as directory paths, web URLs, and search paths, the automation framework may be made more portable across technologies and drives.
- Implement in a root directory – Using a single directory to house all of the automation components, along with well-defined initialization parameters, helps ensure the framework is portable across hardware.
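The porting-vs-redevelopment comparison from Table 7-2 translates directly into code; the dollar figures below reuse the table's own example.

```python
# The Table 7-2 metric: how much cheaper porting is than redevelopment,
# expressed as a percentage of the redevelopment cost.
def porting_savings_percent(redevelopment_cost, porting_cost):
    """((Redevelopment Cost - Porting Cost) / Redevelopment Cost) x 100%."""
    return (redevelopment_cost - porting_cost) / redevelopment_cost * 100

# The table's example: $10 to redevelop vs. $9 to port.
savings = porting_savings_percent(10, 9)
```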

7.1.3 Flexibility
Flexibility refers to the ease with which the testing framework accommodates varying
execution needs. Not all test builds contain the same types of test protocols, test for the
same types of conditions, or evaluate the same environments or applications.
Depending upon the nature of a build, the time given to test it may vary from a few
minutes to a few weeks or more. Due to time or resource constraints, full regression
may not always be an option. A flexible framework conforms to the varying schedules
that testers are given from release to release, and even from build to build, and
accommodates various execution levels.

Table 7-3: Measuring and Improving Flexibility

Measuring Flexibility:
Flexibility may be measured by calculating the average amount of time it takes to select a subset of tests to be executed for a specific build. This figure may be an overall average for any type of build, or several averages categorized by the type of build.

Improving Flexibility:
- Identify different testing levels and priorities – By identifying the different types of builds that are common in the organization and associating a set of tests to be executed for each type of build for each application, a set of configuration parameters and/or driver files can be set up to easily run a selected group of tests based on the current build.
- Ensure a 1-to-1 mapping from manual to automated tests – When multiple test cases are handled by a single automated test script, it becomes increasingly difficult to pick specific test cases for execution based on the goals of the build. This can be remedied by having a 1-to-1 mapping from manual to automated tests.
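The driver-file idea in Table 7-3 might look like the following sketch, in which each build type is mapped to a pre-selected set of tests. The build types and test names are hypothetical; in practice the table would live in an external configuration or driver file.

```python
# Sketch of a driver table that associates build types with test sets,
# so selecting tests for a build is a lookup rather than a manual choice.
DRIVER_TABLE = {
    "sanity":  ["smoke_login", "smoke_search"],
    "patch":   ["smoke_login", "smoke_search", "regression_checkout"],
    "release": ["smoke_login", "smoke_search", "regression_checkout",
                "regression_reports"],
}

def select_tests(build_type):
    """Return the test list configured for this kind of build."""
    return DRIVER_TABLE[build_type]

tests_for_patch = select_tests("patch")
```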

7.1.4 Robustness
Robustness indicates the quality of an automated test with respect to its ability to
withstand system changes, both predictable and unpredictable, with minimal interruption
to execution and reporting mechanisms associated with automated test implementation.
The more robust the framework is, the greater the number and types of changes the
framework can effectively handle.

Table 7-4: Measuring and Improving Robustness

Measuring Robustness:
- Number of failed automated tests – Robustness may be measured by summing the number of automated tests that fail due to a single application defect. The lower the number, the more robust the framework.
- Mean-time-between-failure (MTBF) – MTBF is the average time during which an automated test execution runs without inappropriately halting and without failing to handle unexpected application occurrences. MTBF applies to robustness when failures are legitimate application or system failures (defects, a disconnected network, etc.). When the failures are related to faults in the automated framework or automated test itself, MTBF relates to reliability. The higher the MTBF, the more robust the framework.
- Number of uninterrupted test runs – The number of consecutive batch test runs that execute successfully without improperly terminating is an indicator of robustness. The higher the number, the more robust the framework.

Improving Robustness:
- Separate object names from object descriptions – Most commercial automated test tools provide an Object Map feature (see Section 9.2 for more information) that creates a logical representation for each physical description that is used for identifying objects in the AUT. This feature is most useful when the Object Map has a 1-to-1 mapping between a logical representation and an object in the application. When no Object Map is used, or when multiple logical representations for a given object exist in the application, modifiability is decreased. This is because an application's object properties may change frequently; a properly used Object Map can shield the framework from being greatly affected by these changes.
- Increase exception handling – The more extensive the exception handling, the more likely the framework will be able to adequately handle unexpected events.
- Narrow the test scope – This goes for both manual and automated tests. When a test has a single objective, it is less likely to fail due to peripheral functionality changes. If a single automated test is tasked with verifying a wide array of items, it becomes very easy for the script to fail, which may result in a large number of script failures during each test run and, in turn, make it difficult to assess the cause of script failure. Limiting a script to a single objective makes it less likely that large numbers of tests will fail during a test run, and makes it easier to quickly pinpoint the cause of failures.

7.1.5 Scalability
A scalable framework – one that can support testing when the size and scope of the
AUT (or component) expands or decreases – allows an automation framework to
seamlessly handle varying degrees of work and/or readily add or subtract framework
components. A highly scalable framework that functions properly as its scope of testing
changes brings value to the test effort by reducing the amount of change needed in the
framework itself to accommodate changes in the AUT.

Table 7-5: Measuring and Improving Scalability

Measuring Scalability:
Scalability may be measured by calculating the average amount of time it takes to successfully increase the workload of the framework or add/subtract framework components.

Improving Scalability:
- Update the object identification parameters – Most commercial automated test tools provide an Object Map feature (see Section 9.2 for more information) that creates a logical representation for each physical description that is used for identifying objects in the application under test. An object identification feature provides the ability to set the properties that will be used to uniquely identify an object that will be stored in the object repository. If the properties are not properly set up, it becomes difficult to add and maintain new, unique objects in the Object Map.
- Increase data separation – The more extensively the data-driven technique is used, the easier it will be to scale the framework components up and down. Scaling data-driven components is a matter of adding and taking away data from a data table. For example, the use of a driver file makes it easy to scale the script execution up or down by simply adding or removing scripts in a table.
- Increase parameterization – Parameters have a positive effect on scalability. For example, if a framework needs to be scalable enough to allow new applications to work with it, parameters make it possible to execute a different set of framework components based on the specific AUT.


7.1.6 Reliability
Reliability indicates the ability of the automated test framework to consistently perform
and maintain its intended functions in routine circumstances, as well as in hostile or
unexpected circumstances. Just as with the AUT, automated test frameworks are
software products and thus not immune to defects. A reliable framework has a minimal
number of defects that negatively impact the ability to dependably verify the application
functionality. In addition, a reliable framework offers a high degree of results integrity
over repeated test executions.

Table 7-6: Measuring and Improving Reliability

Measuring Reliability:
- Number of failed automated tests – Reliability may be measured by summing the number of automated tests that fail due to a single framework/script defect. The lower the number, the more reliable the framework.
- Mean-time-between-failure (MTBF) – MTBF is the average time during which an automated test execution runs without inappropriately halting and without failing to handle unexpected application occurrences. When the failures are related to faults in the automated framework or automated test itself, MTBF relates to reliability. (MTBF applies to robustness when failures are legitimate application or system failures.) The higher the MTBF, the more reliable the framework.
- Number of uninterrupted test runs – The number of consecutive batch test runs that execute successfully without improperly terminating due to framework/script defects is an indicator of reliability. The higher the number, the more reliable the framework.
- Number of false results – The number of false positives (a pass on a verification point that should have failed) and false negatives (a failure where a verification point should have passed) is an indication of reliability. The lower the number of false results, the more reliable the framework.

Improving Reliability:
- Increase exception handling – Unexpected events in the application that are not handled properly often result in false positives and/or false negatives. Increasing error handling may make the framework more reliable.
- Narrow the test scope – Tests with multiple objectives tend to have a lot of failures. Many of these failures are often deemed unimportant and are therefore overlooked, given that they are not always at the heart of what the test is most concerned with verifying. Making a habit of purposely overlooking errors increases the risk of overlooking a legitimate error, and the automated test results lose reliability. Limiting a script to a single objective makes it less likely this becomes a problem.
- Increase framework testing – Because the framework is itself a software product, testing may help to assess its quality like that of any other software product. Of course, the tests performed on the framework will not equal the amount of testing performed on the AUT, but simple positive tests, negative tests, and batch tests may help to improve reliability.
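Two of the reliability measures in Table 7-6 can be sketched as simple computations over execution records. The per-run record format and the sample numbers are assumptions for illustration; real figures would come from the framework's logs.

```python
# Sketch of two Table 7-6 reliability measures computed from run records.
def false_result_count(runs):
    """Total false positives plus false negatives across all runs."""
    return sum(r["false_positives"] + r["false_negatives"] for r in runs)

def mtbf_hours(run_hours, framework_failures):
    """Average execution time between framework/script failures."""
    return run_hours / framework_failures if framework_failures else run_hours

# Hypothetical records from two test runs.
runs = [
    {"false_positives": 1, "false_negatives": 0},
    {"false_positives": 0, "false_negatives": 2},
]
false_results = false_result_count(runs)
mtbf = mtbf_hours(run_hours=120, framework_failures=4)
```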

7.1.7 Usability
Usability signifies the ease with which test engineers can employ the automated test
framework for their intended purposes. If the framework has a well-defined separation of
roles, usability judges the ease with which each role is able to perform the tasks for
which that role is responsible.

Table 7-7: Measuring and Improving Usability

Measuring Usability:
- Time per task – Usability may be measured based on the amount of time that it takes to effectively perform a specific task within the framework.
- Training time – Highly usable frameworks are fairly intuitive. One indicator of intuitiveness is the amount of training time required before a test engineer can use the framework effectively. The lower the training time, the more intuitive and usable the framework. Keep in mind that this figure does not include advanced training used to help increase tool proficiency.

Improving Usability:
- Identify different testing levels and priorities – By identifying the different types of builds that are common in the organization and associating a set of tests to be executed for each type of build for each application, a set of configuration parameters and/or driver files can be set up to easily run a selected group of tests based on the current build. Executing tests in the framework is therefore easier, so the measure of usability is increased.
- Simplify the code interface – More advanced frameworks tend to do this by default. By making the script interface a little simpler, maintainability is normally improved. For example, the keyword framework makes scripts easier to read and produce through a simplified interface that uses more natural language. The more natural the language used for automated test development, the easier maintenance tends to get. Modularity also tends to simplify the language used for test development. See Section 6.2.1 for more information on code interfaces.

7.1.8 Performance
Just as software performance is treated as a non-functional requirement for
applications, performance standards and requirements for automated tests should also
be carefully considered. Certainly, one of the advantages of automating tests is that
execution is typically faster than manual execution. If the performance of automated
tests is compromised by factors such as an insufficient environment (e.g., poor network
performance), poorly optimized test scripts, inefficient test protocols, or errors in
configuration management or version control, the time saved through test automation
may be drastically reduced.

Table 7-8: Measuring and Improving Performance

Measuring Performance:
Performance may be measured in terms of the average execution time per script in the framework, or it may be calculated as the total execution time per test suite.

Improving Performance:
- Increase exception handling – Unexpected events that are not properly handled tend to breed errors, and these cascading script errors are normally more time- and resource-intensive for the automated test tool/framework than a passed step is. Increasing the amount and depth of exception handling may increase performance.
- Improve manual test design – Duplication in manual tests tends to find its way into automated tests when the recommended 1-to-1 mapping from manual to automated tests is followed. Eliminating unnecessary duplication in manual test design will ultimately eliminate duplication in automated tests.

See Section 6.3.2 for additional performance improvement suggestions.
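The two performance measures in Table 7-8 reduce to simple arithmetic over recorded timings. The script names and times below are illustrative; real numbers would come from the execution logs.

```python
# Sketch of the Table 7-8 performance measures over per-script timings
# (in seconds), e.g. as harvested from automation logs.
def suite_total(timings):
    """Total execution time for the whole test suite."""
    return sum(timings.values())

def average_per_script(timings):
    """Average execution time per script in the framework."""
    return suite_total(timings) / len(timings)

timings = {"login_test": 30.0, "search_test": 50.0, "checkout_test": 40.0}
total = suite_total(timings)
average = average_per_script(timings)
```

Tracking these figures build over build makes a performance regression in the automation itself visible, just as it would be for the AUT.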

7.2 Ranking Frameworks in Terms of Quality Attributes

Short Answer
Each of the different types of frameworks has its own set of inherent strengths and
weaknesses. Typically, as the framework increases in maturity, quality attribute
strengths relative to automated test development tend to increase.

Table 7-9: Framework Quality Attribute Rankings revisits each of the automation
framework types, and identifies the inherent strengths and weaknesses of each.
Table 7-9: Framework Quality Attribute Rankings

Linear (Record & Playback)
  Potential Strengths:
  - Maintainability (when scope is extremely small)
  - Usability (when scope is extremely small)
  - Performance (when functionality is simple)
  Weaknesses:
  - Maintainability (when scope is moderate to complex)
  - Portability
  - Flexibility
  - Robustness
  - Scalability
  - Reliability
  - Usability (when scope is moderate to complex)
  - Performance (when functionality is moderate to complex)

Data-driven
  Potential Strengths:
  - Maintainability (when scope is extremely small)
  - Scalability (with respect to test-specific data)
  - Usability (when scope is extremely small)
  - Performance (when functionality is simple)
  Weaknesses:
  - Maintainability (when scope is moderate to complex)
  - Portability
  - Flexibility
  - Robustness
  - Scalability
  - Reliability
  - Usability (when scope is moderate to complex)
  - Performance (when functionality is moderate to complex)

Functional Decomposition
  Potential Strengths:
  - Usability when:
    ─ Scope allows simple script development standards
    ─ Organizational processes such as documentation and communication are at least moderate
    ─ Personnel are sufficiently technical
  - Maintainability
  - Portability
  - Flexibility
  - Robustness
  - Scalability
  - Reliability
  - Performance
  Weaknesses:
  - Usability when:
    ─ Scope requires numerous script development standards
    ─ Organizational processes are weak
    ─ Automation framework development personnel (lead automation architects) and/or automated test script development personnel (automation engineers) are not sufficiently technical

Keyword Driven
  Potential Strengths:
  - Maintainability
  - Portability
  - Flexibility
  - Robustness
  - Scalability
  - Reliability
  - Usability when:
    ─ Organizational processes such as documentation and communication are strong
    ─ Automation framework development personnel (lead automation architects) are extremely technical
    ─ Automated test script development personnel (automation engineers) are relatively non-technical
  Weaknesses:
  - Usability when:
    ─ Organizational processes are moderate at best
    ─ Automation framework development personnel (lead automation architects) are not sufficiently technical
    ─ Automated test script development personnel (automation engineers) are relatively technical
  - Performance

Model-based
  Special Case:
  ─ Framework is best when technical personnel are strong
  ─ Scope of automation is exploratory testing

7.2.1 Linear (Record & Playback)


The Linear framework is most useful when the automation scope is relatively small.
Maintainability, Usability, and Performance are all strengths for this type of framework
when a small scope is involved. The Linear Framework does not have much of a
structured set of components; this low number of components and component
interactions, coupled with the low number of tests (and other elements of scope)
involved in a small-scope automation effort, makes maintenance relatively simple. This
simple nature also contributes to the ease of use and to the high performance of the
scripts.

As the scope gets larger, Maintainability, Usability, and Performance suffer. Without
significant increases in modularity and other well-defined framework attributes, it will be
difficult to maintain a group of scripts that attempts to address a large automation scope.
Usability and Performance suffer for similar reasons.

The other quality attributes – Scalability, Portability, Flexibility, Robustness, and
Reliability – tend to be weak in Linear frameworks regardless of the size of the scope.

7.2.2 Data-driven
Data-driven frameworks tend to be largely based on the Linear framework, and they
therefore have similar strengths and weaknesses. The main difference is that a modest
level of reuse is introduced by parameterizing the scripts with data stored in an external
file. This distinction tends not only to make Maintainability a little stronger, but also
causes Scalability to become a factor with respect to data: Scalability tends to be strong
where data is concerned, but still tends to be weak overall.

7.2.3 Functional Decomposition


Functional Decomposition may be considered the default framework type. If unsure
about which framework to use, the functional decomposition framework is probably the
best place to start. The functional decomposition framework has the ability to address
all quality attributes, but how they are addressed depends on the components that
make up the framework. In its simplest form, this framework type is strong with respect
to Usability when the scope allows for simple standards and when the organization has
a sufficient maturity level (especially in documentation and communication processes).
In addition, test engineers must have sufficient technical abilities.
As the standards grow and/or the maturity level of the organization drops, Usability
suffers.

7.2.4 Keyword-driven
Most of the strengths and weaknesses associated with functional decomposition also
apply to the keyword-driven framework. The primary difference is that the strength of
the Usability quality attribute does not hinge on all resources having moderate technical
proficiency. The resources responsible for developing tests can have moderate
technical skills, but the resources responsible for maintaining the framework typically
need much stronger technical skills. The keyword-driven framework therefore tends to
be more usable than the functional decomposition framework when there is a significant
division of responsibilities and skill levels: the highly technical resources build and
maintain the framework, while the less technical resources implement the framework to
create application-specific tests. Without such a division of labor, the keyword-driven
framework can tend toward being overkill, and not very usable.

7.2.5 Model-based
The model-based framework cannot really be discussed in the same terms as the other
framework types because its purpose and objective are completely unique. The scope
of the model-based framework is normally to explore the application, as opposed to just
automating existing manual tests. So when the scope of automation is to introduce
automated exploratory testing, and technical resources are strong, model-based is the
appropriate framework. Model-based is often used as a supplement to one of the other
framework types.

7.3 Identifying Key Organizational Quality Attributes and Selecting a Framework Type

Short Answer
Selecting a framework based on an organization's quality attributes is not an exact
science. Therefore, the best way to relay an approach for accomplishing this task is
through scenarios.

One of the main challenges in selecting a framework is in identifying what the desired
quality attributes mean and indicate for test automation within a given organization.
Developing a methodical approach for associating automation framework quality
attributes to a particular automated test framework is important for successful
automation implementation within an organization. While Section 7.1 provides a
description of typical quality attributes, and some guidance for how to artfully make
real-time, optimal automation implementation choices in a constrained environment, this
section addresses the actual categorization of quality attributes at a high level and
how it may be accomplished by reviewing the overall automation scope. In addition, the
use of the quality attributes for selecting a framework type is touched upon.

Figure 7-1: Quality Attribute Optimization Examples [21]

[21] Fewster, Mark, and Dorothy Graham. Software Test Automation: Effective use of
test execution tools. Reading, MA: Addison-Wesley, 1999.


At this point in automation history, this process is much less about following a scientific
approach, and more about making educated, analytical decisions based on information
provided. Such analytical decision making may best be illustrated via a scenario.

Quality Attribute Scenario


Assigning values to the scope elements identified in Section 4.1 may yield the following
scenario:
AN = Framework must support one application
VN = The application is expecting two major releases a year
BN = The application is expecting a weekly build over two months (for each major
release) with mainly bug fixes and minor functional changes
TN = 75 tests will be automated, then maintained and updated over time
CN = Testing will be conducted on two configurations (platforms) – Windows
Vista and Windows 7
EN = Testing will occur in two environments – Test and Development
Ti = Testing will be done indefinitely (as long as project continues to receive
funding)
P = Organizational processes are moderate
R = Resource technical level is moderate

In this environment, Maintainability and Portability are ranked high because the
project will run indefinitely, and tests will be executed on multiple configurations and
environments. Reliability is also ranked high because the window of test execution is
small (each week for two months). Flexibility is given a medium ranking: although 75
tests is not that small a number, in this environment it shouldn't be too difficult to pick
out tests on a whim that need to be executed in the event that an abbreviated testing
cycle is required. Usability and Performance are given a medium ranking, because
the test count is moderate and not expected to expand too significantly, but there is a
need to quickly execute and analyze results given the tight execution window.
Scalability will probably get a low ranking, because there will be little need to regularly
add components.


Based on this information, and what we know about framework strengths, a moderately
complex Functional Decomposition framework is recommended. Heavy focus should
be placed on developing strong components for maintenance, portability and reliability.
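Though framework selection is ultimately a judgment call, the ranking exercise above can be sketched as a simple scoring routine. The attribute names, scores, and selection rule below are illustrative assumptions, not TABOK prescriptions:

```python
# Hypothetical sketch: map ranked quality attributes to a framework suggestion.
# The scoring values and the selection heuristic are illustrative assumptions.

RANK_SCORE = {"high": 3, "medium": 2, "low": 1}

def suggest_framework(rankings):
    """Return a framework suggestion from attribute rankings (illustrative rule)."""
    score = sum(RANK_SCORE[r] for r in rankings.values())
    if score <= len(rankings):          # everything ranked low -> small scope
        return "Linear"
    if rankings.get("Maintainability") == "high":
        return "Functional Decomposition"
    return "Data-driven"

# The scenario rankings from the text above.
scenario = {
    "Maintainability": "high", "Portability": "high", "Reliability": "high",
    "Flexibility": "medium", "Usability": "medium", "Performance": "medium",
    "Scalability": "low",
}
print(suggest_framework(scenario))  # Functional Decomposition
```

In practice such a rule would be refined per organization; the point is only that the rankings, once made, can drive a repeatable selection step.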

7.4 Resource References for Skill Category 7


Quality Attributes
Bret Pettichord. Seven Steps to Test Automation Success. Available at
http://www.io.com/~wazmo/papers/seven_steps.html
David Fern. The Port Authority. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2009/AutomatedSoftwareTestingMagazine_November2009.pdf
Fredrick Rose. Be The Energy Star of Test Automation. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedSoftwareTestingMagazine_March2010.pdf

Choosing a Framework
Michael Kelly. Choosing a Test Automation Framework. Available at
http://www.michaeldkelly.com/pdfs/Choosing_a_Test_Automation_Framework.PDF
John Kent. Generations of Automated Test Programs Using System Models.
Available at
http://www.simplytesting.com/Downloads/Test-Automation-Generators.pdf



TABOK Segment 3

Microscopic Process Skills

This section discusses in detail the low-level process knowledge required for
successfully implementing a test automation effort.






Skill Category 8: Programming Concepts

Primary role involved: Lead Automation Architect, Cross Coverage Coordinator,


Automation Engineer
Primary skills needed: Logic, understanding the different types of languages,
coding/programming

Whether using a tool with a scripting language, tree structure and/or keywords,
fundamental programming concepts remain paramount in effectively automating tests
and increasing the effectiveness of the test through increased system coverage and test
flexibility. Concepts such as variables, control flow statements (if..then..else, for..next,
etc.), and modularity are discussed in this category.

8.1 Developing Algorithms


Short Answer
If an algorithm can be identified for repeatedly accomplishing a single task, there is a
good chance that task can be automated. Therefore, understanding the basics of
developing algorithms is essential for test automation.

An algorithm is a set of rules or steps aimed at solving a problem, and it is at the heart
of computer programming and thus software test automation. Algorithms are the
blueprint for developing an effective automated solution so it is imperative that
automators have a basic understanding of how to create one. The understanding and
development of algorithms has itself been the subject of many books and classes, so it
will obviously not be covered ad nauseam in this section. There is a basic set of steps,
however, that can be useful for developing an effective algorithm for test automation
scripting:
1. Identify the problem – A good understanding of the overall purpose and goal of
the script including its inputs and desired outputs is an important first step.
2. Analyze the problem – The problem must next be logically deconstructed, so that
the solution may later be reconstructed in the form of an automated script. This
involves identifying relationships between the inputs and outputs, understanding
constraints of the target system, automated test framework, and automated test
tool. This also involves understanding the differences between how the system is
accessed manually versus how it is accessed via the framework and test tool.
3. Create a high-level set of steps to accomplish the stated goals in the form of a
flowchart (See Figure 12-4: Automated Test Results Analysis for a sample
flowchart) or pseudocode (See Figure 8-1: Sample Pseudocode for Data-Driven
Invalid Login Test for sample pseudocode).
4. Walk through the algorithm using one or more real-life scenarios (both negative
and positive), and add additional detail as necessary until the walkthrough
reaches a successful end.
Much of the work involved in developing an algorithm for automating a functional test is
accomplished during the development of manual test procedures. This provides a good
justification for having a well-defined set of manual test procedures prior to automating
tests against an application.

Open AUT to Login screen
Open Data File
Execute data rows to the end of file
    Input <LoginName> data table value into Login Name textbox
    Input <Password> data table value into Password textbox
    Click Login button
    Verify error message appears
Loop to next data row

Figure 8-1: Sample Pseudocode for Data-Driven Invalid Login Test
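The pseudocode in Figure 8-1 can be sketched as a runnable data-driven test. The driver methods below (open_login_screen, attempt_login, error_shown) are hypothetical stand-ins for real AUT automation calls, not any particular tool's API:

```python
# Hypothetical sketch of Figure 8-1 as a data-driven invalid-login test.
import csv, io

def run_invalid_login_test(data_file, driver):
    """Loop over credential rows; collect any login that is NOT rejected."""
    failures = []
    for row in csv.DictReader(data_file):
        driver.open_login_screen()
        driver.attempt_login(row["LoginName"], row["Password"])
        if not driver.error_shown():
            failures.append(row["LoginName"])
    return failures

class FakeDriver:
    # Stub driver that rejects every login attempt, for illustration only.
    def open_login_screen(self): pass
    def attempt_login(self, name, password): self.last_name = name
    def error_shown(self): return True

data = io.StringIO("LoginName,Password\nbaduser,badpass\nghost,1234\n")
print(run_invalid_login_test(data, FakeDriver()))  # [] -> no failures
```

Note how the loop body mirrors the pseudocode steps one for one, which is the payoff of writing the algorithm before the script.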


8.2 Scripting vs. Compiled Languages


Short Answer
Programming languages generally fall into one of two categories: compiled or scripting
languages. Compiled language code is converted at design time to a set of
machine-specific instructions and saved as an executable file. Scripting languages (also
known as interpreted languages) are typically saved in a text file at design time, and
converted at run-time to a set of machine-specific instructions. Compiled language code
typically runs faster than scripting language code because compiled languages are
designed for performance while scripting languages are designed for ease of development.

One difference between scripting languages and compiled languages is in the


compilation process. Many automated test solutions use a scripting language. Simply
put, scripting languages (also known as interpreted languages) are essentially code that
is stored in text files and converted to machine language at runtime. VBScript,
JavaScript and Ruby are three common scripting languages. After a programmer writes
the script code, the code is compiled by a script engine when the script is run; this is
known as runtime compilation.
By contrast, compiled languages are essentially programming language code that is
compiled (converted) into binary code. When using a compiled language, such as C# or
Visual Basic, a programmer writes code, and then compiles it prior to the code being
run. This is known as design time compilation.
Another difference between the two types of languages is in the way code is developed.
Compiled languages typically require a relatively complex Integrated Development
Environment (IDE) while scripting language code may be developed and maintained in
a simple text editor, such as Notepad, and later run in a script host.

8.3 Variables
Short Answer
A variable is a container for storing and reusing information in scripts. Referencing a
variable by name provides the ability to access its value or even change its value.

Variables store information that may change dynamically while the program runs, or
may be changed by the automator at design time.


Typically there are three steps to using a variable:


1. Declaring variables
2. Assigning values
3. Referencing variables
Declaring Variables
Declaring a variable involves designating the name of the variable and the type of data
that may be stored in that variable. For example, some variables can only store
numbers while some variables can only store alpha characters.
Some languages do not require an explicit declaration. This means that the data type is
assigned automatically when a value is assigned to the variable; the variable name
must still be designated, however.
Assigning Variables
The specific language and environment dictate how variables are assigned. Typically a
value is assigned to a variable by using the equal sign ('=') as follows:

nameCounter = 4

In this example, the number 4 is assigned to the variable called nameCounter.

Referencing Variables
Whenever the nameCounter variable is referenced by the script, the number 4 is
returned. Therefore, the variable may be used in the same way that the number 4 may
be used.
For example, the nameCounter variable may be used in a multiplication problem as
shown below:

nameCounter * 2

This is equivalent to:

4 * 2

If the variable nameCounter is reassigned to a new value, that new value will be used
in subsequent processing.
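The three steps can be sketched in Python, which infers the data type from the assigned value, like the implicitly declared languages described above:

```python
# Declaring/assigning and referencing a variable; Python infers the data
# type from the assigned value (implicit declaration).
nameCounter = 4            # declaration and assignment in one step
doubled = nameCounter * 2  # referencing the variable in an expression
print(doubled)             # 8

nameCounter = 10           # reassignment: later references use the new value
print(nameCounter * 2)     # 20
```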

8.4 Object-oriented Concepts


Short Answer
Object-oriented concepts form the heart of automated testing since objects are the
building blocks of many AUTs. Many of the tools and languages used for test
automation are based in object-oriented concepts.

Object-oriented programming is a type of programming in which developers can define
classes and objects that contain properties, operations (also known as methods) that
can be applied to the object, events, and collections.
Classes are templates for multiple objects with different features or characteristics.
Properties are simply object variables that may be maintained throughout the life of the
object.
Methods are repeatable actions that may be called by the script. There are two types of
methods: functions and sub-procedures. Functions typically return values while
sub-procedures do not. Events are also repeatable actions but, unlike methods, are
called based on an action performed on the AUT by a user (e.g., mouse click, scroll,
etc.). A collection is a set of data or objects that are of the same type.
To help explain this further, let us examine the class illustrated in Figure 8-2.


Class: Button
    Properties: Shape, Text
    Methods: Click

Object 1: R_Button              Object 2: O_Button
    Properties:                     Properties:
        Shape: Rectangle                Shape: Oval
        Text: R_Button                  Text: O_Button

Collection: Buttons

Figure 8-2: Class, Object, Properties, Methods, Collections Illustration

The class in the figure is a Button class and has two properties that define the features
of all buttons: Shape and Text. The two objects (i.e., instances of the class) both have
shape and text properties, but the values for shape and text are different for each
button object. One button object (Text: R_Button) is rectangular in shape; the other
button object (Text: O_Button) is oval in shape. Both button objects respond to the
method Click; this is inherited from the overarching class: Button. The two button
objects collectively make up what is called the Buttons collection. The collection allows
the automator to refer to each button based on its position in the group. Therefore,
Object 1 can be referred to in two ways:
Its collection location – Buttons Collection object 1
Its properties – The button that has a rectangular shape, and is labeled
R_Button.
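As one illustration, Figure 8-2 might be rendered in Python roughly as follows. The class and property names mirror the figure, not any particular tool's API:

```python
# Hypothetical Python rendering of Figure 8-2: a Button class, two instances,
# and a Buttons collection.
class Button:
    def __init__(self, shape, text):
        self.shape = shape      # property: Shape
        self.text = text        # property: Text
        self.clicked = False

    def click(self):            # method shared by all Button instances
        self.clicked = True

r_button = Button("Rectangle", "R_Button")
o_button = Button("Oval", "O_Button")
buttons = [r_button, o_button]  # the Buttons collection

buttons[0].click()              # refer to a button by its collection position
print(r_button.clicked)         # True
```

Both instances share the click method defined on the class, while each carries its own Shape and Text property values, exactly as the figure describes.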


8.5 Control Flow Functions


Short Answer
Control Flow functions provide the automator with greater control over how a script is
executed. Each line of an automated script is normally executed according to the order
in which these lines exist within the script. Control Flow functions allow for these lines to
be executed in an alternate way based on the needs of the test.

Control Flow functions provide control over the order in which the code is executed.
They also determine whether or not specific blocks of code get executed at all. This is
critical for many functions, such as exception handling. Normally, a script executes
every line in sequence. Often, however, this sequential execution is not desirable.
Sometimes, certain code should only be executed under specific conditions and at other
times some lines of code should be executed multiple times.
The two main categories of constructs that provide the automator with the ability to
control how the code is executed are Branching Constructs (e.g., if-then, case-select)
and Looping Constructs (e.g., while, for).

8.5.1 Branching Constructs


Also known as conditionals, branching constructs allow for the execution of different
automated test statements depending on whether an evaluated condition is found to be
true or false. The most basic branching constructs that are critical to effective test
automation are:
If-then construct
Case construct
The syntax of each depends on the tool and language used for test automation. That
said, each may be described in terms of their basic structure.

If-Then Construct
If-Then statements use a Boolean condition to determine which code block to
execute. The construct is structured as follows:

If (condition) Then
    <statements>
Else
    <statements>
End If

The condition can be any statement that is evaluated as either true or false. For
example, the condition results of 4 > 5 would evaluate to false, because 4 is not greater
than 5. When the condition is true, the first set of statements (those following the Then
keyword) is executed. Otherwise, the next set of statements (those following the Else
keyword) is executed.
Keep in mind that the Else keyword is optional. If the Else is not included, then only
the If-Then statements are executed. When the condition is not true, execution ends
and the control continues to the next line of code after the construct.
The If-Then construct may be further expanded by introducing the ElseIf keyword.
The ElseIf keyword makes it possible to build a nested set of conditions to evaluate.
The structure of the If-Then statement with ElseIf included is as follows:

If (condition) Then
    <statements>
ElseIf (condition) Then
    <statements>
ElseIf (condition) Then
    <statements>
...
Else
    <statements>
End If

Only the statements following the first true condition are executed while all other
statements within the construct are skipped. The statements of the final Else will be
executed if none of the conditions are true.
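The same construct can be sketched in Python, which spells the ElseIf keyword as elif; the response-time thresholds below are purely illustrative:

```python
# If / ElseIf / Else written in Python (if / elif / else); the thresholds
# are an illustrative stand-in for a real verification rule.
def classify_response_time(ms):
    if ms < 200:
        return "fast"
    elif ms < 1000:
        return "acceptable"
    else:
        return "slow"

print(classify_response_time(150))   # fast
print(classify_response_time(1500))  # slow
```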

Case Construct
The Case Construct may also be known as the Switch or Select statement. It allows the
value of a variable or expression to determine what automated statements will be
executed.
In most implementations, the Case construct begins with the Select (or Switch)
keyword. This is followed by either a variable or test expression – known as a control
variable – to be evaluated. The value of the controlVariable dictates which Case
statement set executes. For example, if the value of the controlVariable equals
the value of expression2, the second set of Case statements executes. If the
value of the controlVariable does not match any of the expressions, the
statements in the Case Else block (if present) are executed.
The Case construct may be structured as follows:

Select Case (controlVariable)
    Case <expression1>
        <statements>
    Case <expression2>
        <statements>
    ...
    Case Else
        <statements>
End Select

Case statements are typically used in lieu of using an If-Then construct that has
numerous ElseIf branches.
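Some languages have no Select Case statement at all; in Python, for instance, a dictionary keyed by the control value is a common equivalent. The keyword names below are illustrative stand-ins for keyword-driven test dispatch:

```python
# A Case-like dispatch in Python using a dictionary keyed by the control value.
def dispatch(keyword):
    actions = {
        "login": "run login steps",     # Case "login"
        "logout": "run logout steps",   # Case "logout"
    }
    return actions.get(keyword, "unknown keyword")  # the Case Else default

print(dispatch("login"))    # run login steps
print(dispatch("restart"))  # unknown keyword
```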

8.5.2 Looping Constructs


Looping constructs, also known as iterators, provide a mechanism for a single set of
statements to be executed multiple times. The statements within the loop are executed
a specified number of times or until some condition is met.
The basic looping constructs include:
For construct
While construct
The syntax of each depends on the tool and language used for test automation. That
said, each may be described in terms of their basic structure.
For Construct
A For loop construct has an explicit loop counter or loop variable and is typically used
when the number of iterations is known before entering the loop. The structure of the
For construct is as follows:

For i = 1 to 20
    <statements>
Next

In this example, the variable i takes on the values 1, 2, 3…20, until the loop has been
executed 20 times. When i exceeds 20, the statements following the For loop are
executed.
While Construct
The While construct executes the loop based on a Boolean condition. It is often
structured as follows:

While (condition)
    <statements>
Loop

The condition is a statement that is evaluated as true or false. When the condition is
true, the statements within the While construct continue to be executed. Otherwise, the
execution moves on to the statements following the looping construct.
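Both looping constructs can be sketched in Python; the For bounds match the "For i = 1 to 20" example above, while the countdown condition is illustrative:

```python
# For and While loops in Python form.
total = 0
for i in range(1, 21):   # i takes the values 1..20, as in "For i = 1 to 20"
    total += i
print(total)             # 210

countdown = 3
while countdown > 0:     # statements repeat while the condition is true
    countdown -= 1
print(countdown)         # 0
```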

8.6 Custom Function Development


Short Answer
Custom functions are important for making an automated testing framework more
modular and reusable.


When creating automated scripts, there will often be activities that need to be executed
multiple times in varying locations. These activities may be application-agnostic
activities, such as calculating the difference between two numbers obtained from the
system at runtime. The activities may also be application-specific activities, such as
entering information that will log the user into the application. If the programming
language or tool being used doesn't offer a function to accomplish the specific activity
at hand, it normally will provide the capability of creating a custom, user-defined
function.
A function is a block of code within a larger script or program that executes a specific
task, but while it is part of the larger script it operates independently of that script. It is
executed not according to where it is located in the script, but rather based on where it
is "called" within a script, and it typically allows for arguments and return values.
Arguments are values that are entered into the function, and the function has the liberty
of using and even altering the values. Return values are data that come out of the
function and may be used by the calling script.
Functions are often structured as follows:

Function AdditionFunction(digit1, digit2)
    retValue = digit1 + digit2
    Return(retValue)
End Function

AdditionFunction is the function name, while digit1 and digit2 are the
function arguments. The return value is represented by retValue. This function may
be used by the calling script in the following manner:

sumValue = AdditionFunction(3, 2)

The AdditionFunction will use the number 3 in the variable digit1 and the
number 2 in the variable digit2. It will calculate 3 + 2 and return 5 to the variable
sumValue.
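The AdditionFunction example translates to Python roughly as follows:

```python
# The AdditionFunction example in Python: arguments in, return value out.
def addition_function(digit1, digit2):
    ret_value = digit1 + digit2
    return ret_value

sum_value = addition_function(3, 2)  # the calling script passes the arguments
print(sum_value)  # 5
```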


8.6.1 Development Approaches


Generally, there are two basic approaches to developing test scripts that
utilize several functions: top-down and bottom-up.
Top-down script development involves creating a script that sequences all the major
functions to accomplish its overall objective. Later, as more is known about the
application and/or more of the application is available to an automator, the major
functions are created and stored to be called by the previously developed script.
Bottom-up script development is the opposite. It involves the creation of the major
functions first, and then the code components are combined in the main script to
achieve the overall test objective.
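A minimal top-down sketch in Python might look like the following; all function names are illustrative stubs rather than a real framework API:

```python
# Top-down sketch: the main script sequences the major functions first; the
# stub bodies get fleshed out later as more of the AUT becomes available.
def login(user):          # stub to be implemented later
    return f"logged in as {user}"

def create_record(name):  # stub to be implemented later
    return f"created {name}"

def main_test():
    # The overall sequence is written before the functions are fully built.
    return [login("tester"), create_record("order-1")]

print(main_test())
```

Bottom-up development would reverse the order: the login and create_record functions would be completed and verified first, and main_test assembled from them afterward.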

8.7 Resource References for Skill Category 8


Flowcharts and Algorithms
Wikipedia. Flowchart. Available at http://en.wikipedia.org/wiki/Flowchart

Disposal and Memory Cleanup
J.L. Perlin. Automation Supercharged. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedSoftwareTestingMagazine_December2010.pdf

Scripting Languages
Dion Johnson. The Magic Of Automating With Scripting Languages (June 2005).
Available at http://www.softwaretestpro.com/Publication/p/STPM/2005


Skill Category 9: Automation Objects

Primary role involved: Lead Automation Architect, Cross Coverage Coordinator,


Automation Engineer
Primary skills needed: Object-oriented programming concepts, implementation of
regular expressions, an understanding of objects, object maps and object models

Objects are a central element in automating applications. If the tool or script cannot
access objects, there's a good chance that the application cannot be effectively
automated. A lack of proper understanding of objects and object behavior may at times
result in automation issues and failures. This section builds on the foundation of key
terminology and concepts introduced in Section 8.4.

9.1 Recognizing Application Objects


Short Answer
Once upon a time, functional software test automation primarily used an "analog"
approach that relied on specific coordinate locations of a screen or application window
for performing mouse and keyboard operations that simulated user activities on the
application. This was usually unreliable and difficult to maintain: any change in the
locations of screen, window, or application components would cause the test to fail
because the conditions under which the objects were identified no longer existed.
Modern test automation locates objects on the screen based on properties that
define each object, and then performs the desired operations once the object is found.

The object-oriented approach to test automation helps increase the reliability and
robustness of automated tests. It also increases the responsibility of the test automator.
Since most application objects have multiple properties that can be used to uniquely
identify an object on the screen, it is the responsibility of the automator to determine
which properties to use for test development. Using the example illustrated in Figure
8-2, the Shape property, the Text property, the Class property, the Collection property,
or some combination of all of these may be used to reference the specific button on the
screen. The key for test automators is analyzing the AUT and determining which
properties are necessary to uniquely identify a particular type of object. Using the wrong
set of properties (or too few properties) will not sufficiently identify a unique object,
while using too many properties will make the automated tests too susceptible to application
changes, thus decreasing the framework's robustness. Whenever the property values of
objects in the application change, the object properties used in the automated script for
identifying application objects must also change. If the object property values are not
consistent between the AUT and the automated script, failure will result when the script
attempts to identify and automate the object in the application. For each type of object
that may be automated, an automator must make an assessment of which properties to
use for uniquely identifying the object.
Some general guidelines for identifying object properties include the following:
Choose object properties that are descriptive. For example, if an object has a
property called ID that equals 5jf4f and a Name property that equals
AddressField, the Name property is a better choice.

Choose object property combinations that are likely to uniquely distinguish the
object from all other objects on the screen.
Choose object property combinations with properties that have as little dynamic
behavior as possible. It is not always possible to escape dynamic behavior and
there are ways to handle object properties with dynamic behavior (see section
9.4 Dynamic Object Behavior). But whenever possible, dynamic object properties
should be avoided.
Choose object property combinations with properties that are not likely to be
affected by cosmetic changes (e.g., object positioning, color, etc.), since cosmetic
changes are likely to occur frequently.
Communicate with development about object naming conventions.
Understanding how developers handle object properties (e.g., how they name
the objects, how often particular object properties are altered, etc.) will often
provide insight into how automators should handle object properties. In addition,
in an effort to make the application more automatable, developers may alter the
way they name objects, how they update object properties, or even how often
they update object properties.
An example of how to choose object properties for identifying objects within an
application may be discussed using the two objects from Figure 8-2. If these two objects
exist on the same screen, the following may hold true:

Table 9-1: Good vs. Bad Object Properties Combinations


Poor Property Combinations

Class Property alone – Using the Class Property alone is not sufficient for uniquely
identifying a button because there are two objects that have a Class property of
Button.

Shape Property alone – Although the Shape property is sufficient for distinguishing
between the two buttons, it is likely that there may eventually be other objects on the
screen that have the same Shape property as one of the existing buttons.

Class and Shape Property – Collectively using the Class and Shape properties
increases the chances of uniquely identifying an object over just using one of the
properties alone, but it is still likely that more than one button object will exist on the
screen with the same shape.

Collection alone – This is more reliable than the others for uniquely identifying a
button object, but if a new button is added to the screen in a position that alters the
button collection index values, this property may return a reference to the wrong
button. For applications that have little chance of changing, however, this may be
sufficient.

Better Property Combinations

Class and Text Property – Whether or not this is a good combination of properties
largely depends on the development standards used by the application developers.
More than likely, developers do not create two buttons on the screen that have the
same text. If for some reason they do (such as having the same button at the top and
bottom of the screen for easy access), the Collection property may be added to
distinguish between the two buttons.

This property data may be used in an automated script in a statement that resembles
the following:
GetElement(“Class:=Button”, “Text:=O_Button”).Click


This statement gets an element on the screen that has a Class property equal to
Button, and a Text property equal to O_Button, then clicks the element.
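As a rough, tool-neutral sketch (the GetElement statement above is pseudocode; the helper function and data below are hypothetical), property-based lookup can be modeled in Python like this:

```python
# Hypothetical sketch; real automation tools expose their own lookup APIs.
screen_objects = [
    {"Class": "Button", "Text": "R_Button", "Shape": "Rectangle"},
    {"Class": "Button", "Text": "O_Button", "Shape": "Oval"},
]

def get_element(objects, **properties):
    """Return the single object whose properties all match; raise if the
    property combination is ambiguous or matches nothing."""
    matches = [o for o in objects
               if all(o.get(k) == v for k, v in properties.items())]
    if len(matches) != 1:
        raise LookupError(f"{len(matches)} objects match {properties}")
    return matches[0]

# Class alone is ambiguous (two buttons); Class + Text is unique.
button = get_element(screen_objects, Class="Button", Text="O_Button")
```

Note that a lookup by Class alone would raise an error here, which mirrors the guidance to choose combinations that uniquely distinguish an object.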

9.2 Object Maps


Short Answer
One of the simplest ways to boost the maintainability and robustness of an automated
test suite is through the introduction of Object Maps. Object Maps reduce the amount of
information that is being maintained in an automated test by storing object properties in
an external file and referencing objects according to variable names (also known as
logical names) that are then associated with the properties.

An Object Map (also known as a GUI map) is a file that maintains a logical
representation and physical description of each application object that is referenced in
an automated script. Object maps allow for the separation of application object property
data (known as the physical description) from the automated script via a parameter
(known as the logical name). This approach is similar to the data-driven technique (see
Section 4.2.3.1) in that some implementations of the object map occur in a table format.
Object maps are unique, however, in that they deal with object properties that are used
by the script for identifying objects as opposed to data that is entered by the script into
an object.
Most commercial automated test tools have an object map feature to sustain and
configure object property information in a maintainable way. This object map may
maintain and display information in a plain text format as shown in Figure 9-1:


RectangleButton              <- Logical Name
{                            <- Physical Description
Class: Button
Text: “R_Button”
Shape: “Rectangle”
}
OvalButton
{
Class: Button
Text: “O_Button”
Shape: “Oval”
}

Figure 9-1: Object Map Example

Figure 9-1 illustrates how the objects from Figure 8-2 may be displayed in an object
map. The advantage of an object map is that all of the object properties are stored in
one location. Thus, when an object is referenced in a script, instead of identifying the
physical description as in the following statement:

GetElement(“Class:=Button”, “Text:=O_Button”).Click

the logical description will be referenced as in the following statement:

GetElement(“OvalButton”).Click

This separation helps make the framework more robust because it is likely that this
object may be referenced in several locations within a single or several different
automated test scripts. If the physical description is included in every automated test
statement that references the same object and one of the object properties in the
application changes, every one of the statements would need to be altered to keep the
script from failing to recognize the object. By using the logical name, an object property


change would only prompt a single change to the object map. Once the object map is
updated, every statement that references the changed object will be able to successfully
reference the newly altered object.
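A minimal sketch of this logical-name indirection, assuming a simple dictionary-based map rather than any particular tool's object map format:

```python
# Hypothetical object map: logical names mapped to physical descriptions.
OBJECT_MAP = {
    "RectangleButton": {"Class": "Button", "Text": "R_Button", "Shape": "Rectangle"},
    "OvalButton": {"Class": "Button", "Text": "O_Button", "Shape": "Oval"},
}

def physical_description(logical_name):
    """Resolve a logical name to the property set used to find the object."""
    return OBJECT_MAP[logical_name]

# A property change in the AUT now requires one edit to OBJECT_MAP,
# not an edit to every script statement that references the object.
props = physical_description("OvalButton")
```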
Object Map Maintenance
Object maps add significant power to the test automation effort but only if the object
maps are properly maintained. Many of the tools that provide an object map feature also
provide the Record & Playback feature, and this normally adds objects to the object
map automatically during recording sessions. While this may be useful, it also increases
the risk of bloating the object map with duplicate objects and garbage information that
cancels out many of the advantages offered by the object map. Thus, if not properly
maintained, the object map may make the automation effort even more cumbersome
and costly. To preserve the benefits of the object map, its maintenance should be
included as a regular part of automation framework maintenance. Maintenance may
include the following:
Configure the object map object identification feature to use the minimum
number of properties for uniquely identifying objects in the application, and select
properties that are less likely to change regularly. This will be most useful during
Record & Playback sessions or during the use of other features that automatically
“learn” object properties into the object map.
During Record & Playback sessions, ensure duplicate and/or other unnecessary
objects are not being added to the object repository.
When a script fails, work to assess whether the failure is due to the ability to
successfully identify an object and whether the issue can be fixed by adjusting
the properties in the object map.
When the application removes objects, remove the corresponding objects from
the object map to reduce clutter.

9.3 Object Models


Short Answer
Many software applications are composed of several interrelated objects, and an object
model is an abstract representation of the hierarchical group of related objects that
define an application and work together to complete a set of application functions.

An object model is an abstract representation of the hierarchical group of related objects
that define an application and work together to complete a set of application functions.
The advantages offered by object models include:
Increased application understanding
Increased scripting power

9.3.1 Increased Application Understanding


Object models may provide information to automators about an object's hierarchical
position, properties, collections and methods. This information may be used for learning
how to properly refer to and perform actions on application elements from within
automated scripts.
Let us look at a typical example. Figure 9-2 illustrates the object model for Internet
applications known as the Document Object Model (DOM). The items designated with
an asterisk (*) are collections while the other items are single objects. Let us start with
the key relationships between the Window, Document and Element objects. The
Window object is at the top of the hierarchy, and is represented by the browser window.
The Document object represents the web page that is actually loaded inside of the
browser when navigating to an internet URL. Inside of the Document object may be
several Elements, such as buttons, text boxes, dropdown lists, etc.


[Figure omitted: a hierarchical tree of DOM objects, with collections marked by an asterisk (*) and single objects unmarked.]

Figure 9-2: Document Object Model [22]

The pictorial view of the object model reveals that the Window object is at the top of the
hierarchy, which makes the browser Window the parent of the HTML Document. The
HTML Document is, in turn, the parent of the HTML Elements. Understanding this
hierarchy makes it possible to properly refer to and perform actions on an object in the
application. The hierarchy of a statement that performs an action (such as a method
that performs click) on an element may resemble the following:

Window.Document.Element.Method

Not shown in the illustration is the other information that comes along with the DOM.
The DOM also provides textual information about all of the properties, methods, and
collections that may be used on the Window, Document, and Element objects. This
information is useful for performing automated test actions or verification on various
objects in the application, because automating actions against an object often requires
setting the object's property values or executing the object's methods. Therefore, the

[22] Access eLearning. Document Object Model. Available at
http://www.accesselearning.net/mod10/10_07.php

more you know about these properties and methods, the more effective you can be at
automating.
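As an illustration only (not any specific tool's or browser's API), the parent–child hierarchy can be modeled with nested Python objects so that a script statement mirrors the Window.Document.Element.Method chain:

```python
class Element:
    """An application element, such as a button, exposing a method."""
    def __init__(self, name):
        self.name = name
        self.clicked = False

    def click(self):
        self.clicked = True

class Document:
    """The web page loaded in the browser; parent of the elements."""
    def __init__(self, elements):
        # Elements keyed by name, mirroring the DOM's element collections.
        self.elements = {e.name: e for e in elements}

class Window:
    """The browser window; top of the hierarchy and parent of the document."""
    def __init__(self, document):
        self.document = document

# Window -> Document -> Element -> Method, as in the DOM hierarchy.
window = Window(Document([Element("SubmitButton")]))
window.document.elements["SubmitButton"].click()
```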

9.3.2 Increased Scripting Power


It is worth noting that commercial tools tend to build a custom object model that alters
the way the AUT is accessed via the tool's scripting language. This custom object
model may exist at a different level of abstraction than the application's object model, but
it provides a mechanism for all objects built on a similar technology to be handled by the
automation tool in a uniform fashion. So instead of accessing the application objects
based on the hierarchy and properties provided in the application object model, the
information is accessed based on the hierarchy and properties provided in the
automated test tool's object model. For example, the hierarchy of a DOM statement
may be:
Window.Document.Element.Method

But an automated test tool may instead represent the same action with the following
hierarchy:

Window.Element.Method

In some instances, however, the tool still provides the ability to script the application
based on the original application object model. This can be useful, given that the
automated test tool cannot always sufficiently automate some application objects.
Having the option of creating scripts based on the application's object model provides a
way of handling these objects. In addition, understanding object models can add
scripting power beyond basic AUT scripting statements, allowing the automation of
tedious process tasks including data extraction and manipulation, file manipulation,
report generation, and the like. Powerful scripts may also be written to perform
multiple activities with only a few statements.


9.4 Dynamic Object Behavior


Short Answer
One of the biggest challenges faced by automated test engineers is that of dynamic
object behavior. People process information about the application based on visual
inspection while automated tests use object properties. People can, therefore, easily
make adjustments when there is a slight change from a visual standpoint, and can
ignore many property changes. The computer, however, cannot adjust as easily. The
necessary adjustments have to be anticipated up front and programmed.

Many AUTs have objects that are either created or modified at run-time. Therefore, the
properties of those objects are very rarely static.

Dynamic Links

Figure 9-3: Dynamic Object Illustration

For example, Figure 9-3 reveals a Welcome screen that may be displayed upon logging
into an application. The dynamic links are generated based on the profile associated
with the username of the person who logs into the application. The links Profile, Home,
and Messages are customized to reflect that user, in this case, John.
Assuming that these objects were static, the object map may log these items thus:


ProfileLink
{
Class: Link
innerText: “John Profile”
}
HomeLink
{
Class: Link
innerText: “John Home”
}
MessageLink
{
Class: Link
innerText: “John Messages”
}

Figure 9-4: Dynamic Object Map

The innerText property represents the text of the link shown on the screen and is the
primary property used for identifying the links. The values associated with the
innerText property in the object map for each of these links passes testing when the
logged-in user is “John”. If “Sue” logs in, however, the automated tests will fail to
appropriately recognize the link objects, because the links in the AUT will now read
“Sue Profile,” “Sue Home,” and “Sue Messages” while the object map still looks for
“John Profile,” “John Home,” and “John Messages.”
As previously noted, object properties are essential for automated tests to effectively
communicate with the AUT, so the ability to manage and test dynamic objects is also
essential. Two ways in which dynamic objects may be handled are:
Dynamically construct property values
Use regular expressions


9.4.1 Dynamically Construct Property Values


Dynamically constructing property values means exactly what it implies: creating, at
run-time, the property values used for identifying objects in the AUT. Using
the example illustrated by Figure 9-4, this may be accomplished by concatenating a
static value with a dynamic value to create a value for the innerText property.
The “John” portion of the innerText property of each of the links is dynamic and
based on the user that is logged in. Therefore, the following variable can be declared
prior to the automated steps that log into the application:

Username = “John”

The rest of the innerText value is static for each of the links. Therefore, the
innerText property value may be constructed to handle the dynamic nature of these
links in the following manner:

profileInnerText = Username & “ Profile”

homeInnerText = Username & “ Home”

messageInnerText = Username & “ Messages”

These statements allow the object property values used by the automated tests to be as
dynamic as the actual property values in the AUT. Whether these property value
variables are applied to the object map or to the automated test statements will depend
on the automated test tool being used.
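In Python-style form (the variable names follow the example above; in a real script, the username would be captured from the test data or the AUT before the login steps run), the construction looks like this:

```python
# Illustrative sketch; Username would normally be captured at run-time,
# not hard-coded, so the same script works for any logged-in user.
Username = "John"

# Static suffixes concatenated with the dynamic username:
profileInnerText = Username + " Profile"
homeInnerText = Username + " Home"
messageInnerText = Username + " Messages"
```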

9.4.2 Regular Expressions


A regular expression is a character expression that is used to match or represent a set
of strings, without having to explicitly list all of the items that may exist in the set. For
example, the following character set:

{cat, that, hat, rat, sat}

could be represented by the following regular expression:

.*at

Actual regular expression syntax varies depending on the tool and language used for
test automation. Most commonly, the period (.) is used as a wildcard
to represent any single character, while the asterisk (*) represents zero or more of
the character(s) it follows. Given this, the property value variables in the previous
section may be handled by regular expressions in the following manner:

profileInnerText = “.*Profile”

homeInnerText = “.*Home”

messageInnerText = “.*Messages”

Essentially, the regular expressions ignore the username in the links, and only look for
links that contain the terms “Profile”, “Home”, and “Messages”, respectively. Since the
Welcome screen contains only one link each with these words, the regular
expression statements should work fine.
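This behavior can be checked with Python's re module (assuming the automation tool's wildcard semantics match standard regular expressions):

```python
import re

# ".*" swallows the dynamic username, so only the trailing keyword matters.
profile_pattern = re.compile(r".*Profile")

matches_john = bool(profile_pattern.fullmatch("John Profile"))
matches_sue = bool(profile_pattern.fullmatch("Sue Profile"))
```

Because the Welcome screen contains only one link with each keyword, an ambiguous match is not a concern in this example.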
As discussed earlier, the syntax for regular expressions largely depends on the
automation tool and language being used for test automation but there are some basics
that are fairly common to most tools and languages, including syntax for:
Alternation – deals with choosing between alternatives, similar to the way an Or
statement does in logic. The common syntax is to separate the alternatives with
a vertical bar (|). For example, pin|pen is syntax used to match either “pin” or
“pen”.
Grouping – addresses the assemblage of characters together to define the scope
and precedence of other regular expression operators. Grouping is commonly
accomplished using parentheses ( ( ) ). For example, p(i|e)n is syntax used to
match either “pin” or “pen”.
Single Character Match – allows nearly any single character to be represented
using the period (.). For example, p.n is syntax used to match “pin”, “pen”, “pan”,
etc.


Character Sets – accepts any character within a defined collection of characters


- Character list – Explicitly lists each character in a group to be considered for a
match. For example, p[ie]n is the syntax used to match either “pin” or “pen”.
- Range – Identifies a range of characters in a group to be considered for a
match. For example, p[a-z]n is used to match “pan”, “pbn”, “pcn”, etc.
Quantification – (also known as Repetition) allows the regular expression to
expand to match strings of varying lengths. A quantifier (also known as a
repetition operator) after a character or group of characters determines how
many times the preceding character is allowed to occur. There are several
quantifiers including:
- Question mark (?) – Indicates that there may be zero or one of the preceding
character or group. For example, bee?t is syntax used to match “bet” or “beet”.
- Asterisk (*) – Indicates that there is zero or more of the preceding character or
group. For example, 19* is syntax used to match 1, 19, 199, 1999, etc.
- Plus (+) – Indicates that there is one or more of the preceding character or
group. For example, 19+ is syntax used to match 19, 199, 1999, etc. Notice
that it is similar to the asterisk, except that it does not allow for zero of the
preceding character.
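The syntax elements above can be exercised with Python's re module; the patterns below assume standard regular-expression semantics, which most tools follow closely:

```python
import re

# Alternation: "pin" or "pen"
assert re.fullmatch(r"pin|pen", "pen")
# Grouping scopes the alternation: p(i|e)n
assert re.fullmatch(r"p(i|e)n", "pin")
# Single-character wildcard: p.n matches "pan" as well
assert re.fullmatch(r"p.n", "pan")
# Character list and character range
assert re.fullmatch(r"p[ie]n", "pen")
assert re.fullmatch(r"p[a-z]n", "pbn")
# Quantifiers: ? (zero or one), * (zero or more), + (one or more)
assert re.fullmatch(r"bee?t", "bet") and re.fullmatch(r"bee?t", "beet")
assert re.fullmatch(r"19*", "1") and re.fullmatch(r"19*", "199")
assert re.fullmatch(r"19+", "19") and not re.fullmatch(r"19+", "1")
```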

9.5 Resource References for Skill Category 9


Object Models
Access eLearning. Document Object Model. Available at
http://www.accesselearning.net/mod10/10_07.php
Dynamic Object
AST Magazine. Object in the Corner Pocket. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedS
oftwareTestingMagazine_March2010.pdf
Regular Expressions
Regular-Expression.info. Available at http://www.regular-
expressions.info/reference.html
Wikipedia. Regular expression. Available at
http://en.wikipedia.org/wiki/Regular_expression


Skill Category 10: Debugging Techniques

Primary roles involved: Lead Automation Architect, Cross Coverage Coordinator,
Automation Engineer
Primary skills needed: Problem anticipation, problem solving, synchronization, an
understanding of different types of errors

Regardless of how well automated tests are designed and created, problems will occur.
Sometimes the problems are related to the scripts, and sometimes they are related to
the application. Regardless, the root cause is not always simple to find. The inability to
effectively debug scripts can severely delay schedules, and can even bring automation
to a screeching halt.

10.1 Types of Errors


Short Answer
The first step in script debugging is understanding the types of errors that may be
encountered. The following generic types are the most commonly encountered
upon script execution:
• Syntax Errors
• Run-time Errors
• Logic Errors
• Application Errors

The most common types of errors, such as those discussed here, account for many of
the bugs or anomalies that appear during automated test execution.


Figure 10-1: Script Error Types

Syntax errors typically occur when an automated test script statement is written
incorrectly. Usually, some element is missing, out of place, or deviating from the
grammar rules that define the language being used. When compiled languages are
used, syntax errors will prevent the successful compilation of the automated script.
When scripting languages are used, syntax errors will prevent the script from executing
any of the lines in the script and will typically result in some language- or tool-specific
syntax error being displayed.
Run-time errors are the result of some improper action taken by the script as opposed
to some improper grammar within the script. These errors will halt the script during
execution at the point at which the error occurs. For example, an automated test may
have the following equation:

result = integerVar1 / integerVar2

This equation takes the quotient of the variables integerVar1 and integerVar2
and assigns the value to the result variable. The integerVar1 and
integerVar2 variables are values captured from the application during run-time.
There is absolutely nothing syntactically wrong with this equation, so it will not produce
any syntax errors. During run-time, however, problems could occur
under the wrong set of circumstances. One such circumstance is when
integerVar2 ends up being the number 0 at some point in time. Division by 0 is
an improper operation that will cause an error during run-time. Run-time errors are often
the result of objects not being found, data not being found, or improper operations.
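A minimal Python sketch of this failure mode (the function and values are illustrative, not from any particular tool):

```python
# Syntactically valid, but fails at run-time when the captured value is 0.
def divide_captured_values(integer_var1, integer_var2):
    return integer_var1 / integer_var2

try:
    result = divide_captured_values(10, 0)   # values captured from the AUT
    error_seen = False
except ZeroDivisionError:
    error_seen = True   # the run-time error surfaces only with this data
```

The same statement runs cleanly for most inputs, which is exactly why run-time errors often escape notice until the wrong data appears.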

Logic errors are the result of automator error. Typically they are not related to any
syntax or run-time problems (although logic errors may contaminate other data used by
the script and result in a run-time error elsewhere in the script). Instead, they simply
result in undesirable and often incorrect results. An example of a logic error may be
seen by examining the '=' operator along with the '==' operator. In many automation
languages, the '=' operator is meant to assign a value to a variable, as in the following
statement:

result = 5

In this statement, the number 5 is stored in the result variable.


Conversely, the '==' operator is meant to assess the value of a variable, as in the
following set of statements:

If result == 5 then
<PassScript>
End If

In the above statement, the result variable is evaluated to see if the number 5 is
stored in it. If the condition is true, then the script passes. Otherwise the script will fail. A
logic error would occur if the two operators were mixed up to produce the following
statement:


If result = 5 then
<PassScript>
End If

In many languages, this is deemed a perfectly valid statement that results in 5 always
being stored in the result variable. Therefore, even if some number other than 5
existed in the variable prior to the If statement, the script would still pass because the
number 5 gets stored in the variable within the If statement. So instead of getting an
appropriate script failure, the script improperly passes (known as a false positive).
These types of logic errors can be particularly dangerous due to the fact that they have
the appearance of everything being fine, when in actuality they are not. Excess logic
errors are the enemy of automation reliability.
Application errors are a result of the application functioning contrary to what is
expected. These errors are the reason that automated testing is being performed to
begin with. In some situations, application errors may result in run-time errors but in
other situations, there will be no indication of an application error unless a verification
point is specifically used to detect the error.

10.2 Debugging Techniques


Short Answer
The challenging part of debugging is finding the true source of an error. This
involves first ensuring the error is reproducible, and then beginning the process of
locating its main source. Very often what seems to be the source of the error is
actually just a symptom of the true error, so several techniques may be
employed to find the source. At that point, a determination may be made as to
whether the error is due to an application failure or a script issue. Then solutions
for fixing the error may be addressed.

Effective debugging typically mirrors the following process:


1. Identifying error existence
2. Reproducing the error
3. Localizing the error
4. Fixing the error


10.2.1 Identifying error existence


This section is purposely called “Identifying error existence” and not “Identifying the
error” for a reason: very often, you don't know an error exists until
you deliberately seek one out, and even when you find what you think is the error,
you've often only found a symptom of the true error.
Automated tests are a software product, just like the AUT that is tested by the automated
tests. It is therefore important that some sort of verification activities be put in place to
test the test scripts themselves. These verification activities should in no way be of the
same magnitude as the tests performed on the AUT, but they should involve some basic
activities, such as the following:
Positive Verification Point tests
Negative Verification Point tests
Batch run tests
Each automated test should be executed individually in an effort to ensure it runs
properly and to perform Positive Verification Point tests and Negative Verification Point
tests. Positive Verification Point tests ensure that all Verification Points in a test script
pass when they are supposed to pass. So if a Verification Point is checking to ensure a
screen appears, then upon executing the script, the Verification Point should pass when
the screen appears as expected. Negative Verification Point tests would do the inverse.
Upon executing the script, it is confirmed that the Verification Point fails when the
screen doesn‘t appear. Performing Negative Verification Point tests normally involves
manufacturing an application failure, because automated test development is normally
conducted in a stable environment. There are several ways to manufacture an error, and
one of the best is through the use of breakpoints.
A breakpoint is an indicator placed in an automated test that triggers a temporary halt in
the script, allowing for the state of the automated test to be examined at a specific point
in time. Execution may then be restarted at the point the script was halted. Many
automated test tools offer breakpoints, but if not, one can be simulated by using the
tool's equivalent of a pause statement or message box that temporarily halts execution.


1 Input <Address> table data into the Address textbox
2 Input <City> table data into the City textbox
3 Input <State> table data into the State list box
4 Input <FirstName> table data into the First Name application textbox
5 Input <LastName> table data into the Last Name application textbox
6 Click Next button
7• Verify Confirmation Screen Appears

Figure 10-2: Debugging With Breakpoints Example

Figure 10-2 represents an automated test with a breakpoint at step number 7. This
breakpoint allows the simulation of an error because after entering the appropriate data
into the application and clicking the Next button, the script stops, and step number 7 is
not executed; so even though the Confirmation Screen appears in the application, the
screen verification is not performed. While the script is waiting, the automator can
simulate an error condition by manually causing another screen to display in the
application, or by closing the application altogether. Once the error condition has been
simulated, the next step is to restart the script at the point it was stopped, and ensure
the Verification Point fails.
After each automated script has been executed individually, performing the positive and
negative tests, the scripts should then be executed in batch mode as they are likely to
be executed for testing of an application build. Running the scripts in batch often
uncovers issues that aren‘t seen when running the tests individually. This is due to the
fact that one test may modify the state of the AUT or test environment in a way that
negatively impacts the tests that follow.

10.2.2 Reproducing the error


Upon identifying the existence of an error, the next step is to ensure it is reproducible.
Errors can be challenging: they may occur only once, or only
sporadically, but until an error can be consistently reproduced, it is difficult to ensure
that it has truly been resolved. Time should be spent modifying data, data sequences,
and application states in an effort to consistently reproduce the issue. If the error still
only occurs sporadically, make an effort to understand why. For example, you may want
to discern whether one set of conditions causes the error to occur more often than
others.


10.2.3 Localizing the error


The bulk of the debugging effort probably involves what is known as localizing the error.
Error localization is the process of narrowing down the script to the small area that is
likely to contain the true error. The challenge in doing this is in the very fact that the
point at which the error is perceived as occurring is not always the point at which the
true error exists. To further explain this concept, let's discuss a hypothetical situation
using the information in Figure 10-2. The hypothetical situation surrounding the
information in this figure is as follows:
All the data entered into the application in steps 1 through 5 is mandatory data.
This means that the data has to be entered properly in order for the Confirmation
screen to be displayed.
The combination of the First Name and Last Name must match an entry currently
in the application database.
The First Name field in the application, unbeknownst to the test team, drops
characters beyond the first 10 characters upon clicking the Next button.

Figure 10-3: Application Error Scenario

Given the scenario just identified, names like ‘Christopher Davis’ would cause a
problem. The problem is a direct result of the fact that ‘Christopher’ is 11 characters,
and upon executing the script, the application would drop the last character of the first
name. When the application attempts to match the name ‘Christophe Davis’ with a


name in the database, it will fail, preventing the Confirmation screen from appearing. If
the application is dropping characters, yet still has data in the database longer than the
10 characters, it may not be clear, initially, why the application fails to show the
Confirmation page. Then, if the test script tests varying data values, the error may not
be consistently reproducible, because some of the names may be less than 11
characters, resulting in the display of the Confirmation screen. And even when using a
name longer than 10 characters, the issue is not apparent, because the characters are
not dropped in the front end. If the information is reported to the application developers,
they may attempt to reproduce the error in their environment, but the names they
attempt to use may be 10 characters or fewer, which seems to be a perfectly valid test,
because nobody knows that the application is dropping characters. When developers
are unable to reproduce the error, a question arises about whether it is somehow due to
the application or due to the automated test scripts. Pinpointing the cause of the error is
additionally challenging because the error is observed at step 7 due to an error at step
6, based on data entered at step 4.
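The truncation scenario above can be simulated in a few lines. This is a hypothetical sketch: the 10-character limit and the name/database matching come from the scenario, while the function and variable names are illustrative.

```python
# Hypothetical simulation of the scenario: the AUT silently keeps
# only the first 10 characters of First Name before matching the
# name against the database.
database = {("Christopher", "Davis"), ("John", "Smith")}

def click_next(first_name, last_name):
    truncated = first_name[:10]          # the hidden defect
    # True means the Confirmation screen would be displayed
    return (truncated, last_name) in database

short_name_ok = click_next("John", "Smith")           # 4 characters: matches
long_name_fails = click_next("Christopher", "Davis")  # 11 characters: fails
```

Because only names longer than 10 characters trigger the failure, a script that cycles through varied data reproduces the error inconsistently, exactly as described above.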
To truly understand the cause or at least the source of an error, the automator must be
skilled in localizing the error. Techniques for error localization include:
Backtracking
Error Simplification
Binary Search
Hypothesis

Backtracking involves starting from the point at which an error is observed and moving
backwards through the code until the cause of the error becomes more apparent.
Error Simplification extracts portions of complex statements or modular components in
an effort to simplify the statement and pinpoint where the error may be originating.

1 name = nameFunction()
2 address = addressFunction()
3 phone = phoneFunction()
4 inputData(name, address, phone)

Figure 10-4: Error Simplification

In the event that an error occurs in step 4 of Figure 10-4, Error Simplification may be a
good approach. In order to simplify the statement, the functions that are used to get
name, address and phone information may be replaced one-by-one with hard-coded
values. This will help to exclude the functions called in steps 1 through 3 from being
possible sources of an error.
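The replacement process can be sketched as follows. All helper names echo Figure 10-4 but are hypothetical, as is the planted defect (a trailing space) used to show the failure disappearing once the faulty helper is replaced.

```python
# Error Simplification sketch: name_function has a hidden bug (a
# trailing space) that makes the combined input_data call fail.
def name_function():
    return "John Smith "        # hidden defect: trailing space

def address_function():
    return "123 Main St"

def phone_function():
    return "555-0100"

def input_data(name, address, phone):
    if name != name.strip():    # input_data rejects padded names
        raise ValueError("bad name: %r" % name)
    return "Pass"

# The full statement fails, but the source of the error is unclear:
try:
    input_data(name_function(), address_function(), phone_function())
    full_result = "Pass"
except ValueError:
    full_result = "Fail"

# Replace the helpers one-by-one with hard-coded values; the failure
# disappears when name_function is replaced, pinpointing it:
simplified_result = input_data("John Smith", address_function(), phone_function())
```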
Binary Search is an approach that begins with a large body of code, and divides it in
half by placing some sort of check (assertion) halfway through the code. If an error
doesn‘t occur at this assertion, then this is an indicator that the error occurs in the
second half of the code. If the assertion does produce an error, then the problem is
most likely in the first half of the code. This process may then be repeated on the
smaller block of code multiple times until the location of the error is identified.
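The bisection step can be sketched like this; the step functions and the mid-run assertion are hypothetical stand-ins for real script steps and checkpoints.

```python
# Binary Search debugging sketch: run the first half of the steps,
# apply a check (assertion) at the halfway point, and use its result
# to decide which half contains the error.
def locate_half(steps, check):
    state = {}
    mid = len(steps) // 2
    for step in steps[:mid]:
        step(state)
    if check(state):            # assertion passes: error comes later
        return "second half"
    return "first half"        # assertion fails: error came earlier

def step1(s): s["logged_in"] = True
def step2(s): s["order"] = "A100"
def step3(s): s["order"] = None    # the hidden error
def step4(s): s["done"] = True

suspect = locate_half([step1, step2, step3, step4],
                      check=lambda s: s.get("order") == "A100")
```

Repeating the process on the suspect half narrows the location further, as the text describes.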
Hypothesis is the approach where a theory is formed about an error‘s source and/or
cause, and then effort is put in place to prove or disprove that theory. The hypothesis
method is useful in test automation, because many of the error types and causes
experienced in test automation are common across the discipline. Typical automation
error types are explained in Section 10.1 while typical causes of automation errors are
explained below.
Typical Causes of Automation Errors
When an error occurs in an automated test, there are some likely suspects that an
automator may want to investigate. Aside from the obvious cause of an AUT defect,
common script related causes may include:
Synchronization
Data
Poor initial conditions and cleanup (see Section 6.2.4.1 for more information)
Object property changes (refer to Section 9.1 for more information)
Synchronization
Many automated tests fail not because anything is wrong with the AUT or the
automated test scripts, but simply because of timing issues between the test and
the AUT. For example, a new hypothetical situation can be created using Figure 10-2
once again that shines a light on a potential synchronization failure. Steps 1 through 6
all operate on a single screen while step 7 assumes a new screen – Confirmation
screen – has loaded. It may take several seconds for the AUT to load the Confirmation
screen, and the script may not necessarily wait for the screen to appear before it
executes the step that looks for it. If this occurs, the script will fail at step 7.
Synchronization points are used in automated tests to allow the test to wait for a

specified amount of time for the AUT to achieve a specified state before moving to the
next test step.

1 Input <Address> table data into the Address textbox
2 Input <City> table data into the City textbox
3 Input <State> table data into the State list box
4 Input <FirstName> table data into First Name application textbox
5 Input <LastName> table data into the Last Name application textbox
6 Click Next button
7 Wait(10)
8 Verify Confirmation Screen Appears

Figure 10-5: Wait Statement

Synchronization points are different from 'wait' or 'pause' statements, as illustrated in
Figure 10-5 and Figure 10-6. These two figures are the same as Figure 10-2 except
they have Wait and Synchronization statements, respectively. The Wait statement
waits an absolute 10 seconds before moving to step 8, so even if the Confirmation
screen is loaded in the AUT within 4 seconds, the script will still wait the remaining 6
seconds before proceeding. The synchronization point in Figure 10-6 is much more
efficient. Instead of waiting an absolute 10 seconds, it waits up-to 10 seconds before
proceeding. This means that if the screen is loaded within 4 seconds, the script will
move on to step 8 without waiting the remaining 6 seconds.
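In a scripting language, the difference amounts to a fixed sleep versus a polling loop with a timeout. A minimal sketch (the sync_window name mirrors Figure 10-6's Sync_Window; the polling interval is an assumption):

```python
import time

def sync_window(window_exists, timeout=10.0, poll=0.1):
    """Wait up to `timeout` seconds for the window to appear,
    returning as soon as it does -- unlike Wait(10), which always
    burns the full 10 seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if window_exists():
            return True
        time.sleep(poll)
    return window_exists()

# Simulated AUT: the Confirmation screen "loads" after ~0.3 seconds.
loads_at = time.monotonic() + 0.3
start = time.monotonic()
appeared = sync_window(lambda: time.monotonic() >= loads_at)
elapsed = time.monotonic() - start   # well under the 10-second cap
```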

1 Input <Address> table data into the Address textbox
2 Input <City> table data into the City textbox
3 Input <State> table data into the State list box
4 Input <FirstName> table data into First Name application textbox
5 Input <LastName> table data into the Last Name application textbox
6 Click Next button
7 Sync_Window("Confirmation", 10)
8 Verify Confirmation Screen Appears
Figure 10-6: Synchronization Statement

Data
Data errors occur when the test data environment is not as expected, resulting in
changes to expected results. For example, if a refresh of data from production occurs,
the state of data elements may have changed, i.e. an insurance policy may have been
canceled or an employee may have been terminated. This may appear to be an
application error when it is not.

10.2.4 Fixing the error


Once the error has been localized, it is ready to be fixed. Fixing errors, in some cases,
is very simple. In other cases fixing errors may be a little more involved, particularly in
more structured frameworks. It is important to be sure that a fix in one place doesn't
negatively impact the execution of other scripts. In addition, fixing script errors may
introduce new errors, so it's important to guard against doing so.

10.3 Resource References for Skill Category 10


Types of Errors
IBM. Programming Contest Central Debugging Tutorial. Available at https://www-
927.ibm.com/ibm/cas/hspc/Resources/Tutorials/debug_1.shtml
MSDN. Run-Time Errors. Available at http://msdn.microsoft.com/en-
us/library/shz02d41%28v=vs.80%29.aspx
Debugging
Cornell Lecture Notes. Debugging techniques. Available at
http://www.cs.cornell.edu/courses/cs312/2006fa/lectures/lec26.html

Skill Category 11: Error Handling

Primary role involved: Lead Automation Architect, Cross Coverage Coordinator,
Automation Engineer
Primary skills needed: Error recognition, recovery and logging

Error handling is the automated process of handling the occurrence of some condition
that causes the actual behavior of the application to deviate from the behavior that is
expected by the automated script. These unexpected occurrences may be errors, or
simply some functionality or event that the script was not initially designed to handle.
Whatever the cause, unexpected behavior can wreak havoc on automated test runs, by
distorting data, disrupting batch executions, killing test reliability and rendering the
automated test run useless. Test implementation is largely a process of performing
actions, monitoring the system response, and moving forward accordingly. Error
handling provides automated tests with the ability to move forward based on real time
test responses. The effective handling of errors ensures errors are properly logged, and
that the remainder of the automated test run is salvaged if possible, completed in a
timely manner, and kept intact. An automated error handling process typically
involves the following activities:
1. Recovery Trigger
2. Error Recovery
3. Error Logging
4. Script Recovery
A Trigger is the error or event that causes the error handling routine to begin working its
magic. Triggers must be explicitly defined within the automated test framework in order
for the error handling routine to "know" when to begin handling unexpected behavior,
and there must be a way to "capture" the error trigger.
Error Recovery is the process of efficiently handling the error in the most appropriate
manner. For example, if an error negatively affects the application data, the error
recovery might clean up the application data.
Error Logging logs data in a log file, which aids in debugging and test analysis. The
logging data typically addresses the type of error and the recovery activity that resulted.
The error handling component may call a separate Reporting component in the

automated test framework (See Skill Category 5: Automated Test Framework Design for
more information on framework components) for logging errors (See Skill Category 12:
Automated Test Reporting & Analysis for more information on automation logs).
Script Recovery is the process of moving the automated test run forward following the
implementation of the Error Recovery steps.
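Chained together, the four activities look roughly like this. This is a hedged sketch: the trigger names and recovery actions are illustrative, not prescribed by the TABOK.

```python
error_log = []

def handle(trigger):
    # 1. Recovery Trigger: the captured error or event
    # 2. Error Recovery: resolve the error within the AUT
    recovery = "dismiss dialog" if trigger == "popup" else "restart AUT"
    # 3. Error Logging: record the error type and the recovery taken
    error_log.append({"trigger": trigger, "recovery": recovery})
    # 4. Script Recovery: decide how the run moves forward
    return "abort test" if trigger == "crash" else "continue run"

next_action = handle("popup")
```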

11.1 Error Handling Implementations


Short Answer
Error handling may be implemented in a variety of ways. These include adding
error handling statements within a test script, or adding statements at the framework
level that "oversee" many test scripts.

Error handling is implemented in a variety of ways within automated test scripts, most of
which can be summarized in the following three categories:
Step Implementation
Component Implementation
Run Implementation
Each successive category of error handling provides an increased level of
robustness.

11.1.1 Step Implementation


This is the type of error handling routine that is concentrated on specific automated test
steps, and requires that the location of a potential error be known. This type of error
handling is typically preceded by verification points or other volatile steps that could significantly
impact the AUT or the remainder of the test run. For example, if an automated test
performs actions to log into an application with a specific username/password
combination, there is a reasonable chance that the username/password combination
may potentially fail authentication due to changes in backend data. The failure to
successfully login may result in a popup error message, such as the one illustrated in
Figure 11-1. This popup error message may suspend all application activity until the OK
button is clicked.

Figure 11-1: Popup Error Message

Not only will this prevent the current test from successfully logging into the application
but it will also prevent subsequent tests from closing the application, and logging into
the application with correct data. The entire test run will be replete with failures (See
Figure 6-6 for an illustration of cascading errors), and will result in lost testing time and
wasted personnel effort. This error is very specific to the login process, and must be
anticipated in order to be handled effectively. An error handling routine would need to be
employed to specifically handle this login error so that it wouldn‘t negatively impact the
remainder of the test run, and to log data that may be used for analyzing the test run
appropriately. An illustration of this implementation is provided in Figure 11-3 and Figure
11-4.

11.1.2 Component Implementation


Component Implementation is a term meant to address a common approach for
handling errors, particularly in more complex automated test frameworks. An automated
test component may represent a function, script, object, etc., that helps modularize the
automated test framework. As automated test frameworks increase in maturity, so do
the reusable components, so it then becomes logical to protect the framework by
protecting each of its major components. One of the most efficient approaches for
accomplishing this is to create a pass/fail results check at the conclusion of the
component, and then implement an error handling routine as necessary upon evaluation
of that pass/fail result. Figure 5-3 reveals an example of this approach. In this figure the
Driver Script component‘s pass/fail status is based on the status of the scripts that it
calls. This information is then used to implement error handling in such a way that it is
highly reusable throughout the framework.

This is one of the most efficient approaches, because it allows for a relatively high
degree of error handling, without having to protect every single critical automated test
step in every script.
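The pass/fail check at the component boundary might look like this; the component names and statuses are hypothetical, and the shared handler stands in for the framework's error handling routine.

```python
handled = []

def handle_error(component_name, status):
    # One shared error handling routine, reused by every component
    handled.append("%s -> %s" % (component_name, status))

def run_components(components):
    results = {}
    for name, component in components:
        status = component()          # each component returns Pass/Fail
        results[name] = status
        if status != "Pass":          # the boundary check
            handle_error(name, status)
    return results

results = run_components([
    ("login", lambda: "Pass"),
    ("create_order", lambda: "Fail"),
    ("logout", lambda: "Pass"),
])
```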

11.1.3 Run Implementation


The greatest error handling coverage comes from this type of implementation. It is often
difficult to foresee some errors that may occur, so this mechanism handles errors at any
point at which they may occur, by serving as "big brother" to the entire automation run,
monitoring the automated test run at every step. This is accomplished differently with
varying tools and scripting language technologies but it usually involves employing
some feature that exists separately from the script that is currently being executed. This
feature monitors the run-time automation environment for error conditions, and deploys
the error handling routine when necessary. This feature may often result in noticeably
slower performance of the automated tests, and should therefore be used sparingly.
One way to provide the effect of Run Implementation, without the negative performance
implications is to use one of the more mature frameworks – Keyword Driven or highly
modularized Functional Decomposition – to build application tests largely by combining
reusable components. The error handling can then be deployed using the Component
Implementation, but since nearly every step uses a component, error handling will be
deployed at each major step. Figure 4-3 helps to illustrate how this works. In this
illustration of a functional decomposition test, the test (on the right of the illustration) is
built completely with components (on the left of the illustration). Provided that these
components are each equipped with error handling routines, every step in the
automated test is in effect covered with error handling. The keyword test illustrated in
Figure 6-4 also helps to reveal this concept. This keyword test has a Recovery column
that allows each line in the test to implement an error handling step, allowing the
framework to behave as if Run Implementation is used for error handling. In reality, the
Component Implementation is applied to each function that is associated with a
keyword in the Action column.

11.2 Error Handling Development


Short Answer
The process for developing an error handling routine largely involves identifying a
method for capturing an error, determining a way to handle the error, and documenting
that error.

Figure 11-2: Error Handling Development Process

Error handling routines are critical for test automation, particularly for effectively
resolving runtime errors and other unexpected system events, because excessive
runtime errors are the enemy of test automation robustness and reliability. Figure 11-2
illustrates an error handling development process that may be used for implementing a
successful error handling approach. These steps include:
1. Diagnose Potential Errors
2. Define Error Capture Mechanism
3. Create Error Log Data
4. Create Error Handler Routine

11.2.1 Diagnose Potential Errors


In an effort to effectively handle an error condition, it‘s necessary to predict the types of
errors that the automated tests may encounter. The errors that may be encountered are
largely a product of the type of application that is being tested, but errors that are more
specific to automated test execution may be categorized in the following manner:
Abrupt Application Termination
Popup Dialogs
Object/Data Failures

Test Verification Failures
(Note: these categories are more specific than the generic list of error types in Figure
10-1)

11.2.1.1 Abrupt Application Termination


An abrupt application termination, also known as a 'crash', is a condition where the AUT
unexpectedly suspends functionality and stops responding to system stimulus, both
external and internal. The crashed program may simply freeze the current application
page on the screen, the AUT may completely close down, or the entire computer may
crash (known as a system crash). Abrupt application termination may have numerous
effects on an automated test run. It will obviously cause problems due to the fact that
the application will be unavailable while the automated test is attempting to perform
actions on it. In addition, crashes often have a tendency of leaving some application
processes open – processes that may prevent the application from being successfully
re-invoked. Therefore, if the application crashes during one test, subsequent tests that
attempt to re-invoke the application will be unsuccessful, resulting in a chain of automated
test failures (See Figure 6-6 for an illustration of cascading errors). Another problem that
may be caused by abrupt application termination is that the application may be left
in an unstable state. The application may end in the middle of
some important data transaction or other activity that is important for successful test
execution.

11.2.1.2 Popup Dialogs


Popup dialogs are windows that are smaller than standard windows and come without a
lot of the features that may be common to other windows in the AUT. The Login Error
illustrated in Figure 11-1 reveals a popup dialog. The unexpected appearance of popup
dialogs is particularly troublesome for automated tests for several reasons. Popup
dialogs that belong to the AUT typically suspend all application activity until the
appropriate dialog button has been clicked. This will prevent the current and subsequent
tests from performing any additional activities in the application, as well as preventing
the appropriate closing and/or re-invoking of the AUT.
Popup dialogs are not specific only to the AUT. They may also originate from the operating
system, from system update tools, etc. No matter the source, these dialog windows
have the ability to take control of the application away from automated tests. It is
therefore important to deal with these windows in a satisfactory manner.

11.2.1.3 Object/Data Failures


Failure to find a desired object or object data is a very common failure experienced by
automated test runs. Whether it is caused by an application defect, a runtime problem,
or a behind-the-scenes change to the application object properties, this type of error
may be catastrophic to automated test execution in many of the same ways that the
other categories may be.

11.2.1.4 Test Verification Failures


Test verification points are typically a good way to address issues before they become
problematic for the entire test run. For example, if a verification point seeks to ensure a
specific page is displayed in the application, and the verification point fails, then there‘s
a good chance that the failure will cause problems for the automated test run. These
failures may be anticipated, and the automated test may therefore be set up to handle
errors at this point.

11.2.2 Define Error Capture Mechanism


Once potential errors have been identified, the next step in the error handling
development process is to define a means of capturing the error. Error capture involves
the identification of an error occurrence, determining the type of error, and passing it off
to the error handling routine.
This is often done via the concept of an exception. When encountering error conditions,
automated scripts may trigger what is called an exception in the form of an error code.
Exceptions provide automated tests with the ability to pass control of the script to a
handler routine that will effectively handle the error conditions that caused the exception
to occur. Some exceptions are generated automatically by the automated test tool when
an error occurs but automators may also write code that triggers an exception in specific
circumstances. The automator then creates and/or customizes an error handling routine
that handles the error in such a way that automated tests may continue or end in a
smooth manner. When exceptions are used, error handling is often referred to as
exception handling. Even when exceptions aren't used, the term 'exception handling' is
often used interchangeably with 'error handling', but this skill category is intentionally
called 'error handling', based on the understanding that automated testing errors are not
always handled with exceptions.
A useful mechanism offered by many tools and languages for capturing errors is a
try/catch construct. This construct, shown below, will capture an error that occurs within
the code that has been wrapped in the construct.

try {
<automated test code>
If failure occurs throw error code
}
catch(exception class variable name){
<exception handling code>
}

In this construct, part or all of the automated test script would be included in the try
block of code represented by <automated test code>. In the event that an
exception occurs, the script captures the error then automatically shifts control to the
catch block of code represented by <exception handling code>.
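In Python, for example, the construct takes the following shape; the verification helper and the error text are hypothetical.

```python
class AutomationError(Exception):
    """Raised when a verification fails ("throw error code")."""

def verify_confirmation(screen_visible):
    if not screen_visible:
        raise AutomationError("Confirmation screen not displayed")
    return "Pass"

handler_log = []
try:                                     # the try block
    result = verify_confirmation(screen_visible=False)
except AutomationError as exc:           # the catch block
    handler_log.append("handled: %s" % exc)
    result = "Fail"
```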

11.2.3 Create Error Log Data


Refer to Skill Category 12: Automated Test Reporting & Analysis for more information
on error log data.

11.2.4 Create Handler Routine


Error handling may be contained inside of the script that uses it or may be established
in a separate utility that is called once an error is triggered.
An example of how to handle the error displayed in Figure 11-1 within a test script is as
follows:

1 Input "John" into Username textbox
2 Input "Jpass" into Password textbox
3 Click Login button
4 If "Login Error" dialog exists then
5 Click OK button
6 Log Error
7 Abort the test
8 End If
Figure 11-3: In-Script Error Handling Example

This example has some error handling in steps 4 through 8. It very specifically looks for
the anticipated "Login Error" dialog, and handles it by clicking the OK button, then
aborting the test. The advantage of handling this error is that time is not wasted by
continuing to execute a failed test; the report will be clean, because only the main error
will be logged, and the remaining tests will not be adversely affected.
Another approach to handling this error is to use a separate error handler.

1 Input "John" into Username textbox
2 Input "Jpass" into Password textbox
3 Click Login button
4 scrVar = Verify "Welcome Screen" exists
5 If scrVar Not Equal to "Pass" then
6 Pass scrVar to Exception Handler
7 End If
Figure 11-4: Passing to Error Handler

Figure 11-4 reveals how a separate error handler component may be introduced. In this
illustration an error code is received from a verification point (line 4), then that code is
sent to an error handler, illustrated in Figure 11-5, in the event that the code is not a
"Pass". If a try/catch construct is used, the if-then statement would be unnecessary. The
error handler would simply be called from within the catch block.

1 Select Case errorCode
2 Case 102
3 <error handling instructions>
4 Call Reporter to log error handling
5 Case 201
6 <error handling instructions>
7 Call Reporter to log error handling
8 Case …
9 End Select
Figure 11-5: Error Handler Example
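The Select Case structure of Figure 11-5 maps naturally onto a dispatch table. A sketch: the error codes 102 and 201 come from the figure, while the recovery actions and function names are hypothetical.

```python
report = []   # stands in for the separate Reporting component

def recover_login_error(code):
    report.append("code %d: dismissed Login Error dialog" % code)

def recover_data_error(code):
    report.append("code %d: reset test data" % code)

HANDLERS = {102: recover_login_error, 201: recover_data_error}

def handle_error(code):
    handler = HANDLERS.get(code)
    if handler is None:                      # unanticipated error code
        report.append("code %d: unhandled" % code)
        return False
    handler(code)
    return True

handled = handle_error(102)
```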


11.3 Common Approaches for Handling Errors


Short Answer
When implementing an error handler, there are two parts to consider: Error Recovery
and Script Recovery. Error Recovery handles the error within the AUT while Script
Recovery handles the continued execution of the automated test run.

Error recovery involves dealing directly with the error that occurs so that it may be
resolved in a satisfactory manner. Some common approaches to error recovery are:
AUT Cleanup – Resetting AUT initial conditions, or resetting AUT data
AUT Shutdown – Closing and/or restarting the AUT
Script recovery addresses how the automated test run is to proceed. Common
approaches for addressing script recovery are:
Re-execute Step – Re-execute the test step(s) that failed
Skip Step – Skip the failed test step(s) and move on to some other portion of the
test
Re-execute Test – Re-execute the entire test that contained the failed step(s)
Abort Test – Terminate the current failed test and move on to the next test
Abort Test Run – Terminate the entire test run
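The Re-execute Step approach, for instance, can be sketched as a bounded retry wrapper; the flaky step below is simulated, and the attempt limit is an assumption.

```python
def run_with_retry(step, attempts=3):
    """Re-execute a failed step up to `attempts` times before
    giving up and letting the run's abort logic take over."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return step(), attempt
        except Exception as exc:
            last_exc = exc
    raise RuntimeError("step failed after %d attempts" % attempts) from last_exc

calls = {"count": 0}
def flaky_step():                 # fails twice, passes on try 3
    calls["count"] += 1
    if calls["count"] < 3:
        raise TimeoutError("screen not ready")
    return "Pass"

result, attempt = run_with_retry(flaky_step)
```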

11.4 Resource References for Skill Category 11


Exception Handling
Neil Fraser. Exception Handling. Available at
http://neil.fraser.name/writing/exception/
Wikipedia. Exception Handling. Available at
http://en.wikipedia.org/wiki/Exception_handling

Skill Category 12: Automated Test Reporting & Analysis

Primary role involved: Lead Automation Architect, Cross Coverage Coordinator,
Automation Engineer
Primary skills needed: Analysis, logging, reporting

Test reports and logs are important for communicating the status of an automated test
effort as well as for analyzing and debugging automated tests. Reports are essential
in the identification of issues and in determining whether these issues are related to
application defects or automation framework problems. In addition, automated testing
depends on test reports to substantiate claims about its effectiveness and value.
This category addresses the types of reports that are commonly generated and how to
effectively produce these reports, so that time can be saved in analysis and reporting.

12.1 Automation Development Metrics


Short Answer
Automation development metrics should provide a quick look into the progress of the
automation development effort, as well as insight into how automation is making the
testing effort more efficient.

The ability to show automated test development progress is critical to the success and
survival of an automated testing development effort. These metrics need to first convey
that automated test development tasks are being completed. If several weeks go by
with several man-hours being burned on test automation, stakeholders will reasonably
expect to see increases in the number of completed automated tests. Metrics will also
need to show the impact of test automation on the entire testing effort. The metrics
should be provided in a format that reflects how the data is grouped in the framework
(see section 6.3.1 for more information on test grouping). For example, Figure 12-1:
Automation Development Sample Metrics offers several metrics grouped by Functional
Area.


Functional Area      Number of     Number        Number      Total Automation   Total Automatable
                     Total Tests   Automatable   Automated   Completion %       Completion %
Administration       24            24            5           20.83%             20.83%
Authentication       33            10            0           0.00%              0.00%
Content Management   200           110           12          6.00%              10.91%
Orders               155           125           73          47.10%             58.40%
Search               98            98            21          21.43%             21.43%
Totals               510           367           111         21.76%             30.25%

Figure 12-1: Automation Development Sample Metrics

The metrics included in the illustration are as follows:


Number of Total Tests – The total number of tests that exist.
Number Automatable – The total number of tests identified as able and
desired to be automated.
Number Automated – The total number of tests that have been successfully
automated.
Total Automation Completion % – The percentage of Total Tests that have been
automated (Number Automated ÷ Number of Total Tests)
Total Automatable Completion % – The percentage of Automatable tests that
have been automated (Number Automated ÷ Number Automatable)
The percentages are important because they tell two different stories. The Total
Automation Completion % speaks to the impact of automation on the entire testing
effort. For example, the illustration shows that 21.76% of all existing tests are now
covered by test automation. The Total Automatable Completion % speaks to the
progress of test automation development against the goals that have been established.
In the illustration, 111 out of 367 – or 30.25% – of the total tests planned for automation
have been completed. In other words, the automated test team has completed 30.25%
of its goal of 367 automated tests.
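The two percentages are simple ratios; computing them for the Totals row of Figure 12-1:

```python
def completion_metrics(total_tests, automatable, automated):
    return {
        # impact of automation on the whole testing effort
        "total_automation_pct": round(100.0 * automated / total_tests, 2),
        # progress against the automation goal
        "automatable_pct": round(100.0 * automated / automatable, 2),
    }

# Totals row of Figure 12-1: 510 total tests, 367 automatable, 111 automated
metrics = completion_metrics(total_tests=510, automatable=367, automated=111)
```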

Figure 12-2: Automation Development Sample Chart (chart legend: Automated, Stay Manual, Remaining)

Additional useful metrics not included in the chart:


Saved Time % – The percentage of time on the total test execution that has been
saved by automation
Saved Automatable Time % – The percentage of time on the automatable tests
that has been saved
The Figure 12-1: Automation Development Sample Metrics illustration will help to
explain the difference between these two metrics. The Saved Time % compares test
execution time of all tests without automation (510 tests executed manually) to
execution time of all tests with automation (399 tests executed manually + 111 tests
executed by automation). This offers insight into how automation has affected the
efficiency of the entire test effort. The Saved Automatable Time % compares execution
time of the 111 automated tests when executed manually to the execution time of the
same 111 tests when executed with automation. This offers insight into how automation
generally affects the efficiency of testing by looking at how the automated tests have
become more efficient. Be sure to include automated test maintenance and analysis
times when addressing these efficiency percentages.
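Neither percentage can be computed from Figure 12-1 alone, since both require execution times. A sketch assuming illustrative times of 10 minutes per manual test and 1 minute per automated test (these durations are assumptions, not from the manual):

```python
def saved_time_pct(total_tests, automated, manual_min=10.0, auto_min=1.0):
    baseline = total_tests * manual_min               # everything manual
    current = ((total_tests - automated) * manual_min
               + automated * auto_min)                # with automation
    return round(100.0 * (baseline - current) / baseline, 2)

# Saved Time %: all 510 tests, 111 of them now automated
overall = saved_time_pct(total_tests=510, automated=111)
# Saved Automatable Time %: just the 111 automated tests
automated_only = saved_time_pct(total_tests=111, automated=111)
```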


12.2 Execution Reports and Analysis


Short Answer
There are typically two types of execution reports that are generated for test automation
analysis: high-level reports and low-level reports (logs). The high-level reports are
normally for management while the low-level reports are typically for automation
engineers.

The compilation and collation of automation execution data can be an extremely
repetitive process, and extremely time consuming if done manually. Therefore, the
automation framework should be coded and/or configured to perform this function
dynamically following the completion of a run. The information should be collated into at
least two different types of reports:
High-level execution reports
Low-level execution logs

Figure 12-3: Types of Execution Reports

High-level reports are typically developed to easily relay information to upper-level
management, and may include the following information:
Run Date/Time
Test Suites/Functional Area
Total Executed (By Suite/Functional Area)
Total Passed Tests (By Suite/Functional Area)
Total Failed (By Suite/Functional Area)
Total Incomplete (By Suite/Functional Area)

ATI‘s Test Automation Body of Knowledge (TABOK) Manual Page 191


TABOK Segment 3: Microscopic Process Skills

This information generally summarizes the entire test run as opposed to providing
information on a test-by-test or step-by-step basis. This bigger picture is normally what
upper-level management is most concerned with. High-level reports should organize
their information in a way that reflects how the data was grouped (see section 6.3.1 for
more information on test grouping).
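A framework hook that rolls raw per-test results up into these suite-level counts might look like the following sketch (the field names and status values are illustrative, not mandated):

```python
from collections import Counter

def summarize_by_suite(results):
    """Roll per-test results up into suite-level counts for a high-level report.

    `results` is a list of (suite, status) pairs, with status in
    {"pass", "fail", "incomplete"} -- an assumed, not standard, vocabulary.
    """
    counts = {}
    for suite, status in results:
        counts.setdefault(suite, Counter())[status] += 1
    return {suite: {"executed": sum(c.values()),
                    "passed": c["pass"],
                    "failed": c["fail"],
                    "incomplete": c["incomplete"]}
            for suite, c in counts.items()}

run = [("Login", "pass"), ("Login", "fail"), ("Orders", "pass"),
       ("Orders", "pass"), ("Orders", "incomplete")]
print(summarize_by_suite(run)["Orders"])
# → {'executed': 3, 'passed': 2, 'failed': 0, 'incomplete': 1}
```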
Low-level reports are sometimes used for relaying information to management but are
usually used for providing detailed information about the test run to automated test
engineers. The following information is often in low-level reports:
Date/Time
Associated Screen
Associated Object
Data Used
Desired Output
Actual Output
Pass/Fail Status
Result Description (detailed description of the passed or failed step)
The information found in low-level logs, unlike high-level reports, is not a summary of
the run. If the log is an 'error log', this information is logged for every error identified
during the test run. If the log is a 'run log', this information is logged for every step in
the test run.
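A logging helper that a framework might use to emit one run-log entry per step, carrying the fields listed above, could be sketched as follows (the pipe-delimited format and field order are assumptions):

```python
import datetime
import io

def log_step(log_file, screen, obj, data, expected, actual):
    """Append one pipe-delimited run-log entry for a single executed step."""
    status = "PASS" if expected == actual else "FAIL"
    description = ("Step passed" if status == "PASS"
                   else f"Expected '{expected}' but got '{actual}'")
    entry = "|".join([datetime.datetime.now().isoformat(timespec="seconds"),
                      screen, obj, str(data), str(expected), str(actual),
                      status, description])
    log_file.write(entry + "\n")
    return status

# In-memory file stands in for a run log on disk
log = io.StringIO()
log_step(log, "Login", "PasswordText", "secret123", "Welcome", "Error")
print(log.getvalue().split("|")[6])  # → FAIL
```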
Several decisions must be made about automated test report production. The first
concerns the report format. Automation reports may be dynamically written in many
formats including:
Command prompt
Tool Specific files
Text files
Spreadsheet files
Word Processor files
XML
HTML

Another decision is where to store the reports. They are often stored in the same folder
as the tests they're associated with, and sometimes in a separate results folder. At
times, they are only generated in a command prompt window and not stored in any file.
It is a good practice, however, to store the data in a file that may be placed under
configuration management and reviewed at any time as historical data. See Figure 12-5
for an example of a test log.

12.2.1 Analysis
Analysis is important, but can be time consuming and fairly frustrating for the automated
test team and its stakeholders if there is no identified process for how it is to be
conducted following a test run. A process for results analysis may resemble the one
illustrated in Figure 12-4: Automated Test Results Analysis.

Figure 12-4: Automated Test Results Analysis

This process begins with a preliminary review of the automated test reports and logs.
Prior to moving into detailed analysis of any potential failures that may be in the report,
it is important to address any immediate reporting needs that may exist. For example,
management may be particularly concerned with a specific area of the application, and
needs to know immediately if there is a definitive positive result for that area of the
application. In such a situation, a quick review of the results and logs will be followed by
an immediate response to management regarding their pressing concerns.


After the immediate reporting needs are addressed, if there are no failures in the run, a
complete final execution report is submitted to management. In the event that errors do
exist, it's time to take a closer look at each of the failures. After picking an error, a
determination must be made on whether it is an application error or a script error. This
determination is based on the AUT and the level of detail provided in the log.

Figure 12-5: Automated Test Execution Log Sample

For example, the sample log file in this illustration contains two lines that represent a
single application failure (note: a high-level report would probably contain only the
second of the two lines, while the low-level log provides both). The failure is an inability
to log into the application. The log's first entry reveals that the login failure is probably
due to the script's inability to locate the PasswordText textbox. This information, along
with a quick look at the AUT, is enough to make a preliminary determination of the
cause of the error. If, upon manually opening the application and visually inspecting the
page for the PasswordText object, the object exists, this is an indication that something
went wrong during the script execution: either the object attributes have changed, there
was a synchronization problem, or some run-time fluke has occurred. If the
PasswordText textbox is missing, then it's clear that an application error exists. Refer to
Skill Category 10: Debugging Techniques for more information on script debugging and
common automation errors.
If an error is identified as a script error then the script should either be added to the
rerun list – the list of automated tests to re-execute after analysis is complete – or
flagged for post analysis debugging and manual verification. Conversely, if the error is
identified as an application error, steps should be taken to reproduce it manually.
Automation is often used as a scapegoat, so the first thing a software developer will ask
is whether it was done manually or with a script. Having already reproduced it manually
will save a lot of time and effort.
If the failure cannot be reproduced manually, then an effort must be made to determine
why it occurs in the script but not in the application. Eventually, however, if it can't be
reproduced, it will need to be flagged for later debugging so that the remaining failures
can be debugged.
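The triage loop described above can be sketched as follows; the two classification functions stand in for the human judgment calls (log inspection and manual reproduction) and are purely illustrative:

```python
def triage(failures, is_script_error, reproduces_manually):
    """Sort run failures into a rerun list, a debug-later list, and
    confirmed application defects, following the analysis flow above."""
    rerun, debug_later, app_defects = [], [], []
    for failure in failures:
        if is_script_error(failure):
            rerun.append(failure)          # re-execute after analysis
        elif reproduces_manually(failure):
            app_defects.append(failure)    # report with manual repro in hand
        else:
            debug_later.append(failure)    # flag so remaining failures proceed
    return rerun, debug_later, app_defects

fails = ["sync timeout", "login page missing field", "ghost failure"]
r, d, a = triage(fails,
                 is_script_error=lambda f: "timeout" in f,
                 reproduces_manually=lambda f: "missing" in f)
print(r, d, a)  # → ['sync timeout'] ['ghost failure'] ['login page missing field']
```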

This process continues until all errors have been addressed and the rerun list has been
executed. Then the final execution report is delivered to management.

12.2.2 Addressing Deferred Errors

Upon executing tests, it is not uncommon to encounter a failure that is classified as low
priority. The failure could be a script error that occurs sporadically but is unexplainable.
For example, the automated tool may occasionally, for some unknown reason, have a
problem accessing an object. It may be clear from visually watching the run that the
object is there, yet debugging has not uncovered why the object is sporadically not
recognized by the automated test. It is decided that no further debugging will be done
on this issue, and the team will simply re-execute the test when the failure happens.
Conversely, instead of being attributable to the automated tool or script, the low priority
failure could be an application failure whose fix is being deferred to a later release.

Figure 12-6: Sample Deferred Error

For example, several automated scripts may check the username and password field
labels on a Login screen. Therefore, if the label for the password field was inadvertently
changed, many automated scripts will fail. This may be deemed low priority by
developers and not fixed immediately. Therefore, all subsequent executions of the
scripts will yield the same known error.
The continuous appearance of expected failures in the automation execution report
instills a numbness that eventually leads to these errors being completely overlooked.
This is a dangerous practice, because new errors that require attention may be
mistakenly dismissed along with the known ones. Therefore, it's important to establish
an approach for handling known errors. Some approaches include:
Changing the failure to a warning
Changing the failure to the new expected result
Completely removing the failed verification point
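The first approach, downgrading a known failure to a warning, might be sketched as follows (the known-issue list and defect IDs are hypothetical):

```python
KNOWN_ISSUES = {"DEF-1042"}  # hypothetical deferred-defect IDs

def classify(step_status, defect_id=None):
    """Downgrade a failure tied to a known, deferred defect to a warning
    so that new failures stay visible in the execution report."""
    if step_status == "FAIL" and defect_id in KNOWN_ISSUES:
        return "WARN"
    return step_status

print(classify("FAIL", "DEF-1042"))  # → WARN  (known, deferred defect)
print(classify("FAIL", "DEF-9999"))  # → FAIL  (new failure, still reported)
print(classify("PASS"))              # → PASS
```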

12.3 Resource References for Skill Category 12

Reporting to Management
Dion Johnson. A-Commerce Marketing. Available at
http://www.automatedtestinginstitute.com/home/ASTMagazine/2010/AutomatedSoftwareTestingMagazine_March2010.pdf



TABOK Segment 4

Appendices






Appendix A: Sample Evaluation Criteria

This section contains sample evaluation criteria that may be used for evaluation of
functional automated testing tools.

A.1. Criteria

The following criteria may be used to assess functional automated test tools:

1. Ease of Use/Learning Curve – The intuitiveness of the tool, or the extent to which
specialized or vendor-specific training is required before the tool can be fully utilized.
2. Ease of Customization – The ease with which certain features of the tool can be
customized.
3. Cross Platform Support – The ability of the tool to function across different
platforms/browsers.
4. Test Language Features – The level of support for test language features provided
by the tool.
5. Test Control Features – The measure of error-free control offered by the tool's test
control features.
6. Distributed Test Execution – The ability to execute tests across remote locations
from one central location.
7. Test Suite Recovery Logic – A measure of the ability of the tool to recover from
unexpected errors.
8. Tool Integration Capability – The ability of the tool to integrate with other tools.
9. Tool Reporting Capability – How the tool presents test results, considering the
detail, display and customization of the reports.
10. Vendor Qualifications – The qualifications of the vendor in terms of financial stability,
continued/consistent growth patterns, market share, and longevity.
11. Vendor Support – The amount of technical support available. Also considers the
responsiveness of the vendor's customer service in answering and following up on
questions and problems.
12. Licensing – Provides licensing support to meet the needs of the client.

Each tool that is evaluated will be assessed against the above criteria using a weighted
ranking scale. The ranking is done on a scale from 1 to 5, where 1 indicates that the
tool possesses full and enhanced functionality or that significant benefit was observed
in the area, and 5 indicates that the tool does not provide the function or that there is no
perceived benefit from the functionality/feature.
Following is a chart that presents a generalized interpretation of the rankings for each
function.

A.2. Criteria Rankings

Ease of Use
1 – No training required
2 – Some in-house training is required
3 – Vendor-specific or instructor-led training required
4 – Requires continued self study even after training has been performed
5 – Hard to use even after training

Ease of Customization
1 – Highly customizable
2 – Sufficiently customizable
3 – Limited customization, somewhat rigid
4 – Very difficult to customize
5 – Not customizable

Cross Platform Support
1 – High platform portability
2 – Sufficient platform portability
3 – Limited platform portability
4 – Extremely limited platform portability
5 – No platform portability

Test Language Features
1 – Supports all the standard test language features
2 – Supports most of the test language features
3 – Supports some test language features
4 – Supports few test language features
5 – Supports no test language features at all

Test Control Features
1 – High level of error-free control
2 – Sufficient level of error-free control
3 – Limited level of error-free control
4 – Extremely limited level of error-free control
5 – No error-free control

Distributed Test Execution
1 – High level of distributed test execution
2 – Sufficient level of distributed test execution
3 – Limited level of distributed test execution
4 – Extremely limited level of distributed test execution
5 – No level of distributed test execution

Test Suite Recovery Logic
1 – Supports recovery from all standard application and environment errors
2 – Supports recovery from most standard application and environment errors
3 – Supports recovery from some standard application and environment errors
4 – Supports recovery from very few standard application and environment errors
5 – Supports recovery from no standard application and environment errors

Tool Integration Capability
1 – High level of integration
2 – Sufficient level of integration
3 – Limited level of integration
4 – Extremely limited level of integration
5 – No level of integration

Tool Reporting Capability
1 – High level of reporting capability
2 – Sufficient level of reporting capability
3 – Limited level of reporting capability
4 – Extremely limited level of reporting capability
5 – No level of reporting capability

Vendor Qualifications
1 – Highly reputable
2 – Sufficiently reputable
3 – Somewhat reputable
4 – Insufficiently reputable
5 – Not reputable

Vendor Support
1 – High level of support
2 – Sufficient level of support
3 – Limited level of support
4 – Extremely limited level of support
5 – No level of support

Licensing
1 – High level of licensing support
2 – Sufficient level of licensing support
3 – Limited level of licensing support
4 – Extremely limited level of licensing support
5 – No licensing support
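Applying the weighted ranking scale might look like the following sketch. The criteria subset and weights are illustrative; because 1 is the best rating on this scale, the tool with the LOWER weighted total wins:

```python
def weighted_score(ratings, weights):
    """Weighted total of 1-5 ratings; with this scale, lower is better."""
    assert ratings.keys() == weights.keys()
    return sum(ratings[c] * weights[c] for c in ratings)

# Hypothetical weights reflecting each criterion's importance to the project
weights = {"Ease of Use": 3, "Cross Platform Support": 2, "Vendor Support": 1}
tool_a  = {"Ease of Use": 2, "Cross Platform Support": 1, "Vendor Support": 3}
tool_b  = {"Ease of Use": 4, "Cross Platform Support": 2, "Vendor Support": 1}

print(weighted_score(tool_a, weights))  # → 11  (preferred: lower total)
print(weighted_score(tool_b, weights))  # → 17
```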

A.3. Criteria Checklist

Criterion/Feature Rating (1 – 5)
Tool 1 Tool 2
1 Ease of Use

Record/Playback with minimal coding
Application language easy to understand
Multiple statements can be commented
2 Ease of Customization
Tool Bars help customize/reflect any commonly used tool
capabilities
Ease of adding or removing any fields as and when
necessary
Contains Editor with formats and fonts for better readability
3 Cross Platform Support
Supports multiple platforms (e.g., Unix, Windows 7)
Supports single or multiple browsers (e.g., Firefox, I.E., Chrome)
Cross Technology Support – supports multiple technologies (e.g., VB, Java,
PowerBuilder)
4 Test Language Features
Allows Add-ins/extensions compatible with third party tools.
Contains Test Editor and Debugging feature
Complexity of Test Scripting Language
Robust test scripting language that will allow modularity.
Test Scripting Language allows for variable declaration and
capability to pass parameters between functions
Test script compiler available for debugging errors
Supports interactive debugging by viewing values at run
time
Supports Data-driven Testing
Allows for automatic data generation
Displays start and end time of test execution
Allows for adding comments during recording
Allows for automated or manual synchronization capability
Supports verification of object properties
Supports database verification – Oracle/SQL etc.
Supports Text Verification
Allows for automatic data retrieval from any data source (e.g., an RDBMS) for
Data-driven testing
Allows the use of common spreadsheet for Data-driven
testing
Ability to compare the results of test execution of different
runs of the same test or different runs of different tests
Ability to run the same test multiple times and track results
Allows Replay in both Batch Mode and regular mode

Supports variable Parameterization
5 Test Control Features
Ability to schedule test execution at predefined times and
unattended (less manual intervention)
6 Distributed Test Execution
Allows for local or remote execution control across
networks
7 Test Suite Recovery Logic
Supports Unexpected Error Recovery
8 Tool Integration Capability
Supports integration with pertinent SDLC tools.
9 Test Reporting Capability
Script Error generated can be easily understood and
interpreted
Allows Error Filtering and reporting features
Supports Metric collection and analysis
Supports Multiple report views
Allows reports to be exported to notepad, word, excel,
HTML or any other format
10 Vendor Qualifications
Financial stability
Continued/consistent growth patterns
Market share
Longevity
11 Vendor Support
Patches provided as and when required and needed
Upgrades provided on regular basis
Scripts from previous versions can be used in newer
versions
Future updates should not require major rework of existing
test scripts.
Supports Help feature that is well documented that can be
easily understood
Vendor provides a support website
Phone Technical support by the vendor as needed
Provided features and functions are supported
12 Licensing
Allows for temporary, floating, local and server licenses


Appendix B: Sample Coding Standards Checklist

This checklist can be used for performing peer reviews on automated tests.

Header
Format is as follows:
################################################################
# File:
#
# Created by:
# Modification Date (Last):
# Modified By (Last):
# Purpose:
################################################################
Includes Test Name
Includes Date
Includes Date of Last Revision
Includes Created By
Includes Purpose
Includes Modified By

Constant Declarations
Syntax: static/public constant <CONSTANT_NAME> = <const_value>;

Defined at start of script


Constant name in capital letters
Constant name contains no spaces (underscores acceptable)
If defined as public, also defined in library and initialization script

Variable Declarations
Syntax: [private/public]<variable_name> = [<variable_value>];


Declared below Header and Constant Declaration


Variable name contains no spaces
Hungarian notation used

Array Declarations
Syntax: <array_name>[0]=<value_0>;…<array_name>[n]=<value_n>

Contained with variable declarations

User-Defined Functions
Syntax:
[public/private] function<function_name>([In/Out/InOut]<parameter_list>)
{
Variable declarations:
statement_1;
statement_n;
}
Defined after variable declarations
Placed in function library if declared as public
Standard return values

Comments
Start and end with #

Flow Control Statements


Indented (Tab)
If/Else Statement Syntax:
if ( expression )
{
statement_1;


…………..;
}
else
{
statement2 ;
}

For Loop Syntax:

for(<initial_condition>; <end_condition>; <index_increment/decrement>)
{
statement_1;
…………..
statement_n;
}

While Loop Syntax:

while(<condition>)
{
statement_1;
………….
statement_n;
}

Do Loop Syntax:
do
{
statement_1;
…………..
statement_n;
}


while(<condition>)

Case Statement Syntax:


select( expression )
{
case expr1:
statement_1;
…………..
statement_n;
case exprn:
statement_1;
…………..
statement_n;
case else:
statement_1;
…………..
statement_n;
}

Object Map
Logical names modified for clarity
No duplicate objects


Appendix C: Sample Manual-to-Automation Transition

[State diagram: a new or reevaluated manual test case enters Evaluation status.
From Evaluation, a decision not to automate moves it to Do Not Automate status,
while a decision to automate (meets automation standards) moves it to Automate
status. From Automate, discovering an automation issue moves it to Issue status,
and successfully automating the test moves it to Complete status. Resolving an
issue moves the test from Issue back to Automate status.]

1. When creating a manual test case, the manual tester places it in Evaluation status.
2. A determination should be made on whether or not to automate the test.
a. If the manual tester decides the test shouldn't be automated, it is placed in Do
Not Automate status.
b. If the manual tester decides the test should be automated, move on to the next
step.
3. Once the test is ready for automation, the manual tester places the test in Automate
status.
a. If an issue is found while automating, the automator places the test in Issue
status. The manual tester must work with the automator to resolve the issue.
Once the issues are resolved, the automator places the test back in Automate
status.
4. Upon successful automation of the manual test, the automator places the test in
Complete status.
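The allowed status changes in the steps above form a small state machine, which a team's tracking script might enforce as in this sketch:

```python
# Legal transitions between automation statuses, per the steps above
TRANSITIONS = {
    "Evaluation":      {"Do Not Automate", "Automate"},
    "Automate":        {"Issue", "Complete"},
    "Issue":           {"Automate"},
    "Do Not Automate": set(),
    "Complete":        set(),
}

def change_status(current, new):
    """Apply a status change, rejecting moves the workflow does not allow."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new

status = "Evaluation"
status = change_status(status, "Automate")   # meets automation standards
status = change_status(status, "Issue")      # automation issue discovered
status = change_status(status, "Automate")   # issue resolved
status = change_status(status, "Complete")   # successfully automated
print(status)  # → Complete
```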


Appendix D: Sample ROI Calculations

To regularly apprise decision-making stakeholders of the value of automated testing in
the application's lifecycle, the test team must regularly identify automation's potential
return-on-investment (ROI), a ratio of benefits to costs. This section includes sample
ROI calculations for Simple ROI, Efficiency ROI and Risk ROI.
Simple ROI Sample Calculation

Let us examine a scenario involving a project with an application that has several test
cycles that equate to a weekly build for six months out of the year. In addition, the
project has the following manual and automation factors that will be used in calculating
ROI:

General Factors
1,500 test cases, 500 of which can be automated
Tester Hourly Rate – $60 per hour

Manual Factors
Manual Test Execution/Analysis Time (average per test) – 10 minutes

Automation Factors
Automated Tool and License Cost (5 licenses) – $20,000
Automated Tool Training Cost – $5,000
Automated Test Machine Cost (3 machines) – $3,000
Automated Test Development/Debugging Time (average per test) – 60 minutes (1 hour)
Automated Test Execution Time – 2 minutes
Automated Test Analysis Time – 4 hours for 500 tests
Automated Test Maintenance Time – 8 hours per build (for 500 tests)

In order to calculate the ROI, we must calculate the investment costs and the gain. The
investment costs may be calculated by expressing automation factors in monetary
terms. The automated tool and license cost, training cost, and machine cost are
straightforward but the other factors will need to be processed.
Automated Test Development Time may be converted to a dollar amount by
multiplying the average hourly automation time per test (1 hour) by the number of
tests (500), then by the Tester Hourly Rate ($60) = $30,000.
Automated Test Execution Time doesn‘t need to be converted to a dollar figure in
this example, because the tests will ideally run unattended on one of the
Automated Test Machines. Therefore, no human loses time in execution.
The Automated Test Analysis Time can be converted to a dollar figure by
multiplying the Test Analysis Time (4 hours per week given that there is a build
once a week) by the timeframe being used for the ROI calculation (6 months or
approximately 24 weeks), then by the Tester Hourly Rate ($60). 4 x 24 x 60 =
$5,760.
The Automated Test Maintenance Time can be converted to a dollar figure by
multiplying the Maintenance Time (8 hours per week) by the timeframe being
used for the ROI calculation (6 months or approximately 24 weeks), then by the
Tester Hourly Rate ($60) = $11,520.

The total investment cost can now be calculated as:

  $20,000  Automated Tool and License
    5,000  Automated Tool Training
    3,000  Automated Test Machine
   30,000  Automated Test Development Time
    5,760  Automated Test Analysis Time
 + 11,520  Automated Test Maintenance Time
  $75,280  Total Investment Cost


That's a lot of money! But before you decide to eliminate test automation from your
project, let us consider the gain. The gain is the Manual Test Execution/Analysis Time
that will no longer exist once the set of tests has been automated. The Manual
Execution/Analysis Time can be converted to a dollar figure by multiplying the
Execution/Analysis Time (10 minutes, or .17 hours) by the number of tests (500), then
by the timeframe covered by the ROI calculation (6 months, or approximately 24
weeks), and by the Tester Hourly Rate ($60).
The gain is therefore:

      .17  Execution/Analysis Time (in hours)
    x 500  Number of tests
     x 24  Weeks covered by the ROI calculation
     x 60  Tester Hourly Rate
 $122,400  Gain

Inserting the gains and investment costs into the formula:

ROI = (Gains – Investment Costs) / Investment Costs
    = ($122,400 – $75,280) / $75,280
    = 62.6%

Still think an investment cost of $75,280 is too much?


The ROI is calculated at 62.6%, indicating that introducing well-planned automated
testing will result in a 62.6% return on the investment of the tools and test effort. Note
that over time this ROI percentage will increase, because the tool cost eventually
becomes irrelevant and is replaced in the calculation by tool support costs. Also, recall
that manual test development and maintenance are not considered because these
activities must take place regardless of whether or not a test is automated.
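The Simple ROI arithmetic above can be reproduced in a few lines (the figures are those of the sample scenario):

```python
def simple_roi(gain, investment):
    """ROI % = (Gains - Investment Costs) / Investment Costs."""
    return (gain - investment) / investment * 100

HOURLY_RATE = 60
investment = (20_000                    # tool and licenses
              + 5_000                   # training
              + 3_000                   # test machines
              + 500 * 1 * HOURLY_RATE  # development: 500 tests x 1 hr
              + 4 * 24 * HOURLY_RATE   # analysis: 4 hr/week x 24 weeks
              + 8 * 24 * HOURLY_RATE)  # maintenance: 8 hr/week x 24 weeks
gain = 0.17 * 500 * 24 * HOURLY_RATE   # manual execution time eliminated

print(investment, round(gain))               # → 75280 122400
print(round(simple_roi(gain, investment), 1))  # → 62.6
```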

Efficiency ROI Sample Calculation

The project scenario from the Simple ROI calculation will be used for this calculation
also. The main difference is that the investment and gains will need to be calculated in
relation to the entire bed of functional tests, not just those that are automated. This
distinction is due to increased efficiency being a property of the entire test effort, not
just of the tests that are being automated.


Once again, in considering the equation illustrated in Figure 1-7, we see that the
investment and gain must be calculated. The investment is derived from calculating the
time investment required for automation development, execution and analysis of 500
tests, and then adding the time investment required for manually executing the
remaining 1000 tests. Calculations are expressed in terms of days rather than hours
because the automated tests ideally operate in 24 hour days while manual tests operate
in 8 hour days. Since tests runs can often abruptly stop during overnight runs, however,
it is usually a good practice to reduce the 24 hour day factor to a more conservative
estimate of about 18 hours.
Automated Test Development Time is calculated by multiplying the average
hourly automation time per test (1 hour) by the number of tests (500), then
dividing by 8 hours to convert the figure to days. This equals 62.5 days. (Note:
This portion of the calculation may be omitted after the first run of the automated
tests unless more development is performed on the tests following the initial run.)
Automated Test Execution Time must be calculated in this example because
time is instrumental in determining the test efficiency. This is calculated by
multiplying the Automated Test Execution Time (2 min or .03 hours) by the
number of tests per week (500), by the timeframe being used for the ROI
calculation (6 months or approximately 24 weeks) then dividing by 18 hours to
convert the figure to days: .03 x 500 x 24 / 18 equals 20 days. Note that this
number will be reduced when tests are split up and executed on different
machines, but for simplicity we will use the single machine calculation (see
Section 1.3.5 for more information on the significance of running with multiple
machines).
The Automated Test Analysis Time can be calculated by multiplying the Test
Analysis Time (4 hours per week given that there is a build once a week) by the
timeframe being used for the ROI calculation (6 months or approximately 24
weeks), then dividing by 8 hours (since the analysis is still a manual effort) to
convert the figure to days. This equals 12 days.
The Automated Test Maintenance Time is calculated by multiplying the
Maintenance Time (8 hours) by the timeframe being used for the ROI calculation
(6 months or approximately 24 weeks), then dividing by 8 hours (since the
maintenance is still a manual effort) to convert the figure to days. This equals 24
days.
The Manual Execution Time is calculated by multiplying the Manual Test
Execution Time (10 min or .17 hours) by the remaining manual tests (1000), then
by the timeframe being used for the ROI calculation (6 months or approximately


24 weeks), then dividing by 8 to convert the figure to days. This equals 510 days.
Note that this number is reduced when tests are split up and executed by
multiple testers but for simplicity we will use the single tester calculation.

The total time investment (in days) can now be calculated thus:

   62.5  Automated Test Development Time
     20  Automated Test Execution Time
     12  Automated Test Analysis Time
     24  Automated Test Maintenance Time
  + 510  Manual Execution Time
  628.5  Total Time Investment (days)

This figure would decrease if more test engineers supported the effort, or increase with
fewer test engineers.
The gain is calculated in terms of the Manual Test Execution/Analysis Time thus:
The Manual Execution/Analysis Time can be converted to days by multiplying the
Execution/Analysis Time (10 minutes or .17 hours) by the total number of tests
(1,500), then by the timeframe being used for the ROI calculation (6 months or
approximately 24 weeks), then dividing by 8 hours to convert the figure to days.
This equals 765 days. Note that this number is reduced when tests are divided
among multiple testers (which would have to be done in order to finish execution
within a week). For simplicity, we will use the single tester calculation.

      .17  Execution/Analysis Time (in hours)
  x 1,500  Total Number of Tests
     x 24  Weeks covered by the ROI calculation
      / 8  Hours per day
      765  Total Gain (days)
Inserting the Investment and Gains into our formula:

ROI = (Gains – Investment Costs) / Investment Costs
    = (765 – 628.5) / 628.5
    = 21.7%

The ROI is calculated at 21.7%. This means that for each hour invested, .217 hours
were saved in execution. After the initial execution, the efficiency percentage increases,
because the automation development cost decreases.
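The day-based Efficiency ROI arithmetic can likewise be reproduced as a sketch (using the same rounded hourly figures, .03 and .17, that the text uses):

```python
def efficiency_roi(gain_days, invested_days):
    """Efficiency ROI % = (Gain - Investment) / Investment, in days."""
    return (gain_days - invested_days) / invested_days * 100

WEEKS = 24       # 6 months of weekly builds
MANUAL_HRS = 8   # manual work day, in hours
AUTO_HRS = 18    # conservative overnight-run day, in hours

development = 500 * 1 / MANUAL_HRS              # 62.5 days
execution   = 0.03 * 500 * WEEKS / AUTO_HRS     # 20 days
analysis    = 4 * WEEKS / MANUAL_HRS            # 12 days
maintenance = 8 * WEEKS / MANUAL_HRS            # 24 days
manual_rest = 0.17 * 1000 * WEEKS / MANUAL_HRS  # 510 days (1,000 manual tests)
invested = development + execution + analysis + maintenance + manual_rest

gain = 0.17 * 1500 * WEEKS / MANUAL_HRS         # 765 days: all 1,500 run manually

print(round(invested, 1), round(gain, 1))        # → 628.5 765.0
print(round(efficiency_roi(gain, invested), 1))  # → 21.7
```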


Risk Reduction ROI Sample Calculation


The project example from the Simple ROI Calculation will be used for this sample
calculation also, and the investment costs are calculated the exact same way. The
investment cost is therefore $75,280.
The gain is calculated in terms of the amount of money that would be lost if a
production defect were discovered in an untested area of the application. The loss may
relate to production support and application fixes, lost revenue due to users abandoning
the application, or simply not having the ability to perform the desired functions in the
application (either because insufficient requirements missed key functionality, or
because bugs were not averted or repaired until late in the development cycle). For the
sake of this scenario, we will assume that the potential loss is calculated at $500,000.
Inserting the Investment Costs and Gains into our formula:

    ROI = (Gains – Investment Costs) / Investment Costs = ($500,000 – $75,280) / $75,280 = 564.2%

The ROI is calculated at 564.2%, indicating a 564.2% return on the automation
investment, relative to a similar application for which automated testing was not used.
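The same formula applies directly to the monetary figures above; a minimal check using the sample values from this appendix:

```python
def roi_percent(gains, investment):
    """ROI as a percentage: (Gains - Investment Costs) / Investment Costs."""
    return (gains - investment) / investment * 100

# Gain: estimated production loss avoided; Investment: automation cost from above.
print(f"ROI = {roi_percent(500_000, 75_280):.1f}%")  # ROI = 564.2%
```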


Appendix E: The TABOK, SDLC and the Automated Testing Lifecycle Methodology

The Automated Testing Lifecycle Methodology (ATLM)[23] is a multi-stage approach for
automated test tool acquisition and implementation. Below is a diagram that reveals
how the ATLM ideally relates to the SDLC. The stages may not always correspond, but
this may provide some guidance on how automated testing may be integrated into each
stage of the SDLC.

[23] Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing: Introduction, Management,
and Performance. Boston, MA: Addison-Wesley, 1999.


Below are descriptions of each ATLM stage.

1. Decision to Automate – Addresses the process that goes into making an
   informed decision on whether test automation should be introduced to an organization
   or project. This process requires an understanding of the different tools and types of
   tools that support the SDLC so that a determination can be made on where
   automation may be best applied.
2. Test Tool Acquisition – Addresses the process of moving forward with the
acquisition of an automated test tool. This includes the identification of test
requirements and candidate tools, then conducting an evaluation of all eligible tools.
3. Automated Testing Introduction Process – Addresses the steps necessary for
successfully introducing automated testing to an organization or project. This
includes test tool set up, and the modification of existing processes and standards to
better support the tools.
4. Test Planning, Design and Development – Includes the development and
implementation of a Test Automation Implementation Plan, including the design and
development of the automated test framework. In the event that the automation
effort is focused on functional test automation, this stage also includes the selection
of tests to be automated, followed by the automation of the selected tests.
5. Execution and Management of Tests – Involves using the automated framework
to execute tests that have been designed and developed based on the
organization's test plan and goals. Also addressed in this stage are results analysis
and reporting, and a continuing effort to manage expectations by showing how the
results of the test automation implementation support the goals that have been laid
out. This stage also addresses the debugging and maintenance of automated tests
that may be a result of the execution and analysis.
6. Test Program Review/Assessment – Addresses the continuous set of activities
that must take place throughout the testing lifecycle to facilitate continuous process
improvement. This includes an evaluation of the framework and tests that have been
developed along with the metrics that were produced from them. This evaluation will
be used to make judgments on how well the automation effort has met its goals, and
how improvements may be made in order to better meet those goals.


The ATLM stages may be mapped to the TABOK skill categories as illustrated in the
following table:

ATLM Stage                                      Relevant TABOK Skill Categories
1. Decision to Automate                         1, 2, 3
2. Test Tool Acquisition                        1, 2, 3
3. Automated Testing Introduction Process       1, 2, 3, 4, 6
4. Test Planning, Design, and Development       2, 4, 5, 6, 7, 8, 9, 10, 11, 12
5. Execution and Management of Tests            1, 4, 6, 8, 9, 10, 12
6. Test Program Review/Assessment               1, 4, 6, 12


Appendix F: Test Automation Implementation Plan Template

1. INTRODUCTION
1.1. Assumptions and Constraints
1.2. Scope
1.3. Roles and Responsibilities
2. MANUAL-TO-AUTOMATION PROCEDURES
2.1. Test Selection Criteria
3. AUTOMATION FRAMEWORK DESIGN CONSIDERATIONS
4. AUTOMATION FRAMEWORK DESIGN
4.1. Automation Framework Components
4.2. Automation Framework Directory Structure
4.2.1. Root Directory
4.2.2. Driver Scripts Directory
4.2.3. Test Scripts Directory
4.2.4. Data Directory
4.2.5. Libraries Directory
4.2.6. Object Repository Directory
5. AUTOMATED TEST DEVELOPMENT
5.1. Traceability
5.2. Application Objects & the Object Repository
5.2.1. Object Identification
5.2.2. Object Naming Standards
5.3. Reusable Components
5.4. Test Structure
5.5. Comments

ATI‘s Test Automation Body of Knowledge (TABOK) Manual Page 219


TABOK Segment 4: Appendices

5.6. Verification Points


6. AUTOMATED TEST EXECUTION
6.1. Debugging
7. AUTOMATION METRICS
7.1. Test Development Metrics
7.2. Test Execution Metrics
APPENDIX A – TOOL INSTALLATION INSTRUCTIONS
APPENDIX B – AUTOMATED TESTING QUICK REFERENCE INSTRUCTIONS
APPENDIX C – AUTOMATION READINESS CHECKLIST
APPENDIX D – AUTOMATION COMPLETENESS CHECKLIST
APPENDIX E – APPLICATION LIST


Appendix G: Sample Keyword Driver Script

This section provides additional detail on components of a keyword driven framework.
Following is an illustration that reveals an approach for structuring a keyword driver
script.

Open Data File
    File opened?
        If not, abort the run and log issue in Error Log
    Read Keyword File until end of file
        Read Screen column
        Screen exists?
            If not, call Recovery component and log in Error Log
        Read Object column
        Object exists?
            If not, carry out Recovery and log in Error Log
        Read Action column
        Execute Action by calling Action Function
        Next row
    Clean up test environment for next Test Case

This driver script reads a keyword file like the one illustrated in Figure 6-4.
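As a rough sketch, the flow above might look like the following in a scripting language. Note that this is not the TABOK's implementation: the CSV keyword-file layout (Screen, Object, Action, Data columns), the `screen_exists`/`object_exists`/`recover` helpers and the `ACTIONS` table are hypothetical stand-ins for whatever the test tool actually provides.

```python
import csv
import logging

logging.basicConfig(level=logging.ERROR)  # stand-in for the framework's Error Log

RESULTS = []  # collects executed actions so a run can be inspected

# Action Functions, keyed by the Action column of the keyword file (hypothetical).
ACTIONS = {
    "settext": lambda obj, data: RESULTS.append(("settext", obj, data)),
    "click": lambda obj, data: RESULTS.append(("click", obj)),
}

def screen_exists(screen):   # stand-in for a tool-specific screen check
    return bool(screen)

def object_exists(obj):      # stand-in for a tool-specific object check
    return bool(obj)

def recover():               # stand-in for the Recovery component
    pass

def run_keyword_file(path):
    try:
        data_file = open(path, newline="")       # Open Data File
    except OSError as exc:                       # File opened? If not, abort and log
        logging.error("Cannot open keyword file %s: %s", path, exc)
        return
    with data_file:
        for row in csv.DictReader(data_file):    # read keyword file to end of file
            if not screen_exists(row["Screen"]):     # Screen exists?
                recover()
                logging.error("Screen not found: %s", row["Screen"])
                continue
            if not object_exists(row["Object"]):     # Object exists?
                recover()
                logging.error("Object not found: %s", row["Object"])
                continue
            action = ACTIONS[row["Action"]]          # read Action column
            action(row["Object"], row.get("Data"))   # execute the Action Function
    # clean-up of the test environment for the next test case would go here
```

A keyword file row such as `Login,UserName,settext,admin` would then drive the `settext` action against the `UserName` object.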


Appendix H: Automated Test Roles


This appendix offers a list of automated testing roles, descriptions and associated
TABOK skill categories.

Role: Test Lead
Relevant Skill Categories: 1, 2, 3, 7

The Test Lead's responsibilities mainly involve administrative tasks, such as creating
an Automation Implementation Plan (see Appendix F: Test Automation Implementation
Plan Template), a document that details the development and implementation of the
automated test framework. In addition, the Test Lead is responsible for allocating
automation personnel to appropriate tasks, and for communicating with and providing
reports to management. This role may coincide with the overall test team lead role or
even the program management role, yet the automation team often has a Test Lead
that functions independently of these other two roles.
Role: Test Engineer
Relevant Skill Categories: 6

The Test Engineer is normally not directly involved with automation tasks but is rather
responsible for the manual testing tasks that are leveraged by Automation Engineers.
The Test Engineer is the subject matter expert for the application or feature of the
application that is being automated. Often responsible for the manual test design and
execution, the Test Engineer works directly with the Automation Engineer to decide
what should be automated and how manual procedures may be modified to better
facilitate automation. The Test Engineer often also doubles as the Automation
Engineer.
Role: Lead Automation Architect
Relevant Skill Categories: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Framework-specific responsibilities fall to the Lead Automation Architect. This role is
typically held by a test tool subject matter expert as well as an automation framework
subject matter expert. The Lead Automation Architect is responsible for framework
maintenance, configuration management activities relative to the framework, and for
being "on-call" support responsible for answering questions relative to the framework.
Role: Cross Coverage Coordinator
Relevant Skill Categories: 4, 6, 8, 9, 10, 11, 12

The Cross Coverage Coordinator is responsible for ensuring all automation efforts that
utilize a single framework within a given organization are in sync with one another.
Staying abreast of all application-specific automation efforts, while keeping Automation
Engineers abreast of framework concerns, is important for this role. The Cross
Coverage Coordinator is responsible for helping to identify maintenance procedures
while ensuring that these maintenance procedures are being followed; maintenance
procedures include the proper use of versioning software and the suggested use of
reusable components. This role works with the Lead Automation Architect by
suggesting necessary framework changes, and works with Automation Engineers by
suggesting automation techniques that facilitate flexibility, reusability, maintainability
and other quality attributes. Very often the Lead Automation Architect functions as the
Cross Coverage Coordinator.
Role: Automation Engineer
Relevant Skill Categories: 4, 6, 8, 9, 10, 11, 12

The Automation Engineer, or Test Automator, is responsible for the application-specific
automation tasks. While displaying some degree of test tool expertise is important, the
primary concern for this role is the automation of assigned application functionality or
tasks within the framework being implemented by the organization. The Test Automator
is the automated test suite subject matter expert and is responsible for coordinating
with the Test Engineer to better understand the application and manual tests, as well
as for making decisions regarding what should be automated and how manual
procedures may be modified to better facilitate automation.


Appendix I: Glossary

Term Definition
Agile Test Automation Principles  Principles that define the agile approach to test
automation for medium to large software projects.
Assertion Constructs that provide a mechanism for concisely identifying a
checked condition and presenting an error message if the condition
fails.
AUT Application Under Test. An application that is being tested.
Automatable Something that has the ability to be automated.
Automation The features, reach and deliverables planned for an automation
scope effort.
Black-box Testing method that verifies functionality with little to no regard for
the internal workings of the code that produces that functionality.
Boolean condition  A statement that can only be evaluated to a value of True or False.
Bottom Up Technique  A type of integration testing that integrates and tests lower-level
units first. By using this approach, lower-level units are tested early in the development
process and the need for stubs is minimized. The need for drivers to drive the
lower-level units, in the absence of top-level units, increases, however.
Bug See Defect
Build  When used as a verb, build refers to the process of converting source code files
into standalone software artifact(s) and placing the artifact(s) on a system. When used
as a noun, a build is the product of that process.
Collections A set of data or objects that are of the same type.
Compiled Languages  A type of programming language that is converted at design-time
to a set of machine-specific instructions.
Conditionals  Coding constructs that alter the flow of script execution, or cause a
varying set of script statements to be executed, based on the evaluation of a specified
Boolean condition. Also known as branching constructs.
Continuous Integration (CI)  A frequent, automated build process that also integrates
automated testing.
Cumulative Coverage  Test coverage assessed over a period of time or across multiple
test cycles.
Data-driven Framework  A framework built mostly using data-driven scripting concepts.
In this framework each test case is combined with a related data set and executed
using a reusable set of test logic.
Data-driven Scripting  A scripting technique that stores test data separately from the
test script that uses the data. The data may be stored in a flat file, spreadsheet, or
some other data store, and it is used by the test script via parameterization. A single
block of script logic may then be executed within a loop using different data from the
data source on each execution.
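As a hedged sketch of the idea, with the data store reduced to an in-memory CSV and a dummy `login` function standing in for the real test logic:

```python
import csv
import io

# Hypothetical flat-file data source: one row drives one pass of the test logic.
DATA = io.StringIO(
    "username,password,expected\n"
    "admin,secret,pass\n"
    "guest,wrong,fail\n"
)

def login(username, password):
    """Dummy stand-in for the action under test."""
    return (username, password) == ("admin", "secret")

# A single block of test logic, executed in a loop with different data each pass.
for row in csv.DictReader(DATA):
    outcome = "pass" if login(row["username"], row["password"]) else "fail"
    assert outcome == row["expected"], f"row {row} produced {outcome}"
```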
Defect  An error, bug, flaw, or nonconformance found in a software system that causes
that system to behave in a manner that is contrary to requirements, technical
specifications, service level agreements, or at times the reasonable expectations of the
system's stakeholders.
Distributed Test Execution  Executing tests across remote locations from one central
location.
Document Object Model  An object model for Internet applications.
Events  Repeatable actions that, unlike methods, are called based on an action
performed by a user (e.g., mouse click, scroll).
Exception  A special condition that causes a program's normal flow of execution to
change.
Exploratory Testing  Simultaneous gathering of information, test design and test
execution, resulting in the immediate creation and implementation of tests based on
how the application is responding in real time.
FISMA  Federal Information Security Management Act. FISMA requires each federal
agency to develop, document, and implement an agency-wide program to provide
information security for the information and information systems that support the
operations and assets of the agency, including those provided or managed by another
agency, contractor, or other source.
Flowchart A diagram that represents an algorithm or process.
Framework  The physical structures used for test creation and implementation, as well
as the logical interactions among those structures. This definition also includes the set
of standards and constraints that are established to govern the framework's use.
Functional Decomposition  Functional decomposition refers broadly to the process of
producing modular components (i.e., user-defined functions) in such a way that
automated test scripts can be constructed to achieve a testing objective by combining
these existing components. Modular components are often created to correspond with
application functionality, but many different types of user-defined functions can be
created.
Functions  A block of code within a larger script or program that executes a specific
task. While it is part of the larger script, it operates independently of that script. It is
executed not according to where it is located in the script, but rather based on where it
is "called" within a script, and it typically allows for arguments and return values.
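For instance, in illustrative Python:

```python
def average(values):
    """A function: accepts arguments and returns a value."""
    return sum(values) / len(values)

# Executed where it is called, regardless of where it is defined.
result = average([2, 4, 6])   # result == 4.0
```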
Gain The benefit received from an investment. Used in calculating ROI.
GUI Map See Object Map.
Image-based automation  Automation approach that communicates with the system it
is automating via recognition of image information as opposed to object information.
Initialization Script  A script that sets parameters that are used throughout a test run
and brings the test environment to a controlled, stable state prior to test execution.
Integrated Development Environment (IDE)  A software application that provides a
comprehensive set of resources to programmers for editing, debugging, compiling and
building code.
Integration Testing  Testing that involves combining individual units of code together
and verifying they properly interface with one another without damaging functionality
already developed. Also known as interface testing or string testing.
Interface Testing See Integration Testing.
ISO  International Organization for Standardization. An international standards-setting
organization.
Iterators  Coding constructs that provide a mechanism for a single set of statements to
be executed multiple times. Also known as looping constructs.
Linear Framework  An automated test framework that is driven mostly by the use of
Record & Playback. Typically, all components that are executed by a Linear framework
script largely exist within the body of that script.
Load Test Automation  Performance testing approach that gradually increases the load
on an application up to the application's stated or implied maximum load to ensure the
system performs at acceptable levels.
Methods Repeatable actions that may be called by an automated script. There
are often two types of methods: functions and sub-procedures.
Model-based Framework  Framework that uses descriptions of application features,
typically in the form of state models, as a basis for dynamically creating and
implementing tests on the application.
MTTR Mean time to repair. This is a basic measure of maintainability,
reliability and robustness, and it represents time required to repair a
failed script or component.
Negative Tests  Tests that check a system's response to receiving inappropriate and/or
unexpected inputs.
Notation A set of symbols or conventions set aside for a specialized, specific
use.
Object Map A file that maintains a logical representation and physical description
of each application object that is referenced in an automated script.
Also known as a GUI Map.
Object Model  An abstract representation of the hierarchical group of related objects
that define an application and work together to complete a set of application functions.
Open Source Definition  A set of criteria compiled by the Open Source Initiative used to
determine whether or not a software product can be considered open source.
Parameterization The association of a script variable to an external data source, such
that data is passed directly to that variable from the associated data
source at run time.
Positive Tests Tests that verify the system behaves correctly when given
appropriate and/or expected inputs.
Properties Simple object variables that may be maintained throughout the life of
the object.
Pseudocode An artificial and informal language with some of the same structural
conventions of programming that is written and read by
programmers as an algorithm that will later be translated into actual
code.
Quality Attributes  Desirable characteristics of a system.
Regression Retesting a previously tested program following modifications to
either that program or an associated program. The purpose of this
testing is to ensure no new bugs were introduced by the
modifications.
Regular Expression  A character expression that is used to match or represent a set of
text strings, without having to explicitly list all of the items that may exist in the set.
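For example, using Python's `re` module, one pattern can stand for the whole set of matching strings; the test-case ID format below is made up for illustration:

```python
import re

pattern = re.compile(r"^TC-\d{3}$")   # matches TC-000 through TC-999

assert pattern.match("TC-042")        # in the set
assert not pattern.match("TC-42")     # not in the set
```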
Requirements A complete description of the expected behavior and/or attributes of
a system.
Sanity Test A high-level test meant to verify application stability prior to more
rigorous testing.
Scripting Language  Also known as interpreted languages, scripting languages are a
type of programming language that is converted at run-time to a set of machine-specific
instructions.

Shelfware  Software that is developed or acquired but is not being used. It may literally
be stored on a shelf, but doesn't have to be.
Smoke Test See Sanity Test
Software Development Lifecycle (SDLC)  A model that defines the structure, processes
and procedures used for developing software.
Stakeholders A person, group or organization that may be affected by or has an
expressed interest in an activity or project.
Stress Test Automation  Performance/load testing approach that involves determining
what load will significantly degrade an application.
String Testing  See Integration Testing.
Sub-procedures  A block of code within a larger script or program that executes a
specific task; while it is part of the larger script, it operates independently of that script.
It is executed not according to where it is located in the script, but rather based on
where it is "called" within a script, and it typically allows for arguments. Sub-procedures
often do not allow for return values.
SUT System Under Test. A system that is being tested.
Test bed The hardware and software environment that has been established
and configured for testing.
Test Coverage A measure of the portion of the application that has been tested by a
suite of tests. Two types of test coverage include requirements
coverage and code coverage.
Test Fixture The necessary preconditions or the state used as a baseline for
running tests. This term is commonly used in reference to unit test
framework events (i.e. setup and teardown) that establish fixtures for
the unit tests.
Test Harness See framework.
Top Down Technique  A type of integration testing. High-level logic and communication
is tested early with this technique, and the need for drivers is minimized. Stubs for
simulation of lower-level units are used, while actual lower-level units are tested
relatively late in the development cycle.
Unit The smallest amount of code that can be tested.
Validate Ensure a product or system fulfills its intended purpose.
Variables A variable is a container for storing and reusing information in
scripts.
Verify Ensure a product or system conforms to system specifications.
VNC Virtual Network Computing. A platform-independent graphical
desktop sharing system that uses the remote framebuffer (RFB)
protocol to allow a computer to be remotely controlled by another
computer.
White-box Testing method that verifies the internal structures of an application.
World Wide Web Consortium (W3C)  An international community where member
organizations, a full-time staff, and the public work together to develop Web standards.
xUnit  Name given to a group of unit test frameworks that all implement the same basic
component architecture.


INDEX

1
100% automation, 26

A
API. See Application Interfaces
Application Interfaces, 31, 50
  API, 51
  CLI, 50
  GUI, 49, 51, 111
  Web Service. See Web Services
assertion, 58, 109, 110
AUT, 120
Automated Test Interface, 105
Automated Test Modes, 111
  Content Sensitive, 111
  Context Sensitive, 112
  Image-based, 112
Automated Test Reporting, 6
automated testing definition, 6, 7
automation scope, 69
Automation Types, 3, 46

B
Black-box, 48
branching constructs, 109, 147
business case, 3, 15, 17, 30, 32, 33

C
Capture/Replay. See Record & Playback
class, 47, 78, 145, 146, 153, 156
CLI. See Application Interfaces
Coding Standards, 73, 98, 99
collections, 145, 146, 159, 160
compiled languages, 143, 168
conditionals, 147, 149
Configuration Management, 10, 32, 54, 97, 99
Configuration Parameters. See Configuration Scripts
Configuration Scripts, 92, 93
Constraints
  Environmental, 34
  Functional, 34
  Quality, 34
Continuous Integration, 115, 117, 119, 226
Control Flow Functions, 147
Cost, 4, 6, 8, 18, 25, 29, 33, 34, 37, 39, 67, 68, 103, 123
coverage, 10, 18, 20, 28, 41, 42, 84, 141, 181, 223

D
Data-driven Framework, 76, 82, 89
Data-driven Scripts, 75, 76, 77
  Advantages, 77
  Disadvantages, 77
Debugging, 5, 40, 167, 170, 173
Decision to Automate, 30, 33
Distributed Test Execution, 64, 117
Document Object Model, 159
Driver Script, 89, 90, 94, 180

E
Efficiency, 18, 37, 103. See also ROI
Error Handling, 5, 179, 181, 182, 185
events, 145
Exception, 58
Execution, 21, 102, 113, 114, 171, 191
Execution Level File, 89, 90

F
Flexibility, 124, 125, 134, 137
flowchart, 142
Framework, 4, 10, 69, 134, 223
  Components, 87, 88
  Directory Structure, 4, 87, 95
  Types, 87
Functional Decomposition, 75, 76, 78, 81, 82, 132, 135, 138, 181
  Advantages, 80
  Disadvantages, 81
Functional Decomposition Framework, 78, 79, 80, 81, 82, 83, 87, 94, 106, 123
Functional system test automation, 48
functions, 95, 145, 150

G
GUI. See Application Interfaces
GUI Map. See Object Map

I
Image-based. See Modes
Implementation Standards, 4, 87
Initialization Parameters. See Initialization Script
Initialization Script, 91
Integrated Development Environment, 143
iterators, 149

K
Keyword Driven, 82, 91, 133, 135, 181
Keyword Driven Framework, 82, 94
  Advantages, 82
  Disadvantages, 83

L
Linear Framework, 72, 81, 131, 134
Linear Scripts, 73, 76, 77, 89
  Advantages, 74
  Disadvantages, 74
load testing, 50
looping constructs, 99, 149

M
Maintainability, 120, 121, 131, 132, 134
manual testing, 6, 9, 25, 27, 29, 222
Manual vs. Automated Testing, 6
Manual-to-Automation Transition, 97, 100
maturity, 69, 70, 71, 135, 180
methods, 145, 175, 181
metrics, 17, 23, 69, 188, 189, 190, 217
misconceptions, 16, 17, 24, 28, 29
Model-based Framework, 82, 84, 85, 94, 134, 135
  Advantages, 84
  Disadvantages, 85
MTTR, 121

N
negative tests, 49, 60, 128, 172
notation, 98

O
Object Behavior, 162
object identification, 127, 158
Object Models, 5, 158
Object-oriented Concepts, 145
objects, 5, 73, 145, 156
Open Source Definition, 65
operations, 145

P
parameterization, 76, 89, 106, 127
Performance, 20, 130, 131, 132, 134, 137
Portability, 123, 134
positive tests, 49, 60, 128
properties, 5, 73, 145, 156, 184
pseudocode, 142

Q
quality, 6, 18, 21, 24, 42, 120, 128, 134, 135
Quality Attributes, 34, 74, 88, 105, 107, 120, 135, 136

R
Record & Playback, 4, 27, 72, 73, 131, 158
regression, 3, 8, 19, 20, 26, 31, 49, 102, 103, 124
Regular Expressions, 164
Reliability, 128, 134, 170
repeatability, 18
reporting, 23, 43, 79, 113, 188
Results Logging, 97, 101, 178, 191, 194
risk, 6, 18, 20, 21, 26, 27, 28, 31, 42
Robustness, 125, 134, 154, 179
ROI, 33, 36, 42, 85, 103, 210, 215
  Efficiency, 39, 41
  Factors, 39
  Formula, 39
  Risk Reduction, 39
  Simple, 39, 40

S
sanity tests, 103, 115, 117
Scalability, 126, 131, 132, 135, 137
scope, 69, 71, 73, 82, 88, 128, 131, 132, 134, 135, 165
scripting language, 33, 141, 143, 181
shelfware, 25
smoke tests. See sanity tests
stress testing, 50
sub-procedures, 145
Synchronization, 175, 176, 177
Object Map ................................... 5, 91, 121, 125, 127, 156
Maintenance ............................................................. 158

ATI's Test Automation Body of Knowledge (TABOK) Manual Page 233


TABOK Segment 4: Appendices

T
table-driven .............................. See Keyword-driven
Test Automation Implementation Plan .... 87, 97, 217, 219, 222
Test bed .............................................. 4, 6, 26
test coverage ................................................. 61
   code coverage ............................................. 61
   requirements coverage ..................................... 61
Test Development ...................................... 102, 113
Test Fixture .................................................. 58
Test harness .................................................. 48
Test Scripts ................................. 4, 94, 202, 219
Test Selection ............................................... 102
Testability ................................................... 32
tool ................. 10, 16, 23, 34, 82, 164, 165, 184, 223
Tool Acquisition .............................................. 30
   Evaluation ................................................ 30
   Implementation ............................................ 30
   Integration ............................................... 30
   Pilot ..................................................... 30
   Selection ................................................. 30
try/catch ............................................ 184, 186

U
Usability ...................... 129, 131, 132, 134, 135, 137
User-defined Functions ....................................... 95

V
Variables ....................................... 98, 143, 144
virtualization ............................................... 117
VNC .......................................................... 112
volume testing ................................................ 50

W
Web Services ............................................. 49, 51
What to automate ....................... See Test Selection
White-box ..................................................... 49
World Wide Web Consortium (W3C) .............................. 51

X
xUnit .................................................... 57, 72




COMPREHENSIVE REFERENCE MATERIALS

The following is a list of comprehensive reference materials that may be used to
gather additional information on the various topics discussed in the TABOK. For
shorter, more pointed references, see the final subsection of each skill category.

1. Dustin, Elfriede, Jeff Rashka, and John Paul. Automated Software Testing:
   Introduction, Management, and Performance. Boston, MA: Addison-Wesley,
   1999.
2. Dustin, Elfriede, Thom Garrett, and Bernie Gauf. Implementing Automated
   Software Testing. Boston, MA: Pearson Education, 2009.
3. Fewster, Mark, and Dorothy Graham. Software Test Automation: Effective Use of
   Test Execution Tools. Reading, MA: Addison-Wesley, 1999.
4. Hayes, Linda. The Automated Testing Handbook. Richardson, TX: Software Testing
   Institute, 1996.
5. Mosley, Daniel J., and Bruce A. Posey. Just Enough Software Test Automation.
   Upper Saddle River, NJ: Prentice Hall, 2002.
6. Various articles, white papers, and books indexed at
   www.automatedtestinginstitute.com.
7. Various magazine articles published at
   www.astmagazine.automatedtestinginstitute.com.
