
Vijayan Reddy

Engineering Manager
Adobe Systems
Why Test Infrastructure
 Projected time slices of a test project
 How much of your test project is manual effort?
 How much of your test project is test execution?
 Where are we missing out?
 How can we save?

In Scope
 Approach from a typical test lifecycle – the end-to-end tasks for shipping a project
 A blueprint for a scalable test infrastructure
 Tips on cross-linking the tools
 This is not rocket science

Test Infrastructure
 All material used: tools, methods, scripts
 A platform that hosts & integrates tools and scripts
 Commercial tools don't offer complete support for all tasks
 An interface/glue that connects every single task and provides plug-ability for new tasks

How To Build
 Available off-the-shelf? Probably not
 Buy / Build / Customize is the mantra:
 ○ great commercial software
 ○ great open-source frameworks
 ○ great engineers on your team

Top Opportunities
 Test Execution – the biggest opportunity
 Build Systems
 Defect Analysis
 ○ Linked test case creation & maintenance
 Test Environments
 ○ Setup / clean-up after runs
 Test Reporting / Archiving / Data Mining

Scale with Meta Controller
Meta Controller –
 Able to work peer-to-peer or client–server
 Able to work across platforms
 Able to invoke and monitor tasks remotely – e.g., trigger test execution
 Able to transfer files:
 ○ Builds, installers
 ○ Test results, logs, thread dumps
 Able to parallelize across nodes
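The parallelize-across-nodes piece can be sketched in a few lines. This is a minimal sketch, assuming ssh access to the test nodes; `ssh_task` and `run_everywhere` are illustrative names, not part of any particular tool.

```python
import concurrent.futures
import subprocess

def ssh_task(command):
    """Return a task that runs `command` on a given node.
    ssh-based invocation is an assumption; any remote agent would do."""
    def run(node):
        result = subprocess.run(["ssh", node, command],
                                capture_output=True, text=True)
        return node, result.returncode, result.stdout
    return run

def run_everywhere(nodes, task):
    """Fan a task out across all nodes in parallel and collect the results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=max(len(nodes), 1)) as pool:
        futures = [pool.submit(task, n) for n in nodes]
        return [f.result() for f in concurrent.futures.as_completed(futures)]
```

Because `task` is just a callable taking a node name, the same dispatcher covers remote execution, file transfer, or log collection – only the task body changes.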

 
Optimization @ Test Setup
 Time spent on test setup and clean-up adds up; start from clean systems
 Imaging / ghosting can help – but does it work across all your platforms?
 Build the software stack:
 ○ E.g., application server, Java/.NET runtime
 ○ Database
 ○ Deployment of your applications
 ○ Deployment of test suites & setup
 Multiply the savings by the number of builds and test cycles
Meta Controller
 Control the test automation framework(s)
 Glue the controller and the tool(s)/script(s) via:
 ○ Exposed APIs
 ○ Command-line options / batch files
 Parameterize the launch:
 ○ Platform/machine-agnostic scripting
 ○ Specific details of each test machine abstracted into configuration files
 Enable metrics collection – e.g., code coverage
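Abstracting machine specifics into configuration files might look like this minimal sketch; the `run_suite` launcher, the section, and the key names are hypothetical.

```python
import configparser

# Hypothetical per-machine config; in practice one file per test machine.
CONFIG = """
[machine]
os = windows
java_home = C:\\jdk17
results_dir = D:\\results
"""

def build_launch(config_text, suite):
    """Build a platform-agnostic launch command from a machine config."""
    cfg = configparser.ConfigParser()
    cfg.read_string(config_text)
    m = cfg["machine"]
    return [
        "run_suite",                 # hypothetical launcher script
        "--suite", suite,
        "--java-home", m["java_home"],
        "--results-dir", m["results_dir"],
        "--os", m["os"],
    ]
```

The controller script itself never hard-codes paths or platforms; porting to a new test machine means writing one config file, not editing scripts.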

Scale with Helpful Interfacing
 Test cases managed under a system – both manual & automated tests
 Why a system, and not DOC, PDF, or HTML files?
 ○ To assure test coverage / traceability
 ○ Cross-linking with the feature tracker
 ○ Cross-linking with the bug tracker
 ○ Cross-linking with source control
 ○ Helps track estimates against test cases/cycles
 Interface the SCM and the bug tracker
Scale in Defect Analysis
 Traceability linking:
Feature -> Test Case -> Changed Files & versions -> Bug -> Dependent bugs
SCM – Bug Tracker Linking
 One-spot update of bugs & SCM
 When a fix is checked in:
 ○ Use a template
 ○ Fill in a description of the fix
 ○ Fill in 'areas affected'
 ○ Fill in 'recommended tests'
 Auto-update the bug tracker:
 ○ 'Areas affected'
 ○ 'Files affected', and the versions of these files
 Useful for regression testing & injection checks
 Search for other bugs that had fixes in the same files
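A check-in template like this can be parsed mechanically to drive the bug-tracker update. A minimal sketch; the field names (`Bug`, `Areas-Affected`, `Recommended-Tests`) are illustrative assumptions, not any particular tracker's schema.

```python
import re

# Hypothetical check-in template enforced at commit time.
COMMIT_MESSAGE = """\
Bug: 4711
Description: Fix NPE when the results dir is missing
Areas-Affected: reporting, archiver
Recommended-Tests: reporting-smoke, archive-regression
"""

def parse_checkin(message):
    """Extract the template fields so the bug tracker can be auto-updated."""
    fields = dict(re.findall(r"^([\w-]+):\s*(.+)$", message, re.MULTILINE))
    split = lambda key: [v.strip() for v in fields.get(key, "").split(",") if v.strip()]
    return {
        "bug_id": fields.get("Bug"),
        "areas": split("Areas-Affected"),
        "tests": split("Recommended-Tests"),
    }
```

A commit hook can feed the parsed fields straight into the bug tracker's update API, keeping 'areas affected' and 'recommended tests' in sync with the actual change.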
Scale with Early Warnings
 Static code analyzers
 Catch bugs before compilation is even done, let alone built and tested
 Standard code violations, uncaught exceptions, un-freed memory, etc.
 Code security audits for script injections
Automated Builds
 Check-in-triggered builds, scheduled builds
 Build system to access builds, logs, changed files, changed versions
 Promotion to QA on success, auto-deploy
 Tagging / merging in SCM on success
Scale with BVT
 Catch bugs early with BVT (Build Verification Tests)
 Automate the triggering of BVT on every build
 Use your automated sanity / acceptance tests
 Link with the build setup to 'bless' builds for QA promotion – saves QA cycles
 Automate failure mails – send with the exact test log and pick the code changes from the build
 Auto-log a bug

Pre-Integration Test Setup
PIT saves pitfalls
 Huge development teams, central builds
 Check in to a PIT branch (kept in sync with Trunk)
 Kick off:
 ○ Auto-build
 ○ If success, auto-merge the change into Trunk
 ○ If not, follow BVT failure handling and auto-revert from the PIT branch
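The PIT flow reduces to a small gate. A sketch, in which `build`, `merge_to_trunk`, `revert`, and `handle_bvt_failure` are hypothetical hooks into your build and SCM systems:

```python
def pit_checkin(change, build, merge_to_trunk, revert, handle_bvt_failure):
    """Gate a change through the PIT branch: build it, then merge or revert.
    All callables are hypothetical hooks into your build/SCM systems."""
    if build(change):
        # Build (and BVT) passed: the change is safe for everyone.
        merge_to_trunk(change)
        return "merged"
    # Build or BVT failed: notify per BVT failure handling, then back it out
    # of the PIT branch so other check-ins are not blocked.
    handle_bvt_failure(change)
    revert(change)
    return "reverted"
```

The point of the gate is that Trunk only ever receives changes that already built and passed BVT, so a bad check-in blocks one PIT cycle instead of the whole team.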

Infrastructure for Reports
 Common results repository
 Different test cases / suites / tools output results differently
 Build interfaces to import them
 Define a common results format
 ○ Prefer a format that is easy to export
How to Export
 Commercial tools export XML
 If that helps, standardize other tools / scripts on XML too
 Use XSLT to transform to the common format
 Define a performance results schema
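Normalizing one tool's XML into a common schema can be as simple as this sketch (stdlib only, standing in for an XSLT transform); both the input and output schemas shown here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical output from one tool; each tool gets its own importer.
TOOL_XML = """
<testsuite name="smoke">
  <testcase name="login" status="pass" time="1.2"/>
  <testcase name="checkout" status="fail" time="3.4"/>
</testsuite>
"""

def to_common(tool_xml):
    """Transform a tool-specific report into the common results schema."""
    suite = ET.fromstring(tool_xml)
    common = ET.Element("results", {"suite": suite.get("name")})
    for case in suite.findall("testcase"):
        ET.SubElement(common, "test", {
            "id": case.get("name"),
            "verdict": "PASS" if case.get("status") == "pass" else "FAIL",
            "seconds": case.get("time"),
        })
    return ET.tostring(common, encoding="unicode")
```

Once every tool's output lands in the same schema, the repository, reports, and data mining only ever deal with one format.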

Scale through Intelligent Reporting
 Easier access for data mining
 Easier reporting – trends along builds, features, platforms, locales
 Drill-down on any individual test case, with its history along builds to isolate injections
 Easier collation of all test results
 Charting for non-functional parameters
 Cross-link to the feature / test case tracker
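The drill-down to isolate injections boils down to finding the first build where a previously passing test started failing. A minimal sketch over an assumed per-build history:

```python
# Hypothetical per-build history for one test case, oldest build first.
HISTORY = [("build-101", "PASS"), ("build-102", "PASS"),
           ("build-103", "FAIL"), ("build-104", "FAIL")]

def first_failing_build(history):
    """Walk a test case's history along builds; return the build where it
    first failed after a pass, i.e. the likely injection point."""
    last_pass = None
    for build, verdict in history:
        if verdict == "FAIL" and last_pass is not None:
            return build
        if verdict == "PASS":
            last_pass = build
    return None
```

Cross-linking that build to its changed files and versions (via the SCM–bug tracker links above) narrows the suspect change list to one build's worth of check-ins.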
Take Away
 Identify the bridges that work for you – there is no "one size fits all" theory
 Create an integrated test infrastructure
 Available at