
For information on obtaining additional copies, reprinting or translating articles, and all other correspondence,

please contact:
Email: InfosyslabsBriefings@infosys.com
© Infosys Limited, 2011
Infosys acknowledges the proprietary rights of the trademarks and product names of the other companies mentioned in this issue of Infosys Labs Briefings. The information provided in this document is intended for the sole use of the recipient and for educational purposes only. Infosys makes no express or implied warranties relating to the information contained in this document or to any derived results obtained by the recipient from the use of the information in the document. Infosys further does not guarantee the sequence, timeliness, accuracy or completeness of the information and will not be liable in any way to the recipient for any delays, inaccuracies, errors in, or omissions of, any of the information or in the transmission thereof, or for any damages arising therefrom. Opinions and forecasts constitute our judgment at the time of release and are subject to change without notice. This document does not contain information provided to us in confidence by our clients.
Infrastructure Management
Subu Goparaju
Senior Vice President
and Head of Infosys Labs
At Infosys Labs, we constantly look for opportunities to leverage
technology while creating and implementing innovative business
solutions for our clients. As part of this quest, we develop engineering
methodologies that help Infosys implement these solutions right,
first time and every time.
VOL 9 NO 5
2011
Infosys Labs Briefings
Authors featured in this issue

AJIT MHAISKAR is a Principal Technology Architect with the Manufacturing business unit of Infosys. He can be reached at Ajit_Mhaiskar@infosys.com.
ASHISH BIRLA is a Lead Consultant with the IMS's Infrastructure Transformation Services practice at Infosys. He can be reached at Ashish_Birla@infosys.com.
BHISHMARAJ SHINDE is a Consultant with FSIIMS Infrastructure Transformation Services at Infosys. He can be reached at bhishmaraj_shinde@infosys.com.
ENOSE NAMPURAJA is a Research Analyst with Infosys Labs. He is a Post Graduate in Microwaves and has over a decade's experience in the Energy and Telecommunication space. He can be reached at nampuraja_enose@infosys.com.
GOURAV RAJ BUDHIA is a Senior Technology Architect with the Manufacturing business unit of
Infosys. He can be reached at Gourav_Budhia@infosys.com.
KUMAR PADMANABH PhD was a Research Scientist leading the wireless sensor networking (WSN) research group at Infosys Labs.
PARVEEN SHARMA is a Lead Consultant with the FSIIMS Infrastructure Transformation Services at Infosys. He can be reached at parveen_sharma@infosys.com.
PRAJAKTA BHATT is a Technology Lead with the Energy, Utilities, Communications and Services (ECS) unit's Technology Focus Group. She has wide experience in performance engineering engagements in the Infrastructure group. She can be reached at Prajakta_bhatt@infosys.com.
PRAKASH MANAPILLY is a Senior Technology Architect with the Energy, Utilities, Communications and Services (ECS) unit's Technology Focus Group. He leads the Infrastructure and Security practice in the group. He can be reached at mpprakash@infosys.com.
PRASHANTH PRABHAKARA is a Lead Consultant with the Infrastructure Transformation
Consulting Practice within Infosys. He has several years of experience in IT Service Management
consulting and managing large scale IT transformation infrastructure projects. He can be contacted at
Prashanth_prabhakara@infosys.com
RAHUL SINHA is a Senior Consultant with the MFGIMS unit's Infrastructure Transformation Services practice at Infosys. He can be reached at rahul_sinha08@infosys.com.
RENJITH SREEKUMAR is a Principal Consultant with the Infrastructure Transformation Consulting Practice within Infosys. He works extensively on developing infrastructure consulting and transformation solutions. He can be contacted at Renjith_sreekumar@infosys.com.
VAIBHAV BHATIA is a Senior Consultant with the Infrastructure Transformation & Green-IT
practice at Infosys. He can be reached at vaibhav_bhatia@infosys.com
Molt Legacy, Reinvent Relentlessly

History and historicity have often determined our acceptance of or resistance
to change. Well established structures, systems and rules get entrenched in our
psyche, give rise to path dependent behaviors and refuse to fade away. Enterprises
that have steered clear of path dependence are the ones that have embarked on a
transformation journey.
We could not have identified a more appropriate theme than Infrastructure
Management to explain why molting legacy is as important as embracing change.
Businesses that seemed unstoppable a decade ago have failed to stay relevant to
the current times. Imagine the huge dent that a simple mobile phone could create
on multiple industries. Technologies that seemed robust enough to carry the woes
of the world fall ridiculously short of expectations in today's world. Else, who could fathom the need for the next generation of IP address space, IPv6, so soon?
Business lifecycles are being increasingly replaced by industry lifecycles and the
boundaries between industry and businesses are getting more and more blurred.
While propounding new ideas is easy, difficulty lies in choosing between the two biggest unsaid rules in the corporate world, viz., "if it ain't broke, don't fix it" and "reinvent to remain relevant." At Infosys Labs we chose the latter. While SETLabs Briefings enjoys a huge brand recall with you, given that at Infosys Labs we have transitioned from being a strong R&D department to world class IP co-creators, it was time for us to reinvent ourselves. In keeping with our reinvention philosophy, we have rechristened SETLabs Briefings. You have my assurance that
the journal in its current form will continue to be relevant to contemporary issues
as well as retain its rigor in proposing solutions to your business problems.
This issue is a collection of some contemporary ideas on Infrastructure
Management. Why spend a fortune on application monitoring when there are
ways to pare monitoring costs? We have put together two papers around this
notion. While one discusses an approach to real time application monitoring, the
other prescribes the musts for an advanced monitoring of IT infrastructure.
How smart is your infrastructure? Have you leveraged the smartness of extant
technologies? We have two interesting papers that discuss on how smart
infrastructure can help establishing communication between a variety of complex
systems and devices.
Of late, any compilation without papers on cloud seems to be incomplete. We
realize how important a role cloud computing has come to play in today's
distributed technology world and therefore present to you our views on ways
to manage the lifecycle of enterprise infrastructure on cloud as well as assess the
impact of cloud computing on IT service management.
Is your infrastructure compatible with the IPv6 network? And most importantly, have you migrated to IPv6 technology? If yes, congratulations! Maybe you will have
a story or two to share on your transition journey. Just in case you have not yet
transitioned to IPv6, we have put together a simple framework for you to do so.
As usual, do write to me with feedback and suggestions.
Praveen B. Malla PhD
Editor
praveen_malla@infosys.com
Infosys Labs Briefings
Advisory Board
Anindya Sircar PhD
Associate Vice President
& Head - IP Cell
Gaurav Rastogi
Vice President,
Head - Learning Services
Kochikar V P PhD
Associate Vice President,
Education & Research Unit
Raj Joshi
Managing Director,
Infosys Consulting Inc.
Ranganath M
Vice President &
Chief Risk Officer
Simon Towers PhD
Associate Vice President and
Head - Center for Innovation for
Tomorrow's Enterprise,
Infosys Labs
Subu Goparaju
Senior Vice President &
Head - Infosys Labs

Index
Approach: Effective Real-Time Application Monitoring - Do More With Less
By Prajakta Bhatt and Prakash Manapilly
Application deployment is a nervous affair even for the most seasoned hands.
The first few weeks post deployment are very crucial and can spell the difference
between success and failure. This paper discusses an advanced cost effective real time
application monitoring system.
Insight: Extending IT Infrastructure Management Services to The Internet of Things
By Ajit Mhaiskar, Gourav Budhia and Kumar Padmanabh
Smart infrastructure provides the ability to communicate and connect with devices.
Making information available to users remotely and communicating with other smart
assets can now be a reality.
Prescription: Advanced Monitoring for IT Infrastructure
By Vaibhav Bhatia
Data is critical for decision making. Data centers are custodians of data and power the organization's operations. Uptime of these resources is required as a business
guarantee. This paper talks about various aspects of infrastructure monitoring and
how organizations can benefit from them.
Practitioner's Solution: Environment Management - How to Set Priorities Right?
By Parveen K Sharma and Bhishmaraj Shinde
To succeed in today's cut-throat competition, a lot of attention needs to be given to non-production environments. This will help organizations deliver high quality
products. Many organizations are planning to improve availability and utilization of
non-production environments to deliver consistent and predictable testing services.
Discussion: Smart Grid Management System
By E.K Nampuraja
The Smart Grid's largest implementation is in the field of power utility. The basic component that would make this Smart Grid work is the underlying robust
communication network that enables shared understanding between various complex
systems. This paper discusses a Smart Grid Infrastructure Management service that
can deliver business value both to the utility company and its customers.
Perspective: ITIL for Enterprise Cloud Deployment
By Renjith Sreekumar and Prashanth Prabhakara
In a cloud, ITSM integration is critically important to organize processes and workflows that bring in efficiency to the enterprise and enhance customer satisfaction. This
paper discusses key focus areas that need to be designed and deployed to effectively
manage the lifecycle of an enterprise infrastructure cloud environment.
Viewpoint: Cloud Computing and its Impact on ITSM Adoption
By Ashish Birla and Rahul Sinha
A number of IT service managers are wondering how their ITSM world would
change after cloud adoption and if they have to make major changes in the way
they have been operating over the years. This paper attempts to answer some of
these questions.
Framework: Framework to Counter Challenges of Transition to IPv6
By Ashish Birla
IPv4 is history. The future belongs to IPv6. However, the transition from IPv4 to IPv6
has its share of problems. Infrastructure management during this transition is bound to bother many a technologist globally. The paper discusses a framework that will ensure smoother transitioning from IPv4 to an IPv6-enabled world.

Businesses of tomorrow will need to modernize
existing legacy infrastructure and equip
them with smart capabilities.
Ajit Mhaiskar
Principal Technology Architect
Manufacturing Business Unit
Infosys Limited

Cloud computing is a boon in today's testing times.
It will phenomenally change the way IT infrastructure
is deployed, maintained and used by enterprises.
Renjith Sreekumar
Principal Consultant
Infrastructure Transformation
Consulting Practice, Infosys Limited
Effective Real-Time Application Monitoring - Do More With Less
Strained on your monitoring budgets?
Migrate to real-time monitoring
By Prajakta Bhatt and Prakash Manapilly
Application deployment can be a nervous affair even for the most seasoned hands. The first few weeks post deployment are very
crucial and can spell the difference between
success and failure. Fingers are crossed, as
the system is subjected to real production
load and the application is expected to meet
all the performance parameters with high
user expectations. During this critical period
a number of metrics are monitored. Actual
application loads and the systems behaviour are
thoroughly analyzed. In the unfortunate event
of the application not meeting performance
expectations, years of research, design and
development of the product, company reputation and its market credibility are at stake.
On average, businesses lose between
$84,000 and $108,000 in the United States
for every hour of IT system downtime [1].
Opportunity costs of poorly performing
applications are difficult to quantify. Though
these downtime costs vary with industry
and scale of business, financial services,
telecommunications, manufacturing and energy
are among the major industries that have high
rate of revenue loss during IT downtime. Even
for a medium-sized business, though the exact
hourly cost may be lower, the impact on its
business is proportionally much larger. The
true cost of downtime not only counts the idled labour and lower productivity costs but also includes the value of the opportunities that were lost due to unavailability of the application [1]. Thus, even in the case of small and medium enterprises, application performance monitoring becomes much more critical.
The rising market pressure to reduce support costs (tool/staff) and improve productivity coerces infrastructure and application support teams to provide solutions with quick turnaround time and lower resolution costs. Thus, innovative solutions in the application monitoring approach are essential. There are many commercial performance management tools available in the market that aid support staff in improving the performance management process and doing detailed component-level monitoring across all
software and hardware layers. However, they are quite expensive and not every medium or small scale enterprise can afford them. In this paper we will see how the advanced application monitoring approach can be implemented and how it can be realized in a cost-effective way.
NEED FOR EFFECTIVE APPLICATION
MONITORING APPROACH
Traditionally, Application Performance Management (APM) was quite limited to component monitoring and reactive troubleshooting. Increased application complexity and increasing market pressure have necessitated some innovation in the APM approach.
The traditional reactive approach should
be shunned and a more advanced proactive
approach needs to be embraced. Needless to say,
a proactive application monitoring approach
has its own set of associated advantages, some
of which are described below.
System Monitoring
A real-time graphical monitor is the lowest
common denominator required from any
advanced application monitoring system. It
helps in rapid identification of problems. A
problem can originate in any component/layer
of the application and manifest in components
and layers across the application. For example,
a severely degraded database response may
reflect as an exhausted connection pool or
CPU utilizations approaching 100 percent may
manifest as degraded response time. Hence,
information across all layers and components
needs to be monitored and reported for reliable
problem identification.
A provision in the tool to build custom
dashboards with relevant metrics is also useful
in this regard. Given the limitations of manual
monitoring, instead of tracing/monitoring hundreds of metrics at all times, there should be
a provision in the real-time monitor where the
performance thresholds can be set depending
upon the architecture complexity and business
criticality of the application and when these are
violated, notifications can be sent out to relevant
support groups for corrective action. For
example, if the disk utilization approaches 95%,
a pager notification to the infrastructure support
group can warn them to allocate additional
direct access storage device (DASD), or if
service level agreements (SLAs) for the response
times for some important transaction pages, etc., are breached, an email notification to call centres can prepare operators for an appropriate response
to support calls.
Rapid Resolution
While reporting a problem it is very important
that the reporting data should be transaction-centric, i.e., instead of just snapshots of the system at defined intervals, it should be able
to present metrics for trend analysis. For
example, an unanticipated rogue message that is processed by the system may not have a visible impact at that instant but may result in
eventual degradation and may have a visible
impact at a subsequent instant.
Traditional monitoring done at the macro level completely ignores what's happening
across the application component stack. In other
words, we cannot see how a given shopping
cart checkout, or logon, or whatever other user
transaction we are examining, contributes to
specific performance conditions. We can see that a server CPU is busy, but that does not tell us why a particular user transaction is too slow [2]. To enable rapid resolution, all trends need to be
visualized together. Different views of the data
need to be captured and reported to present
the real picture to different stakeholders. For
5
the same data various views like component
view, business user view, end-user or customer
view, cross-technology view, historical view,
IT to business view, operations to support
view, development to quality analysis to
production view, etc., can be constructed and
then presented to appropriate stakeholder [3].
It is very important that this data should
be available both online and offline for different
users so that they can visualize the problem.
Online, it can offer business managers a real-time
dashboard for key business transactions like
number of logins occurring for the system, page
hits for a particular application/business feature,
etc. For offline reporting, the trace sessions can
be stored on some persistent storage where it
can be replayed to reproduce events preceding
the problem scenario. These traces can provide
developers all the contextual data needed for
their troubleshooting activities and to conduct
root-cause analysis. Generally, this reporting
data can be in the form of an end-to-end trace across physical and logical tiers, from web browser to
database and back. It cannot be just the response time, but rather one that presents contextual information to developers, ideally giving the names of methods invoked, their arguments, return values,
SQL statements, exceptions and CPU/memory
utilization levels, i.e., information about the
system when the problem happened. Automation
across a correlated view of the application will
get the user to the root cause of a problem faster.
Dynamic dashboards can help users customize their view of problem resolution.
Business users and IT managers can prioritize work on any issues, and developers can work meticulously, aided with all the system information right from application/server/OS metrics to the actual transaction statistics, so that the problem origins and symptoms can be correlated.
Problem Prevention
The APM system is expected to be running 24x7 so that the transaction groups that are performance or scalability critical are monitored in real time, and in case any deviations from benchmarked threshold numbers are observed, they are immediately analyzed, diagnosed and fixed, even before they affect the end users.
The warning mechanism is crucial to avoid not just the incidence of a problem but also its recurrence in the future. Thresholds
can be revisited and recalibrated, and alerting
processes and workflow may need to be
redesigned following each problem incident.
This would help in mitigating risks of
application downtime and improve application
service levels by early detection of problems.
It will also aid in reducing incident counts and
mean-time-to-resolution (MTTR) of incidents by
providing real-time diagnostics and historical
analytics to prevent potential future problems.
IN-DEPTH APPLICATION MONITORING
APPROACH
Building Profiling Features into a Monitoring Tool
Modern Java applications are typically complex,
multithreaded, distributed systems that
use many third-party components. On such
systems, it is hard to detect, let alone isolate,
the root causes of performance or reliability
problems [4]. Usually there are multiple layers
and components like servlets, JSPs, EJBs, JDBC,
Object/Relational mapping persistence tools
such as Hibernate, and other frameworks such
as Struts, Spring, etc., that make it difficult to detect and isolate problems at particular layers, especially when the system is live in a production environment.
In the early days, developers used to
insert and log debug statements to identify
6
method-level response times. Needless to say, this was not very elegant and a deviation from OOP principles. It often required
recompiling and redeployment as developers
needed to be selective with the logging to
avoid performance overheads. It was also not
useful for profiling components (e.g., JDBC libraries) whose source was not compiled by the development team.
Aspect Oriented Programming (AOP)
makes a good fit to solve system monitoring
woes without tampering with the business
components of the application. In fact, the
crosscutting concern of method tracing is
probably the most common example used to
explain AOP concepts. In Java, AOP is now
a mature framework and its features are
extensively used in many popular frameworks
like the Spring framework.
AOP lets you define pointcuts that match join points in the business components where monitoring is required. Advices that alter the regular code flow can then be written to update performance statistics automatically whenever the transaction enters or exits one of the join points in the application code. With the introduction of support for interception-based aspects in version 1.5, recompilation is avoided, giving load-time binding capabilities. This enables even the support staff to start package- or class-level monitoring by just specifying names in the AOP configuration file. Whenever the code in the mentioned package/class is executed by the application, the aspects invoke advices that log and consolidate application data for end user perusal. This raw data can then be viewed directly to generate relations between call invocations and information on execution counts and minimum, maximum and average response times, etc.
Most monitoring solutions for the Java platform expose a JMX interface for plugging in new metrics/agents. The performance statistics generated using AOP can then be instrumented as Java objects and coupled with the JMX interface of the monitoring solution. Though JMX is a handy framework for flexible management and monitoring of applications, a platform-independent interface like XML over plain sockets is ideal to make the monitoring tool more generic.
PROPOSED ARCHITECTURE
In the production environment, to be a 24x7 solution, it is imperative that the new APM system be lightweight, with low overhead, so that it does not interfere with the actual system behavior. Also, for high scalability, maintainability and better performance it is good to have a client-server architecture where the agent runs on the client (the monitored application), fetches client data and transfers it to a server component. The server component then processes the data and presents it to the user through a presentation layer [Fig. 1]. The data is also persisted for the offline analysis mentioned earlier.
The layered approach is described below.
Profiler Component

This component enables configurable selective code profiling. The agent collects all data and offloads it to the monitoring server through interfaces exposed by the server component, i.e., the collector, which centrally monitors the application and infrastructure metrics and establishes correlation between the data collected. The recommended architecture is based on a receiver-makes-right approach, in which the sender uses its raw and native
representation and the receiver converts if necessary. (Agents based on the AspectJ and Spring frameworks can be written to profile Java code.)
When any conversion becomes necessary, the
burden is always placed on the receiver. This
is particularly appropriate for monitoring or
application instrumentation, as it minimizes
perturbation on the senders side that runs the
actual application code and is less intrusive. The
receiver can be placed on a dedicated machine
if necessary [5]. The agent-based architecture enables collecting detailed information, and the agents can be easily installed and maintained. However, additional care must be taken to ensure the agent imposes little performance overhead of its own. This is achieved by embedding the agent in the application server JVM. Applications are then instrumented by enabling load-time weaving and writing generic aspects. Aspects can also be written for general frameworks like Spring, Hibernate, JSP, Servlets, JDBC, etc., using AspectJ Framework 1.5+ where source code is not available. Based on the calls made
from the instrumented applications, the agent
collects statistics regarding the execution of
the applications. This information is then
aggregated and sent over the network to the
remote collector on a pre-configured port.
It processes this data and stores it in a Java object. The collector then consolidates the data from the Java object, establishes relations between method calls and the call tree, maintains counts and minimum, maximum and average response times for the method calls, and keeps track of the latest and most frequently used SQL queries, all of which can be stored periodically in a database for offline usage and online display.
Monitoring Component
Separate agents are available to collect hardware, operating system and middleware information. Typically these are limited out-of-the-box OS/network/server monitoring agents/plug-ins that generate platform-dependent metrics. All the information from various agents is presented to the monitoring server and the console using protocols like JMX, telnet or simple socket-based communication.
For visualization and analysis, there is a separate presentation layer called the GUI-based console. Data is typically represented in time-series graphs. The console takes care of displaying to the end user the call stack, a breakup of the application's memory footprint and method-level execution times, along with the monitoring at various application tiers with real-time time-series graphs. The console is the presentation component that provides all control and display functions. Individual performance counters from each agent can be plotted on charts, tables, properties or dedicated viewers. More complex queries can also be defined, grouping multiple counters from different agents or different hosts by overlaying them on the same chart or table. The user interface also allows setting rules for triggering alerts and their reactions. Users can set triggers for problem identification and alerts, plot time-series charts and get a customizable single-view dashboard of operating system and middleware level monitors.
[Figure 1: Advanced APM Solution Architecture - agents embedded in instrumented applications on application servers feed a central collector, which persists data to databases and serves a console. Source: Infosys Research]
CONCLUSION
The paper demonstrates the need for innovation
in the traditional application monitoring
approach and the desired features of a complete
advanced performance monitoring solution
that is cost-effective. It defines a solution
architecture for developing a solution from scratch
or using readily available open source tools and
technologies that can be effectively utilized to
provide a promising alternative to commercial
products in the performance testing space.
With minor customization to add application profiling features, a cost-effective but complete
performance monitoring end-to-end solution
can be available for IT departments with
limited budgets, especially small and medium
businesses. While we focused on profiling
solutions for the Java platform, it is possible to
extend the concept and make it applicable to non-Java applications as well, as long as the interfaces exposed by the identified monitoring tools are technology- and platform-agnostic.
REFERENCES
1. Arnold, A. (2010), Assessing the Financial Impact of Downtime, 26th April, IT-Director.com. Available at http://www.it-director.com/business/costs/content.php?cid=12043.
2. Jones, D. (2011), The Five Essential Elements of Application Performance Monitoring, Sponsored by Quest Software, Realtime Publishers, Realtime NEXUS, The Digital Library for IT Professionals. Also available at http://www.quest.com/landing/?id=5008.
3. Optimizing Performance of Business-Critical Web Applications, CA Report, April 2009. Available at http://www.ca.com/~/media/Files/whitepapers/optimizing-perform-web-apps_204349.pdf.
4. AspectJ, Popular Java-based AOP Framework. Available at http://www.eclipse.org/aspectj/.
5. Gunter, D., Tierney, B., Jackson, K., Lee, J. and Stoufer, M. (2002), Dynamic Monitoring of High-Performance Distributed Applications, in the Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing held at Edinburgh, Scotland. Available at http://www-didc.lbl.gov/papers/HPDC02-HP-monitoring.pdf.
Extending IT Infrastructure
Management Services to the
Internet of Things
Embrace the ubiquity of smart assets and
endeavor to improve the quality of human life
By Ajit Mhaiskar, Gourav Budhia and Kumar Padmanabh PhD
IT Infrastructure Management has evolved over the last three decades and has reached a stage of significant maturity today, where a
combination of remote and on-premise presence
of people is used to manage infrastructure.
There is also a significant focus on IT service
continuity management and frameworks like
Information Technology Infrastructure Library
(ITIL) have brought about standardization and
process maturity across industries.
Most large enterprises today have
externalized the management of a significant
portion of their IT infrastructure to reduce
downtime, provide 24x7 support through a global
helpdesk, improve security and reduce operating
costs. There are a large number of providers
today, providing services related to management
of IT infrastructure including servers, mission
critical databases, network components, telecom
infrastructure, video conferencing equipment, etc.
Dynamic IT infrastructure enabled by
virtualization and automation is the direction
the industry is fast moving towards. Cloud
computing, including public as well as private
clouds is a manifestation of this dynamic
IT infrastructure vision and large software
vendors are making significant investments
in building the public cloud infrastructure
as well as creating solutions for setting up of
private clouds. Cloud computing will further
increase the externalization of IT infrastructure
management by offering considerable cost
savings by leveraging automation and
economies of scale.
Rapid advances in infrastructure management have been facilitated to a great extent by the availability of seamless network connectivity provided by ubiquitous Internet Protocol (IP) based networks and the rapid adoption of the Internet.
SMART, CONNECTED INFRASTRUCTURE:
THE INTERNET OF THINGS
Today is clearly the age of smart infrastructure: we have smartphones, smart cars, smart electric meters, smart energy grids, smart buildings, etc. This smartness associated with various assets mostly comprises some or all of the following characteristics:
A significant leap in capability as
compared to a traditional asset
An ability to make complex decisions
Significantly improved control and
communication capabilities
Aids conservation of resources in the
form of energy, water, food, etc.
Almost all the smart infrastructure
available today has embedded electronic
components that can run complex onboard
software for decision making, navigation,
communication, etc. Some examples of smart
infrastructure that have gained widespread
adoption are listed below.
Smartphones available today like the iPhone, BlackBerry, etc., have powerful processors, innovative user interfaces
processors, innovative user interfaces
and large data storage capacity. Besides
providing us the ability to talk to others,
they also allow us to listen to stored
music, listen to the radio, take pictures
and browse the internet.
Roomba from iRobot is an example
of a smart home appliance that has
become very popular. It has the ability
to perform functions as diverse as recognizing voice commands and
cleaning dirty carpets.
Smart electric meters are being rolled
out in various parts of the world.
These meters allow recording of energy
consumption and communicate that
information to the users as well as to the
utility provider.
In most cases, the smart infrastructure
also provides the ability to communicate and
connect with other devices. The communication
mechanism is almost always through networks
based on the ubiquitous IP. This opens up a wide
range of possibilities for these assets to make
information available to users remotely and also
to communicate with other smart assets.
In what's being called "The Internet of Things", sensors and actuators embedded in physical objects (from remote field equipment to home appliances to roadways to medical equipment) are linked through wired and wireless networks, often using the same IP that connects the Internet [1].
Countries in the European Union, China, the United States and other geographies are carrying out serious research in the field of The Internet of Things with the expectation that smart assets will have widespread adoption, will foster significant economic growth, create new business models and provide a better life to citizens. The Internet of Things will comprise assets that can be located, tagged, monitored and remotely controlled by leveraging other assistive technologies like the Internet, Radio-Frequency Identification (RFID), Wireless Sensor Networks (WSN), Near Field Communication (NFC), etc.
Figure 1 shows how The Internet of Things could comprise billions of static
and mobile devices that are connected using
IP-based networks and are able to exchange
information using embedded sensors and the
pervasive internet.
Some of the key opportunities and
potential benefits presented by The Internet of
Things include the following:
Smart cities, smart homes, smart cars
and smart appliances that improve the
quality of life significantly
Improved operations and management
of smart assets, possibly leading to more
sustainable lifestyles that can conserve
and protect natural resources
Game-changing applications in various
fields like healthcare, food safety,
agriculture, supply chain, infrastructure
management, human safety, resource
conservation, etc.
A few examples of new applications and
business offerings enabled by smart assets and
the pervasive internet are listed below.
Smart electric meters are being rolled
out in some parts of the world. The
availability of these meters along with
the pervasive internet has led to the
evolution of services like Microsoft
Hohm and Google PowerMeter
that allow subscribers to view the
energy consumption of their homes or
commercial establishments online from
anywhere.
Cisco has earmarked nearly a $2 billion
investment for a project in Incheon,
South Korea with plans to build a
smart city with a center to manage
the underlying IP platform that can
control everything from the city's power management and traffic controls to an individual apartment's lighting and
temperature [2].
In January 2010, Haier launched the world's first "Internet of Things" refrigerator that can store food items, be connected to an IP-based network, communicate with supermarkets and also provide features like internet phone, internet browsing and watching videos [3].
Farm equipment manufacturers are
beginning to provide smart precision
farming equipment to farmers with
wireless links to data collected from
remote satellites and ground sensors.
Other value-added services are also
being provided through these wireless
links. Taking various factors like crop
conditions, weather conditions, soil
condition and other parameters into
account, the smart farming equipment
can adjust the distribution of fertilizer,
seeds and other farming-related parameters [1].
[Figure 1: The Internet of Things - sensors and actuators embedded in smart devices, 3.5+ billion mobile devices and 2+ billion static information devices, connected through sensor networks and the pervasive internet. Source: Infosys Research]
Wireless transmitters are being embedded
in shopping carts and throughout a retail
store, gathering valuable data about how
shoppers move through stores and where
they stop to linger or compare prices.
This can help in better positioning of
merchandise and delivering customized
promotions to shoppers [4].
Microcameras delivered into the human
body can traverse the digestive tract
and other internal organs to send back
images to pinpoint sources of illness [1].
Smart cars that can track traffic patterns,
report incidents and make routing
decisions are beginning to make an
appearance [5].
Wireless sensors are being deployed
in buildings to monitor conditions like
temperature, humidity and light in order
to optimize on heating/cooling costs and
also to help ensure safer operations.
We believe that the tipping point has
been reached and such smart assets will become
a common part of our daily lives over the next
decade through their widespread adoption.
MANAGING SMART INFRASTRUCTURE
The Internet of Things will comprise smart
assets with the ability to communicate and link
up with other smart assets. A lot of these assets
may also have the ability to perform autonomous,
self-governed operations with minimal human
intervention as shown by the examples that follow.
If automobile drivers accept installation
of smart sensors that can record vehicle
speed, acceleration and other parameters
and communicate this information to
insurance providers, it could result in
significant discounts for well-disciplined drivers [5].
A Digital Smart Home Gateway with
built-in intelligence that links and controls
multiple home devices autonomously.
A faucet or another appliance like
a washing machine turning off the
local water supply through wireless
communication in case a leak is detected, in order to avoid wastage.
A smart energy meter turning on the
washing machine at home at off-peak
hours when the utility charges are lower.
A refrigerator that alerts the owner about
food that is going stale and possibly
segregates it into a separate section within.
However, there are several scenarios
where autonomous, self-governed operations of
smart assets may be difficult to achieve or risky
to implement and will need manual supervision
to ensure safety and accuracy. Some examples
of such scenarios are listed below.
Effectively assisting elderly people in an
increasingly ageing society
Ensuring 24x7 operations in an efficient
manner for critical infrastructure and
assets like power plants, refineries,
manufacturing facilities, etc.
Improving passenger safety in aircraft
and other vehicles through remote
monitoring
Monitoring of building operations and related assets, e.g., security monitoring of premises; monitoring of heating, ventilating and air conditioning (HVAC) systems; monitoring of ambient conditions like temperature, humidity and light; monitoring for water or gas leakages; and monitoring of other building systems
Monitoring traffic conditions, traffic
signals, speed cameras, etc.
Monitoring various pipelines for oil
and gas
Health-care related monitoring services
Monitoring of environmental conditions
using smart sensors embedded in the
ground to check for earthquakes, soil
quality, water pollution, etc.
Common equipment monitoring activities like monitoring for excessive temperatures, monitoring equipment efficiency, etc. [6].
EXTERNALIZED INFRASTRUCTURE
MANAGEMENT SERVICES FOR ASSETS
Just like externalized IT infrastructure management provides significant benefits like reduced downtime, 24x7 support through a global helpdesk, improved security and reduced operating costs, we believe that smart assets in The Internet of Things can potentially be monitored and operated remotely from anywhere in the world in a secure manner to gain the following benefits:
Improved quality of life for humans
Better management of assets through
enhanced productivity and efficiency
Improved quality of service (QoS)
Reduced cost of operations.
A McKinsey analysis suggests that just as IT service provider companies remotely manage their customers' IT infrastructure, they could also remotely manage other non-IT infrastructure and operations. For example, service providers could remotely manage the energy consumed by their customers' air conditioning and heating systems or remotely monitor field assets like compressors and pumps. If these efforts are successful, McKinsey estimates such remote infrastructure management services to generate between $100 billion and $130 billion of additional revenue for the service providers by 2020 [7].
Figure 2 overleaf shows the benefits provided by externalized infrastructure management of smart assets vis-à-vis legacy assets that exist today.
BUILDING BLOCKS FOR PROVIDING
EXTERNALIZED INFRASTRUCTURE
MANAGEMENT SERVICES
The key building blocks that will enable remote
monitoring of building systems, energy grids,
healthcare equipment, high value goods and
large complex infrastructure setups like power
plants and refineries are smart assets with
sensors that have communication and decision
making abilities and the pervasive internet.
Figure 3 overleaf shows the building blocks
that will help enable externalized management
of such diverse infrastructure.
Infrastructure: This will be the asset for which
management is being externalized. These will
typically be high value assets or assets that can
deliver significant cost savings when monitored efficiently.
Sensors and Controllers: These are components
that will be embedded into the infrastructure or
added as bolt-ons. There could be one or more
sensors depending on the parameters to be
monitored. For example, there could be separate
sensors to monitor temperature, humidity, light,
air quality, etc.
Information Collection and Orchestration
(Middleware): This will be a software-based
middleware solution that collects important
information or events from multiple sensors
and controllers in a complex ecosystem. This
middleware will have intelligence built into it to
make decisions and perform orchestration. This
middleware may not be required in ecosystems
that have low complexity.
Information Transmission: This is the transport
layer that allows transmission of data from the
middleware to other systems for storage and
further analysis. The information transmission
layer will comprise wired and wireless networks.
Information Storage, Analysis and Decision
Making: This comprises a data store for storage
of the information collected from the middleware.
Analytics and decision support systems could be
built on top of this data store. Analysis related
to condition monitoring, predictive analysis and
other back-offce operations can be performed on
the information stored in this data store.
ARCHITECTING A SMART ECOSYSTEM
The smart ecosystem of the future that will enable externalized infrastructure management of assets in The Internet of Things will be formed from partnerships comprising platform vendors, communication service providers, system integrators, cloud service providers and independent software vendors (ISVs). This ecosystem will have to be friction-free and will need to enable quick on-boarding of additional smart assets and related services.
[Figure 2: Benefits of Externalized Infrastructure Monitoring for Smart Assets - cost effectiveness and efficiency increase with the smartness of infrastructure, from (1) traditional infrastructure monitoring (wired, on-premise, legacy control and automation systems), through (2) basic externalized infrastructure monitoring (wireless, mostly on-premise with some remote presence, legacy automation systems extended to provide wireless interfaces and limited smart features), to (3) smart externalized infrastructure monitoring (wireless, limited on-premise presence, primarily remote systems, smart extensions to legacy assets). Source: Infosys Research]

[Figure 3: Building Blocks for Externalizing Management of Assets in The Internet of Things - a five-layer stack: Infrastructure; Sensors and Controllers; Information Collection and Orchestration (Middleware); Information Transmission; and Information Storage, Analysis and Decision Making. Source: Infosys Research]
Figure 4 shows the various architecture components of this smart ecosystem; they are explained below.
Smart Assets, Sensors, Control Systems: Smart
assets could comprise embedded electronic
components using pre-built platforms procured
from hardware vendors or custom solutions
built on off-the-shelf chipsets. Sensors will
monitor various parameters like temperature,
humidity, vibrations, etc. The smart assets will
communicate using wired interfaces like Ethernet
or wireless interfaces like ZigBee, Bluetooth, etc.
The sensors will be embedded within the device
or added as bolt-ons to legacy assets.
Gateway Device / Server (Middleware): The
gateway device/server will be a software
component deployed on a server or an appliance
that resides between the sensors and control
systems on one side and monitoring and
business applications on the other. This
component is optional and may not be needed
in ecosystems with low complexity. This
component will process the information coming
in from the sensors and control systems and
generate real time alerts or perform other actions
while passing on appropriate data to the data
capture and storage system. Decision on which
data should be passed on to the data storage
system and which should be dropped will also
be made by the middleware. Configuration of
various devices, communication protocols and
thresholds will also be handled by this gateway.
Data Transport: Data transport system will
consist of wired Ethernet connections to the
Internet or commonly used wireless wide area
network technologies such as GSM/GPRS/EDGE, WCDMA/HSPA, CDMA or WiMAX.
Other wireless technologies such as satellite
networks, radio frequency tracking and wireless
mesh may also be considered depending on the
use case scenario.
[Figure 4: Architecture components of the smart ecosystem - smart devices, sensors and control systems feed an optional gateway device/server; data transport carries information to data capture and storage, which supports monitoring, reporting, analytics and predictive analysis (e.g., operational reporting for engineers and dashboarding solutions for analysts) as well as integration with external systems, with an HVAC system shown as an example asset. Source: Infosys Research]

Data Capture and Storage: This comprises a relational database to store the information passed on by the gateway/server middleware.
A large amount of data will be generated over
time, especially in cases where continuous
monitoring is being done. Appropriate data
archival, purging and data warehousing
strategies will have to be defined to ensure
smooth, uninterrupted operations.
Monitoring, Reporting and Predictive Analysis: This comprises reporting systems and analysis services built on top of the data
capture and storage system. This will be a critical
component that will assist in provisioning
of value-added services. Predictive analysis
based on the data captured can be carried
out using this system. This system will also
display various KPIs and dashboards to enable
accurate monitoring of assets. This system may
also support scenario-based analysis or what-if
analysis.
Integration with External Systems: The
information passed on to the data capture and
storage system will also have to be integrated
with other business systems of record like the
Enterprise Resource Planning (ERP) system or
other external systems. This integration can
be done in a batch manner or in a real-time
manner depending on the use case scenario.
EXTERNALIZED MANAGEMENT OF ASSETS:
ADDRESSING CHALLENGES AND RISKS
Organizations, like humans, are traditionally slow to trust others. Providing access to key operational systems for remote monitoring will first have to overcome this trust deficit and also ensure data security. However, organizations have already learnt important lessons and put in place proper governance structures to manage outsourced manufacturing operations, outsourced IT applications management, outsourced IT infrastructure management and other outsourced services. These learnings and governance structures will also have to be applied to remote monitoring of smart assets like buildings, field equipment, production equipment, etc.
Some of the best practices that can be put
in place to govern externalized management of
assets in The Internet of Things are listed below.
Provide appropriate privileges to remote
operators with strong authentication and
authorization controls. Consider at least
two-factor authentication for critical
functions.
Perform remote data exchange over
trusted, secure communication channels
using dedicated communication networks or technologies like VPN.
For highly critical operations, have an
on-premise employee work along with
the remote service provider agent to
ensure safety of operations just like a
safe deposit locker in a bank uses two
keys to open a safe, one which is with
the bank and the other which is with
the customer.
Another key challenge that organizations
will have to address is the addition of smart capabilities to legacy infrastructure to
enable externalized monitoring.
ADDING SMART CAPABILITIES TO
LEGACY INFRASTRUCTURE
One of the most important challenges that
businesses will need to address is extending
existing legacy infrastructure either through
modernization or upliftment through bolt-on
extensions to provide them smart capabilities.
17
Wireless sensors available today have
a very small footprint and can be embedded
or attached to almost any object. Pre-built
or pre-designed hardware platforms can be
used to build the required processing and
decision making capabilities. These sensors
and hardware platforms can then be bolted on
to existing legacy assets.
Figure 5 shows a conceptual way of extending legacy assets to allow externalized monitoring.

[Figure 5: Enhancing legacy assets to enable communication, decision making and externalized monitoring - existing legacy assets are fitted with wired/wireless communication components (commodity hardware) that connect through a wireless gateway to enterprise systems (ERP, management systems, etc.) using communication protocols such as TCP/IP, ZigBee, Bluetooth and GPRS. Source: Infosys Research]
CONCLUSION
We believe that over the next decade, smart
assets supported ably by the pervasive internet
will become ubiquitous and a part of daily life.
Some of these smart assets that are high value
will have their condition and usage monitored
remotely. Built-in diagnostic routines could
allow the assets to call service providers
for maintenance activities based on various
parameters before an actual failure occurs.
Service providers will be able to add
significant new revenue streams through the
provision of value-added services to smart
assets in The Internet of Things. The Internet
of Things will provide plenty of opportunities
for innovation and will help improve the overall
quality of human life.
REFERENCES
1. Chui, M., Löffler, M. and Roberts, R. (2010), The Internet of Things, McKinsey Quarterly, No. 2, pp. 1-10. Available at https://www.mckinseyquarterly.com/The_Internet_of_Things_2538.
2. Cheng, R. (2010), In Korea: Cisco's Showcase for a Smart City, Dow Jones Newswires. Available at http://www.songdo.com/songdo-international-business-district/news/in-the-news.aspx/d=210/title=In_Korea_Ciscos_showcase_for_a_smart_city.
3. Haier U-home use "Internet of Things" Refrigerator Seizing Business Opportunities Ahead - Smart Home Industry, a Frbiz.com Network Appliance News Report. Available at http://news.frbiz.com/haier_u_home_use_quot;internet-210899.html.
4. MacMillan, D. (2009), Best Buy, Other Retailers Tap Tech to Boost Sales. Available at http://www.businessweek.com/bwdaily/dnflash/content/feb2009/db2009028_712098.htm?chan=top+news_top+news+index+-+temp_top+story.
5. Sundmaeker, H. (2010), Vision and Challenges for Realizing the Internet of Things, Cluster of European Research Projects on the Internet of Things (CERP-IoT) Report.
6. Kennedy, S. (2010), Best Friend? Remote Services Solve Problems with Reliability, Staffing and Skills. Available at http://www.plantservices.com/articles/2010/02RemoteServices.html.
7. Kaka, N. (2009), Strengthening India's Offshoring Industry. Available at http://mkqpreview1.qdweb.net/Strategy/Globalization/Strengthening_Indias_offshoring_industry_2372.
8. OnStar Relaunches Its Brand with Focus on Responsible Connectivity, 2010. Available at http://media.gm.com/content/media/us/en/news/news_detail.brand_gm.html/content/Pages/news/us/en/2010/Sept/0915_onstar.
9. Dargie, W. and Poellabauer, C. (2010), Fundamentals of Wireless Sensor Networks: Theory and Practice, John Wiley and Sons.
10. Srinivas, B.G. (2010), Emerging From the Post Crisis World. Available at http://www.infosys.com/offerings/industries/manufacturing/Documents/new-post-crisis-world.pdf.
Advanced Monitoring for
IT Infrastructure
Monitor data centers efficiently and avoid cost
leakages in infrastructure management
By Vaibhav Bhatia
Data centers that power the company's operations are the most critical resources for any company. Uptime of these resources is required as a business guarantee. The risk of downtime is simply too high, leading to customer dissatisfaction when a large retailer's or large bank's data center is down.
The face of data centers today has
changed from what they were five years back.
Today, data centers are complex environments
with varied components working together.
There are complexities one never had to think
of earlier, for example virtual servers, air-flow,
CPU utilization, etc. Server architectures are so
varied these days that placing a fully loaded
blade server in one part of the data center will
alter the air-flow drastically, impacting other
servers in the neighboring racks.
For managing such a complex environment,
one needs to know exactly what's going on inside, and the best method of doing so is monitoring the
infrastructure continuously. This paper describes
some of the methods of extensive data center
monitoring that lead to qualitative and quantitative
improvements in data center management.
WHY INFRASTRUCTURE MONITORING?
Data center managers follow the policy of "If it ain't broke, don't fix it." But often they fail to figure out that when something breaks, unless you know what broke, you cannot fix it. The time of crisis is not when one sets about figuring out where to begin inspection and troubleshooting.
During crisis, data center managers should
be in a position to get in there and get the
infrastructure back on its feet. Thus it is best
to be prepared and know your infrastructure
inside out. Monitoring the various moving
parts in the data center is one method to help
you do so.
An important point to be noted is that
the preliminary step to automation is always
monitoring. Thus unless the infrastructure is
well monitored, the scope for automation will
always remain low.
Infrastructure monitoring helps in planning for future upgrades and error-free maintenance. By monitoring the various blocks of a data center, you know which parts are due for maintenance and which ones are worn out. One can then plan in advance for
upgrading the infrastructure or do a technology
refresh. Likewise, if one wants to augment the
computing capacity of the data center and if
there is availability of sufficient power and space, only the cooling needs to be upgraded
in order to have greater compute capacity. Such
capacity planning decisions can be made in an
instant with live monitoring, else one will have
to start gathering data at the point when one
starts considering an upgrade.
UNDERSTANDING THE VARIOUS
ASPECTS OF A DATA CENTER
At a broad level, a data center can be viewed
as built up of four critical components:
power, cooling, network and computing.
While power is the basic driver for all the
other components, cooling is equally essential
to prevent overheating in the data center.
Networking enables connectivity between the
components without which even monitoring
would be impossible. Compute is the very
reason we build data centers.
Only with extensive monitoring of these segments can a data center be called well monitored and controlled. In practice, though, the IT team monitors the compute equipment, the network team the network, and the facilities team the power and cooling. Teams work in silos, but with today's comprehensive monitoring products these individual teams can come together and work in sync.
KEY MONITORING IMPROVEMENTS
AND METHODS
The author's experience suggests that some of the largest and even most sophisticated data centers lack the right kind of monitoring methods. It is therefore important to understand which steps need to be incorporated while upgrading any data center. Some important steps are:
1. Rack level monitoring
2. Utilization monitoring
3. Automation of what is being monitored.
Rack Level Monitoring
Two major blocks of a data center infrastructure are power and cooling. Typically these blocks are monitored at the data center level through interfaces such as Building Management Systems (BMSs), which leads to tangible benefits. A monitoring and control station keeps tabs on infrastructure elements such as diesel generators, utility input, UPS, chiller units and CRAC units, and takes the necessary corrective steps when required.
For example, the heavy-duty diesel generators used at IT plants are operational 24/7, as IT usually requires. This leads to a lot of wear and tear in the mechanical parts of the generator. Manufacturers also specify fixed hours of operation after which maintenance is required, warranting downtime. At these times backup generators are used. This regular cycling of generators is critical and thus needs to be monitored and automated to avoid any disruption in service. Monitoring the runtime of the generators helps in determining when the next maintenance is due, so the relevant personnel can be given a heads-up. Without monitoring, one may end up in a situation where the primary generator is waiting for maintenance while the secondary has failed, which can prove disastrous for an IT service provider.
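As a toy illustration of runtime-based alerting, the sketch below compares each generator's hours since its last service against a manufacturer-specified interval and flags units that are due. It is a minimal Python sketch; the 250-hour interval, the unit names and the runtime figures are all hypothetical.

# Minimal sketch: flag generators approaching a maintenance interval.
# The 250-hour interval and the runtime figures are hypothetical.
MAINTENANCE_INTERVAL_HOURS = 250
ALERT_THRESHOLD = 0.9  # give a heads-up at 90% of the interval

generators = {
    "primary": {"hours_since_service": 238},
    "secondary": {"hours_since_service": 112},
}

def maintenance_alerts(units):
    """Return (name, percent-used) for units due or nearly due for service."""
    due = []
    for name, state in units.items():
        used = state["hours_since_service"] / MAINTENANCE_INTERVAL_HOURS
        if used >= ALERT_THRESHOLD:
            due.append((name, round(used * 100)))
    return due

for name, pct in maintenance_alerts(generators):
    print(f"Heads-up: generator '{name}' is at {pct}% of its maintenance interval")

Scheduling the primary unit's service while the secondary still has ample margin is exactly the kind of decision this data supports.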
However, monitoring at such a high level has only limited benefits. Organizations today lack focus on detailed monitoring, and this we believe is the next step in data center sophistication. Simply put, monitoring power and cooling at the rack level has significant benefits, a few of which are listed below.
a. Isolating Issues: By monitoring power and cooling at the rack level, whenever there is a power or cooling issue in the data center one can find out where the issue is in a matter of seconds. Isolating a cooling issue often takes an extremely long time because the airflow inside a data center is complex and unpredictable. In one case, a particular server kept turning off at night due to high temperature, but whenever the operator walked up to the area where the server was located and measured the air temperature, he found it within limits. Further investigation revealed that directly in front of the affected server was a newly installed, fully populated blade server mounted in the opposite direction, blowing hot air directly into the affected server's intake. The blade server was not heavily used during the day; it was used for nightly batch operations, when it would power up completely. The investigation took a long time; the issue was finally corrected, and the findings were confirmed by placing a temperature sensor beside the affected server for 2-3 days. Had a rack-level temperature monitoring system already been in place, the issue would have been isolated and fixed much earlier.
b. Capacity Details Available on a Real-time Basis: When planning for augmentation in a data center, it is necessary to know not only how much physical space is available, but also whether that space has ample power and cooling. Without this information, placing a new server in the data center can impact the existing setup. Thus, when considering capacity in a data center, three parameters must be taken into account: space, power and cooling. Space is often tracked at a rack level, but power and cooling are not, as they require an investment. Given today's changing environment, however, any investment made here is worth every penny.
c. Machine Placement: Another key decision that can be made with the information from rack-level power and cooling sensors is where to place new servers in the data center. One would not want to place a new, fully loaded blade server chassis in an area that is already operating at above-average temperatures or near the upper limit of power capacity. The real-time information obtained from sensors also aids in load distribution around the data center: non-critical servers can be turned off and moved to other, cooler areas if they pose a threat to critical servers running nearby, or workloads can be moved in an instant in a virtualized environment.
d. Cooling and Humidity Management: The key instruments for monitoring cooling and humidity are their respective sensors, and the placement of these sensors in a data center is critically important. In a data center environment, the population of sensors should be proportional to the server density, and it is critical to place the sensors as close to the servers as possible. Temperature and humidity readings at strategic points in the data center help in detecting hot zones, and the sooner these are detected, the more time the data center administration team has to take preventive measures, either by shutting down the machine or by increasing the cooling power (one way of doing this is sketched below).
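As a rough sketch of how such readings could drive hot-zone alerts, the snippet below checks each rack-level sensor against an allowed temperature band and reports the offenders; the rack names and the 18-27 degree band are hypothetical.

# Minimal sketch: detect hot zones from rack-level temperature sensors.
# Rack identifiers and the 18-27 C operating band are hypothetical.
TEMP_BAND_C = (18.0, 27.0)

rack_readings = {          # latest reading per rack sensor, in Celsius
    "rack-A1": 22.4,
    "rack-A2": 29.1,       # hot zone: blocked intake or reversed airflow
    "rack-B1": 25.0,
}

def hot_zones(readings, band=TEMP_BAND_C):
    """Return the racks whose latest reading falls outside the band."""
    low, high = band
    return {rack: t for rack, t in readings.items() if not low <= t <= high}

for rack, temp in hot_zones(rack_readings).items():
    print(f"ALERT: {rack} at {temp} C is outside the {TEMP_BAND_C} band")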
Utilization Monitoring
Monitoring the utilization of the infrastructure
such as chillers, CRAC units, generators and
IT equipment such as servers and storage,
over time in the data center has significant
advantages. Once the data has been gathered,
utilization trends can be plotted and some key
decisions can be made. Some scenarios where
utilization monitoring helps are listed below.
a. Virtualization: Utilization trends have been driving virtualization in the industry for a while now and need not be re-stressed here. Server and storage utilization are the key inputs required before any decision on virtualization is made. Once virtualized, the servers can be reused or decommissioned as needed.
b. Server Provisioning: For lack of information on when machines are actually required, they are kept running at all times. Utilization trends can be used to provision machines based on historic need. For example, in a large retail organization, 100 servers suffice on a normal business day, but during year-end sales the capacity needed can go up to 175 servers. The extra 75 servers can then be provisioned only during the year end. This sort of optimization is possible when we have historic data to analyze (a sizing sketch follows this list).
c. Energy Advantage: By analyzing the utilization trends of the infrastructure, one may decide to switch off equipment when it is not required to save energy, and also tweak the cooling infrastructure to provision the right amount of cooling in each area of the data center. For example, a certain set of systems may be used only during month-end processing on the last two days of the month, with utilization below 10% for the rest of the month. Such systems can be turned off for most of the month and turned on only moments before they are needed. If the number of such systems is significant, as could be the case in a retail or financial organization, they can be grouped together in a separate area of the data center, and the power and cooling for that entire section can be turned off for most of the month, saving unnecessary energy expenditure.
d. Placement of Cold Air Ventilation Tiles: By analyzing the thermal patterns obtained from the rack-level sensors, the operations team can alter the ventilation tile positions to achieve maximum savings. For instance, cooler zones do not require as many tiles, while warm zones require tiles with larger openings. These alterations can then be made based on facts rather than guesswork.
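The server provisioning scenario in point (b) above can be sketched as a simple sizing rule over historic demand; the weekly demand figures and the 10% headroom below are hypothetical.

# Minimal sketch: size provisioning from historic utilization data.
# The demand history (servers needed per week) is hypothetical.
history = {
    "normal_week": [96, 101, 98, 100],
    "year_end_sale": [168, 175, 171],
}

def servers_to_provision(samples, headroom=0.10):
    """Peak observed demand plus a safety headroom, rounded up."""
    peak = max(samples)
    return int(peak * (1 + headroom) + 0.999)

for period, samples in history.items():
    print(f"{period}: provision {servers_to_provision(samples)} servers")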
Automation
Monitoring provides ample data; the more extensive the monitoring infrastructure, the more exhaustive the data. Most often, though, this data is used only in its current form, on a real-time basis, to take corrective actions then and there. Rarely have we seen monitoring data stored over extended periods and used for analysis, improvement or automation. The basics of automation tell us that one should first know the relationship between an event and an action, and subsequently how to detect the occurrence of the event and trigger the right action at the right time.
As an example, take the case where a hotspot is detected in a data center. The obvious action is to increase the cooling in the area where the hotspot occurred. Currently this action is taken either manually, by an operator going into the data center and physically increasing the cooling, or through the BMS by issuing a command to the specific CRAC unit. Although a mechanical task, the entire process can be automated; a few data centers have achieved this with considerable results.
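The shape such event-to-action automation could take is sketched below: a hotspot event is mapped to a cooling command for the CRAC unit serving that zone. The threshold, the zone-to-CRAC mapping and the send_crac_command() call are hypothetical stand-ins for whatever BMS interface is actually available.

# Minimal sketch: automate the hotspot-to-cooling response.
# send_crac_command() is a hypothetical stand-in for a real BMS interface.
HOTSPOT_THRESHOLD_C = 30.0
ZONE_TO_CRAC = {"zone-3": "CRAC-07"}   # hypothetical zone-to-unit mapping

def send_crac_command(unit, setpoint_delta_c):
    # In a real deployment this would call the BMS; here we just log it.
    print(f"{unit}: lower supply-air setpoint by {setpoint_delta_c} C")

def on_temperature_event(zone, temp_c):
    """Detect the event and trigger the pre-defined corrective action."""
    if temp_c >= HOTSPOT_THRESHOLD_C and zone in ZONE_TO_CRAC:
        send_crac_command(ZONE_TO_CRAC[zone], setpoint_delta_c=2.0)

on_temperature_event("zone-3", 31.5)   # simulated monitoring event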
A second case is where test machines are kept on even when there is no user or testing activity in progress. Test machines can instead be provisioned only when an incident reported in the production environment generates a need to test and reproduce the production error.
Similar to the above, there are several
scenarios where the data available from
monitoring infrastructure can be used to
automate the data center operations and save
precious time in mission critical environments.
The scenarios depend on the data center,
its scale of operation and the priorities of
automation.
CONCLUSION
It is critical to have data in order to make any decision. People are often unable or afraid to make a change to a production environment because they cannot fathom its impact. Through monitoring, it is possible to make informed choices about infrastructure. It is difficult to cover all the aspects of monitoring described above at once, so it is worthwhile to prioritize the areas where one sees the most value in investing. For example, some areas of a data center may already be issue-prone, and it is wise to make the initial investments there. Once these are stabilized, one can invest in other areas. Data center improvement should be ongoing, and after monitoring one can move on to automation. Savings from these key decisions will repay the investment in the monitoring infrastructure, and the intangible benefits of an easy-to-manage data center and a nightmare-free night are priceless. The savings can also be routed back into further developments and improvements to the data center.
REFERENCES
1. CXOtoday Staff (2008), Boost your Bottomline with Green Initiatives. Available at http://www.cxotoday.com/story/boost-your-bottomline-with-green-initiatives/.
2. http://www.hp.com.
3. Miller, R. (2008), CFD Thermal Modeling: Who's Using It, and How?, Opengate Data Systems, Data Center Knowledge. Available at www.datacenterknowledge.com/archives/2008/10/07/cfd-thermal-modeling-whos-using-it-and-how/.
4. Patel, C. D., Sharma, M., Marwah, R., Bhatia, V., Mekanapurath, M., Velumani, R. and Velayudhan, S. (2009), Data Analysis, Visualisation and Knowledge Discovery in Sustainable Data Centers, in the Proceedings of the ACM Compute Conference, Bangalore, India, January 9-10, 2009. Available at http://www.hp.com/hpinfo/newsroom/press_kits/2011/HPFortCollins/Data_Analysis_Sustainable.pdf.
5. Ipswitch Report (2008), The Value of Network Monitoring: Why It's Essential to Know Your Network. Available at http://www.whatsupgold.com/mailers/0809/valueofnetworkmanagement.pdf.
Environment Management
How to Set Priorities Right?
Manage non-production environments efficiently and achieve operational transformation
By Parveen K Sharma and Bhishmaraj Shinde
Shall we dare ask: what is the importance of testing, especially in IT projects? "Absolutely critical" would be the obvious answer. Without a well-thought-out testing effort, a project will undoubtedly fail, impacting the entire operational performance of the solution. Fit-for-purpose test environments are as critical as the other components in ensuring that testing is successful.
Now another question: what is one of the obvious targets for a funding freeze when a project is low on funds or over budget? Most of us would think of testing again, and within the testing process a few would surely think of the non-production environment cost as well. For starters, a non-production environment is a single application or a group of related applications and associated data, hosted on one or more servers, created to functionally mirror production or future production.
It takes only one Google search to find numerous examples endorsing what has been stated above, and to realize that even the best companies may wilt under the pressure to deliver on time and have paid a heavy price for overlooking, intentionally or unintentionally, test practices.
Good business organizations have come to realize that succeeding in today's cut-throat competition depends heavily on how effectively they prioritize their non-production environments and deliver high-quality products to the market. Not surprisingly, many organizations are planning to improve the availability and utilization of non-production environments to deliver consistent and predictable testing services.
A quick glance through the various industry best practices shows that there is no clear guidance on how non-production environments should be managed, which further accentuates the need for robust and lean environment management practices. With increasing competition and a greater focus on delivering high-quality solutions, organizations ignore non-production environments at their own peril.
In this paper, we discuss practices that organizations could leverage to align, build and integrate the capabilities necessary to deliver on test environment needs as promised. The framework proposed in this paper focuses on establishing a cost-effective and controlled environment management strategy by adopting an environment lifecycle management approach. It would aid IT organizations in delivering true value back to the stakeholders by defining a specific focus on consistent and accurate service.
Based on the principles of a repeatable and consistent service model, environment management and optimization services can provide a robust platform to lower the total cost of operations through reduced time-to-market, effective data volumes, optimized non-production landscapes and early detection of defects.
SOME KEY CHALLENGES
Today's customers want better products as quickly as possible, and they are not afraid to switch to other brands to satisfy their need for improved products and solutions. This has forced delivery teams across organizations to rely heavily on effective testing methods and their underlying infrastructure (read: non-production environments). Some of the key challenges organizations face with non-production environments are described below.
Non-availability of and Instability in Test Environments: With no clear guidelines or direction on the right number of environments needed, organizations tend to have either too many or too few test environments. With ever more complex business needs to cater to, and given the tight timelines of testing cycles, test environment availability and stability are just as critical as for the live environments. Modern test environments are inherently complex, which makes environment stability an even tougher task.
Exorbitant Testing Cost: Service operations in non-production environments are loosely defined and lack due rigor, given their non-production label. This leads to too many unauthorized changes to the environment, which in turn lead to outages (incidents) that cause costly delays in a project's go-live date.
Management of Distributed Infrastructure: Critical, big-budget projects often tend to create their own test environments, which remain unattended and unmanaged and lead to infrastructure chaos. This fragmented infrastructure footprint leads to increased support cost, multiple technical landscapes and increased operational complexity. Dedicated environment management services would simplify operations through standardization, without compromising on fulfilling functional and business requirements.
Maintaining Consistency while Promoting Releases into Higher Environments: In a globalized world where the development and integration of a product is managed by teams spread across the globe, it becomes all the more difficult to maintain consistency in the code being developed, merged and promoted across the various levels of the non-production environment.
ENVIRONMENT MANAGEMENT
FRAMEWORK
It has become increasingly evident that organizations have started taking test environment management seriously and consider testing activities among the key areas of a project delivery cycle. Yet for lack of a strong environment management framework that caters to this need, organizations tend to manage test environments in an ad-hoc and chaotic fashion.
The proposed framework draws heavily on the key design principles mentioned below.
Consistent and Accurate Service: Adopt a service-oriented model and develop robust and efficient IT services that are fit for purpose and fit for use.
Dynamic and Real-time Provisioning: Adjust the features and services of baseline environments to meet customers' rapidly changing needs.
Demand-based Footprints and Dynamic Execution Management: Modularize the testing environment into multiple modules and load them at runtime as requested (see the sketch after this list).
Interface with IT Service Management: Adopt an environment lifecycle management approach and define standard processes for environment monitoring, change/incident management and configuration management.
Regulatory Compliance and Accountability: Verify the integrity of privacy, security, data protection and decision making by establishing an appropriate structure to facilitate accountability at the appropriate level.
Efficient Usage/Management: Promote virtualization, rationalization and consolidation practices, with clear policies and expectations for environment usage.
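To make the demand-based footprint principle above concrete, the sketch below models a test environment as named modules that are loaded only when a request asks for them. The module names and VM counts are hypothetical, and this is only one way such modularization might be expressed.

# Minimal sketch: load only the environment modules a test request needs.
# Module names and VM counts are hypothetical.
MODULES = {
    "web_tier": 2,           # VMs each module needs
    "app_tier": 4,
    "db_tier": 2,
    "integration_stub": 1,
}

def provision(requested_modules):
    """Return the footprint for just the requested modules."""
    unknown = set(requested_modules) - MODULES.keys()
    if unknown:
        raise ValueError(f"unknown modules: {unknown}")
    return {m: MODULES[m] for m in requested_modules}

footprint = provision(["web_tier", "db_tier"])   # a regression-test request
print(f"Provisioning {sum(footprint.values())} VMs: {footprint}")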
Four key elements of the proposed environment management framework are: (a) test environment management; (b) process operations; (c) environment governance; and (d) continual improvement and optimization [Fig. 1].
Test Environment Management: Assess the client's non-production capabilities and benchmark them against current industry standards. Based on these inputs, define a best-fit environment management solution, and identify any environment sharing or improvement opportunities.
Process Operations: Based on the assessment and discovery of non-production capabilities, define a consistent and repeatable service model. Align environment lifecycle management with ITIL principles and focus on improving efficiency and consistency.
Environment Governance: Once the initial processes are implemented, it is critical to have a compliance mechanism in place to ensure there are no deviations from the agreed path. It is at this stage that an appropriate reporting mechanism is defined for the identified environment lifecycle framework.
Continual Improvement and Optimization: Continuously improve from the present state. This is what makes the framework dynamic and adaptive to changing times. The incremental gains in return on investment (ROI), achieved not only during the initial process improvements but over a period of time, provide the much-needed competitive advantage to stay ahead in the race.
ITERATIVE APPROACH TO ACHIEVE
OPERATIONAL TRANSFORMATION
The business objective of any investment in non-production environment management is to strengthen environment governance and management practices so as to support seamless and consistent testing. Key to such governance is an environment strategy that defines policies for environments, processes to provision environments per test requirements, integration with project and test delivery frameworks, and reporting to gauge current service and set future targets. But is it really practical to define a fit-for-purpose environment strategy in the very first place?
Some practitioners and consultants may argue in the affirmative, given that they have already discovered the current state of the existing environment landscape and have worked out plans to take it to the next level. At the same time, there will be situations where formulating a strategy upfront is not easy, given the various challenges mentioned earlier.
The environment management and optimization framework is designed to be tailored to either kind of situation and accordingly provides customers a dynamic and practical solution approach. The iterative approach of strategize-discover-implement helps realize benefits early while continuously improving and maturing the environment management practice [Fig. 2].

Figure 1: Environment Management Framework (enablers: strategic vision, policies/guidelines, management support; power levers: best practices such as CMMI, ITIL, PMBOK, COBIT, ISO/TS standards, ISTQB and IEEE 829). Source: Infosys Research
Some of the key benefits realized through an iterative approach are as follows:
1. Reduced time-to-market and faster turnaround through the use of higher-quality, reliable and consistent processes.
2. Consistent and accurate service, by providing fit-for-purpose test environments to support and deliver high-quality service delivery.
3. Lower total cost of operations through effective data volumes, optimized hardware requirements for non-production landscapes and early detection of defects.
4. A single service window to fulfill all environment-related requirements, leading to improved accountability, responsiveness and strong organizational security.
5. Informed business decisions through effective simulation of new business situations in test environments using up-to-date data.
6. Protection of sensitive data in non-production systems.

Figure 2: Iterative Approach (strategy definition: policies, processes, procedures and metrics; discovery and assessment of test environments; phased implementation over selected environments; lessons learnt; organization effectiveness). Source: Infosys Research
CONCLUSION
When non-production environments exist in a chaotic and confused state, the objective of delivering high-quality products to users can get scuttled. An unstable and unreliable test environment means less testing time and a higher probability of defects surfacing in the live environment. A fit-for-purpose environment management strategy will help organizations prioritize their non-production environments, reduce their overall time-to-market, and make informed business decisions through the effective simulation of new business scenarios in a stable test environment.
REFERENCES
1. Industry Trends in Outsourced Testing Services, IP Devel White Paper, October 2005.
2. Ng, S. P., Murnane, T., Reed, K., Grant, D. and Chen, T. Y. (2004), A Preliminary Survey on Software Testing Practices in Australia, in the Proceedings of the Australian Software Engineering Conference, 27th September, pp. 116-125.
3. Light, M. (2009), Double-Checking Project Quality with Independent Verification and Validation, Gartner Research ID No. G00172026. Available at http://www.gartner.com/DisplayDocument?id=1234015.
4. Cantwall, L. (2007), Architecting An Automated Test Environment, A TesLA Whitepaper sponsored by OnPath Technologies. Available at http://www.teslaalliance.org/media/pdfs/whitepaper/TesLA_OnpathTechnologies_Whitepaper.pdf.
Smart Grid Management System
Utilities would do well to harness the power of Smart Grid to build a robust communication network infrastructure
By E.K. Nampuraja
Smart Grid is seeing one of the largest implementations of communication infrastructure and intelligence in a power utility. At a high level, a Smart Grid can be visualized as an intelligent power grid: an amalgam of electrical infrastructure and communication infrastructure, along with the built-in intelligence in each of them. The basic component that makes the Smart Grid work is the underlying robust communication network that enables a shared understanding between each of these complex systems. Smart Grid has therefore brought the energy and telecom utilities, entities that have never worked together before, together for a huge infrastructure implementation. The greater challenge, however, is how to efficiently manage this huge infrastructure, which also spans diverse technologies and protocols.
The NIST Smart Grid conceptual reference model divides the Smart Grid into seven domains, viz., bulk generation, transmission, distribution, markets, operations, service provider and customer [1].
The components of a domain are a high-level grouping of systems and devices that have similar objectives or similar types of applications. This division consolidates the Smart Grid components into these domains, determines the mode of interfacing between the domains for intelligent information exchange, and identifies the details of the interfaces. It helps focus on the systems, understand the challenges and create an efficient model to link up Smart Grid technologies. The domains may also have overlapping functionalities.
Any system or device that participates in Smart Grid functionality is termed an actor. An actor in one domain that interfaces with an actor in another domain is called a gateway actor. An information network establishes a logical communication path between these actors (within or across domains) using a variety of communication technologies. Also, since Smart Grid information and controls flow through many networks with various owners, it is critical to properly secure the information and controls, along with the
respective networks. Security therefore becomes another important aspect of the Smart Grid.

The greater challenge, however, lies in the different types of interfaces and protocols required for exchanging information between each of these actors and domains. The initial release of the NIST Smart Grid Framework has identified 75 standards, specifications or guidelines that are immediately applicable to the ongoing transformation to the Smart Grid.
CHARACTERISTICS OF A SMART GRID
The fundamental objective of a Smart Grid is to make the existing grid self-healing, adaptive, interactive, optimized, predictive, distributed, integrated and reliable, so that it can manage complex flows of electric energy from thousands of large and small power sources to millions of customers: homes, businesses and industries. The key characteristics a Smart Grid should possess are explained below.
Adaptive and Self-healing
Smart Grid has distinct applications that provide fundamentally different functionalities. However, the built-in intelligence and networking perform in a cohesive manner to realize self-healing, which means responding rapidly to changing conditions with less reliance on operators.
Predictive
Smart Grid intelligence acts as a predictor, identifying potential outages before they occur. By collating information such as weather forecasts, the historical occurrence of outages and the location of devices and consumers, and spatially analyzing such information extracted from systems like SCADA, CIS and OMS, it is possible to plan resource requirements proactively and recover quickly from emergencies.
Integrated
Smart Grid becomes a reality when the barriers between organizations and domains are broken. It allows monitoring, control, protection, maintenance, merging and IT consolidation across domains, and enables management, remote monitoring and diagnostics. Another important aspect is the integration of distributed renewable energy sources, contrary to traditional single-point remote bulk generation.
Interactive
Smart Grid facilitates interaction between consumers, utilities and markets. It provides the customer access to accurate data and knowledge about their consumption and varied electricity pricing. Leveraging this information, utilities can analyze consumption patterns across their service area and take better-informed pricing decisions. The details can be instantaneously communicated to consumers via the Advanced Metering Infrastructure (AMI).
Optimized
Another great feature of the Smart Grid is maximizing reliability, availability, efficiency and performance by improving asset and resource utilization. Utilities can optimize across operations and maintenance, distributed generation resources, and the design and procurement of capital equipment. Operators can analyze outage information and prioritize deployment in an optimized way.
Secure
Grid security means not only securing the data in the substation but also creating a secure end-to-end architecture. Smart Grid enables an end-to-end security architecture and allows network operators to control network devices, users and traffic. Physical security can also be implemented using IP-based surveillance cameras and video imaging to protect against and alert on intruders. A Smart Grid can thus achieve a secure network for grid data, physical security and remote workforce management applications.
EVOLUTION OF A SMART GRID
The Smart Grid revolution can be viewed in two stages. The first is the Smart Grid of now (or the immediate future), which is about making the existing grid smarter by deploying intelligent technologies. This stage is either in the process of being deployed, such as smart meters, or will be deployed in the near future, such as sensors. Also noteworthy is the success of Automatic Meter Reading (AMR): utility companies throughout the world have invested hundreds of millions of dollars over the last few decades to deploy AMI systems that collect billing information from meters capable of sending more flexible time-of-use metering data to the utility's data center, a great step toward a smarter grid. The second stage is the Smart Grid of the 21st century, the longer-term promise of a grid that is intelligent in itself. The Smart Grid of the future will ensure grid reliability by managing and preventing power quality disturbances. Smart Grid technologies will reduce prices through interaction between customers and supply, and will offer customers greater choice and flexibility in consumption. The Smart Grid will improve security and safety by reducing the vulnerability of the grid. It will also promote environmental quality by utilizing cleaner, low-carbon-emission energy and promoting distributed energy resources, resulting in greater deployment of renewable energy sources. Table 1 gives a quick comparison of today's grid vis-à-vis tomorrow's.
SMART GRID CHALLENGES
In theory, Smart Grid technology is expected to efficiently manage energy utilization and demand, and to improve reliability and efficiency. But the complete implementation of a Smart Grid has its own challenges, some of which are discussed below.
Table 1: Comparison of Today's Grid vis-à-vis Tomorrow's Grid (Source: Infosys Research)

Today's Grid -> Tomorrow's Grid
- Typically no or one-way communication; not real time -> Two-way, real-time communication
- Limited customer interaction -> Extensive interaction; intelligent decisions and dynamic tariffs
- Mostly electromechanical equipment -> Digital and automated equipment
- Generation centralized and in bulk -> Centralized and distributed, paving the way for renewable energy resources; carbon limits and green power
- Manual and time-based operation and maintenance -> Remote monitoring; predictive, condition-based maintenance
- Prone to failures and cascading outages -> Proactive, real-time operation with islanding features
- Manual restoration -> Adaptive and self-healing
- Limited control over power flow -> Distributed control systems; substation and distribution automation
- Stand-alone systems and applications -> Integrated, interoperable and coordinated automation
Interoperability Standards
The most cited challenge today is interoperability standards, as a multitude of actors and technologies will have to talk to each other to create an end-to-end intelligent grid. One cannot have a Smart Grid if all the major players develop their systems and technologies independently, with incompatible technologies and systems. Evolving standards therefore hold the key to the pace of Smart Grid implementation. The problem is that there is no unifying standard for building Smart Grids. NIST hopes to have draft standards for small grid deployments soon, but admits that "what is desperately needed is an overall road map" [1]. Even though most of the technologies and standards needed to build the Smart Grid are already available, many are still under development, and most have not yet been identified, validated and mapped into the electric power domain.
Regulatory Challenges
Today's electric power utility system resembles the telecommunications network of the past. The Energy Independence and Security Act of 2007 (EISA 2007), with its support for Smart Grid research and investment, is an important step toward achieving similar results for the power industry.
Bridging the Telco-Utility Divide
Smart Grid involves one of the largest implementations of a communication network anywhere, built to better manage the movement of electric energy from the point of generation to consumption. However, the telecom and utility industries have never agreed on how to work together in the past, which is why utility companies have built their own traditional communications networks.
Security
The vision of a Smart Grid typically boasts enhanced system security. On the contrary, increased connectivity also presents challenges, especially in security. The technologies deployed to support Smart Grid projects, such as advanced communications networks and smart meters, can themselves make the grid vulnerable to cyber attacks. The NERC Critical Infrastructure Protection (CIP) standards clearly specify that utilities must integrate information system security into all aspects of their automation systems.
Consumer Adoption of Smart Grid Services
The challenge remains of how best to educate and entice the public to use Smart Grid technologies and applications. From a technology perspective, the advancement of home energy management systems, home area networks and new applications (such as electric vehicles) will bring about the deployment of components and applications that call for educating end-users.
SMART GRID MANAGEMENT SYSTEM (SMS)
Smart Grid is totally dependent on the communications and IT infrastructure supporting the Smart Grid systems that collect and communicate energy data in real time. The communication infrastructure interconnects heterogeneous networks: wired and wireless access networks, PSTN, Ethernet, power line, optical, microwave, etc., all interconnected at the broadband backbone network owned by the utility or the service provider(s). Smart Grid uses multiple broadband and narrowband transport technologies, for example dial-up, DSL, IP, BPL, 2G, 3G, E1/T1, SDH, SONET, etc., along with a large number of utility-specific protocols. Managing such an
interconnected network is therefore complex. A successful Smart Grid implementation depends very much on a flexible and efficient management system. In addition, an end-to-end Smart Grid implementation calls for setting up a huge intelligent electrical infrastructure, with its immediate applications, and the IT infrastructure (data center) that runs these intelligent systems and applications. It is therefore equally important to manage the utility's IT infrastructure.
The Smart Grid Management System (SMS) is therefore a unified management system: a blend of the telco's Network Management System (NMS), the ISO's globally accepted TMN model for network management, and a utility IT infrastructure management system. SMS is built on the industry-accepted and proven FCAPS model, viz., Fault Management, Configuration Management, Accounting Management, Performance Management and Security Management, along with the set of best practices used for IT Service Management (ITSM) as the top layer.
The FCAPS model forms the basic building block for the intelligence and analytics layer. SMS is, however, not a Supervisory Control and Data Acquisition (SCADA) or Meter Data Management (MDM) system that ensures the Remote Terminal Units (RTUs) or meters work to specification. SMS does not collect RTU data or meter data; it only ensures that the SCADA/RTU infrastructure and metering infrastructure are performing. It does not monitor the electrical characteristics of the grid, but ensures that the devices that do monitor these characteristics are running healthy and are communicating.
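A toy illustration of this division of labor: SMS checks that the monitoring devices themselves are up and communicating, without ever touching the grid measurements they carry. The device names, heartbeat ages and timeout below are hypothetical.

# Minimal sketch: SMS checks device health, not the grid data itself.
# Device names and heartbeat ages (in seconds) are hypothetical.
HEARTBEAT_TIMEOUT_S = 120

devices = {
    "RTU-substation-12": 35,    # seconds since last heartbeat
    "smart-meter-gw-04": 410,   # silent too long: a communication fault
}

def unhealthy(fleet, timeout=HEARTBEAT_TIMEOUT_S):
    """Return the devices that have stopped communicating."""
    return [d for d, age in fleet.items() if age > timeout]

for device in unhealthy(devices):
    print(f"Fault event: {device} is not communicating")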
Figure 3: Smart Grid Management System (layers: business and service management; intelligence and analytics; application; IT infrastructure; communication infrastructure; energy infrastructure). Source: Infosys Research
SMS Framework
From an architectural perspective, the Smart Grid can be split into different layers: the energy infrastructure layer (transmission and distribution), the communications infrastructure layer (data transport and management), the IT and security infrastructure layer (IT systems), the applications layer, the intelligence and analytics layer, which uses the data from the underlying layers to make the grid smart, and the services layer.
Energy Infrastructure Layer
The underlying physical energy infrastructure is the foundation for basic energy transfer in a grid. For analysis, the physical infrastructure is broken into three parts along the traditional lines of generation, transmission and distribution, with all the different load classes, viz., industrial, commercial and residential, grouped into the customer category. The physical layer therefore includes all electrical infrastructure components for the generation, transmission, storage, distribution and consumption of electrical energy.
Generation is the first process in the delivery of electricity to customers. This domain may include devices such as protection relays, remote terminal units, equipment monitors, fault recorders, user interfaces and programmable logic controllers. The boundary of the generation domain is typically the transmission domain, to which it is electrically connected and which supports the bulk transfer of electrical energy from generation sources to distribution through multiple substations. The basic components of the transmission domain include remote terminal units, substation meters, protection relays, phasor measurement units, power quality monitors, fault recorders and substation interfaces. The transmission network is typically monitored and controlled through a SCADA system composed of a communication network, monitoring devices and control devices.
The distribution domain is the electrical
interconnection between the transmission
domain, the customer domain and the metering
points for consumption, distributed storage,
and distributed generation. The electrical
distribution system may be arranged in a
variety of structures, including radial, looped or
meshed. Distribution domain includes capacitor
banks, protection relays, storage devices, and
distributed generators.
The customer domain enables customers to manage their energy usage and generation. It is usually segmented into sub-domains for home, commercial/building and industrial customers. Advanced devices such as two-way communicating meters and home automation devices, such as programmable, communicating outlet controllers and thermostats, will help customers manage their demand for electricity.
Communication Infrastructure Layer
The increased coordination between the systems and applications needed to realize effective Smart Grid operation is enabled through new or increased information exchange. This coordination is characterized by bi-directional flows of information, shared functional objectives, cross-boundary benefits and mutual cooperation.
IT Infrastructure Layer
Data centers play an important role in sharing and segmenting the appropriate information across the fabric. This layer should have sophisticated computing platforms and infrastructure to enable seamless data collection
techniques and storage solutions for power grid
data analysis and optimization.
Data management refers to all aspects of collecting, analyzing, storing and providing data to users and different applications, including data identification, validation, time-tagging, consistency across databases, accuracy, etc. Data management is among the most time-consuming and difficult tasks in many of these functions and must therefore be addressed.
Cyber security addresses the prevention of damage from unauthorized access and intrusions, ensuring confidentiality, availability and integrity. The protection and stewardship of privacy is a significant concern in the widely interconnected system of systems that the Smart Grid represents. Data integrity and non-repudiation are needed for succinct, reliable communication across the grid.
Application Layer
Applications refer to programs, algorithms,
calculations, and data analysis. Applications
range from low level control algorithms
to massive transaction processing. These
applications are the core of every function and
node of the Smart Grid.
Advanced applications with increased functionality allow operators and business executives to analyse and extract useful information from the grid. Centralized management and control are therefore vital to achieving the efficiency and reliability benefits of the Smart Grid. The challenge, however, is to bring together the different pieces of information, in different formats, into the cohesive, functional and meaningful view needed to take action and drive the business. There are multiple applications with "system" in their name: AMI, customer information systems (CIS), outage management systems (OMS), energy management systems (EMS), geographic information systems (GIS), distribution management systems (DMS), meter data management systems (MDMS), asset management and even enterprise resource planning (ERP) systems. In common practice, each of these systems is supplied by a different vendor, sometimes with independent managing systems, leading to difficulties in managing the end-to-end data needed to run the utility business. The industry is moving toward common information model development, and SMS addresses this concern by providing a unified, single-pane management console showing the state of all assets, including network connectivity, in real time.
Intelligence and Analytics Layer
This layer can be viewed as a manager of managers. Each of the systems in the application layer may have a different system managing its functionality. For example, the SCADA system does real-time monitoring using RTUs distributed in the field (substations and T&D). The MDMS collects and manages data obtained from individual meters. The AMI system collects, measures and analyzes energy usage data from smart meters and home controls. The Outage Management System analyses the root cause of an outage, restores services to the minimal restoration point in minimal time and responds to the customer. Load Management and Demand Response Management systems manage the efficient utilization of generated energy by managing peak and lean loads. The Customer Information System and Billing Information System hold the entire customer data to manage consumer bills and accurate meter readings.
The intelligence and analytics layer therefore collects data specific to FCAPS functionality from the individual management systems or applications. The layer is interfaced with the applications by means of different probes connected through an enterprise service bus; the interconnecting probes can be industry-standard protocols, or tailored, vendor-specific plugs where standard interfaces are not available. The high-performance intelligence and analytics layer segregates the collected data between the elements of FCAPS using built-in intelligence and analytics. This layer also initiates proactive monitoring and, most importantly, ensures that business-critical services such as demand response, billing services and automated outage/restoration management applications get the data they require.
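As a rough illustration of this segregation, the sketch below routes incoming events from the individual management systems into FCAPS buckets and flags those tied to business-critical services. The event fields, the keyword-to-category mapping and the list of critical services are all hypothetical.

# Minimal sketch: segregate collected events into FCAPS categories.
# The keyword mapping and service names are hypothetical.
FCAPS_KEYWORDS = {
    "fault": "Fault", "outage": "Fault",
    "config": "Configuration", "usage": "Accounting",
    "latency": "Performance", "auth": "Security",
}
CRITICAL_SERVICES = {"demand_response", "billing", "outage_management"}

def classify(event):
    """Map an event to an FCAPS category and a business-criticality flag."""
    category = next((c for key, c in FCAPS_KEYWORDS.items()
                     if key in event["type"]), "Performance")
    return category, event["service"] in CRITICAL_SERVICES

event = {"type": "meter outage burst", "service": "billing"}
category, critical = classify(event)
print(f"{category} event, business-critical={critical}")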
Business Service Management Layer
FCAPS and the service management layer overlap in terms of concepts, but they operate at different levels of abstraction. FCAPS is primarily focused on technology management and starts with a technology-centric view, whereas the service management layer starts with a service-oriented view. FCAPS is integrated with the service management concepts for business value and service level management.
CONCLUSION
Smart Grid is a huge deployment of equipment, systems, intelligence, networks and money. It affects all areas of the power system, from generation and transmission to distribution. There is therefore a critical underlying need for a unified super network operation center that can manage both the communication infrastructure and the utility's Smart Grid infrastructure, and at the same time provide a service management view that delivers business value to both the utility and its customers.
REFERENCES
1. NIST Framework and Roadmap for Smart Grid Interoperability Standards, Release 1.0.
2. Leeds, D. J. (2009), The Smart Grid in 2010: Market Segments, Applications and Industry Players, GTM Research.
3. Gunther, E. W., Snyder, A., Gilchrist, G. and Highfill, D. R. (2009), Smart Grid Standards Assessment and Recommendations for Adoption and Development, EnerNex Corporation, Draft 0.83.
4. Goyal, P., Rao, M. and Ganti, M. (2009), FCAPS in the Business Services Fabric Model, in the Proceedings of the 18th IEEE International Workshops on Enabling Technologies: Infrastructures for Collaborative Enterprises.
5. Sood, V. K., Fischer, D., Eklund, J. M. and Brown, T. (2009), Developing a Communication Infrastructure for the Smart Grid, in the Proceedings of the IEEE Electrical Power and Energy Conference.
ITIL for
Enterprise Cloud Deployment
Adopt ITIL principles and extract the
true value of enterprise cloud infrastructure
By Renjith Sreekumar and Prashanth Prabhakara
Latest technology trends are driving large-scale transformation and demanding maturity in Information Technology Service Management (ITSM). There is a movement from information technology (IT) as an asset to IT as a commodity, with a key focus on reducing the IT infrastructure footprint. Businesses want to concentrate on their core strengths and avoid the pitfall of maintaining a large IT infrastructure footprint, and technology managers are under increasing pressure to find ways to shed the burden of hosting excess IT infrastructure and its concomitant costs.
Cloud computing's timely emergence has come as a boon in today's testing times. It is ushering in a phenomenal change in the way IT infrastructure is deployed, maintained and used by enterprises. Enterprise IT customers find the self-service, pay-as-you-go, instant-deployment values of cloud computing very appealing. While demand for publicly available cloud infrastructure is on the rise, tight IT budgets and the need for out-of-the-box thinking are pushing more and more organizations to look inward at building internal cloud capabilities, and vendors and service integrators have spawned to help enterprises deploy their private clouds. Migration to private clouds has many attendant benefits, some being: (a) on-demand self-service; (b) ubiquitous network access; (c) location-independent resource pooling; (d) rapid elasticity; and (e) pay-per-use.
Once the cloud infrastructure is built, be it a private cloud or a hybrid one, ITSM is of paramount importance in managing ongoing IT operations and driving the process efficiency needed to extract the maximum benefits of cloud computing. Service management is not new to IT operations: in a traditional datacenter-based IT environment, service management is mostly aligned to ITIL best practices, and ITIL has a key role to play in managing the efficiency gains that cloud promises.
Cloud brings agility, flexibility and
scalability to enterprise computing. A certain
level of management and governance is required
when it comes to managing a large virtual
infrastructure in the cloud. ITSM helps the CIO manage these complexities in the cloud through a system of process enablement and process automation. Adopting additional aspects of ITSM establishes a uniform process framework for cloud-based IT and, more importantly, a standard by which external services can integrate seamlessly with internal services.

Most cloud providers lose sight of the fact that they are an incremental extension of a pre-existing enterprise. There is a need to accelerate the design and implementation of the ITSM processes and capabilities to manage the cloud, which helps reduce operational expenses and increase the efficiency of a cloud-based infrastructure.

Figure 1: Reference Model for ITSM on Cloud (cloud-specific ITSM processes: service design and packaging, service orchestration and provisioning, configuration management, service transparency and service integration, layered over the traditional ITSM processes). Source: Infosys Research
CLOUD MANAGEMENT FRAMEWORK
A holistic and structured framework to manage the ongoing operations of the cloud is the need of the hour. We propose an ITIL v3-aligned cloud management framework called CloudManage. CloudManage has five focus areas, each of which can be associated with the ITIL lifecycle phases: (a) service design and packaging; (b) service orchestration; (c) cost transparency; (d) service integration; and (e) configuration management. The rest of the ITIL-based service management processes (Incident Management, Problem Management, etc.) remain the same, irrespective of whether the IT is cloud enabled or not.
To manage a cloud, the proposed framework, aligned to ITIL best practices, would deliver the following benefits:
Service Design and Packaging - design your services for the cloud
Service Orchestration - manage the service lifecycle and ensure faster and more flexible provisioning
Cost Transparency - pay per use
Service Integration - provide a unified business interface for governance and for continuous service improvements
Configuration Management - manage changes at the right level.
As depicted in Figure 1, the ITSM focus areas for the cloud within the CloudManage framework work over and above the traditional ITSM processes. These processes are customized for managing the cloud infrastructure environment, ensuring that ITIL principles manage the lifecycle of the cloud environment and deliver the true value of an enterprise cloud infrastructure deployment. The details of each focus area are explained below.
Focus Area 1: Service Design and Packaging
To provide services on demand, the prerequisite is to identify the services to be offered and deploy them through a self-service portal. Request management, accomplished through a service catalog, enables the marketing and packaging of services. The services offered (infrastructure and application) need to be intuitive and easy for end users to understand.

The three steps needed to effectively design and package a service are: (a) service definition and modeling; (b) enabling the service offering; and (c) ongoing service metering [Fig. 2]. This enables the standardization of resource packages and deployment configurations through the service catalog, as well as standardized infrastructure and faster deployment.
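One way to picture such a catalog entry is as a standardized, pre-approved package that the self-service portal expands into deployable units; everything in the package below is hypothetical.

# Minimal sketch: a standardized service-catalog package behind a
# self-service portal. All package contents are hypothetical.
CATALOG = {
    "small-web-env": {
        "vms": [{"role": "web", "cpus": 2, "ram_gb": 4, "count": 2},
                {"role": "db", "cpus": 4, "ram_gb": 16, "count": 1}],
        "monthly_rate_usd": 420,
    },
}

def order(package_name):
    """Expand a catalog package into the individual units to deploy."""
    units = []
    for vm in CATALOG[package_name]["vms"]:
        for _ in range(vm["count"]):
            units.append({"role": vm["role"], "cpus": vm["cpus"],
                          "ram_gb": vm["ram_gb"]})
    return units

print(order("small-web-env"))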
Focus Area 2: Service Orchestration
Once the services are defined, there need to be processes and systems in place to ensure service provisioning.
Figure 2: Reference Model for ITSM on Cloud (service planning, service definition and cost modeling, service metering, demand management and portfolio optimization, illustrated with a Blackberry service example). Source: Infosys Research
Figure 3: Automated and Dynamic Provisioning (traditional request-to-provision lead times of many weeks per project versus shared-resource environments provisioned in days, with automated policy triggers, e.g., adding a new VM when CPU utilization goes beyond 80%). Source: Infosys Research
In the traditional model, provisioning has the following key characteristics:
High lead times of 8 to 12 weeks to provision new infrastructure
Ad-hoc resourcing with limited standardization of deployment architecture, resulting in non-homogeneous sprawl
Deployment units aggregated at the VM level.
End- t o- e nd a ut oma t e d s e r vi c e
orchestration resulting in dynamic resource
provisioning is critical for a cloud based
environment. Computing-on-demand and
pay-as-you-go models can be implemented only
if resource provision and fulfllment have been
streamlined and automated [Fig. 3].
ITIL-based service orchestration enables dynamic provisioning and an automated process from requisition to provisioning of the infrastructure. The advantages of an optimized service orchestration layer are:
- Lead times down from several weeks to a few hours
- Provisioning against a standard and pre-approved reference architecture, governed by architecture/security policies
- Ability to provision entire environments (virtual applications) versus only single units of VMs.
Service orchestration integrates the various processes of procurement management, access management, chargeback, asset management and request management seamlessly. Traditional ITIL-based ITSM processes still remain applicable, but are abstracted by the service orchestration layer in an improved way to support the real-time needs and dynamic environment of a cloud based infrastructure.
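The automated policy trigger shown in Figure 3, adding a new VM when CPU utilization goes beyond 80%, reduces to a few lines; the provisioning call below is a hypothetical stand-in for whatever orchestration API is actually deployed.

    CPU_THRESHOLD = 0.80  # policy from Figure 3: scale out above 80% utilization

    def provision_vm(environment):
        """Hypothetical stand-in for the orchestration layer's provisioning call."""
        vm_id = environment + "-vm-new"
        print(f"provisioned {vm_id} against the pre-approved reference architecture")
        return vm_id

    def evaluate_scaling_policy(environment, cpu_samples):
        """Fire the policy when average CPU utilization breaches the threshold."""
        average = sum(cpu_samples) / len(cpu_samples)
        if average > CPU_THRESHOLD:
            return provision_vm(environment)
        return None

    evaluate_scaling_policy("project-1", [0.91, 0.85, 0.88])  # triggers provisioning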
Focus Area 3: Cost Transparency
Pay-for-what-you-use is one of the underlying principles of cloud computing. In order to accomplish that, there needs to be complete transparency in the way services and resources are priced, usage is metered and costs are charged back. ITIL's IT Financial Management process helps one accomplish these objectives. As illustrated in Figure 4, using a well-defined cost model, accounting, budgeting and chargeback are the three processes that need to be defined and deployed to enhance cost transparency. Once the processes are defined and deployed, the next step is accurate service metering leading to chargeback. Service metering enables the cloud service provider to monitor usage and enable transparent and fair chargeback. Well designed reports and management dashboards also assist in assuring the end users regarding usage and billing, thus enhancing service and cost transparency.
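As a minimal sketch of this metering-to-chargeback chain (the rate card and usage records are hypothetical), metered consumption multiplied by published unit rates yields a transparent per-consumer charge:

    # Hypothetical rate card: published unit prices per metered resource.
    RATES = {"vcpu_hours": 0.05, "gb_storage_days": 0.002, "gb_transfer": 0.01}

    # Metered usage per consumer, as a service metering system might report it.
    USAGE = {
        "project-1": {"vcpu_hours": 1440, "gb_storage_days": 3000, "gb_transfer": 120},
        "project-2": {"vcpu_hours": 320, "gb_storage_days": 500, "gb_transfer": 40},
    }

    def chargeback(usage_by_consumer, rates):
        """Compute a pay-per-use charge for each consumer from metered usage."""
        return {consumer: sum(quantity * rates[resource]
                              for resource, quantity in metered.items())
                for consumer, metered in usage_by_consumer.items()}

    for consumer, amount in chargeback(USAGE, RATES).items():
        print(f"{consumer}: ${amount:.2f}")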
Focus Area 4: Service Integration
In a cloud environment, there are bound to be multiple service providers providing various IT and business services. There will be a cloud infrastructure provider, cloud support teams, a cloud platform provider, vendors managing support and development of various IT applications, a service desk, etc. For optimized ongoing cloud management, and to ensure that the service delivered by the cloud is optimized and provides value, there needs to be a service integration layer that abstracts the various service providers and vendors. The business should be shielded from the aspects of integration and from problems of variation in the quality of services provided by the different vendors.
More importantly, for ongoing operations, the service integration layer should provide a unified business interface for governance and continuous service improvements.
The service integrator framework provides the right processes and principles to manage the various vendors that are generally present in a cloud environment. Service integration delivers the following key functions to a cloud based multi-vendor IT group:
- Process governance across key ITSM control processes
- Security on the cloud - especially when moving to a public cloud
- Supplier management across cloud service providers and multiple vendors
Figure 4: Cost and Chargeback Model (a sustainable, repeatable, recoverable cost model connecting service costing strategy, data sources and channels, and reporting with accounting (costing), budgeting (forecasts) and chargeback (recovery), underpinned by automation, integration, organizational change management, and management and governance). Source: Infosys Research
- Data integration
- Reporting and compliance to data requirements.
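A minimal sketch of the unified-interface idea described above: a service-integrator facade routes every business request to whichever vendor owns the service, so the business never deals with vendors directly. The vendor names and routing table are illustrative assumptions.

    class ServiceIntegrator:
        """Single interface the business calls; vendor routing stays hidden."""

        def __init__(self, vendor_for):
            self.vendor_for = vendor_for  # service name -> responsible vendor

        def request(self, service, ticket):
            vendor = self.vendor_for.get(service)
            if vendor is None:
                raise ValueError(f"no vendor registered for service '{service}'")
            # A real integrator would invoke the vendor's ITSM interface here.
            return f"{ticket} routed to {vendor} under integrator governance"

    integrator = ServiceIntegrator({
        "iaas": "cloud-infrastructure-provider",
        "service-desk": "support-vendor",
        "app-support": "application-vendor",
    })
    print(integrator.request("iaas", "REQ-1042"))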
Focus Area 5: Configuration Management
Configuration management is a well understood and essential ITIL process for managing ongoing operations effectively. Configuration management traditionally provides the key relationships between configuration items. Most IT and infrastructure support teams would have implemented configuration management processes and systems and use them for impact analysis and change management. However, for a cloud based environment, configuration management needs to be designed differently. The additional design principles that need to be considered are (a) configuration volatility; and (b) virtual vs. physical ownership.
To take care of configuration volatility, the configuration management processes need to manage changes at the right level. Earlier, all configuration items (CIs) and attributes needed to be tracked, e.g., CPU, memory utilization, etc. As the cloud is dynamic and leads to volatility of CIs that affect the overarching support services, one needs to design CI attributes at the right level. A detailed CI database design with right judgement and supporting process steps needs to be in place to manage changes to CIs at the right level. An appropriate configuration management design, tailored for the cloud, ensures that frequent changes to CIs at the leaf level do not interfere with impact analysis and incident/change management.
To handle virtual vs. physical ownership, we need to recognize that asset management is no longer a high priority. In a cloud, service is delivered at a virtual level. The customer may not care much about physical asset ownership and management. Hence there is a paradigm shift in the way asset management is handled. The processes and tools need to be redesigned to manage virtual machines instead of physical assets.
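The manage-changes-at-the-right-level principle can be sketched as a filter on CI updates: stable, relationship-bearing attributes raise change records, while volatile leaf-level attributes are merely refreshed. The attribute names below are illustrative.

    # Attributes whose changes matter for impact analysis and change management.
    TRACKED = {"owner", "environment", "depends_on", "service_tier"}
    # Volatile leaf-level attributes deliberately kept out of change management.
    VOLATILE = {"cpu_utilization", "memory_utilization", "uptime"}

    def apply_ci_update(ci, attribute, value):
        """Update a CI; return True only if a change record should be raised."""
        ci[attribute] = value
        if attribute in TRACKED:
            print(f"change record raised: {ci['id']}.{attribute} -> {value}")
            return True
        return False  # volatile or untracked: refreshed silently, no noise

    vm = {"id": "vm-17", "owner": "project-1", "cpu_utilization": 0.42}
    apply_ci_update(vm, "cpu_utilization", 0.87)  # silent refresh
    apply_ci_update(vm, "owner", "project-2")     # raises a change record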
CONCLUSION
Cloud computing presents us with a stark reminder of the advantages of, and need for, ITSM. In a cloud, ITSM integration is critically important to organize processes and workflows that bring efficiency to the enterprise while at the same time enhancing customer satisfaction. To complement existing ITIL processes, the proposed framework suggests five new key focus areas that need to be designed and deployed to effectively manage the lifecycle of an enterprise infrastructure cloud environment.
REFERENCES
1. Clayton, I. M. (2010), Every Cloud has a Silver Lining. Available at http://ianmclayton.com/?p=601.
2. Raman, A., Nadkarni, G. and Pattnaik, R. (2010), ITSM Thunderbolts in the Cloud, Paper presented at the 5th International Colloquium on IT Service Management on 6th September 2010 at Bangalore, organized by QAI Global Institute. Available at http://www.qaiglobalservices.com/minisites/BPW/innerpages/Documents.asp?FileID=Latest_Trends&ID=Service_Management.
3. Kamenken, I. (2010), How ITIL and ITSM Saves the Cloud Computing Providers, itSMF Presentation, Queensland.
4. Prabhakara, P. (2010), Key Tactics for IT Cost Visibility, itSMF Presentation, London.
Leverage the promise and potential
of cloud in building a robust
IT Service Management framework
Cloud Computing and its
Impact on ITSM Adoption
By Ashish Birla and Rahul Sinha
For a number of IT service managers, ITIL V3 is the most recognized framework for implementing IT Service Management (ITSM). The IT Infrastructure Library (ITIL) defines best practices that help IT service managers manage their services continuously with enhanced maturity. However, with the advent of cloud computing, a number of these IT service managers are wondering how their ITSM world should change to adapt itself to cloud computing. More intriguing still is whether ITSM is relevant at all in a cloud world.
These questions are becoming more and more relevant as both business and IT managers get increasingly excited about the possibilities of cloud computing. And why not? Some of the most common beneficial aspects of cloud computing are: (a) on-demand provision of software and information, leading to major cost savings with no or negligible initialization cost; (b) greater alignment of IT investments with what the business wants; (c) enhanced quality of user experience, with more flexibility in choices and lower time to automate; and (d) easy scalability as per business requirements.
However, even with all these advantages, the question that IT service managers are asking is whether cloud enables them in ITSM implementation and whether they have to make major changes in the way they have been operating over the years.
WHAT IS CLOUD COMPUTING?
Cloud computing is the term used to describe the delivery of IT services (software/information/applications) over the Internet to customers. The customers only see services and may have no knowledge of the implementation or infrastructure used by the service providers. Cloud computing also makes use of clouds, which are virtualized pools of computing devices and I/O resources (running on virtual machines). These services can be accessed over the Internet by the customers and are billed as per consumption. Cloud computing can be classified in different ways. Clouds can be classified by type of usage: public cloud or private cloud. They can also be classified by the type of services provided: SaaS (software/applications over a network), PaaS (tools/services and middleware to develop software solutions), and IaaS (infrastructure like storage and network). A typical cloud infrastructure is depicted in Figure 1.
MANAGING CLOUD AS A SERVICE
As a matter of fact, cloud computing can also be considered an IT service that is offered by a service provider. Hence an ITSM framework like ITIL V3 plays an immense role in delineating the various service management phases involved in cloud computing. Some of the typical service management phases that can be associated with cloud computing are described below.
Service Strategy: The first phase of the service lifecycle is service strategy. The cloud service provider, together with the business and IT decision makers, defines what services will add the most value for customers. Strategy questions include: What services would be provided through my cloud (a minimal or a more ambitious set of services)? What is their value to the customer, and what is the demand? How am I going to charge for these services? Post clarification on these questions, cloud providers move to the next stage.
Service Design: In this stage, cloud service providers define the templates and create and build the plan for providing cloud computing. An architecture plan is also prepared here.
Service Transition: Various types of cloud services are defined and the service catalogue prepared. This service catalogue will be made visible when the infrastructure is in place. Subscriptions are offered to potential customers and the provision of services is initiated.
Service Operations: The phase where the customer starts accessing the service, and the major responsibility of the
Figure 1: Cloud Computing Architecture (service management and costing models layered over a service portfolio and catalog with process databases, virtual machines and applications, VMware-based infrastructure management, and physical IT resources). Source: Infosys Research
cloud provider is to ensure that service level agreements (SLAs) are met. Any termination of a service contract is also managed in this phase.
Continuous Service Improvement (CSI): In this phase, new value is provided to the customer through design improvements, new service introduction and operation.
From an IT service provider's point of view, utilization of cloud will have its own benefits but will also change the way IT services are provided to the customer and the way they are managed by the IT service provider. The impact will be felt in all the lifecycle phases, from service strategy to continual service improvement. However, the good thing to note is that it does not require a gigantic change in the way the processes are delivered. One of ITIL's assets is its agility, and the same can be leveraged for cloud computing as well.
SERVICE STRATEGY: CONSEQUENCES
This phase consists of the ITSM processes of service portfolio management, demand management and financial management. It is at this phase that the IT service provider will make the decision of whether to go for a cloud based service or not. The compelling strategic questions at this level would be: Do we have to realign services and priorities to make cloud based services a reality? Can we afford it? What is it worth to the business and to the customer? What level of cloud imposition will happen in the way the IT services are provided currently? What are the capabilities of my organization and what is the demand for the services?
Once the IT service provider has decided to go ahead with cloud based services, the next step is to identify the service solutions by reviewing and scrutinizing the services provided by various cloud computing providers. The clarity that needs to emerge in this phase is on the services that will be offered through the cloud. Described below are the typical changes in the service strategy processes.
Service Portfolio Management: Inclusion of cloud based services in the portfolio along with details on spending; a clear demarcation between what is being provided using cloud and what is not; cloud partners and the type of services being provided by them; and the value being generated out of these services. Also to be included in the service portfolio are details about the type of service bundles (managed services, shared services or pay-as-you-use services), as these are the typical attributes of any cloud based service. Update the portfolio for any policy change as well.
Financial Management: The goals of financial management remain the same even with the inclusion of cloud computing, i.e., providing the business and IT with the value of IT services. As companies struggle to identify, document and agree on the value of erstwhile IT services, cloud computing with its utility based costing multiplies this challenge, and it becomes all the more imperative to develop a meaningful cost per unit of service and a charging model (a unit-cost sketch follows this list).
Demand Management: Analyze the demand for the XaaS based model of the services being planned to be made available on the cloud platform. A challenge with cloud based services is the fluctuation in the demand for such services due to their inherent nature. For example, a payroll application will see higher demand toward the end of the month as compared to the rest of the month. Ascertaining this variation is the major element of the demand management process.
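The unit-cost sketch promised under financial management above: spreading fixed costs over forecast consumption and adding the variable cost gives a defensible cost per unit. All figures are hypothetical; the peak/off-peak split echoes the payroll example.

    def cost_per_unit(fixed_monthly, variable_per_unit, forecast_units):
        """Spread fixed costs over forecast demand and add the variable cost."""
        return fixed_monthly / forecast_units + variable_per_unit

    # Hypothetical payroll-style service with an end-of-month demand spike.
    FIXED, VARIABLE = 5000.0, 0.10
    print(f"off-peak unit cost: ${cost_per_unit(FIXED, VARIABLE, 2000):.2f}")
    print(f"peak unit cost:     ${cost_per_unit(FIXED, VARIABLE, 10000):.2f}")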
SERVICE DESIGN
In this stage, services and service management processes are designed and developed. Other supporting structures like IT plans, processes, policies, architecture, frameworks, metrics, service level agreements (SLAs), etc., are also developed in this stage. Key ITSM processes of this phase are discussed below.
Service Catalogue Management: Include the cloud based services in your service catalogue and ensure that the services are defined with appropriate interconnects with the service portfolio. Populating the service catalogue is a vital task, as it will help the service provider make certain that the costs of the cloud based services have been suitably and accurately determined and are published and communicated to the customers.
Service Level Management (SLM): While traditional SLM requires the service provider to ensure that service levels have been defined and agreed with the customer, it is the operational level agreements (OLAs) that are more imperative here. Pertinent OLAs with the cloud provider will ensure that you are able to deliver on the SLAs that have been agreed upon with the end customers (see the sketch after this list).
Capacity Management: In line with the demand management forecasts, the service provider shall ensure that the cloud provider has sufficient infrastructure in place to meet the capacity demands. At times this might be difficult to determine, as the cloud platform is essentially a shared services model. However, this can be assured by having resolute OLAs in place.
Availability Management: One of the benefits of providing cloud based ITSM processes is assuring high availability to customers, made possible by the grid based architecture of the cloud. However, the poser before the IT service provider is the more or less obscure nature of the exact infrastructure deployed by the cloud service provider. The IT service manager can overcome this challenge by ensuring that the SLAs signed with the cloud provider are definite and effective.
Service Continuity Management: Cloud brings an additional set of risks when it comes to planning for IT service continuity. The service managers shall take into consideration the chances of a disaster happening not only at their business site but also at the cloud service provider's site. A cloud based service is typically a high risk service, and the IT service continuity plans shall contain business continuity arrangements for the event of a disaster happening at either site. Though the service levels can take care of this concern, the IT service managers shall ensure that highly critical services with a low recovery point objective (RPO) or recovery time objective (RTO) have formidable continuity plans, independent of those of the cloud providers. Some of the aspects to be checked regarding cloud providers' continuity plans are: What are the RPO and RTO for services? What are the communication plans in the event of a disaster? Is any secondary site in place? Is it a hot/warm/cold site?
Information Security Management: Information security is still a priority concern for all customers of cloud computing. The concentration of a large number of resources for cloud computing increases the security risks, though it also helps in reducing the cost of the overall service. A basic but still major issue for the IT service managers is whether the cloud protects the confidentiality, integrity and availability of the information it contains. Some of the checkpoints for the IT service managers are the ability of the cloud provider to provide encryption; authentication services; availability of forensic images; and the cloud provider's framework on information security.
Supplier Management: As cloud computing is still a relatively new service, the cloud provider may be short of the required maturity to handle complex business requirements. This necessitates superior and meticulous supplier management practices. IT service managers should ensure that they select the right cloud providers who understand the importance of their services. The prices are usually fixed based on service levels, for example, platinum/gold/silver rates based on the hours of service.
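The sketch referenced under service level management above: a simple consistency check that the OLAs agreed with the cloud provider can actually carry the customer-facing SLAs. Targets are hypothetical.

    # Hypothetical customer-facing SLA targets and provider-facing OLA commitments.
    SLAS = {"availability_pct": 99.5, "incident_response_min": 30}
    OLAS = {"availability_pct": 99.9, "incident_response_min": 15}

    def ola_gaps(slas, olas):
        """Return the SLA targets that the underlying OLAs cannot support."""
        gaps = []
        if olas["availability_pct"] < slas["availability_pct"]:
            gaps.append("availability")
        if olas["incident_response_min"] > slas["incident_response_min"]:
            gaps.append("incident response")
        return gaps

    gaps = ola_gaps(SLAS, OLAS)
    print("OLA gaps:", gaps if gaps else "none - the OLAs can carry the SLAs")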
SERVICE TRANSITION
In this phase, services are transitioned from the design board to operations, and the requirements of service strategy encoded in service design are effectively realized in service operations while controlling the risks of failure and disruption. The major ITSM processes in this phase are discussed below.
Change Management: To ensure that changes are implemented without disrupting operations, it is imperative that the IT service managers and cloud providers understand their respective roles and responsibilities. Prepare a detailed Responsibility Assignment Matrix (RAM) chart that gives information about who will be responsible for which phases of change management (a sketch follows this list). Typically, the change request would come from the customer, and the responsibility for classification, evaluation and approval would still lie with the IT service managers. However, the cloud provider shall be involved throughout this process, especially during review meetings. The arrangement for actual implementation and scheduling will be a responsibility of the cloud provider.
Release and Deployment Management: Coordination is of extreme importance during release management. A release window shall be decided upon before the cloud provider implements any major release, to ensure that the IT service managers can make plans to reduce any impact on the end customer. Both parties shall collaborate on the release of any changes in the cloud environment.
Service Asset and Configuration Management: One of the most complex processes becomes more complex with the introduction of cloud in the ITSM environment. There are a number of challenges envisaged due to the introduction of the cloud environment, for example, the new set of configuration items (CIs)/virtual assets, the relationships between these CIs, and the high rate of change in these CIs. A highly matured configuration management system (CMS) that establishes relationships at virtual levels, an effective discovery tool, and a competent configuration management database (CMDB) tool are required for the service asset and configuration management process.
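The RAM chart sketch referenced under change management above; the phase names and the split of responsibilities between the IT service manager and the cloud provider are illustrative.

    # Illustrative RAM/RACI chart for change management phases.
    # R = Responsible, A = Accountable, C = Consulted, I = Informed.
    RAM_CHART = {
        "classification": {"it_service_manager": "R", "cloud_provider": "C"},
        "evaluation":     {"it_service_manager": "R", "cloud_provider": "C"},
        "approval":       {"it_service_manager": "A", "cloud_provider": "I"},
        "scheduling":     {"it_service_manager": "I", "cloud_provider": "R"},
        "implementation": {"it_service_manager": "I", "cloud_provider": "R"},
    }

    def holders_of(role_code, chart):
        """List which party holds a given RACI role in each phase."""
        return {phase: [party for party, code in roles.items() if code == role_code]
                for phase, roles in chart.items()}

    print(holders_of("R", RAM_CHART))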
SERVICE OPERATIONS
The purpose of service operation is to coordinate and carry out the activities and processes required to deliver and manage services at agreed levels to business users and customers. Well-designed and well-implemented processes will be of little value if the day-to-day operation of those processes is not properly conducted, controlled and managed. Key processes pertinent to this phase are discussed below.
Incident Management: This is one of the most important and fundamental ITSM processes, and it is trickier to handle with cloud computing as the backdrop. Important things to consider while implementing incident management are the availability of a highly matured service desk, a knowledge base and a sturdy CMS. However, this is easier said than done, as the virtual machines in a cloud are built and flattened at tremendous speed. Tracking the cause of an incident becomes more difficult, and a highly collaborative model between the IT service provider and the cloud provider is required. Start by ensuring that you have a clear comprehension of the CIs that are used by the cloud provider and that the scope of work is clearly demarcated. To reduce risk, IT service managers can also ensure that highly critical businesses are not directly affected by cloud services unless there is a matured incident management process in place.
Problem Management: While the overall responsibility for problem management would lie with the IT service providers, the cloud provider shall facilitate identifying problem resolutions to prevent recurring incidents. Develop an engagement plan to ensure that the cloud provider becomes part of the process at the right stage.
Event Management: Typically, cloud providers use a number of tools to monitor events. IT service managers shall collaborate with them to make certain that they have visibility into the cloud provider's monitoring system. It is also important to compose a list of highly critical events, on the occurrence of which the IT service managers shall be flagged to prepare for any untoward eventualities (a sketch follows this list).
Access Management: One of the valuable aspects of cloud based services is the provision of secure access globally, through the Internet or on the cloud network. Develop a robust access control scheme taking these points into consideration and confirm that the cloud providers have built this scheme into their services.
Request Fulfillment: While the request fulfillment process remains more or less the same from the business users' point of view, the IT service managers benefit from the faster fulfillment of certain requests, like the provision of resources.
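The sketch referenced under event management above: a simple triage filter over the cloud provider's event feed that surfaces only the agreed list of highly critical events. Event names are hypothetical.

    # Hypothetical list of events critical enough to flag the IT service manager.
    CRITICAL_EVENTS = {"hypervisor_failure", "storage_pool_exhausted", "sla_breach"}

    def triage(event_feed):
        """Pass through the provider's feed, surfacing only critical events."""
        return [event for event in event_feed if event["type"] in CRITICAL_EVENTS]

    feed = [
        {"type": "vm_migrated", "ci": "vm-17"},
        {"type": "hypervisor_failure", "ci": "host-03"},
    ]
    for event in triage(feed):
        print(f"FLAG: {event['type']} on {event['ci']}")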
CONTINUOUS SERVICE IMPROVEMENT
The last phase of the IT service lifecycle is CSI, which provides instrumental guidance in creating and maintaining value for customers through better design, introduction and operation of services. Conventional CSI does not have much control over the improvements that can be introduced in a cloud environment. However, the need of the hour is to develop robust and adequate key performance indicators (KPIs) and reporting parameters for the cloud provider to follow. Some key CSI processes are discussed below.
Service Measurement: As discussed earlier, IT service managers shall strive to put effective metrics in place that will help the overall decision making process. Identify the key measurement points that are critical to the overall service delivery and ensure that service objectives are being met. Some of the KPIs for a cloud provider can be SLA response error rate, services downtime, internal service satisfaction level and data quality (a sketch follows this list).
Service Reporting: Discuss and firm up the reports one would need from cloud providers. Also decide the information that would go into the reports, alongside the frequency of the reports.
7-Step Improvement Process: Though there would not be any changes in the 7-step process for identifying improvements, the scope would now include the cloud computing services as well. Using cloud computing, one can roll out new improvements at a faster rate. However, it should be ensured that the cloud provider understands the business requirements for new improvements and that they are rolled out as per the plan.
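The sketch referenced under service measurement above: two of the named KPIs, SLA response error rate and services downtime, reduce to simple ratios. The monthly figures are hypothetical.

    def sla_response_error_rate(responses_total, responses_late):
        """Share of provider responses that missed the agreed SLA window."""
        return responses_late / responses_total

    def downtime_pct(minutes_down, minutes_in_period):
        """Downtime as a percentage of the reporting period."""
        return 100.0 * minutes_down / minutes_in_period

    # Hypothetical monthly figures reported by the cloud provider.
    print(f"SLA response error rate: {sla_response_error_rate(400, 12):.1%}")
    print(f"services downtime: {downtime_pct(43.2, 30 * 24 * 60):.3f}% of the month")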
CONCLUSION
A number of IT managers are still looking at cloud and considering it as an option for leveraging its advantages. As per an IDC Webinar, 10% of IT applications/services would be delivered via the cloud by 2013 [1]. These managers are sure to be considering how it is going to affect their current IT services. A lot of them believe that ITSM/ITIL would lose its relevance with the increase in migration towards cloud. Cloud also comes with its own set of risks, like security concerns and governance and control issues. Another question being asked on similar lines is about the excessive number of change and release requests and the way to manage them. We believe that these can be handled by more automated and streamlined processes with
faster turnover. It will be a mixed world where
both ITIL and cloud would co-exist, as while
cloud is a way of facilitating a service, ITIL is
all about managing that service.
REFERENCES
1. Mahowald, R. P. (2010), As-A-Service: Building a SaaS Business, IDC Webinar, Framingham, United States. Available at www.idc.com/getdoc.jsp?containerId=IDC_P20980.
2. ITIL V3 Life Cycle Phases, OGC.
3. Pignault, O. (2010), No, the CMDB is NOT Irrelevant in a Virtual and Cloud Based World, IT Service Management Blog of HP. Available at http://h30499.www3.hp.com/t5/IT-Service-Management-Blog/No-the-CMDB-is-NOT-Irrelevant-in-a-Virtual-and-Cloud-Based-World/ba-p/2410374.
4. Pattnaik, R., Makhija, D. and Radhakrishnan, R. (2010), Cloud Management - A New ITSM Process, Presentation at the Open Group Conference, Boston. Available at https://www.opengroup.org/...live/.../Boston-CloudManagement-final.pdf.
5. Cloud Security Alliance Report (2009), Security Guidance for Critical Areas of Focus in Cloud Computing V2.1. Available at https://cloudsecurityalliance.org/csaguide.pdf.
Brace up to embrace the next generation of Internet
Framework to Counter
Challenges of Transition to IPv6
By Ashish Birla
Given the pace at which technology is advancing and being adopted in today's blink-and-you-miss-it world, the crunch in IP address space in the IPv4 era has nagged technologists, businesses and end users alike. However, with the emergence of the next generation Internet Protocol (IP), IPv6, the world has awarded itself yet another unending foray into the IP address space.
IPv4 is history. The future belongs to IPv6. However, the transition from IPv4 to IPv6 is not devoid of its share of problems. Infrastructure management during this transition is bound to bother many a technologist globally. Transitioning from an IPv4 to an IPv6 enabled world would test IP readiness in terms of:
- Ability to transmit Internet IPv6 traffic, through the network backbone, to the internal LAN.
- Ability to transmit LAN IPv6 traffic, through the network backbone (core), out to the Internet.
- Ability to transmit LAN IPv6 traffic, through the network backbone (core), to another LAN (or another node on the same LAN).
In some ways these challenges can be likened to the Y2K challenges that businesses faced at the turn of the millennium.
BACKGROUND: IPV4 UNDONE BY ITS OWN SUCCESS
The Internet Protocol, the underlying communications protocol for the Internet, is a part of the TCP/IP protocol stack. TCP/IP is a set of protocols that allows large, geographically diverse networks of computers to communicate with each other quickly and economically over a variety of physical links. An Internet Protocol address (IP address) is the numerical address by which a termination point in the Internet is identified.
IPv4 was developed in the mid '70s and early '80s. An IPv4 address is 32 bits long and theoretically provides 4 billion addresses. This number was considered more than sufficient, since such widespread usage of the Internet and mobile devices was not envisioned. In the words of Vint Cerf, one of the inventors of the Internet, "It was enough to do an experiment; the problem is the experiment never ended."
Since its invention, the Internet has grown beyond its anticipated usage, and even while Internet penetration is just under 25% of the world's population, the IPv4 address space got exhausted.
In the early 1990s, the Internet Engineering Task Force (IETF) initiated an effort towards selecting a new protocol. Projects were assigned an IP version number by the Internet Assigned Numbers Authority (IANA). Of these, IPv6 was selected by the IETF as the next generation Internet Protocol.
IPV6 FEATURES
Large Address Space: This provides for peer-to-peer communication across networks and always-on connections for wired and wireless devices. With its 128 bit addresses, it provides 340 trillion trillion trillion, or 340 undecillion, addresses, translating to 655,570,793,348,866,943,898,599 addresses for every square meter of the Earth's surface.
For the purpose of comparing address space sizes, if all IPv4 addresses could fit within a PDA/Blackberry, it would take something the size of the Earth to contain all IPv6 addresses.
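The scale of the comparison is easy to verify. The per-square-meter count depends on the surface-area figure one assumes, so the sketch below (using roughly 510 million square kilometers for the Earth) lands in the same order of magnitude, 10^23, as the number quoted above.

    ipv4_addresses = 2 ** 32    # about 4.3 billion
    ipv6_addresses = 2 ** 128   # about 340 undecillion
    earth_surface_m2 = 510_072_000 * 10 ** 6  # ~510 million km^2 in m^2

    print(f"IPv4 addresses: {ipv4_addresses:,}")
    print(f"IPv6 addresses: {ipv6_addresses:,}")
    print(f"IPv6 addresses per square meter: {ipv6_addresses // earth_surface_m2:,}")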
Efficient Header Format: The fixed header length makes routing efficient and reduces delays in case of fragmented packets.
Hierarchical Addressing and Routing Infrastructure: Avoids usage of Network Address Translation (NAT), thus making routing efficient. It allows every host to have its own public IPv6 address.
Auto Configuration: Supports mobility.
Better Support for Quality of Service (QoS): Special fields in the header allow for better quality voice and video transmission with minimum delays.
Improved Internet Control Message Protocol (ICMPv6): Provides neighbor discovery and auto-configuration. Integration with Internet Protocol Security (IPSec) protects against ICMP based attacks.
Dynamic Host Configuration Protocol (DHCPv6): While DHCP is used primarily for dynamic IP address allocation in IPv4, in IPv6 it provides several improvements, such as stateless auto-configuration, use of multicasting, etc.
IPv6 Address Types: Unicast, Anycast and Multicast. Multicast addresses replace broadcast addresses by identifying a group of interfaces.
Extensibility: Extension headers after the IPv6 header can be used to extend new features. Unlike the IPv4 header, which can only support 40 bytes of options, the size of IPv6 extension headers is constrained only by the size of the IPv6 packet.
Integrated IPSec: Enhances security via its two integrated options: the Authentication Header (AH) for authentication and integrity; and the Encapsulating Security Payload (ESP) for integrity and confidentiality.
BUSINESS DRIVERS
Sooner rather than later, organizations will have to adopt IPv6. In addition to the depletion of IPv4 addresses, there are some interesting business drivers pushing organizations to quickly plan for the transition to IPv6. Some such drivers are listed below.
Tuning in to the Future: IPv6 is the future of the Internet and of IT services, especially emerging ones like cloud computing, SaaS, peer-to-peer applications and mobile computing. The vast pool of IPv6 addresses, along with the enhanced features that come with it, means that we are yet to see the best of the Internet. IPv6 gives more individuals across the planet the power to realize the untapped potential of the Internet.
Increased Reliance on IT: When the Internet was first invented, the requirements of data traffic did not require each device to be a uniquely addressable entity. However, in this age of broadband and 3G networks, where anything and everything is designed to run on the Internet, network devices on wired or wireless networks need dedicated public facing IP addresses.
Future Growth and Expansion Plans: Many organizations seem to have purchased a pool of enough IPv4 addresses to support their existing plans and mitigate the risks associated with IPv4 exhaustion; such a pool may not do justice to unexpected opportunities for rapid growth. Furthermore, organizations willing to roll out new services on a large scale may end up acquiring IPv6 addresses because large blocks of IPv4 addresses are no longer available.
Regulatory and Industry Compliance: More and more governments today are considering bringing in regulations to make IPv6 deployments mandatory in the platforms and applications pertaining to e-governance. Being IPv6 ready in order to qualify as a supplier of IT services or products could soon become a de facto standard.
Cost Effectiveness: The cost of a planned transition vis-à-vis a forced one (whether for business reasons or regulatory ones) is comparably less. In addition, starting early will also give network administrators and managers an opportunity to gain experience in handling a dual set of protocols, and eventually help in getting ready to support IPv6 networks.
Communication Portability: The need for businesses to communicate with their existing and potential customers is no longer limited to traditional computing devices such as PCs and laptops. Mobile devices are increasingly being used as the computing devices of choice, and given that they are IPv6 compatible, organizations need to upgrade to IPv6 to communicate with these devices.
Fueling the Economy through the Internet: According to some estimates, IPv4 address allocation amongst countries is as follows: US 37%, UK 11%, China 7%, Japan 7%, India 0.7%. It can be argued that the growth of some of these economies is not only going to propel the growth of the Internet, but that the Internet will act as an engine of growth for these economies. IPv6 will ensure there is no hindrance to the economic growth of these countries, or of the world economy as a whole.
WHAT NEXT?
Businesses would want to start early to have a planned and phased evolution rather than an abrupt and expensive one. It would make a lot of sense for businesses to transition to IPv6 by choice and thereby support business expansion plans. Businesses can start with public facing infrastructure first and then move to other parts of the network gradually.
Various organizations have created test beds and networks to allow for real time testing of products/applications on IPv6 networks. Some examples are Moonv6, 6-DEPLOY and 6bone. Equipment and software vendors like Cisco, HP, Microsoft and Apple have been working closely on IPv6 development as well as making their products ready for IPv6.
KEY CHALLENGES
While IPv6 is set to become the protocol for the next generation of the Internet, it is also true that IPv4 will not be replaced by IPv6 in one fine day. The need for co-existence during the period of transition from IPv4 to IPv6 brings the challenges associated with any such transition.
Businesses will have to ensure business continuity and growth with existing and new infrastructure. They need to prepare a business case and acquire budget for investment in IPv6 ready infrastructure. They will also need to devise and support a comprehensive deployment strategy to stay connected during and after the transition.
IT managers will be concerned with ensuring maximum re-use of infrastructure components and training of the infrastructure support staff for the transition and steady state support.
IT infrastructure and network architects will have to worry about deciding on transition mechanism(s), maintaining interoperability and security during the transition, and integrating IPv6 into architecture planning.
Various mechanisms/techniques can be adopted for the transition. The Simple Internet Transition (SIT) was developed along with IPv6. SIT is a set of protocol mechanisms specifically designed to facilitate transitioning the Internet from the IPv4 to the IPv6 protocol. SIT mechanisms include: (a) dual IP layer/dual stack - availability of both IPv4 and IPv6 stacks in a device at the same time in an operating system, either independently or in a hybrid form; (b) translation - network traffic originating under IPv4 is converted into IPv6 and vice versa; and (c) tunneling - encapsulating IPv6 packets inside IPv4 packets or vice versa.
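Mechanism (a), the dual IP layer, is visible even at application level: on a dual-stack host, resolving a name returns both IPv6 and IPv4 results, and a client can try them in order. A minimal sketch using Python's standard socket module (the hostname and port are placeholders):

    import socket

    def connect_dual_stack(host, port):
        """Try each resolved address in order; dual-stack resolvers typically
        list IPv6 (AF_INET6) results before IPv4 (AF_INET) ones."""
        last_error = None
        for family, socktype, proto, _, addr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(addr)
                return sock
            except OSError as exc:
                last_error = exc
        raise last_error or OSError("no addresses resolved")

    sock = connect_dual_stack("www.example.com", 80)
    print("connected over", "IPv6" if sock.family == socket.AF_INET6 else "IPv4")
    sock.close()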
SMART TRANSITION FRAMEWORK
We propose a smart transition framework to overcome the challenges of the transition to IPv6. The three stages of this framework - discover, design and deploy - are explained below.
Discover
In this stage, an assessment of current infrastructure components is conducted using a toolkit (Table 1) to determine IPv6 readiness, calculate hardware and software costs and estimate manpower costs, including an inventory of the entire IT environment.
Answers to questions like "Do organizations have a business case for IPv6?", "How ready for IPv6 are they?", "Have they budgeted for the transition cost?" and "What help would be required to undertake the transition?" will help in generating an IPv6 business case as well as a very high level transition budget for organizations. During this stage it is important to gather technical and business requirements so as to align IT strategy with business strategy. This exercise should help organizations identify priority or low risk areas in which to begin the transition to IPv6.
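The discover-stage toolkit lends itself to simple automation: classify each inventoried component by its readiness status (the status-to-cost mapping mirrors Table 1) and roll up the cost categories the budget must cover. The inventory records are hypothetical.

    # Status-to-cost mapping mirroring Table 1.
    COSTS_BY_STATUS = {
        "running_ipv6": set(),
        "needs_configuring": {"manpower"},
        "needs_software_upgrade": {"software", "manpower"},
        "needs_hardware_upgrade": {"hardware", "software", "manpower"},
        "legacy_replace": {"hardware", "software", "manpower"},
        "to_be_retired": set(),
    }

    # Hypothetical inventory gathered during the discover stage.
    INVENTORY = [
        {"component": "server-ibm-power", "status": "running_ipv6"},
        {"component": "router-cisco-edge", "status": "needs_configuring"},
        {"component": "firewall-checkpoint", "status": "needs_software_upgrade"},
    ]

    def budget_categories(inventory):
        """Aggregate the cost categories the transition budget must cover."""
        needed = set()
        for item in inventory:
            needed |= COSTS_BY_STATUS[item["status"]]
        return sorted(needed)

    print("budget categories:", budget_categories(INVENTORY))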
Design
A detailed design phase should be carried out to integrate IPv6 seamlessly into the existing network. The activities undertaken are the development of a transition plan, testing IPv6 in a lab environment with a pilot program, and preparations for the transition.
Deliverables of this phase may include a detailed network design, an IPv6 transition strategy and plan, test plans, etc. At the end of this phase, organizations may expect a Bill of Material (BOM) for procuring the necessary software and hardware. This phase will also firm up the budget estimations for the transition.
Deploy
This is the execution of the IPv6 strategy and plan. Training has to be provided to support staff as well, for handing over support for the steady state. The activities of the deploy phase include implementing IPv6 seamlessly, and validating and testing every step of the implementation. The deliverables and services of this phase may include the implementation plan, test cases and plans, and development, in addition to the actual implementation.
CONCLUSION
In the last few years, depleting IPv4 addresses were given a new lease of life by inventions such as Network Address Translation (NAT) and Classless Inter-Domain Routing (CIDR), allowing the world to get ready for IPv6.
While the transition to IPv6 may have been an option until now, it is not an option any more. This transition, as with any other change, is full of challenges. The key to a successful IPv6 transition is compatibility with the large installed base of IPv4 hosts and networking equipment. We opine that the key to streamlining the task of transitioning the Internet to IPv6 is maintaining compatibility with IPv4 while it lasts. This requires careful planning and consideration of various aspects, along with a robust transition mechanism, for businesses to undertake a successful transition and embrace the new protocol.
Table 1: Sample IPv6 Assessment Toolkit. Source: Infosys Research

Router Status                                                             | Costs
IPv6-compliant and currently running IPv6                                 | None
IPv6-compliant but device needs to be configured for IPv6                 | Manpower
Requires software upgrade for IPv6 compliance                             | Software and Manpower
Requires hardware upgrade to support software upgrade                     | Hardware, Software and Manpower
Legacy platform: cannot be upgraded to support IPv6 and must be replaced  | Hardware, Software and Manpower
Will not be upgraded due to planned discontinuation                       | None

Infra Component | Manufacturer | Series #/Model # | IPv6 Readiness
Server          | IBM          | Power PS         | IPv6 compliant and running IPv6
Router          | Cisco        |                  | IPv6 compliant but needs configuring
Firewall        | Checkpoint   |                  | IPv6 software upgrade needed
REFERENCES
1. Internet Corporation for Assigned Names and Numbers. Available at http://icann.org.
2. IPNG Working Group Charter. Available at http://www.ietf.org/html.charters/ipngwg-charter.html.
3. IPv4 Address Allocation Report. Available at http://www.ip2location.com/ip2location-internet-ip-address-2008-report.aspx.
4. Young, J. (2010), IPv4 Address Exhaustion: An Inconvenient Truth. Available at http://www.gartner.com/technology/research/burton-group.jsp?cid=1534.
5. Siegel, E. (2010), Changeover to IPv6: The Deadline Approaches, Gartner Report ID No. G00204000. Available at www.gartner.com/DisplayDocument?id=1405810.
6. Rickard, N. (2010), Internet Protocol Version 6: It's Time for (Limited) Action, Gartner Report ID No. G00209084. Available at www.gartner.com/DisplayDocument?id=1488122.
Index
Application Performance Management, also
APM 4-7
Aspect Oriented Programming, also AOP 6-8
Authentication Header, also AH 54
Blackberry 10, 54, 41
Bill of Material, also BoM 57
Building Management System, also BMS 20, 23
Cisco 11, 56-57
Classless Inter-Domain Routing, also CIDR 57
Configuration Items, also CIs 44, 50
Continuous Service Improvement, also CSI 47, 51
Critical Infrastructure Protection, also CIP 34
Digital Smart Home Gateway 12
Direct Access Storage Device, also DASD 4
Dynamic Host Configuration Protocol, also DHCPv6 54
Encapsulating Security Payload, also ESP 54
Enterprise Resource Planning, also ERP 16, 35, 37
Google PowerMeter 11
Heating, Ventilating and Air Conditioning, also
HVAC 13
Improved Internet Control Message Protocol,
also ICMP 54
Independent Software Vendors 14
Information Technology Infrastructure Library,
also ITIL 27, 28, 39-44
Internet Assigned Numbers Authority, also
IANA 54
Internet Engineering Task Force, also IETF 54
Internet Protocol, also IP 54-58
Internet Protocol Security, also IPSec 54
iPhone 10
IT Service Management, also ITSM 34, 35, 39-41, 43-44
Key Performance Indicator, also KPI 44
Layer
Application 35, 37
Communication Infrastructure 35-36
Energy Infrastructure 35-36
Intelligence and Analytics 36-37
IT Infrastructure 33-36
Management
Access 35
Availability 48
Capacity 48
Change 49
Configuration 27, 50
Demand 41, 47-48
Event 50
Financial 43, 47
Incident 40, 50
Information Security 49
Problem 50
Release and Deployment 49
Services Catalogue 50
Service Asset and Configuration 50
Supplier 49
Test Environment 27-28
Master Data Management, also MDM 35, 37
Microsoft Hohm 11
Monitoring
Application 3-5, 7
Rack Level 20
System 4, 6
Utilization 20, 22
Near Field Communication, also NFC 10
Network Address Translation, also NAT 54
Network Management System, also NMS 35
Quality of Service, also QoS 54
Radio Frequency Identification, also RFID 10
Remote Terminal Unit, also RTU 35
Roomba 10
Service
Design 40
Operations 43
Orchestration 43
Integration 43
Portfolio Management 47
Measurement 51
Reporting 51
Strategy 46
Transition 46
Service Level Agreement, also SLA 48
Simple Internet Transition, also SIT 56
Smart Grid Management System, also SMS 34, 35
Spring 5
Supervisory Control and Data Acquisition, also
SCADA 35
Systems
Demand Response Management 37
Load Management 37
The Energy Independence and Security Act of
2007, also EISA 2007 34
Wireless Sensor Networks, also WSN 10
BUSINESS INNOVATION through TECHNOLOGY
Editorial Office: Infosys Labs Briefings, B-19, Infosys Ltd.
Electronics City, Hosur Road, Bangalore 560100, India
Email: InfosyslabsBriefings@infosys.com http://www.infosys.com/infosyslabsbriefings
© Infosys Limited, 2011
Infosys acknowledges the proprietary rights of the trademarks and product names of the other companies mentioned in this issue. The information provided in this document is intended for the sole use of the recipient and for educational purposes only. Infosys makes no express or implied warranties relating to the information contained herein or to any derived results obtained by the recipient from the use of the information in this document. Infosys further does not guarantee the sequence, timeliness, accuracy or completeness of the information and will not be liable in any way to the recipient for any delays, inaccuracies, errors in, or omissions of, any of the information or in the transmission thereof, or for any damages arising therefrom. Opinions and forecasts constitute our judgment at the time of release and are subject to change without notice. This document does not contain information provided to us in confidence by our clients.
Editor
Praveen B Malla PhD
Consulting Editor
Madhukar I B
Deputy Editor
Yogesh Dandawate
Graphics & Web Editor
Rakesh Subramanian
IP Manager
K V R S Sarma
ITLS Manager
Ajay Kolhatkar PhD
Program Manager
Arvind Raman
Marketing Manager
Gayatri Hazarika
Online Marketing
Sanjay Sahay
Production Manager
Sudarshan Kumar V S
Database Manager
Ramesh Ramachandran
Distribution Managers
Santhosh Shenoy
Suresh Kumar V H
How to Reach Us:
Email: Infosyslabsbriefings@infosys.com
Phone: +91 40 44290563
Post: Infosys Labs Briefings, B-19, Infosys Ltd. Electronics City, Hosur Road, Bangalore 560100, India
Subscription: infosyslabsbriefings@infosys.com
Rights, Permission, Licensing and Reprints: praveen_malla@infosys.com
Infosys Labs Briefings is a journal published by Infosys Labs with the objective of offering fresh perspectives on boardroom business technology. The publication aims at becoming the most sought after source for thought leading, strategic and experiential insights on business technology management.
Infosys Labs is an important part of Infosys' commitment to leadership in innovation using technology. Infosys Labs anticipates and assesses the evolution of technology and its impact on businesses, enables Infosys to constantly synthesize what it learns and catalyze technology enabled business transformation, and thus assumes leadership in providing best of breed solutions to clients across the globe. This is achieved through research supported by state-of-the-art labs and collaboration with industry leaders.
About Infosys
Many of the world's most successful organizations rely on Infosys to deliver measurable business value. Infosys provides business consulting, technology, engineering and outsourcing services to help clients in over 32 countries build tomorrow's enterprise.
For more information about Infosys (NASDAQ: INFY), visit www.infosys.com
RAHUL SINHA is a Senior Consultant with the MFG-IMS unit's Infrastructure Transformation Services practice at Infosys. He can be reached at rahul_sinha08@infosys.com
RENJITH SREEKUMAR is a Principal Consultant with the Infrastructure Transformation Consulting Practice within Infosys. He is working extensively on developing infrastructure consulting and transformation solutions. He can be contacted at Renjith_sreekumar@infosys.com
VAIBHAV BHATIA is a Senior Consultant with the Infrastructure Transformation & Green-IT practice at Infosys. He can be reached at vaibhav_bhatia@infosys.com
History and historicity have often determined our acceptance of, or resistance to, change. Well established structures, systems and rules get entrenched in our psyche, give rise to path dependent behaviors and refuse to fade away. Enterprises that have steered clear of path dependence are the ones that have embarked on a transformation journey.
We could not have identified a more appropriate theme than Infrastructure Management to explain why molting legacy is as important as embracing change. Businesses that seemed unstoppable a decade ago have failed to stay relevant to the current times. Imagine the huge dent that a simple mobile phone could create on multiple industries. Technologies that seemed robust enough to carry the woes of the world fall ridiculously short of expectations in today's world. Else who could fathom the need for the next generation of IP address space, IPv6, so soon? Business lifecycles are being increasingly replaced by industry lifecycles, and the boundaries between industry and businesses are getting more and more blurred.
While propounding new ideas is easy, the difficulty lies in choosing between the two biggest unsaid rules in the corporate world, viz., "if it ain't broke, don't fix it" and "reinvent to remain relevant". At Infosys Labs we chose the latter. While SETLabs Briefings enjoys a huge brand recall with you, given that at Infosys Labs we have transitioned from being a strong R&D department to world class IP co-creators, it was time for us to reinvent ourselves. In keeping with our reinvention philosophy, we have rechristened SETLabs Briefings. You have my assurance that the journal in its current form will continue to be relevant to contemporary issues as well as retain its rigor in proposing solutions to your business problems.
This issue is a collection of some contemporary ideas on Infrastructure Management. Why spend a fortune on application monitoring when there are ways to pare monitoring costs? We have put together two papers around this notion. While one discusses an approach to real time application monitoring, the other prescribes the musts for advanced monitoring of IT infrastructure.
How smart is your infrastructure? Have you leveraged the smartness of extant technologies? We have two interesting papers that discuss how smart infrastructure can help establish communication between a variety of complex systems and devices.
Of late, any compilation without papers on cloud seems to be incomplete. We realize how important a role cloud computing has come to play in today's distributed technology world, and therefore present to you our views on ways to manage the lifecycle of enterprise infrastructure on cloud as well as assess the impact of cloud computing on IT service management.
Is your infrastructure compatible with an IPv6 network? And most importantly, have you migrated to IPv6 technology? If yes, congratulations! Maybe you will have a story or two to share on your transition journey. Just in case you have not yet transitioned to IPv6, we have put together a simple framework for you to do so.
As usual, do write to me with feedback and suggestions.
Praveen B. Malla PhD
Editor
praveen_malla@infosys.com
Authors featured in this issue
Infosys Labs Briefings
Advisory Board
Anindya Sircar PhD
Associate Vice President
& Head - IP Cell
Gaurav Rastogi
Vice President,
Head - Learning Services
Kochikar V P PhD
Associate Vice President,
Education & Research Unit
Raj Joshi
Managing Director,
Infosys Consulting Inc.
Ranganath M
Vice President &
Chief Risk Officer
Simon Towers PhD
Associate Vice President and
Head - Center for Innovation for
Tomorrow's Enterprise,
Infosys Labs
Subu Goparaju
Senior Vice President &
Head - Infosys Labs
Molt Legacy,
reinvent relentlessly