
Musings on data centres

This report brings together a series of articles first published through SearchVirtualDataCentre.

January 2013

2012 was a year when organisations had to face up to the fact that the basis
of IT was beginning to change. Energy costs were rapidly escalating, new
technical architectures, such as cloud computing, were coming to the fore
and users were off doing their own things. The potential impact on the data
centre was massive and the following report pulls together articles written
by Quocirca for SearchVirtualDataCentre (now ComputerWeekly)
throughout 2012.

Clive Longbottom
Quocirca Ltd
Tel: +44 118 9483360
Email: Clive.Longbottom@Quocirca.com

Copyright Quocirca 2013


Musings on data centres


The data centre is facing big changes: it has to be more flexible in how it supports the business. The following report gives many
ideas on areas organisations should be looking at when reviewing how their data centre facilities operate.

BYOD is happening

Bring your own device (BYOD) is here to stay, and IT has to ensure that it is controlled, rather
than hidden in shadow IT. The data centre can play its role here: virtualised desktops and
centralised storage give greater control over an organisation's intellectual property.

The data centre has to be a value centre

For too long, the data centre has been regarded by organisations as a cost centre. It has to be
seen as a place where innovation happens; where value for the business is created. This requires
a change of viewpoint driven by IT. IT has to be less geeky: it must start talking in business
terms and avoid the actual technology. Otherwise, it runs the risk of being outsourced.

Flexibility is key

The data centre has to be able to grow and shrink to reflect what is happening in the business.
Embracing new architectures and going for more modular approaches to data centre builds can
help achieve this.

Energy costs need to be controlled

With energy prices fluctuating but trending upward, ensuring that the data centre is energy
optimised is important. Low-cost cooling, virtualisation and application consolidation can all help
here.

Modelling, monitoring and management will be key

A data centre has to be as hands-off as possible, with as much of the maintenance automated as
can be. The use of data centre infrastructure management (DCIM) tools, combined with building
information management (BIM) tools, can make sure that a data centre is well managed and that
it can be better planned through the use of "what if?" scenario capabilities.

Cloud computing brings much promise and many problems

Cloud computing is definitely going to change how organisations use IT and the impact on the
data centre will be massive. As well as ensuring that the data centre is physically fit for cloud,
organisations must ensure that security of information across hybrid clouds (private and public)
is maintained.

Outsourcing cannot include the strategy

Public cloud and software as a service (SaaS) are very appealing and can be a solid part of an IT
platform strategy going forward. However, this is not an opportunity to abdicate the overall
business IT strategy: this must still be driven by the business and the IT department. Use
external providers because they can do something that would be too difficult to do internally,
not for pure cost reasons.

Planning for data centres has to include what happens at the end

Data centres are expensive items to build, but will eventually become unfit for purpose. The
costs of continually retro-fitting equipment and of trying to force new capabilities into a facility
that was built to accommodate technology from a different era will finally become too expensive.
Therefore, the options available when decommissioning a data centre must be included in the
original design of the data centre and must be reviewed on a constant basis to ensure that
costs are minimised and residual values maximised.

Conclusions
The data centre has to be more of a focus for organisations now than it has been for some time. The impact of different forces on
the facility (energy, new architectures, BYOD and information security, to name but a few) means that IT has to make sure that the data
centre is fit for purpose, not just now, but for the foreseeable future.



Evolution of IT: rise of consumerisation and the need to control IP
With the growing popularity of mobile communication devices and the consumerisation of IT, the traditional one-size-fits-all enterprise device strategy is likely to fail. In this tip, expert Clive Longbottom explains why it is more important now
than ever before to revise your IT strategy and to develop a sound device management plan.
Shrinking devices; growing problems
Life used to be so easy when certain computer workloads were carried out on a central device (a server) and the
results of the workload were accessed through an endpoint that did very little in itself (a terminal).
Personal computers didn't radically change how information was managed. Although the end-point device became
intelligent and was able to store information, the device itself remained tethered to a fixed position.
Then we saw the rise of ever-smaller mobile devices. First, it was the luggable; then the laptop; then the rise (and fall)
of the personal digital assistant (PDA); and now the on-going rise of smartphones and tablets.
Problems introduced with such devices, such as the wide distribution of data, loss and theft of devices, and the fact
that data tended to be taken by ex-employees along with the device, are compounded as prices have dropped to levels
where they are seen as almost being disposable items. When laptops cost an average of £2,500, few employees would
dream of buying one themselves if the company weren't offering one funded by the business. But with tablets costing
less than £500 now, they are well within the purchasing power of employees.
Amid the growing mix of different device ecosystems and their entry into the enterprise, any attempt by an
organisation to maintain a one-size-fits-all device strategy, in order to validate that existing applications will
work across a known set of IT devices, will be doomed to failure. Quocirca has seen companies that purchase and
provision laptops and mobile phones for their employees, only to find that the employee strips the licence codes
from the laptop for use on their preferred make (e.g. buying a Sony Vaio rather than using the provided Lenovo ThinkPad),
transfers the SIM card from the provided phone into their choice of smartphone, and then uses the SIM card for
personal as well as corporate use. Chaos begins to ensue: now that many will have moved away from a standard,
Windows-based platform to iOS, Android or another operating system, tracking application usage and ensuring
standardisation across document formats and workflows can be difficult. At the smartphone level, individuals go for
their own contracts and expense them to the business, so the business gains neither the preferential business tariffs
nor the capability to aggregate bills to gain discounts for large usage. Pretending that consumerisation is not
happening can be expensive in real terms and in business productivity terms.
Steps to managing consumerisation of IT
It is possible to put in place a well-managed approach to such mixed-device environments. The key is to embrace the
dynamic evolution of technology. Do not create a strategy that is dependent on certain device types. For example,
having a strategy that sounds modern, in as much as it supports the Apple iPad, is OK for a moment in time, but with
Android tablets becoming competitive, you will need to review capabilities and continually re-code device apps just
to keep up. Therefore, any strategy must be open, and the good news is that strong industry standards (HTML5, Java,
VPN security and so on) are making this easier than it might have been in years past.
It is possible to define a minimum platform capability that a user-sourced device must have. Within this, there may
be things like support for Java or the capability to support certain virtual private network (VPN) technologies [not
vendor-specific, but industry standard, such as PPTP or Secure Shell (SSH)]. This base platform will define a set of
capabilities -- not the device itself -- so new devices can be embraced as they come along with only a minimal need for
testing and validation. Any device that does not meet these basic requirements can be locked out from accessing the




business network as "not fit for purpose". However, the IT team may have to assist the general user, who may not be in
the best position to understand how well a certain device may align with the corporate requirements.
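To make the idea of a base platform more concrete, the short sketch below shows how such a check might be expressed in code. It is a minimal illustration only: the capability names and the policy itself are invented assumptions, not drawn from any particular product.

# Minimal sketch of a "base platform" check for user-sourced devices.
# The capability names and policy are illustrative assumptions, not taken
# from any specific mobile device management or network access control product.

REQUIRED_CAPABILITIES = {
    "industry_standard_vpn",    # e.g. support for standard VPN technologies
    "html5_browser",
    "remote_wipe",
    "screen_lock_enforced",
}

def device_meets_base_platform(device_capabilities):
    """Return True if the device offers every capability the policy demands."""
    return REQUIRED_CAPABILITIES.issubset(device_capabilities)

# Example: a new tablet touching the network for the first time.
new_device = {"html5_browser", "remote_wipe", "screen_lock_enforced"}

if device_meets_base_platform(new_device):
    print("Device admitted: provision VPN profile and access apps")
else:
    missing = REQUIRED_CAPABILITIES - new_device
    print("Device locked out as not fit for purpose; missing:", sorted(missing))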
The next area is to adjust the application and data access strategy so as to protect the business in the best possible
way. This tends to push everything back to where this article started: it is better to have a strategy that is essentially
built around each device being seen as a terminal rather than as a hyper-intelligent device in its own right.
Virtualisation is the key here: a virtual desktop infrastructure (VDI) brings all the business logic and data back into
the data centre where appropriate controls can be applied.
The majority of VDI approaches, such as those provided by Citrix, VMware, etc., support standard access approaches
through a browser or through a functional device app that ensures a good user experience in terms of usability and
speed of response. What virtualisation also provides is the capability to sandbox the corporate environment from
the consumer one. By fencing the corporate environment within its own virtual space, interaction between the access
device and the virtual space can be controlled or even completely blocked. And by blocking interaction, no data can
be transferred outside of the corporate space, and the access device remains only as valuable as the device itself: it
will not hold corporate data that may have commercial or legal value should the device be lost or stolen.
Also, no matter how poor the user's understanding of Internet security is, the corporate environment can remain
clean. Even if the user has visited every compromised Internet site in the world, and even if their device is so riddled
with viruses, worms and other malware that it would not normally be allowed within a mile of the corporate
network, there cannot be any transfer of malware between the device and the corporate network.
An open approach to consumerisation
An open approach to consumerisation of devices, combined with the use of VDI, provides enterprises with a means of
dealing with their users' desires to embrace the device of their own choice. However, IT teams must implement tools
that will make the devices work for the business. Vendors such as Symantec, Cisco, Landesk and Checkpoint can
provide asset management software, network access controls and end-point management systems that can deal with
occasionally connected devices, and which should be able to identify when new devices touch the network. The device will
then need to be interrogated to ensure that its base capabilities meet the corporate needs and, where possible, geo-location tools should show that the person is accessing the network from an allowable location. It may then need certain
functions to be provisioned to it, such as VPN capabilities or specific access apps, all of which should be automated so
as to allow the user to get on with their work quickly and efficiently. Tools should be able to lock out devices that do
not meet requirements and should also be able to identify and lock devices that have been reported as lost or stolen
to safeguard the corporate network.
Finally, any tools chosen must be able to provide full and comprehensive reports on the user's activity and be able to
advise the user in real time if they are attempting to carry out activities that are counter to corporate strategy, such
as accessing highly secure data over an open public WiFi connection. For example, Checkpoint's solutions include the
capability for organisations to use data leak prevention to identify if someone is trying to carry out an action that is
against corporate policies. The user can be completely blocked from carrying out the action, along with being
presented with a bespoke message stating why they are being blocked, or can be presented with a "Do you really
mean to do this?" option (again, along with the reasons why it is not recommended and an input box for them to put in
the reason why they still wish to go ahead and carry out the action), which will allow them to carry out the action but
under the full audit of the tools, so that the organisation knows who has done what, when, where and also why. Such
advice has to be presented in understandable terms (as opposed to technical terms such as "Error 612: Action counter to
profile 164/2012"), saying why such an action is being prevented or is advised against, and wherever possible
giving alternative means of achieving the results the user requires. For example, presenting a message along the lines
of "You are currently connected to the network via an insecure public wireless access point. Transmitting customer
details as in the attached document may be open to others capturing the information. Are you sure you want to
continue?" is definitely more meaningful and empowers the user to make an informed decision.
Consumerisation is unstoppable. Dealing with it has major implications for how corporate applications and data have
to be dealt with, and this will have a knock-on effect on the data centre itself. Embrace the change and the
organisation will benefit from it. Fight it and your competitor will overtake you.



The need for a change in how the business and IT view the data centre
IT always seems to need more money, and yet the business rarely understands where portions of large bottom-line
costs are spent, or even what the returns on investment should be. All too often, businesses see the data centre as a
cost centre where money is continually spent on IT, the IT department appears to carry out dark, secret ceremonies,
and the organisation gains only some small, incremental improvement in services, such as the provisioning of more technical
resources that speed things up for a short period of time until they are subsumed under the weight of the existing
workload. This needs to change if IT is to emerge as a valued business resource and partner.
In this tip, data centre expert Clive Longbottom outlines the measures IT professionals can take to convince business
stakeholders of the business value of technology projects and align the organisation's IT with the larger business
objectives.
More business; less geek
IT must evolve into a core facilitator for the business by being able to talk in the language the business understands and
by being able to provide the technical functions and services that help the business with its changing
business processes. This requires IT to be able to demonstrate how any technical change will impact the cost and risk
to the business, and how it enables the business to sell more of the same at the same or greater margin, or how it
will be able to bring a new product or service to market at a reasonable margin.
The IT department can no longer afford the luxury of being seen as a mysterious black box. As outsourcing and
public cloud become more available and more functional, it could become too easy for the business to outsource IT
completely, which would be to the overall disservice of both the business and the IT department. Just how can the IT
department ensure that it is viewed positively by the business? And how can the IT team ensure that technology is
more strategically aligned with the business objectives?
IT has to raise its game and stop being seen as techno-nerds.
Executives have little real interest in whether the latest IT kit has next-generation Xeon processors; whether it is
running a 64-bit operating system; or whether the storage is now based on a highly tiered system with solid state
drive (SSD), Serial Advanced Technology Attachment (e.g. SATA 3.0 or later) and Internet Small Computer System
Interface (iSCSI) drives. For the business side, what matters is how any money spent on IT will help them improve their
bottom line.
IT has to be a trusted advisor to ensure that the key decision-makers understand what options are available to them
and what risks these options carry against their respective costs. The IT department must enable the business to make
a balanced decision based on the right amount of information.
So, when it comes to the data centre, here are some measures that the IT team can bring about and present to
the business's key stakeholders.
Shrinking the size of the data centre
Reducing the amount of IT in the data centre has multiple benefits to the organisation itself. Less equipment means
fewer systems administrators, less maintenance and lower licensing costs, but the biggest benefit -- amid rising
energy costs and increasing government legislation -- is a reduction in energy usage. This is where virtualisation helps:
with average server utilisation rates still running at around 5-10%, rationalising existing software instances and
consolidating these down onto virtualised hardware makes a great deal of sense. However, IT cannot present it in
such technical terms to the business; stating the savings that can be made on capital expenses, power costs,
operational speed and efficiency, as well as space and operating cost, will generally enable the business to trust the IT
team more and invest in the changes in order to gain the on-going savings.
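As an illustration of how such a consolidation case might be translated into business-facing numbers, consider the rough sketch below; all of the inputs (server counts, consolidation ratio, power draw and tariff) are invented assumptions, not Quocirca figures.

# Illustrative only: turning a server consolidation exercise into business figures.
# Every input below is an assumption chosen for the example.

servers_before = 200            # physical servers running at ~5-10% utilisation
consolidation_ratio = 8         # assumed workloads per virtualised host
servers_after = -(-servers_before // consolidation_ratio)   # ceiling division

avg_draw_w = 400                # assumed average draw per server (watts)
pue = 2.4                       # assumed facility overhead (industry-average PUE)
tariff_per_kwh = 0.12           # assumed energy price (GBP per kWh)

kw_saved = (servers_before - servers_after) * avg_draw_w * pue / 1000
annual_energy_saving = kw_saved * 24 * 365 * tariff_per_kwh

print("Servers removed:", servers_before - servers_after)
print("Facility load reduced by ~%.0f kW" % kw_saved)
print("Estimated annual energy saving: ~%.0f GBP" % annual_energy_saving)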




Outsourcing selectively and strategically
Outsourcing is certainly not an all-or-nothing proposition. In fact, outsourcing is often approached as a solution to
specific tactical business problems. For example, a simple move to a co-location facility can save the costs of
building and maintaining a data centre facility, whereas a move to hosting can remove the need for hardware
procurement and maintenance. A move to cloud should present a flexible option where resources can be dynamically
adapted to meet the needs of the workload, so avoiding the costs of over-provisioning resources in an internal data
centre "just in case". The business will need to understand the pros and cons of such a move: what workloads is
it suitable for? What are the data security, business continuity, disaster recovery and legal benefits and hurdles
that have to be faced? What is the Plan B that has to be in place to deal with, for example, the failure of the cloud company?
A balanced scorecard detailing benefits against risks will help to formalise the issues here and allow IT and the business
to enter into sensible discussions over the various aspects.
Take action to save energy
Changing the way that your data centre is cooled should also be an important option. Consider a data centre running
at the industry average power usage effectiveness (PUE) of 2.4. For every watt of energy that drives IT equipment,
another 1.4 watts of energy is needed to cool the facility and operate peripheral equipment. But these are capital-intensive optimisations that demand a strong business case.
Again, making an argument to the business in terms of a move to variable-speed CRAC units or free-air cooling may
be met with glazed looks of incomprehension. Instead, explain that for a capital outlay of some amount, the energy
savings at the data centre will be X; thus paying for the change over some period of time. This is usually all that the
business needs to make an informed decision.
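A worked version of that argument, using the PUE figure quoted above, might look like the sketch below; the IT load, improved PUE, tariff and capital outlay are all assumptions chosen purely to show the shape of the payback calculation.

# Illustrative payback calculation for a cooling upgrade, based on PUE.
# PUE = total facility power / IT power, so overhead power = (PUE - 1) x IT load.
# All figures are assumptions for the example.

it_load_kw = 250                # assumed IT equipment load
pue_before = 2.4                # the industry average quoted above
pue_after = 1.5                 # assumed PUE after moving to free-air cooling
tariff_per_kwh = 0.12           # assumed energy price (GBP per kWh)
capital_outlay = 400_000        # assumed cost of the cooling changes (GBP)

overhead_saved_kw = it_load_kw * (pue_before - pue_after)
annual_saving = overhead_saved_kw * 24 * 365 * tariff_per_kwh
payback_years = capital_outlay / annual_saving

print("Facility overhead reduced by ~%.0f kW" % overhead_saved_kw)
print("Annual energy saving: ~%.0f GBP" % annual_saving)
print("Payback period: ~%.1f years" % payback_years)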
Bear in mind that if virtualisation is done first, and certain workloads have been outsourced, there will be a lot less
equipment to cool. When combined with more modern advice on running data centres at higher temperatures, it may
be that cooling can be carried out through very low cost means, such as free air cooling. This type of approach
minimises energy usage and avoids the variability of those energy costs in the future.
Another tactic that is sometimes adopted at the server level is a more frequent technology refresh cycle. This may
seem counter-intuitive, especially when virtualisation can potentially extend server operating lifetimes. But consider
that next-generation servers are often far more power-efficient than previous models. The capital needs for faster
server upgrades can often be justified by the lower energy costs, outlined in a well-stated business case. These are
just a few possible ways that IT can support business efficiency and operational goals. If this approach is combined
with a full IT lifecycle management (ITLM) approach, the older equipment can be deprovisioned securely and sold off
to offset the capital cost of the new equipment.
Think modular
Finally, when consolidation has been carried out with some aspects of the IT environment outsourced, the use of
highly modularised systems may make more sense than a formal build-out of the data centre environment. For
example, a move to using pre-populated rack-based modules, fully engineered rows or completely self-contained data
centres built inside a standard-sized cargo container can be good for the business.
At the IT level, containerised data centres offer faster time to capability, avoiding the need for components to be
installed in racks, for configuration to be carried out and for pre-provisioning tests to be run.
Equipment hot spots caused through poor rack design are avoided. Wiring is fully structured and controlled.
Interdependencies between different components (such as servers, storage and network switches) are all dealt with
without the need for deep engineering knowledge from system administrators. Modular systems also provide full
visibility of each module's energy and cooling needs, so the data centre facility can be more flexible in the speed with
which it can deal with new equipment being put in place.
And at the business level, a modular data centre can be more flexible. As the business requirements change, in-house
built racks and rows need to be continually changed to try to meet the needs, often failing to keep up with the
speed of change. Modules can be swapped in and out far more easily, particularly in a well-implemented virtualised
environment, so that the desired business flexibility is in place.




Time to release your inner executive
Overall, the future of the data centre is changing. Rumours of the death of the corporate data centre are missing the
point: it will not die out, but it should become smaller, more energy-efficient, more modular and more responsive to
business needs through utilising external IT resources where needed. For IT to gain the investment it needs, it must
present itself to the business in business terms, not in arcane, complex and costly tech-speak.

A data centre infrastructure strategy fit to support business growth
Businesses see a clear need for more use of virtualisation amid growing volumes of data and data centre equipment.
IT professionals must have a flexible data centre infrastructure strategy that can support the business's needs today
and also in the future, and virtualisation is the foundation of this flexible and dynamic IT platform.
Research carried out by Quocirca on behalf of Oracle examined what was driving investments in data centres in Europe
and the Middle East. The research, carried out as two cycles in May and November 2011, found that consolidation
was the main driver for data centre investment, followed by limitations in current facilities. As the prospects of a
double-dip recession set in, the need to support business growth dropped dramatically, as did the need to move to a
new technical architecture.

Figure 1




The drop-off in investments in new technical architectures coincides with increased use of virtualisation during that
period, as demonstrated by other research available here. Among those who have carried out a more complete
adoption of virtualisation, many will already feel that they have changed their platform. Therefore, the drop in those
looking at a move to a new platform will include a number who have adopted virtualisation during the intervening
period. Even if these organisations see themselves adopting more virtualisation during 2012, it will not be seen as a
change of platform.
However, it is apparent that organisations are continuing to have problems as data centre equipment continues to
grow, and that further adoption of virtualisation is still required.
Developing an IT platform and data centre infrastructure strategy
Businesses adopt virtualisation as it allows them to run more workloads on the same amount of IT equipment. It also
provides them the opportunity to lower expenditures (by saving on new physical servers and infrastructure).
Being able to move from a 5-10% server utilisation rate to a 40-50% utilisation rate provides savings not only in
hardware, licensing and support, but also in data centre energy costs, which continue to trend upwards.
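The arithmetic behind that claim is simple enough to sketch; in the fragment below the aggregate demand and per-host capacity are invented figures used only to show how the host count falls as utilisation rises.

# Rough illustration of how higher utilisation reduces the number of hosts needed.
# The demand and capacity figures are invented for the example.
import math

total_demand_units = 1000       # aggregate compute demand, in arbitrary units
host_capacity_units = 100       # capacity of one physical host, in the same units

def hosts_needed(average_utilisation):
    return math.ceil(total_demand_units / (host_capacity_units * average_utilisation))

print(hosts_needed(0.075))      # ~7.5% utilisation -> 134 hosts
print(hosts_needed(0.45))       # ~45% utilisation  -> 23 hosts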
Beyond that, one of the main reasons for virtualisation adoption must be to provide a new, flexible and dynamic IT
platform -- one which can more rapidly respond to changes in the business's strategic needs. A dynamic IT platform is
one that is able to borrow compute, network and storage resources to try things out on, rather
than having to physically allocate discrete equipment. This means businesses can experiment with new ideas at low
cost.
However, one of the biggest errors that organisations make while developing an infrastructure strategy is to use
virtualisation just to move an existing environment onto one that is more efficient, not necessarily onto one that is
more effective.
In many implementations, Quocirca sees virtualisation just allowing existing systems to be run more efficiently
through the use of shared resources, rather than enabling a massive change in platform in and of itself. This use of
virtualisation is really just an evolution of clustering: the applications are still constrained by a physical grouping of
specific servers, and workloads are not shared amongst the available resources. It is still a one-application-per-environment system, rather than a shared-everything one. The correct implementation of virtualisation puts in place
the right platform for an elastic cloud: the capability to share available resources amongst multiple workloads in an
automated and transparent manner. Cloud computing's definition includes that resources must be able to be applied
elastically, being provisioned and de-provisioned at will against workloads. A good cloud implementation
should also allow for composite applications, where a business process is supported through the aggregation of
different technical services in an on-demand manner. This may require the use of a mix of technical services from a
cloud system owned and operated by the organisation itself, ones that are run on behalf of the organisation by a third
party and ones which are freely available in public cloud services.
Mistakes to avoid while developing an IT platform and infrastructure strategy
Organisations and IT departments need to adopt a new approach to virtualisation and cloud computing to ensure
their data centre infrastructure is responsive to changing business needs. They should stop thinking along the lines of
"we are having problems with our customers; better buy a customer relationship management (CRM) application" or
"inventory is causing us issues; let's put in a different enterprise resource planning (ERP) package".
Instead, they should be thinking "we have problems with the way we are attracting customers; we need to ensure
that our business processes are correct for today's needs, and we need the right technical services to ensure that the
process runs correctly today and can change tomorrow". For the IT department, it is a matter of making sure that its
infrastructure strategy, along with a mix of cloud environments (both private and public cloud platforms), is able to
support the organisation's changing business needs.
A private cloud held in a private data centre needs to be flexible, and as equipment densities continue to
increase, ensuring that power distribution, cooling and floor strengths are sufficient to support needs over a




significant (5 year+) period of time will be required. Similarly, when choosing an external co-location or cloud hosting
environment, the same due diligence is required to ensure that the facilities will support the business for a similar
period of time and that the provider has plans in place to ensure support well beyond that timeframe.
A private cloud in a private facility is also unlikely to be just a pure cloud environment. There will remain certain
functions and applications which, for whatever reason, an organisation chooses to continue running on a physical
server, on a cluster or on a dedicated virtual platform.
This heterogeneous mix of physical and virtual IT platforms must not only be factored in while configuring and
supporting a data centre facility, but must inform the organisation's systems integration and management processes.
IT professionals must also allow for the integration and management across the whole chain of private and public
systems.
Finally, users must accept that the "next big thing" always supersedes the "next best thing". Cloud will certainly be
important, and will lay the foundation for the future. However, there will be continuous changes in how functional
services are provisioned and used, causing more changes to the underlying platform.
Developing a modular approach
Therefore, users should not opt for a prescriptive or a proscriptive platform. They must ensure that the IT platform
adheres to industry standards and that the data centre facility itself is maintained in as flexible a manner as possible.
This may mean that a modular approach to the data centre makes more sense, rather than a build based on filling
bare racks. Engineered compute blocks, consisting of pre-configured server, storage and network components, can
be more easily provisioned and maintained within a facility, and can be swapped out more effectively as required.
Alongside this, power distribution and cooling needs are more easily met and there is less need for the facility to be
continuously altered to meet changes at a granular level.
A change in IT platform is what a majority of organisations need to think of while developing a data centre
infrastructure strategy, because existing application and physical resource approaches are no longer sufficient. The
key is to provide a flexible environment that supports the business in the most effective manner, now and for the
foreseeable future. Virtualisation and cloud provide this, but must be implemented in the correct manner in order
to deliver on their promises.

Evaporative cooling in the data centre: how it works and who it's for
Evaporative cooling is a promising, cost-effective part of many data centre cooling strategies. Could it be right for
yours?
But first, a quick test. Blow over the top of your hand. Now lick it and blow over it again. Which felt cooler?
Even though you added a warm liquid (your saliva) to your skin, it felt cooler when you blew over it. Why is this? This
is an effect known as evaporative cooling. When a liquid evaporates, it takes in heat from around it, causing a
cooling effect.
In hot climates, people have used evaporative cooling in their homes for generations -- they hang a wet sheet in a
well-ventilated room; as the water evaporates from the sheet, it cools the ambient temperature of the room.
Evaporative cooling in the data centre
Evaporative cooling can be very important in maintaining an optimum temperature in data centres as well. A simple
setup can create a very low-cost data centre cooling strategy, using a system of low-cost fans that draw air




through filters, which are kept wet through a falling stream of water. The incoming air causes evaporation of the
water, and the resulting cooler air is then blown into the data centre.
The benefit of using these computer room evaporative cooling systems (CRECS) over standard computer room air
conditioning (CRAC) units is that they don't use any costly refrigerants. Evaporative cooling requires the use of easily-maintained, low-pressure fans and water pumps instead of higher-pressure pumps. Another advantage is that
CRECS are not overly affected by leakage within themselves, as they are based around an open system of
water flows.

Figure 1: A simple evaporative cooling setup for data centre cooling.
As shown in the figure above, the CRECS pumps water from a reservoir at the bottom of the unit, which soaks filters
on the sides. An air fan pulls in warm air from the surrounding environment. As the air passes through the filters, it
is stripped of any particulates, and is also cooled through the evaporation of the water. The cooled air is then ducted
through to the data centre.
When operating data centres with temperatures of up to 26°C, CRECS can help data centre managers to save on
cooling costs. Capital, energy and maintenance costs will all be lower, so what's not to like?
When NOT to use evaporative cooling systems
Evaporative cooling is a good solution for many environments, but there are situations where it may not be the best
choice.




In some cases, some post-evaporative treatment is required to dehumidify the air so that it is suitable for use within
the data centre: overly humid air (for data centres, generally above around 70% relative humidity) can condense on the
surfaces of data centre equipment, causing problems.
However, over-drying the air (to below 30% relative humidity) can also cause problems, such as increased static
electricity and the growth of metallic dendrites on circuit boards and metal cases. These can lead to short circuits in the
long term. Therefore, ensuring that the humidity of the air is kept within best practice parameters is crucial while
using evaporative cooling techniques. Some systems use heat transfer systems so that the humid air from the CRECS
unit is not used directly in the data centre itself. Instead, data centre designers sometimes place a contained air system
with high-surface area heat exchangers between the CRECS and the data centre air.
In areas where the standard environmental air is already quite cold, evaporative cooling is also not a good choice. In
this case, simply using cold air passed directly through particulate filters may be sufficient, providing free air cooling
and limiting energy costs to running the fans.
For areas where the environmental humidity is high, evaporative cooling can be problematic as well. Evaporative
cooling works best where the air is dry: compare how domestic washing dries on a hot, dry day with how it dries on a
hot, humid day. Where humidity is high, it is just too hard for any more moisture to be absorbed by the air, and as
such, evaporation just doesn't happen as effectively.
Will evaporative cooling work for UK data centres?
So, for data centres in places such as the Middle East and other hot, dry areas, evaporative cooling can make economic
sense. But can data centre managers in the UK use evaporative cooling techniques? I personally know of a 500kW
data centre in Manchester (one of the wettest parts of the UK, with relatively low temperatures and often high
humidity) where evaporative cooling is used 100% of the time.
However, an organisation may be wary of using 100% evaporative cooling in their data centre, either due to local
atmospheric conditions at certain times of the year or merely due to worries about the immaturity of the approach.
In this case, designing a hybrid system that mixes evaporative cooling with other data centre cooling techniques
may provide substantial energy savings. For example, using bypass systems such that free air cooling can be used
when the temperature drops below a certain level means that the wet filters can be bypassed and just particulate
filters used. The water pump can be turned off, saving energy.
If the ambient air temperature is too cold, then warm exit air from the data centre can be mixed with the incoming
air to raise the temperature as required. If the humidity is too high, mixing in some of the exit air from the data centre
may be enough to bring it back within the target relative humidity levels. But when the ambient air temperature and
humidity levels are optimal, evaporative cooling can be used.
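Pulling those rules together, the mode selection for such a hybrid system could be sketched roughly as below; the temperature and humidity thresholds are illustrative assumptions and would need to be set against the equipment's actual operating envelope.

# Rough sketch of mode selection for a hybrid cooling system.
# Thresholds are illustrative assumptions, not vendor guidance.

def select_cooling_mode(outside_temp_c, outside_rh_pct, target_temp_c=24.0):
    if outside_temp_c < 5.0:
        # Too cold: free-air cooling, mixing in warm exit air to raise the temperature.
        return "free-air with exit-air recirculation"
    if outside_temp_c <= target_temp_c:
        # Cool enough already: bypass the wet filters and turn the water pump off.
        return "free-air (particulate filters only)"
    if outside_rh_pct <= 70.0:
        # Warm and dry enough for evaporation to do useful work.
        return "evaporative cooling (CRECS)"
    # Warm and humid: fall back to exit-air mixing or a small CRAC unit.
    return "CRAC assist / exit-air mixing"

print(select_cooling_mode(outside_temp_c=28.0, outside_rh_pct=45.0))
print(select_cooling_mode(outside_temp_c=12.0, outside_rh_pct=90.0))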
Such an approach should deal with the weather across the UK in the majority of cases. However, if the majority of
cases is not good enough for your organisation, then you can install a small CRAC unit to work alongside the CRECS
as required. But only as required.
Hosepipe bans, droughts and evaporative cooling
The final issue for the UK is around the current prolonged drought that is hitting the south and east of the country.
An open CREC system will use up water as it is lost through evaporation, and an organisation's data centre team could
find itself being questioned by both its local water authority and its own management if too much water is being
used, particularly where water is metered.
Therefore, to minimise water loss, I recommend using evaporated water that has been recovered using a condensing
tower. Here, a portion of the water is pumped through a heat exchanger system where the exit air from the data
centre is vented. By lowering the temperature of this air, it will carry less moisture, which will condense out and trickle
back to the reservoir for re-use.
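To put rough numbers on the water question, the sketch below estimates the evaporation rate needed to reject a given heat load, using the latent heat of vaporisation of water (around 2,450 kJ/kg at typical ambient temperatures); the heat load and the recovery fraction are assumptions for illustration.

# Rough estimate of water consumption for evaporative cooling.
# Assumes all heat is rejected by evaporation; latent heat ~2,450 kJ/kg near 20C.

heat_load_kw = 500              # e.g. a 500kW data centre, as mentioned above
latent_heat_kj_per_kg = 2450
recovery_fraction = 0.4         # assumed share recovered by a condensing tower

evaporation_kg_per_s = heat_load_kw / latent_heat_kj_per_kg
litres_per_hour = evaporation_kg_per_s * 3600       # 1 kg of water is ~1 litre
net_litres_per_hour = litres_per_hour * (1 - recovery_fraction)

print("Gross evaporation: ~%.0f litres/hour" % litres_per_hour)
print("Net top-up with condensing recovery: ~%.0f litres/hour" % net_litres_per_hour)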




Overall, evaporative cooling systems are now at a point where they can create good savings on capital, maintenance,
operating and energy costs, and provide a greener alternative to CRAC units. As a complete data centre cooling
system, or even as part of one, CRECS can be a core part of a new and emerging approach to data centre cooling.

Racks, pods and containers: the new scale-up
Scale out (the use of a large number of relatively commoditised server components to create a high-performance
platform) has been the norm in many data centres for some years now. Buying bare 19-inch racks and populating
them with 1-4U servers has provided a reasonably easy way of increasing scale for specific workloads.
However, some have found that such an approach is not as easy as it would first seem. As vendors have increased
power density in their equipment, just piling more of it into a rack can lead to unexpected problems. For example,
today's high-speed CPUs run hot, and so need adequate cooling to avoid failure. The use of a rack without thinking
of how best to place power systems and server components can lead to hot spots that are difficult to cool, and so to
premature failure of equipment. As the use of virtualisation has increased, the concept of using completely separate
server, network and storage systems has begun to break down, and the various systems have started to be placed in
close proximity to each other, generally within the same rack or chassis, in order to gain the best performance through
the use of high-speed interconnects. Ensuring that a complete rack is put together in a manner that allows adequate
cooling is becoming increasingly difficult: data centre infrastructure management (DCIM) tools, such as nlyte or
Emerson Network Power's Trellis, are required to play the "what if?" scenarios and to model future states using computational fluid
dynamics (CFD) to indicate where hot spots are likely to occur.
However, there is an increasing move from the vendors towards a new approach that can help data centre managers.
Here, the vendor pre-populates a rack, engineers a complete row or pair of rows as a module, or creates a complete
stand-alone "data centre in a box", using a standard road container or similar as a unit containing everything required
to run a set of workloads.
At the basic level is the engineered rack. Often, this will not be just a collection of a vendor's equipment bolted into
a standard 19-inch rack, but will be a highly specialised chassis with high-speed busses and interconnects built into the
system to provide the optimal speed for data interchange between storage and CPU, as well as dedicated network
systems that are integrated more closely into the system. These network switches, referred to as top-of-rack switches,
are driving a new approach to networking using a network fabric, with a flatter hierarchy and lower data latency
across systems, leading to much better overall performance.
However, engineered racks only take a data centre to a certain level. Expansion tends to be a case of a relatively
complex job of integrating one rack with another, and systems management tends to be left to a higher level. To get
around this, many vendors are now providing a more complete system, based around a modular approach, sometimes
called a "pod". Here, the system provided is engineered as a complete row or pair of rows. Cisco was a pioneer in this space
with its Unified Computing System (UCS), followed by the joint venture between VMware, Cisco and EMC with its VCE
vBlock architecture. Since then, the majority of vendors have come out with similar approaches. These modules
provide a complete, stand-alone multi-workload capability, complete with in-built virtualisation, systems
management, storage and networking, along with power distribution and cooling. For single row systems, the cooling
tends to be in-row; for paired rows, it is often done as a hot aisle/cold aisle system. The biggest problem with such a
modular system, however, is that expansion either involves a massive step up through the addition of another module,




or the addition of smaller incremental systems such as an engineered rack. In both cases, the design of the existing
data centre may not make this easy.
In the final case comes the containerised data centre. Originally, vendors saw this as a specialised system only for use
in specific cases: for example, where a small data centre was required for a short period of time (e.g. a large civil
engineering building project), or where there is a need for a permanent data centre but there is no capability for a
facility to be provided. Containerised systems can be just dropped into a space, be that a car park or a field:
as long as there is sufficient power available (and sometimes water for forced cooling), the container can be operated.
Lately, organisations have come to realise that a containerised system can be used as part of their overall data centre
approach. Engineered racks are fine for small requirements, whereas a modular approach provides some room for
flexibility, yet the modules still need building on site. A container is delivered on the back of a lorry and is then
offloaded, put in place, plugged in and started up. Microsoft has taken a combined modular/containerised approach
in its latest data centres, using containers as the fast way of getting standard workloads up and running, and modules
to provide greater flexibility.
However, the greatest issue with containers is that the engineering tends to be too inflexible. Should changes be
required, it generally means a complete strip down and rebuild starting with new components. As containers tend to
be a specific size, much of the equipment used is specialised, and it can turn out cheaper to just replace the container
with a brand new system, rather than try to adapt an existing one.
Again, vendors are realising this and are putting in place approaches to deal with it. For example, some vendors are
essentially renting out a data centre capability: at the end of the agreed lifetime of the containerised data centre, it
is simply replaced by the vendor with the latest equivalent system and the old one is taken away to be recycled as
much as possible.
Taking the idea to the logical conclusion is Intel, which is working on the concept of the sealed high-temperature
container. If a containerised system can be run with minimal cooling, it will be highly energy efficient, but will suffer
increased equipment failure due to the higher temperatures involved. Individual component manufacturers are
improving the hot-running capabilities of their kit, but Intel wants to look at how a 50°C container operates.
Understanding that there will be increased component failure, the idea is to over-engineer the complete system by,
say, 50% at the power supply, server, storage, network and other levels. The container is then completely sealed:
there is no end-user access whatsoever. The container operates over an agreed period of time, and as equipment
fails, the over-engineering allows for this. By the end of the agreed period of time, the container should still be running
at the originally agreed levels. The vendor then replaces the container with a new one and takes the old one away,
breaks it open and recycles what is possible.
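As a very rough illustration of the over-engineering sums involved, the sketch below estimates how much capacity survives a sealed lifetime given an assumed annual failure rate; the failure rate, lifetime and 50% margin are assumptions, not Intel's figures.

# Rough illustration of over-engineering a sealed, high-temperature container.
# The annual failure rate and lifetime are assumptions, not Intel figures.

required_servers = 1000         # capacity that must still be available at end of life
over_engineering = 0.5          # 50% extra capacity, as described above
installed_servers = int(required_servers * (1 + over_engineering))

annual_failure_rate = 0.08      # assumed, elevated by running the container hot
lifetime_years = 4

survivors = installed_servers * (1 - annual_failure_rate) ** lifetime_years
print("Installed: %d, expected survivors after %d years: ~%.0f"
      % (installed_servers, lifetime_years, survivors))
print("Still meets requirement" if survivors >= required_servers else "Shortfall")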
Engineered racks, modules and containers all have a part to play in the future of a modern datacentre. The age of a
self-designed and built, rack-based system is passing rapidly. Quocirca recommends that all data centre managers
look at the use of the wide range of pre-built modular datacentre systems going forward.



DCIM: what does it mean to you?


Data centre infrastructure management (DCIM) is a relatively new term that many vendors are now using to cover
their tooling for lifecycle management of a data centre. It brings together what were stand-alone functions, such as
datacentre design, asset discovery and management, capacity planning and energy management, along with certain
systems management functions in many cases. As such, DCIM crosses over the facilities management (FM) and the
information technology (IT) functions, which is both its greatest strength and its biggest problem.
For a datacentre manager, dealing with the FM team has often led to issues. FM sees the datacentre as just one part
of its overall domain, whereas for the datacentre manager, it is the main focus of their very being. Therefore, when
the FM group refuses to move fast enough, or declines to adapt an existing datacentre facility in order to meet the energy
distribution needs of ever-higher equipment densities, it can lead to finger pointing and to the organisation suffering
due to a sub-optimal technology platform.
DCIM aims to enable FM and IT to work together against a common dataset so that each can be better informed. For
example, it may be that the extra kW of power required by a new IT platform architecture just cannot be provided
with the power cabling available to the FM team outside of the datacentre facility. By plugging the DCIM tools into
the rest of the FM team's tools (such as building information systems (BIS)), the datacentre manager can start to
better understand the constraints that lie outside of the datacentre itself, and also to understand what changes
would be required to the facility based on future equipment plans. Likewise, the FM team can gain greater insights
into what will be required from the facility when it comes to power and cooling requirements, and can start to plan
accordingly.
The main players in the DCIM market include Emerson Network Power, nlyte Software (formerly GDCM), Romonet
and Modius, along with a host of smaller, more point-solution software vendors. From a position of almost zero real
penetration of the available market in 2009, the highly dynamic nature of datacentres and the strong focus on energy
efficiencies, driven by the use of measures such as power usage effectiveness (PUE), mean that usage of DCIM is
growing strongly.
But what should a DCIM tool really be able to do? From Quocirca's point of view, the following functionality should
be available:
Basic datacentre asset discovery - the capability to create an inventory of what already exists within a
datacentre facility, from servers, storage and networking equipment and other network-attached systems,
through to more manual capabilities of adding facility systems such as power distribution units,
uninterruptable power supplies (UPSs) and chillers.
Advanced asset information - systems should include databases of equipment and their real-world energy
requirements. Note that using plate values (the power details printed on the equipment) will lead to a
massive overestimation of the energy required within a datacentre, as plate values give the rating of the power
supply, not the average energy draw.
Granular energy monitoring - whether this is done through the use of specialised power distribution units,
or by making the most of intelligence built directly into the equipment, being able to monitor and report on
real-time energy draws means that spikes, which can indicate the start of a bigger problem, can be
identified and remedial action taken.
Detailed reporting - dashboards should be capable of providing different views for different individuals.
For example, an FM employee may want to see the loads being applied against a power distribution unit,
while an IT employee wants to know if there is sufficient power available for an additional server in a
specific rack. But both need to be able to work against the same data, and both need to be able to drill
down through the views to identify the root cause of a problem and to be able to discuss areas of concern
with each other.



Computational fluid dynamics (CFD) - today's datacentres are prone to overheating, and it is important to
ensure that cooling is applied effectively. CFD enables analysis of air flows to be made and for this analysis
to show where hotspots are likely to occur. The CFD analysis should also be able to provide advice on how
to change the air flows to remove such hotspots.
2D and 3D data centre schematics - preferably, these schematics should be active. Here, the schematics
should be able to carry live data on the equipment under control, and should also be filterable. For
example, it should be possible to look only at the network wiring, or only at the server topology, or only at
the ducted cooling systems, or to be able to overlay a mix of different systems as required. The
schematics should also be used for, for example, showing the CFD analysis results; this will help in visualising where
hotspots will occur, and whether a possible solution (such as re-routing any cooling) can be easily implemented.
Structured cabling management - many datacentres, particularly those with raised floors, rapidly lose any
structure to their cabling, as long cables are used where shorter ones would do, and any extra length is
either looped or just pushed out of the way. Such a lack of structure is not just messy, but it also impedes air
flows, and mixing power and data cables can cause interference between them, resulting in data
transmission issues. Structured cabling, where data and power are carried in different sectors and the
cables are bundled in an engineered manner, can help in avoiding such issues. DCIM software should be
able to help in showing where cables should be placed and what actual length of cable is required before
an engineer is sent in.
Environmental sensor management - environmental management of datacentres is becoming more
important. As higher temperatures are used, and free or low-cost cooling is implemented, being able to
sense when temperatures are exceeding allowable limits, and so to take action, such as increasing cooling if
the temperature rise is due to increased workload, or identifying an underlying issue where it may be
due to failing equipment, can provide greater systems availability. Such environmental monitoring and
management should include not only temperature, but also humidity, smoke and water, and may also
include infra-red (to identify hotspots) and other sensors.
Event management - the heart of any DCIM system has to be that it can initiate events based on what it
identifies. Such an event engine needs to be able to integrate with an organisation's systems management
software, with its trouble ticketing systems and with its security systems, such that all actions taking place
within the datacentre are appropriately logged.
"What if?" scenario capability - increasingly, DCIM systems are gaining the capability for those responsible
for the datacentre to be able to try out ideas and see what the impact would be. For example, increasing
the energy requirements on a specific rack may require changes to the power distribution and cooling
capabilities just for that one rack, whereas placing the new equipment into an existing, part-filled rack
elsewhere may mean that the equipment can be introduced without any changes to the facility. DCIM
systems should not just be able to show the direct outcome of a proposed change, but should also be
capable of advising on alternative approaches; a simplified illustration of this kind of check follows this list.
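The sketch below gives a much-simplified flavour of such a check: comparing a proposed addition's measured power draw (rather than its plate value) against a rack's remaining power budget. The figures and data structures are invented for the example and do not represent any DCIM product.

# Much-simplified illustration of a DCIM "what if?" power check for a single rack.
# Figures are invented; real DCIM tools also model cooling and airflow (CFD).

rack_power_budget_w = 8000
rack_current_draw_w = 7300      # measured, real-time draw for the rack

new_server = {
    "plate_value_w": 750,       # rating printed on the power supply
    "measured_avg_draw_w": 320, # real-world average draw from the asset database
}

def fits_in_rack(additional_draw_w, headroom=0.2):
    """Check the addition against the budget, keeping headroom for spikes."""
    projected = rack_current_draw_w + additional_draw_w * (1 + headroom)
    return projected <= rack_power_budget_w

print("Using plate value:", fits_in_rack(new_server["plate_value_w"]))          # False
print("Using measured draw:", fits_in_rack(new_server["measured_avg_draw_w"]))  # True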

The above describes some of the attributes that Quocirca believes any DCIM system should have. Many will have other
capabilities, such as helping to measure and manage carbon emissions, helping to migrate existing systems through to
new facilities or platforms through automated systems design and provisioning, and providing cost comparisons of
multiple systems required to meet specified technical workloads. It will be down to each organisation to make its
own mind up on what is most important to it in these areas.
However, DCIM is now something that Quocirca believes that organisations must look at to ensure that their
datacentres run at an optimal level. Failure to use DCIM will result in lower systems availability and a lack of flexibility
in supporting the organisation.



The promise and problems of private cloud computing
Cloud is becoming slightly less fluffy as organisations get to grips with what it actually means and how it works.
However, for many, the conflicting messages from vendors, the media and analysts still mean that understanding
what the cloud really is remains more than a little confusing.
The promise of cloud is reasonably simple: moving from a one-application-per-physical-server (OAPPS) approach to
a shared-resources model means that fewer servers, storage systems and network equipment will be required, and
business flexibility can be improved through the reduction of functional redundancy.
Fine words, but what do they mean in reality, and what are the problems that organisations could come up against
when moving to a private cloud architecture?
Firstly, many organisations will already be using a form of shared resources: virtualisation. It is important not to
confuse virtualisation and cloud, even though cloud is dependent on virtualisation. Taking a group of servers,
virtualising them and putting a single application back on top of the virtualised servers will save on the number of
servers required, but you will not gain the full benefits of cloud, where multiple physical resources can be shared
between many applications and functions.
This is the real promise of cloud: elasticity, the capability for server, storage and network resources to be used
across different applications and functions. For example, an application such as payroll may have cyclical needs, being
run once a week or month, whereas accounts payable may run every third evening. On an OAPPS model, it is unlikely
that overall server utilisation would run above 5% measured over a 24x7 week, and if clustering is used for
availability reasons, you could be looking at 3% or less. However, at peak times, the application may be using 80% of
server resources or even higher, and performance may be compromised as the application occasionally hits the
resource buffer. Such a peak-and-trough situation is not sustainable: it does not support the business effectively,
and it is wasteful of energy, space, maintenance costs and skills, as well as software licences.
However, if the payroll and accounts payable applications could share the same underlying resources through the elasticity of a cloud-based infrastructure, the cyclical nature of the applications means that they will not need the resources at the same time, so the overall system need only be architected to meet the maximum needs of the hungriest application. If this is carried out across the total application portfolio, then resource utilisation can be driven up beyond 80% without impacting the core performance of the applications. Indeed, as the resources are shared across all the applications, any spike in requirements can generally be well managed so that the spike does not result in any application hitting the performance buffers.
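
The effect is easy to see with a little arithmetic. The Python sketch below compares the capacity needed when each application is sized for its own peak against the capacity needed when the two share an elastic pool; the hourly demand figures are purely illustrative and are not taken from any real workload.

    # Illustrative hourly CPU demand (arbitrary units) for two cyclical workloads
    # whose peaks do not coincide, such as payroll and accounts payable.
    payroll          = [2, 2, 2, 2, 80, 80, 5, 2, 2, 2, 2, 2]
    accounts_payable = [3, 3, 3, 3, 3, 3, 3, 60, 60, 5, 3, 3]

    # One-application-per-physical-server: each platform is sized for its own peak.
    oapps_capacity = max(payroll) + max(accounts_payable)

    # Shared, elastic pool: capacity only has to cover the combined demand
    # at its busiest hour.
    shared_capacity = max(p + a for p, a in zip(payroll, accounts_payable))

    print("Dedicated capacity needed:", oapps_capacity)     # 140
    print("Shared pool capacity needed:", shared_capacity)  # 83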
Note, though, that there is a critical phrase in the above: "core performance". It is easy to be misled into the belief that because everything is now running well in the datacentre, everything is OK. Recent Quocirca research showed that overall end-to-end application performance is of critical importance to organisations for 2012/13, and cloud computing has to be approached in this light. It is no use having blindingly fast datacentre performance if the user experience is poor, particularly if the organisation is using cloud as a means of serving up virtual desktops. Tooling has to be put in place to measure in real time how well end-user response times are being met, and to identify and deal with the root cause of any issue rapidly.



The next area to bear in mind with cloud computing is licensing. As resources are elastic in a true cloud environment, it is easy to fall into a position where there are many unused but live virtual machines enabled across a cloud. Should a software audit be carried out, it will then become apparent that, in many circumstances, many of these machines are running outside of current licensing agreements. Quocirca therefore recommends the use of cloud-aware licence management software that can ensure that licence conditions are always met, tied into tools for managing virtual machines and appliances that can identify systems that are unused and flag them for administrator intervention.
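
A hedged illustration of the sort of check such tooling performs is sketched below in Python: it flags virtual machines that appear to have been idle for longer than a set period and compares live instances against the licences held. The VM names, dates and licence counts are invented for the example; a real tool would pull this data from the virtualisation platform and the licence management system.

    from datetime import datetime, timedelta

    # Hypothetical inventory data, for illustration only.
    vms = [
        {"name": "payroll-01", "product": "AppServerX", "last_active": "2012-11-28"},
        {"name": "payroll-02", "product": "AppServerX", "last_active": "2012-08-14"},
        {"name": "test-env-7", "product": "AppServerX", "last_active": "2012-06-02"},
    ]
    licences_held = {"AppServerX": 2}

    now = datetime(2012, 12, 1)
    idle_threshold = timedelta(days=60)

    # Flag VMs that are live but apparently unused.
    idle = [vm["name"] for vm in vms
            if now - datetime.strptime(vm["last_active"], "%Y-%m-%d") > idle_threshold]

    # Compare live instances against licences held for each product.
    in_use = {}
    for vm in vms:
        in_use[vm["product"]] = in_use.get(vm["product"], 0) + 1
    shortfalls = {p: n - licences_held.get(p, 0)
                  for p, n in in_use.items() if n > licences_held.get(p, 0)}

    print("Idle VMs to review:", idle)        # ['payroll-02', 'test-env-7']
    print("Licence shortfalls:", shortfalls)  # {'AppServerX': 1}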
The next promise of private cloud is the capability to move workloads from the private data centre to a shared facility (co-location) or the public cloud, as fits the strategy of the business. This is, at the current time, proving difficult for many early cloud adopters. The lack of significant agreed cloud standards means that the majority of private clouds have been built against software stacks that are not compatible with the stacks used in public clouds, and as such, workloads cannot be moved between the two without considerable rewriting of applications and porting of capabilities.
Quocirca recommends that private cloud adopters look to stacks from providers that are stating strong commitments to open standards and that are already talking about private/public cloud interoperability. Areas to look at include Amazon Web Services (AWS) Elastic Compute Cloud (EC2) and Simple Storage Service (S3), or the Rackspace-backed open source initiative, OpenStack. If choosing a software stack that does not provide such capabilities, talk with the provider to find out what services or tools it can offer to enable workload interoperability with the major public cloud stacks.
Finally, there is the perceived problem of cloud security. This is a large subject, full of misperceptions and of approaches that focus on the wrong areas, such as the cloud itself as a transport and runtime platform, rather than on the information that flows through it. The area of cloud security will be covered as a separate topic in another article.
Cloud is the future: its promise of cost savings and business flexibility is too compelling to ignore, and organisations that do not adopt it will see their IT costs balloon in comparison to those of competitors that do. However, adopting private cloud without due consideration will just replace one chaotic environment with another; organisations must ensure that the main areas are fully considered and covered before taking the plunge. The end result in the vast majority of cases will be a hybrid environment of private and public cloud services. Ensuring that the journey and the end result leave a secure, performant and flexible environment requires solid, up-front planning.

Securing the cloud


In the last article in this series, the security of information was flagged as something that needed rather more than a paragraph or two to cover effectively.
Cloud computing introduces the possibility for more effective working across complex value chains, from the
organisation to its suppliers and on to their suppliers, and from the organisation to its customers and then on to their
customers. The cloud can also enable more effective working across groups that are not just constrained to direct
employees, but can include contractors, consultants and domain experts that need to be brought in on an ad hoc
basis to add their skills to any work under consideration.
However, implementing a cloud infrastructure that gives the capability for such ease of working tends to send security professionals into shivers of fear. While information is within the direct control of the organisation, security considerations are perceived to be bad enough; but as soon as information is put into an environment where a lesser degree (or no degree at all) of control is in place, security becomes one of the highest concerns for organisations.
However, such a "server hugger" mentality is changing. Savvy organisations understand that their own levels of information security are not always as strong as they would hope, or even expect. External providers, from co-location data centre facilities through to hosting companies providing infrastructure, platform or software as a service (I/P/SaaS), can implement more consistent physical and technical security, ensuring that their environments meet industry security standards such as ISO 17799 or ISO 27001.
This does still leave the problem of the information moving along the value chains. The fear here is that such
information could be intercepted or otherwise obtained and the intellectual property within it used to the detriment
of the organisation.
Let's look at this in greater detail. An average organisation may not have a fully coherent information security policy in place. It may instead have a perception of security, where technology has been put in place to meet specific point needs. This may include areas such as putting in place firewalls, using encryption for data at rest and using virtualisation to centralise information within a specific data centre. However, the wider aspects will tend to have been overlooked.
What happens when an employee leaves the organisation? Can the organisation guarantee that all copies of data held on that person's multiple devices (PCs, laptops, smartphones, tablets, etc.) have been deleted? What about the disgruntled employee who hasn't handed in their resignation letter yet? Are their activities around information usage being tracked and audited? For example, does the organisation know what is being sent from its own environment to others via email, and what actions are being carried out on specific information assets, such as database access or email printing and forwarding?
How about the risk of device loss or theft? With the storage capabilities of smart devices increasing rapidly, the loss of the information held on them can be financially crippling for an organisation if intellectual property is on the device, or even if it is just personally identifiable information, such as a corporate database of external contacts.
When taking cloud to its logical conclusion of the hybrid cloud (a mix of private and public clouds serving the complete value chain), maintaining security can appear to be such a massive mountain to climb that it is easier to clamp down and concentrate purely on what can be controlled: the private cloud.
However, using relatively simple approaches can result in what Quocirca calls a "compliance oriented architecture": a set of processes that leads to a system that is inherently secure and supports physical, virtual and cloud platforms, while also promising the capability to embrace any new platforms that may come along.
The key here is to focus on the information itself, and not the technology. Applying security policies to an item of information makes it secure irrespective of where it is. For example, if an item of information is encrypted on the move and at rest, then only those with the capability to decrypt it will have access to it. The decryption keys may be held on external devices or on employees' own devices (through embracing a bring your own device (BYOD) strategy). These keys can be secured through the use of biometric access to devices, and the lifetime of the keys can be managed through digital rights management (DRM) software, so that anyone with access to the information can be cut off immediately should they leave, be terminated or otherwise cease to be part of the team working on the information.
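
As a minimal sketch of this information-centric approach, the Python example below uses the widely available cryptography library to encrypt a document so that only holders of the key can read it; revoking access then becomes a matter of key management rather than of chasing copies of the file. The document content is invented, and key issuance, DRM and biometric unlocking are of course far richer in real products.

    # Minimal sketch: protect the information itself, not the pipe it travels over.
    # Requires the third-party 'cryptography' package (pip install cryptography).
    from cryptography.fernet import Fernet

    # In practice the key would be issued and lifecycled by a DRM or
    # key-management service, and unlocked on the user's device via biometrics.
    key = Fernet.generate_key()
    protect = Fernet(key)

    document = b"Commercially sensitive draft for a possible acquisition"
    ciphertext = protect.encrypt(document)

    # Anyone intercepting the ciphertext in transit or in a public cloud store
    # sees only opaque bytes.
    print(ciphertext[:40], b"...")

    # Only a holder of a valid key can recover the content; withdraw the key
    # and the information is effectively withdrawn too.
    print(Fernet(key).decrypt(ciphertext).decode())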
Data leak prevention (DLP) systems can be used to ensure that certain information stays within a constrained environment. For example, information on mergers and acquisitions can be constrained to be accessible only amongst a tightly defined group of senior executives and legal personnel, and information on, say, a patent application can be stopped from passing from the private cloud to any external environment, based on key phrases and intelligent heuristics that can précis and match content against base information security rules.
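
The rule matching that such a DLP system applies can be as simple as the hypothetical sketch below, which blocks outbound content containing any phrase registered against a restricted topic; real products layer heuristics, fingerprinting and contextual analysis on top of this.

    # Hypothetical DLP-style check: phrases registered against restricted topics.
    restricted_phrases = {
        "mergers and acquisitions": ["heads of terms", "acquisition target"],
        "patent filings": ["patent application", "priority date"],
    }

    def outbound_allowed(message):
        """Return False if the message matches any restricted phrase."""
        text = message.lower()
        for topic, phrases in restricted_phrases.items():
            if any(phrase in text for phrase in phrases):
                print("Blocked: matches restricted topic '%s'" % topic)
                return False
        return True

    print(outbound_allowed("Lunch on Friday?"))                        # True
    print(outbound_allowed("Attaching the patent application draft"))  # Blocked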
Virtualisation of desktops can be used to ensure that core information is stored within a defined environment, with storage of information on the client device prevented where necessary and all information stored centrally. Virtualisation of the client can be used so that information cannot be cut and pasted between the corporate and personal environments. Logging and event auditing systems can be put in place so that printing, forwarding and other activities carried out against information assets are monitored and, where necessary, halted or raised as exceptions to an employee's line manager or to the corporate security team.
Such an approach means that the cloud is just another platform for secure information transport and dissemination. Should the information be intercepted, it will be just a set of ones and zeroes, of minimal use to the interceptor. For those individuals who meet the organisation's security policies, the information will be available in a secure manner no matter where they are.
A compliance oriented architecture puts security where it should be, as an enabler to the business, not as a constraint.
With a suitable approach driven from the private cloud/private data centre, security across the complete value chain
can be maintained and will make any organisation following the approach far more competitive and responsive in
its markets.

PUE? Pretty Useless Effort.


Power Usage Effectiveness, or PUE, is the term du jour for measuring just how effective an organisation's data centre is. At its simplest level, PUE is the ratio of the energy used across the whole of a data centre to the energy used to power the IT equipment. The basic equation can be shown as:

PUE = total data centre energy / IT equipment energy

Therefore, energy used in powering cooling systems, uninterruptable power supplies (UPSs), lighting and so on will push the PUE higher. The theoretical perfection is a PUE of 1, where all power goes into the IT equipment, with none going into the support environment.
The majority of existing data centres are running at PUEs of around 2.4, with large, multi-tenanted systems running
at around 1.8 or less.
Many different approaches have been brought to the fore to improve PUE, such as free air cooling, variable-rate computer room air conditioning (CRAC) units and lights-out operation, along with modular computing using hot and cold aisles, or even containerised systems, to better control how energy is used.
However, PUE remains a pretty crude measurement. Let's take a look at a couple of examples where PUE doesn't work.
A data centre manager is set the task of improving the utilisation of an existing IT estate of servers. It is apparent that virtualisation is an easy way to do this, bringing down the number of servers in use from, say, 1,000 to 500. This is great: the business saves money on the hardware itself, on the energy used to power it, and on licensing, maintenance and so on. This has to be great, surely?



The problem is that the cost of re-engineering the data centre facility tends to militate against the cooling systems and UPSs being changed. At first glance, this does not seem to be a problem: if the cooling and UPS managed to support 1,000 servers, they will easily serve 500, so the facilities management people choose to leave them as they are, as this is more cost effective for the business.
Again, let's assume a simple model. The old data centre had a PUE of 2: for every watt of energy going to the servers (and storage and networking equipment), a further watt was being used for cooling, UPS and so on. Now, the use of virtualisation has cut the energy being used by the servers by half, but left the cooling and UPS as they were. Therefore, the PUE has gone from 2 to 3, a horrendous figure that would frighten any main board looking to improve the organisation's green credentials. Having a data centre move from a PUE of 2 to 3 makes the reality that overall energy bills are down by a quarter difficult to understand. PUE fails in this situation.
Let's take a completely different viewpoint. A company owns a data centre with quite a lot of existing space in it. They also happen to be cash rich. They do not want to change the way they already run the servers, storage or network in the data centre: as far as they are concerned, it is running, so best not to fiddle with it. But they operate in a market where sustainability is an important message. What can they do about it?
They could go out and buy a load of old servers and storage off eBay and install it in their data centre. If they then turn the equipment on but don't use it, PUE allows them to count the energy being pushed into these bits of kit as "useful" energy that goes into the bottom part of the PUE equation. As long as the company manages its cooling effectively and is prepared to sacrifice all this useless kit should there be a power failure, its PUE is improved. Crazy? You bet.
It would be easy to turn PUE into a measure that makes either of these scenarios harder to engineer. If the utilisation levels of the IT equipment are measured as well and brought into the equation, then a more meaningful measure can be created: an effective PUE, or ePUE.
Therefore, existing PUE numbers would be pushed up, as the new equation would look like this:

ePUE = total data centre energy / (utilisation rate x IT equipment energy)

If, in the first case, existing utilisation rates were running at 10%, the ePUE starts at 20 rather than 2. However, if utilisation rates are driven up through virtualisation to 50%, the new ePUE after virtualisation moves to 6 (total energy = 3 units for every unit of IT energy, and a utilisation rate of 0.5), a very good improvement, rather than the apparent worsening shown by plain PUE.
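
The two figures can be reproduced with a few lines of arithmetic. The sketch below uses the same illustrative numbers as the text: one unit of IT energy and one unit of facility overhead before virtualisation, with utilisation rising from 10% to 50%.

    def pue(total_energy, it_energy):
        return total_energy / it_energy

    def epue(total_energy, it_energy, utilisation):
        # Effective PUE: weight the IT energy by how much useful work it does.
        return total_energy / (utilisation * it_energy)

    # Before virtualisation: 1 unit of IT energy, 1 unit of overhead, 10% utilised.
    print(pue(2.0, 1.0), epue(2.0, 1.0, 0.10))   # 2.0 and 20.0

    # After virtualisation: IT energy halved, overhead unchanged, 50% utilised.
    # Expressed per unit of IT energy, the total is now 3 units.
    print(pue(1.5, 0.5), epue(1.5, 0.5, 0.50))   # 3.0 and 6.0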
In the second case, it would still be possible for the organisation to apply false loads against the servers and load storage up with useless files, but this would result in massively increased heat profiles, which could then lead to a need for better cooling systems, which would push the ePUE back up again.
Anything that helps an organisation position itself against others in its space when it comes to energy utilisation and data centre effectiveness has to be welcomed. However, a flawed approach such as PUE can lead organisations and their customers to the wrong conclusion.
The use of a more rounded ePUE approach makes comparing data centres and energy usage far more of a level playing field, and puts the focus where it needs to be: on the efficiency and utilisation rates of the IT assets in the data centre.


Preparing the data centre for the new device onslaught

In an earlier article, Quocirca wrote about how organisations needed to prepare to deal with the rise in bring your own device (BYOD) by insisting that devices support a minimum level of standardisation, as well as by using certain server-side technologies such as data leak prevention (DLP) and digital rights management (DRM).
However, as more and more devices hit the markets, it is worth looking at what further server-side technologies could
make BYOD not only less scary, but something that an organisation can embrace and use to its own benefit.
If a device is turned into little more than an intelligent access mechanism, then security issues can quite easily be
minimised. This needs some capability in the device itself; however, abstracting the way the device interacts with
corporate systems away from how it is used by the user in their personal life is a good start. Here, companies such as
Centrix Software and RES Software can help.
Working in conjunction with virtual desktop infrastructure (VDI) vendors such as Citrix and VMware, Centrix and RES
provide a seamless desktop that a user can access from most devices. Even through iPad, Android and other devices,
the desktop that the user sees can look and behave as a pure Windows environment, or can be implemented in a
more tablet-oriented manner.
The desktop itself is a sandboxed environment, so nothing running natively on the device can interact with the
corporate desktop unless the security policy is set up to specifically allow this.
For example, in many instances organisations have set up VDI desktops where they believe they have implemented suitable information security, yet overlook that it is easy for a user to cut from the corporate environment and paste into their consumer environment, opening up major security issues. Controls such as preventing cut and paste, blocking email forwarding to the device's own mail system and even preventing local printing can be implemented and monitored through policy rules.
The way applications and services are provisioned on the devices can be mixed. Some could be served via pure VDI,
some streamed to run in the sandbox or natively on the device whilst others are taken as services from the public
cloud. However, the total interface remains seamless to the user. Furthermore, the desktop can become self-service:
if a user needs a specific application, provided that they can justify its use, then it can be selected and provisioned for
them rapidly on the fly.
But what does all this mean in the data centre?
At the basic level, it is very similar to just going for a simple VDI implementation. The main desktops will now be run
in the data centre, so a suitable server implementation will be required with good levels of availability, load
management and so on. The systems for providing the base desktops (e.g. Citrix XenDesktop, VMware View) need to
be provisioned. Centrix and RES both provide tools that can audit an existing client-based system, cataloguing software and providing insights into how much use is being made of it. On its own, this can deliver direct savings by identifying orphan licences, where users have applications installed but have not used them for some time.



These audit tools can then provide direct advice on what base desktop images should be created. For example, a general employee may need a desktop with Windows 7, Microsoft Office, Google Chrome and Microsoft Lync. By creating a single image to support this, management of desktops becomes far easier, as only this golden image needs to be patched and updated, with all other desktops being spawned from it. Where a user needs a specific extra application, this can easily be provisioned alongside the base image.
Workers in some groups may need different images: for example, a design department may need Adobe Creative Suite, while accountants and bookkeepers may need Sage Accounts. Each group can have its own base image, which can be personalised for each employee as required.
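
In practice this comes down to a simple mapping of user groups to golden images plus approved per-user extras, along the lines of the hypothetical Python sketch below; the image contents and application names follow the examples above, and a real deployment would hold this mapping in the VDI broker or provisioning tool rather than in code.

    # Hypothetical mapping of groups to golden images and optional extras.
    golden_images = {
        "general":  ["Windows 7", "Microsoft Office", "Google Chrome", "Microsoft Lync"],
        "design":   ["Windows 7", "Microsoft Office", "Adobe Creative Suite"],
        "accounts": ["Windows 7", "Microsoft Office", "Sage Accounts"],
    }
    approved_extras = {"Microsoft Visio", "Microsoft Project"}

    def build_desktop(group, extras=()):
        """Spawn a desktop from the group's golden image plus any approved extras."""
        desktop = list(golden_images[group])
        for app in extras:
            if app in approved_extras:
                desktop.append(app)   # provisioned alongside the base image
            else:
                print("Request for %s needs justification and approval" % app)
        return desktop

    print(build_desktop("general", extras=["Microsoft Visio"]))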
Data storage can be managed in various ways. It can all be central, ensuring maximum information security provided that data streamed to the user's device is encrypted over VPNs or by other means; however, access will only be available when the device is online. Another approach is to store data without persistence, using an abstraction of the device's own storage capability. Here, local storage is used as an encrypted cache that overwrites itself once the data has been committed to the central store when the device next makes a connection.
For the organisation, the benefits of centralised storage and server-based computing are multiple: all data is stored in a data centre where security and backup can be managed to enterprise levels. Those leaving the company take nothing with them in the way of stored information, even though they keep their device. Anyone losing a device or having it stolen can rapidly be back up and running again just by buying a new device and re-accessing the system with the necessary security credentials. Software licences are under control, and maintaining the patching and update levels of the operating systems and applications is easier. The desktop is completely abstracted from the device, so that, provided the user chooses a relatively standard device, they will get a fully functional experience, and IT staff do not have to provide support for the devices themselves, only for the applications that the user has access to.
For the data centre manager, it does involve some serious planning on a new server topology to support the move of
the desktop logic from the device to the server, but with server-based computing having proven itself in many
organisations now, this should not be an overwhelming problem.
Quocirca's research shows that BYOD is still scaring organisations, and that desktop migrations are seen as being
costly and of dubious value to the organisation. With Windows 8 and new tablets and other devices coming through
towards the end of 2012, now is the time to prepare for the future with a new access device strategy.

Is wind energy just blowing in the face of reality?

Google has recently announced that it has signed an agreement to source 48MW of energy for its Oklahoma data centre from a wind farm that will come on-line later in 2012, bringing the company's total contracted renewables to 260MW.
Great: at last, a major energy guzzler taking sustainability seriously and looking to renewable sources to power its data centres.
But is this really the case?



It is pretty difficult to identify where an electron has come from. If a power socket or distribution system is plugged into a local or national grid, the energy provided comes from a pool of all generated power, and no differentiation can be made between energy from renewables, gas, coal or nuclear. The only way to do this is to take a single feed directly from a given generation source.
The stated source for the wind power in this case is the Canadian Hills Wind Project. Building started in May 2012, with a rated end capacity stated as 300MW. However, rated capacity and real output are not the same. For reasons explained below, the real output of a wind turbine is around 30% of its rated capacity, leaving the project with around 100MW of output. The idea was that the site would power around 100,000 homes in the local area; this will now have to be done with just 52MW of realistic output, or a little over half a kilowatt per house. The US EIA states that an average US house requires 11.496MWh per year; 52MW across 100,000 homes gives 4.56MWh per home over a full year. Just where is the rest of the energy for those homes coming from? Is it just a case of Google having shifted the "dirty" power usage from itself to the householder?
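
The arithmetic behind those figures is simple enough to check, as the short sketch below shows; it uses only the numbers quoted above (a 300MW rated site, roughly 30% real output, Google's 48MW contract, 100,000 homes and the EIA average of 11.496MWh per home per year).

    rated_capacity_mw = 300.0        # Canadian Hills rated capacity
    realistic_output_mw = 100.0      # roughly 30% of rated, as noted above
    google_contract_mw = 48.0
    homes = 100000
    eia_avg_mwh_per_home = 11.496    # US EIA average annual household consumption

    left_for_homes_mw = realistic_output_mw - google_contract_mw   # 52MW
    kw_per_home = left_for_homes_mw * 1000 / homes                 # ~0.52kW each

    hours_per_year = 24 * 365
    mwh_per_home_per_year = left_for_homes_mw * hours_per_year / homes

    print(kw_per_home, "kW per home")                       # 0.52
    print(round(mwh_per_home_per_year, 2), "MWh per year")  # 4.56, against a need of 11.496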
Why only 30% of rated output for a wind turbine? The rated level assumes constant output at optimum wind speeds. This does not happen anywhere, even in the windiest places. Imagine when there is a meteorological high over the region: with a high, winds are mild to non-existent, and energy will have to come in from elsewhere on the grid. Even if the wind blows enough to turn the turbines, low wind speeds hit energy efficiency; optimum efficiency is only reached at wind speeds of around 34mph, and the energy gap will have to be filled from elsewhere.
At the other end of the scale, let's assume that there is a gale blowing. With winds above around 50mph, wind turbines have to be parked to prevent physical or electrical damage. While no energy is being generated from the wind turbines, the needed energy has to come from elsewhere. The contracted energy is just being pulled from that pool of all types of power generation; it is not being provided specifically through wind power.
So, is renewable energy all just smoke and mirrors, or is there actually a case for using it?
If the idea is for your organisation just to use wind power, then only go for it as a marketing exercise. Sure, you can
tick the box on the sustainability part of the corporate responsibility statement, and hope that no-one questions it
too deeply. You can close your eyes and ears to reality and fall for the smooth talk of the energy vendor as it says to
you that you are signing up for a pure wind-based contract.
However, a company that really wants to go for renewables needs a blended approach. Google uses hydro power for its data centre at The Dalles in Oregon, has installed solar power to help run its Mountain View facility, and owns two wind farms outright. Although solar power is not continuous either, what Google is showing is a capability to blend its approach: with the use of dams, hydroelectricity can be pretty much continuous, as the energy comes through water and gravity in a predictable manner. It is only in times of severe drought that hydro can run into problems. What Google cannot do is use its solar output from very sunny days in California to power its Oklahoma data centre in periods of low wind.
Google can be pretty choosy about who it signs contracts with and how those contracts are run. For an average organisation, this may not be the case. However, choosing an energy provider that can demonstrate a blended sustainable generating capability, mixing constant sources such as hydro or tidal with inconstant sources such as wind and solar, means that there is a better chance of maximising the energy taken from sustainable sources.
Then, read the small print. Renewable energy bought under contract tends to be sold at a premium. Investment in renewables is still expensive, and a lot of it is underwritten and underpinned by government initiatives. Make sure that you understand what happens to that premium: is it for further investment in more renewables, or is a large chunk of it just for shareholder profits?



Will the supplier guarantee what proportion of the energy supplied is generated from sustainable sources? For example, if the supplier has a total generating capability of 1,000MW, of which 500MW is wind, 100MW is solar and 400MW is hydro/tidal, the actual total is likely to be around 550MW or so of real generated power capability when measured against rated capacity. This is an average capability: it may be capable of providing 600MW at some times and as little as 350MW at others, as the inconstancy of the power sources makes predictability difficult. If the totality of the supplier's contracts comes to 1,000MW, it is short in real terms by around 450MW, and each customer is only getting around 55% of its energy from the supplier's renewable sources. That then leaves the question of where the supplier is bringing the extra power in from: other renewable suppliers (who will have the same problems), or fossil fuel and nuclear sources?
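
A back-of-the-envelope version of that calculation is sketched below. The rated figures are those in the example above, while the capacity factors are assumptions made purely for illustration (wind at around 30% as discussed earlier, solar rather lower, hydro/tidal close to continuous); with those assumptions, roughly 550MW of real capability, a shortfall of around 450MW and around 55% renewable coverage of 1,000MW of contracts drop out.

    # Rated capacity (MW) and assumed capacity factors, for illustration only.
    portfolio = {
        "wind":        (500.0, 0.30),   # ~30% of rated, as discussed above
        "solar":       (100.0, 0.15),   # assumed for illustration
        "hydro_tidal": (400.0, 0.95),   # assumed to be near-continuous
    }

    real_capability_mw = sum(rated * factor for rated, factor in portfolio.values())
    contracted_mw = 1000.0

    shortfall_mw = contracted_mw - real_capability_mw
    renewable_share = real_capability_mw / contracted_mw

    print(round(real_capability_mw), "MW of realistic renewable capability")    # 545
    print(round(shortfall_mw), "MW to be sourced from elsewhere")               # 455
    print(round(renewable_share * 100, 1), "% of contracted energy covered")    # 54.5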
Renewable energy is important and should be part of a data centre's power mix, but do not fall for the snake oil and smoke and mirrors and believe that everything signed up for will come from renewables. Check the contract, and make sure that the premium is reinvested in suitable new projects and that any excess energy is sourced ethically and openly.

DCIM, BIM and the ideal IT system


A while back, Quocirca wrote an article on the emergence of data centre infrastructure management (DCIM) tools
and what to look out for when buying such systems. Since then, DCIM tools have been maturing and are beginning
to break outside of their core functionality to become more far reaching in what they can do.
The previous article said that some DCIM tools provided some systems management functions: this is progressing nicely, and root cause analysis of problems within the IT and ancillary equipment in the data centre is now often covered. However, the biggest change coming through is the crossover (or maybe even collision) between DCIM and another management approach: building information modelling, or BIM.
BIM is used in many cases from the design phase through construction to the running of a building. It covers a large remit, from modelling of the building, project management of the physical build and the creation of bills of materials for the building supplies, through to monitoring and management of the resulting building. There are, however, many cross-overs in the data centre between what BIM does and what DCIM does.
The problem with a datacentre is that it is essentially two different systems. There is the facility: a building that is generally owned and managed by the facilities management group; and there is the IT equipment housed within it, which is owned and managed by the IT department. This was fine when the only needs the IT equipment placed on the facility were enough cooling and enough mains and backup power, but times have changed and such simplicity is no longer enough.
The problem is that the datacentre is now a single unit: the IT equipment has to work alongside the facilities equipment, such as the uninterruptable power supplies (UPSs), the cooling systems, the standby auxiliary generators and the power distribution layout, in a flexible and dynamic manner.
Increasing densities of IT equipment mean that just allowing for mains distribution blocks to be provided under the floor or to a rack may not be good enough, and cooling has to be far more directed than it has been to date. With two teams of people working apart from each other, an optimum datacentre design is unlikely to happen.
However, by bringing BIM and DCIM together (maybe as a facility information and infrastructure modelling and
management (FIIMM) tool?), the best of all worlds can be combined.



From the BIM world comes building modelling: being able to plan in 3D where walls will be, where pillars need to be and so on. Power systems can all be modelled and seen in place to make sure that clearances are correct. From the DCIM world comes IT equipment modelling: being able to design racks and rows of equipment and then accurately calculate exactly what power will be required by each system.
Computational fluid dynamics can be brought into play, using the latest American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) guidelines on thermal envelopes for data centre equipment, and the BIM system can then ensure that computer room air conditioning (CRAC) or other cooling systems are positioned correctly in order to provide the right flows to keep equipment within these thermal limits.
Intelligence can be built in through movement-sensitive lighting, only using energy when there is movement within
specific parts of the data centre; security systems can be converged so that the facility and the IT equipment work
together to ensure that only the right people have physical and technical access to the right equipment.
Asset management can be optimised across the whole of the facility, tying in planned maintenance of both building and IT equipment so that only one downtime window is required, and so that planned equipment replacement is carried out in such a manner that new equipment's needs are continually met by the facility's capabilities. "What if?" scenarios can show where changes to the facility will be required, and the impact of new technological platforms, such as cloud, can be tested, for example to see whether a facility can effectively shrink its capabilities as more workloads are moved to outside cloud platforms.
Indeed, the convergence of DCIM and BIM can go much further than this. The need to move towards intelligent buildings, or even campuses and organisations, means that IT has to be brought into the realm of facilities across all of the buildings that facilities management looks after. Through using IT effectively as an underpinning to the intelligent
organisation, energy usage can be better optimised, with excess heat from one part of a building being moved to
other areas where heat may be required, or by using heat pumps to offset the amount of energy required to heat
water. Ventilation can be provided so that during summer, forced cooling is minimised, with automated vents making
the most of external weather conditions to ensure that constant temperatures are maintained throughout the overall
building.
DCIM remains an emerging capability, with many organisations not yet having appreciated how important it can be.
However, if the main players, such as nlyte, Emerson and Romonet can get their messaging correct and partner with
some of the software and service vendors in the BIM space, such as Autodesk, Bentley or FM:Systems, then not only
will the datacentre be a much better place, but the organisation will benefit from a far more efficient and effective
estate of buildings.

End-of-lifing a data centre


Although the pace of change of technology has been, and continues to be, rapid, the main way that organisations have looked at their IT equipment and data centres is to re-use as much as possible. This often involves cascading IT equipment through different types of usage scenario, and retro-fitting the facility with, for example, new uninterruptable power supplies (UPSs) and computer room air conditioning (CRAC) units as needed.
However, some organisations are now taking a complete change of approach. Virtualisation and cloud computing are
pushing many beyond an evolutionary, incremental approach to a more revolutionary, replacement one. Many
virtualisation and cloud projects have led to small islands of IT equipment positioned in an overly large existing facility,
with high overhead costs in cooling and inefficiencies in energy distribution.



The benefits of moving to a new computing architecture are, however, strong: a newer, more standardised platform can be better optimised to meet the organisation's needs, and more modern equipment will be more energy efficient and so provide immediate savings to the organisation.
Doubtless, any organisation with an inkling of capability will ensure that a project plan covering areas such as data and application migration, along with ongoing maintenance and support, is in place well before attempting to move from one platform to another.
But what else, beyond a technical project plan, can be done to minimise the overall cost of end-of-lifing an existing data centre?

Can anything be effectively cascaded? Although attempting to re-use as much of the existing equipment as possible could limit the benefits of moving over to a new architecture, there may well be a proportion of equipment that is modern enough to fit in with the new plans, or that can be used for specific tasks, such as file and print serving or archival storage.
What is the residual value of the IT and facility equipment? Too many times, Quocirca sees organisations paying for old equipment to be removed, or even just dumping it. The waste electrical and electronic equipment (WEEE) laws mean that it is illegal to simply dump IT equipment, and insecure disposal leaves organisations open to data protection issues, as data containing personally identifiable information could be recovered.
Companies such as Bell Microsystems offer services where any residual value within the equipment is maximised through refurbishment or use as spares. This is always done with security in mind: disk drives can be securely reformatted, or securely disposed of through destruction, from punching the spindle through to macerating the whole drive to leave nothing larger than 25mm or 2.5mm fragments, depending on needs. Networking equipment can have log files securely cleaned to remove any identification of where the device has come from and any record of username/password pairs. Part of Bell Microsystems' service is also to recover as much value from such scrap as possible; there is a lot of gold, copper and other rare and valuable metals in such material.
In the majority of cases, more can be recovered in overall residual value than the cost of dealing with the need for secure disposal.
Even at the data centre facility level, there may be good residual value in old auxiliary generators, UPSs, CRAC units and so on.

The facility itself can be a problem or an opportunity. If it is a stand-alone data centre that is not part of an organisation's larger building or campus, then there is the possibility of selling it on as a data centre shell. With the growth of co-location, hosting and cloud services, a large enough facility with the right characteristics (e.g. a strong floor, all services on hand, a basically secure building) can be tempting for an organisation that needs to bring new infrastructure to market fast. However, this is not always feasible, so what else can be done with such a building?
The main problem tends to be that a purpose-built data centre is neither fish nor fowl: it is similar to a warehouse in that its main aspect is a large open volume, but it also tends to be too over-designed and over-provisioned for a sale as warehousing to be a cost-effective means of recouping any payback on the facility. Nor is it conducive to rapid conversion to office space, as mezzanine floors, new heating and other services would all have to be put in place to make it usable.
For an organisation where the facility is part of a larger building, or part of a campus where it would be difficult to sell off the building, it comes down to identifying the best possible re-use for it, even if there is a cost associated with retro-fitting it for an alternative use.



If it can be sold off as a separate unit, then it may just be a case of seeing what per-square-foot value can be obtained for it, even if this does not provide a good return on investment. Even at the most basic level, bear in mind that any building has something of lasting value underpinning it: the land it is built on. Should a buyer be able to gain change-of-use planning permission, what could appear to be a few pounds per square foot of empty building space could be turned into high-value housing or retail space. Do not look at the facility itself as being the only thing that has value.
For organisations that have reached the end of the useful life of their existing facility, ensuring that the costs of any move are minimised has to be a priority. Making sure that as much value as possible is gained through the disposal of existing equipment and buildings can help in this regard: treat the IT equipment and the data centre facility as true assets, and cash in on them wherever possible.


REPORT NOTE:
This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce.

The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.

About Quocirca
Quocirca is a primary research and analysis company specialising in the
business impact of information technology and communications (ITC).
With world-wide, native language reach, Quocirca provides in-depth
insights into the views of buyers and influencers in large, mid-sized and
small organisations. Its analyst team is made up of real-world practitioners
with first-hand experience of ITC delivery who continuously research and
track the industry and its real usage in the markets.
Through researching perceptions, Quocirca uncovers the real hurdles to technology adoption: the personal and political aspects of an organisation's environment and the pressures of the need for demonstrable business value in any implementation. This capability to
uncover and report back on the end-user perceptions in the market
enables Quocirca to provide advice on the realities of technology adoption,
not the promises.

Quocirca research is always pragmatic, business orientated and conducted in the context of the bigger picture. ITC has the ability to transform businesses and the processes that drive them, but often fails to do so. Quocirca's mission is to help organisations improve their success rate in process enablement
through better levels of understanding and the adoption of the correct technologies at the correct time.
Quocirca has a pro-active primary research programme, regularly surveying users, purchasers and resellers of ITC
products and services on emerging, evolving and maturing technologies. Over time, Quocirca has built a picture of
long term investment trends, providing invaluable information for the whole of the ITC community.
Quocirca works with global and local providers of ITC products and services to help them deliver on the promise that
ITC holds for business. Quocirca's clients include Oracle, IBM, CA, O2, T-Mobile, HP, Xerox, Ricoh and Symantec, along with other large and medium-sized vendors, service providers and more specialist firms.
Details of Quocirca's work and the services it offers can be found at http://www.quocirca.com
Disclaimer:
This report has been written independently by Quocirca Ltd. During the preparation of this report, Quocirca may have
used a number of sources for the information and views provided. Although Quocirca has attempted wherever
possible to validate the information received from each vendor, Quocirca cannot be held responsible for any errors
in information received in this manner.
Although Quocirca has taken what steps it can to ensure that the information provided in this report is true and
reflects real market conditions, Quocirca cannot take any responsibility for the ultimate reliability of the details
presented. Therefore, Quocirca expressly disclaims all warranties and claims as to the validity of the data presented
here, including any and all consequential losses incurred by any organisation or individual taking any action based on
such data and advice.
All brand and product names are recognised and acknowledged as trademarks or service marks of their respective
holders.
