2012 was a year when organisations had to face up to the fact that the basis
of IT was beginning to change. Energy costs were rapidly escalating, new
technical architectures, such as cloud computing, were coming to the fore,
and users were off doing their own thing. The potential impact on the data
centre was massive, and the following report pulls together articles written
by Quocirca for SearchVirtualDataCentre (now ComputerWeekly)
throughout 2012.
Clive Longbottom
Quocirca Ltd
Tel: +44 118 9483360
Email: Clive.Longbottom@Quocirca.com
BYOD is happening
Bring your own device (BYOD) is here to stay, and IT has to ensure that it is controlled rather
than hidden in shadow IT. The data centre can play its role here: virtualised desktops and
centralised storage give greater control over an organisation's intellectual property.
For too long, the data centre has been regarded by organisations as a cost centre. It has to be
seen as a place where innovation happens and where value for the business is created. This
requires a change of viewpoint driven by IT: it has to be less geeky, talking in business terms
and avoiding discussion of the underlying technology. Otherwise, it runs the risk of being outsourced.
Flexibility is key
The data centre has to be able to grow and shrink to reflect what is happening in the business.
Embracing new architectures and going for more modular approaches to data centre builds can
help achieve this.
With energy prices fluctuating but trending upward, ensuring that the data centre is energy
optimised is important. Low-cost cooling, virtualisation and application consolidation can all help
here.
Modelling, monitoring and management will be key
A data centre has to be as hands-off as possible, with as much maintenance as possible
automated. The use of data centre infrastructure management (DCIM) tools, combined
with building information management (BIM) tools, can make sure that a data centre is well
managed and that it can be better planned through the use of "what if?" scenario capabilities.
Cloud computing brings much promise and many problems
Cloud computing is definitely going to change how organisations use IT and the impact on the
data centre will be massive. As well as ensuring that the data centre is physically fit for cloud,
organisations must ensure that security of information across hybrid clouds (private and public)
is maintained.
Outsourcing cannot include the strategy
Public cloud and software as a service (SaaS) are very appealing and can be a solid part of an IT
platform strategy going forward. However, they are not an opportunity to abdicate the overall
business IT strategy; this must still be driven by the business and the IT department. Use
external providers because they can do something that would be too difficult to do internally,
not for pure cost reasons.
Data centres are expensive items to build, but will eventually become unfit for purpose. The
costs of continually retro-fitting equipment and of trying to force new capabilities into a facility
that was built to accommodate technology from a different era will finally become too expensive.
Therefore, the options available when decommissioning a data centre must be included in the
original design of the data centre and must be reviewed on a constant basis to ensure that
costs are minimised and residual values maximised.
Conclusions
The data centre has to be more of a focus for organisations now than it has been for some time. The impact of different forces on
the facility (energy, new architectures, BYOD and information security, to name but a few) means that IT has to make sure that the data
centre is fit for purpose, not just now but for the foreseeable future.
© Quocirca 2013
Figure 1: A simple evaporative cooling setup for data centre cooling.
As shown in the figure above, the CRECS pumps water from a reservoir at the bottom of the unit, which soaks filters
on the sides. An air fan pulls in warm air from the surrounding environment. As the air passes through the filters, it
is stripped of any particulates, and is also cooled through the evaporation of the water. The cooled air is then ducted
through to the data centre.
When operating data centres at temperatures of up to 26°C, CRECS can help data centre managers to save on
cooling costs. Capital, energy and maintenance costs will all be lower, so what's not to like?
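The cooling effect described above can be approximated with the standard saturation-effectiveness relation for direct evaporative coolers, in which supply air is driven from the dry-bulb temperature towards the wet-bulb temperature. A minimal sketch, assuming an illustrative effectiveness figure of 0.85 (typical for wetted-media units, but not a number taken from the report):

```python
def evaporative_outlet_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    """Estimate supply air temperature from a direct evaporative cooler.

    Uses the saturation-effectiveness relation:
        T_out = T_db - effectiveness * (T_db - T_wb)
    where effectiveness is typically 0.8-0.9 for wetted-media coolers
    (the 0.85 default here is an illustrative assumption).
    """
    if wet_bulb_c > dry_bulb_c:
        raise ValueError("wet-bulb temperature cannot exceed dry-bulb")
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# Example: a warm day at 28C dry bulb, 19C wet bulb
supply = evaporative_outlet_temp(28.0, 19.0)
print(f"Estimated supply air: {supply:.1f}C")  # 28 - 0.85*9 = 20.3C (1 d.p.)
```

On a drier day (larger dry-bulb/wet-bulb gap) the same unit delivers colder air, which is why evaporative cooling suits some climates far better than others.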
When not to use evaporative cooling systems
Evaporative cooling is a good solution for many environments, but there are situations where it may not be the best
choice.
Computational fluid dynamics (CFD): Today's datacentres are prone to overheating, and it is important to
ensure that cooling is applied effectively. CFD enables air flows to be analysed and shows where hotspots
are likely to occur. The CFD analysis should also be able to provide advice on how to change the air flows
to remove such hotspots.
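A full CFD solver is far beyond a short example, but the hotspot-flagging output that such an analysis feeds can be sketched simply. The grid, threshold and function name below are all illustrative assumptions, not part of any real CFD or DCIM product:

```python
# Hypothetical sketch: flag likely hotspots on a coarse temperature grid,
# standing in for the output of a full CFD analysis (not a CFD solver itself).

def find_hotspots(temps, limit_c=27.0):
    """Return (row, col) cells whose temperature exceeds limit_c.

    temps is a list of rows of floats, e.g. sampled or simulated
    temperatures across the data centre floor plan.
    """
    return [(r, c)
            for r, row in enumerate(temps)
            for c, t in enumerate(row)
            if t > limit_c]

floor = [
    [22.0, 23.5, 24.0],
    [23.0, 28.5, 25.0],   # a recirculation hotspot behind a dense rack
    [22.5, 24.0, 23.0],
]
print(find_hotspots(floor))  # [(1, 1)]
```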
2D and 3D data centre schematics: Preferably, these schematics should be active. Here, the schematics
should carry live data on the equipment under control, and should also be filterable. For
example, it should be possible to look only at the network wiring, or only at the server topology, or only at
the ducted cooling systems, or to overlay a mix of different systems as required. The
schematics should also be used for, for example, showing the CFD analysis results; this will help in visualising where
hotspots will occur, and whether a possible solution (such as re-routing any cooling) can be easily implemented.
Structured cabling management: Many datacentres, particularly those with raised floors, rapidly lose any
structure to their cabling, as long cables are used where shorter ones would do, and any extra length is
either looped or just pushed out of the way. Such lack of structure is not just messy; it also impedes air
flows, and mixing power and data cables can cause interference between them, resulting in data
transmission issues. Structured cabling, where data and power are carried in different sectors and the
cables are bundled in an engineered manner, can help in avoiding such issues. DCIM software should be
able to help in showing where cables should be placed and what actual length of cable is required before
an engineer is sent in.
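The cable-length estimate mentioned above can be sketched as a simple rectilinear routing calculation. Everything here (the coordinates, drop length and slack factor) is an illustrative assumption, not an actual DCIM algorithm:

```python
# Hypothetical sketch of how DCIM tooling might pre-compute a structured
# cable run length before an engineer is dispatched. Assumes rectilinear
# (Manhattan) routing along cable trays plus a vertical drop at each rack.

def cable_run_length(rack_a, rack_b, drop_m=2.0, slack_factor=1.1):
    """Estimate cable length in metres between two racks.

    rack_a / rack_b are (x, y) floor coordinates in metres; drop_m is the
    vertical run into each rack; slack_factor adds engineered service slack.
    """
    (xa, ya), (xb, yb) = rack_a, rack_b
    horizontal = abs(xa - xb) + abs(ya - yb)   # routed along trays, not diagonally
    return (horizontal + 2 * drop_m) * slack_factor

print(round(cable_run_length((0.0, 0.0), (6.0, 3.0)), 1))  # (9 + 4) * 1.1 = 14.3
```

Pre-computing the run this way is what lets engineered-length cables be ordered in advance, rather than fitting an over-long cable and looping the excess under the floor.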
Environmental sensor management: Environmental management of datacentres is becoming more
important. As higher temperatures are used and free or low-cost cooling is implemented, being able to
sense when temperatures are exceeding allowable limits, and to take action accordingly (such as increasing
cooling if the temperature rise is due to increased workload, or identifying an underlying issue where it may
be due to failing equipment), can provide greater systems availability. Such environmental monitoring and
management should include not only temperature, but also humidity, smoke and water, and may also
include infra-red (to identify hotspots) and other sensors.
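Threshold-based sensing of this kind can be sketched as follows. The sensor names and limits are illustrative (the 18-27°C band loosely follows the ASHRAE recommended envelope) and not taken from any particular DCIM product:

```python
# Hypothetical sketch of DCIM-style environmental alerting: each reading is
# checked against per-sensor limits, and breaches are returned for the event
# engine to act on. Sensor names and limits are illustrative only.

LIMITS = {
    "temperature_c": (18.0, 27.0),   # roughly the ASHRAE recommended envelope
    "humidity_pct": (20.0, 80.0),
}

def check_readings(readings):
    """Return a list of (sensor, value, reason) for out-of-range readings."""
    alerts = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if value < low:
            alerts.append((sensor, value, "below minimum"))
        elif value > high:
            alerts.append((sensor, value, "above maximum"))
    return alerts

print(check_readings({"temperature_c": 29.5, "humidity_pct": 45.0}))
# [('temperature_c', 29.5, 'above maximum')]
```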
Event management: The heart of any DCIM system has to be its capability to initiate events based on what it
identifies. Such an event engine needs to be able to integrate with an organisation's systems management
software, with its trouble ticketing systems and with its security systems, such that all actions taking place
within the datacentre are appropriately logged.
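A minimal sketch of such an event engine, logging each event and routing it to a downstream system, might look like the following. The event kinds and route names are hypothetical, not drawn from any real DCIM integration:

```python
# Hypothetical sketch of a minimal DCIM event engine: every event is logged,
# then routed to the appropriate downstream system (systems management,
# trouble ticketing, security). All names here are illustrative only.

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dcim")

ROUTES = {
    "hardware_fault": "trouble_ticketing",
    "threshold_breach": "systems_management",
    "unauthorised_access": "security",
}

def raise_event(kind, detail):
    """Log the event, then return the system it should be routed to."""
    target = ROUTES.get(kind, "systems_management")   # default route
    log.info("event=%s detail=%s routed_to=%s", kind, detail, target)
    return target

print(raise_event("hardware_fault", "PSU failure, rack 14"))  # trouble_ticketing
```

Logging before routing is the point: even if a downstream system is unavailable, the datacentre action is still recorded.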
"What if?" scenario capability: Increasingly, DCIM systems are gaining the capability for those responsible
for the datacentre to try out ideas and see what the impact would be. For example, increasing
the energy requirements on a specific rack may require changes to the power distribution and cooling
capabilities just for that one rack, whereas placing the new equipment into an existing, part-filled rack
elsewhere may mean that the equipment can be introduced without any changes to the facility. DCIM
systems should not just be able to show the direct outcome of a proposed change, but should also be
capable of advising on alternative approaches.
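The rack-placement "what if?" described above can be sketched as a simple power-headroom check; the rack names and power figures are illustrative only:

```python
# Hypothetical what-if sketch: can a new piece of equipment go into an
# existing part-filled rack without exceeding its power feed, or does it
# need a dedicated rack (and facility changes)? Figures are illustrative.

def placement_options(racks, new_load_kw):
    """Return names of racks with enough spare power for new_load_kw.

    racks maps rack name -> (feed_capacity_kw, current_draw_kw).
    """
    return [name for name, (cap, draw) in racks.items()
            if cap - draw >= new_load_kw]

racks = {
    "A01": (8.0, 7.5),    # nearly full
    "B04": (8.0, 4.0),    # part-filled, plenty of headroom
    "C02": (16.0, 15.2),
}
print(placement_options(racks, 2.5))  # ['B04']
```

A real DCIM tool would weigh cooling, weight and network capacity alongside power, but the principle is the same: test the change against the model before touching the facility.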
The above describes some of the attributes that Quocirca believes any DCIM system should have. Many will have other
capabilities, such as helping to measure and manage carbon emissions, helping to migrate existing systems through to
new facilities or platforms through automated systems design and provisioning, and providing cost comparisons of the
multiple systems required to meet specified technical workloads. It will be down to each organisation to make up its
own mind on what is most important to it in these areas.
However, DCIM is now something that Quocirca believes organisations must look at to ensure that their
datacentres run at an optimal level. Failure to use DCIM will result in lower systems availability and a lack of flexibility
in supporting the organisation.
PUE = Total energy / IT energy

Therefore, energy used in powering cooling systems, uninterruptible power supplies (UPSs), lighting and so on will
make the PUE a higher number. The theoretical perfection is a PUE of 1, where all power goes into the IT
equipment, with none going into the support environment.
The majority of existing data centres are running at PUEs of around 2.4, with large, multi-tenanted systems running
at around 1.8 or less.
Many different approaches have been brought to the fore to enhance PUE, such as using free air cooling, variable rate
computer room air conditioning (CRAC) units, lights-out operation, along with modular computing using hot and cold
aisles or even containerised systems to better control how energy is used.
However, PUE remains a pretty crude measurement. Let's take a look at a couple of examples where PUE doesn't
work.
A data centre manager is set the task of improving the utilisation of an existing IT estate of servers. It is apparent that
virtualisation is an easy way to do this by bringing down the number of servers in use from, say, 1,000 to 500. This is
great: the business saves money on the hardware itself, on the energy being used to power the servers, on licensing,
on maintenance and so on. This has to be great, surely?
ePUE = Total energy / (utilisation rate × IT energy)
If, in the first case, existing utilisation rates were running at 10%, the ePUE goes from 2 to 20. However, if utilisation
rates are driven up through the use of virtualisation to 50%, then the new ePUE after virtualisation moves to 6 (total
energy = 3, IT energy = 1 and utilisation rate = 0.5): a very good improvement, rather than a rise in PUE.
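The PUE and ePUE arithmetic above can be checked with a short sketch using the figures from the text (the function names are ours, not a standard API):

```python
# Sketch of the PUE and ePUE calculations discussed in the text,
# using its figures: PUE = 2, utilisation moving from 10% to 50%.

def pue(total_energy, it_energy):
    """Classic power usage effectiveness: total facility energy / IT energy."""
    return total_energy / it_energy

def epue(total_energy, it_energy, utilisation):
    """Effective PUE: PUE divided by the IT utilisation rate (0 to 1)."""
    return total_energy / (utilisation * it_energy)

print(pue(2.0, 1.0))            # 2.0  - looks respectable on its own
print(epue(2.0, 1.0, 0.10))     # ~20  - low utilisation exposes the waste
print(epue(3.0, 1.0, 0.50))     # ~6   - post-virtualisation figures from the text
```

Dividing by utilisation is what stops a half-empty, poorly used estate from looking as efficient as a well-loaded one.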
In the second case, it would still be possible for the organisation to apply false loads against the servers and load
storage up with useless files, but this would result in massively increased heat profiles, which could then lead to a
need for better cooling systems, which would push the ePUE back up again.
Anything that helps an organisation to position itself against others in its space when it comes to energy
utilisation and data centre effectiveness has to be welcomed. However, a flawed approach such as PUE can lead
organisations and their customers to the wrong conclusion.
The use of a more rounded ePUE approach makes comparing data centres and energy usage far more of a level playing
field and puts the focus where it needs to be: on the efficiency and utilisation rates of the IT assets in the data centre.
Can anything be effectively cascaded? Although attempting to re-use as much of the existing equipment as
possible could limit the opportunity to move over to a new architecture, there may well be a
proportion of equipment that is modern enough to fit in with the new plans, or that can be used for specific tasks,
such as file and print serving or archival storage.
What is the residual value of the IT and facility equipment? Too many times, Quocirca sees organisations
paying for old equipment to be removed, or even just dumping it. The waste electrical and electronic
equipment (WEEE) laws mean that it is illegal to just dump any IT equipment, and insecure disposal leaves
organisations open to data protection issues, as recovered data could contain personally identifiable
information.
Companies such as Bell Microsystems offer services where any residual value within the equipment is
maximised through refurbishment or use as spares. This is always done with security in mind: disk drives
can be securely reformatted, or securely disposed of through destruction, from just punching the spindle
through to macerating the whole drive to leave nothing larger than 25mm or 2.5mm, depending on needs.
Networking equipment can have log files securely cleaned to remove any identification of where the device
has come from and any log of username/password pairs. Part of Bell Microsystems' service is also to
recover as much value through such scrap as possible; there is a lot of gold, copper and other rare and
valuable metals in such scrap material.
In the majority of cases, more can be recovered in overall residual value than the cost of dealing with the
need for secure disposal.
Even at the data centre facility level, there may be good residual values in old auxiliary generators, UPSs,
CRAC units and so on.
The facility itself can be a problem or an opportunity. If it is a stand-alone data centre that is not part of
an organisation's larger building or campus, then there is the possibility of selling it on as a data centre shell.
With the growth of co-location, hosting and cloud services, a large enough facility with the right
characteristics (e.g. a strong floor, all services on hand, a basically secure building) can be tempting for an
organisation that needs to bring new infrastructure to market fast. However, this is not always feasible, so
what else can be done with such a building?
The main problem tends to be that a purpose-built data centre is neither fish nor fowl: it is similar to a
warehouse in that it has a large open volume as its main aspect, but it also tends to be too over-designed
and over-provisioned for a sale as warehousing to be a cost-effective means of optimising any payback for
the facility. Nor is it conducive to rapid conversion to office space, as mezzanine floors, new heating and
other services will all have to be put in place to make it usable.
For an organisation where the facility is part of a larger building, or part of a campus where it would be
difficult to sell off the building, it comes down to identifying the best possible re-use for it, even if there is a
cost associated with retro-fitting for alternative use.
REPORT NOTE:
This report has been written independently by Quocirca Ltd to provide an overview of the issues facing organisations seeking to maximise the effectiveness of today's dynamic workforce.
The report draws on Quocirca's extensive knowledge of the technology and business arenas, and provides advice on the approach that organisations should take to create a more effective and efficient environment for future growth.
About Quocirca
Quocirca is a primary research and analysis company specialising in the
business impact of information technology and communications (ITC).
With world-wide, native language reach, Quocirca provides in-depth
insights into the views of buyers and influencers in large, mid-sized and
small organisations. Its analyst team is made up of real-world practitioners
with first-hand experience of ITC delivery who continuously research and
track the industry and its real usage in the markets.
Through researching perceptions, Quocirca uncovers the real hurdles to
technology adoption: the personal and political aspects of an
organisation's environment, and the pressures of the need for
demonstrable business value in any implementation. This capability to
uncover and report back on end-user perceptions in the market
enables Quocirca to provide advice on the realities of technology adoption,
not the promises.