
Compact_

2012 0
year 39 www.compact.nl

International Edition

IT Governance, Performance & Compliance


Order the hard-cover Compact international edition now on www.compact.nl/special/bestelformulier.htm
248 full-colour pages, ISBN 978-90-77487-64-8

The Compact journal has been appearing for almost 40 years. In the Dutch-speaking territories, Compact is the leading periodical in the fields of IT auditing and IT advisory services. To make articles published in this journal available to a broader public, a number of the most important articles in the areas of IT governance, performance and compliance have been translated into English and published in this book. The articles were written by authors who are leading in their respective fields and these authors have revised and updated the articles in question to accommodate the most recent developments. The articles in this book from 2008 address the areas of IT Strategy & Governance, ERP Advisory, IT Attestation, IT Project Advisory, IT Security Services, IRM in the External Audit, and Regulatory & Compliance Services.

Contents

Strategic choices for data centers

Harry Boersen, Mark Butterhoff, Stefan Peekel and Ruben de Wolf


Recent developments regarding data centers and the underlying choices that our clients make.

Adaptive IT service providers


Albert Plugge
An adaptive strategy will enable IT service providers to perform better for their customers.

Access to the cloud


Edwin Sturrus, Jules Steevens and Willem Guensberg


Trust is often a major obstacle to the adoption of cloud services.

Social engineering: the art of deception


Matthieu Paques
Methods, purpose, and (often surprising) results of hacker tests.


A closer look at business transformations


Guido Dieperink and Jeroen Tegelaar
The concept of transformation, its characteristics, and guidelines for a successful finish.


Toward a successful Target Operating Model


Gerard Wijers, Rudolf Liefers and Oscar Halfhide


To govern outsourcing effectively, attention must be given to designing the right management model from the very first moment.

IT: a meaningful factor in the evolving health care sector


Stan Aldenhoven and Jan de Boer
Visit www.compact.nl for this article about the role that IT can play to tackle challenges like rising demand for better and cheaper care, increasing personnel shortages, more privatization and merit pay.
Design, editing & typesetting
LINE UP boek en media bv, Groningen, the Netherlands


Year 39, number 0. Compact is published by KPMG IT Advisory and Uitgeverij kleine Uil. This magazine is published 4 times a year. The views expressed in this magazine are not the official views held by KPMG IT Advisory.


Compact magazine is produced with the utmost care. It is possible, however, that the information contained within is not completely correct due to the passage of time and/or other causes. Neither KPMG, KPMG IT Advisory, nor the editors, nor Uitgeverij kleine Uil, accept any form of liability whatsoever for any direct or indirect consequences of the use of the information provided.

Editors
J.A.M. Donkers (editor-in-chief), B. Beugelaar, M.A. Francken, J.A.M. Hermans, D. Hofland, M.A.P. op het Veld, L.H. Westenberg

Reproduction of articles
Reproduction and circulation of articles and other textual items is allowed only with the publisher's written consent.

ISSN 0920-1645

Editorial secretariat
Marloes Janssen Jacqueline Hartman Kai Hang Ho compact@kpmg.nl

Subscriptions

€ 125,00 excl. VAT (€ 132,50 incl. VAT) for an annual subscription. Single issues: € 31,25 excl. VAT (€ 33,13 incl. VAT). Student subscriptions: € 62,50 excl. VAT (€ 66,25 incl. VAT) for an annual subscription. Subscriptions can be cancelled by letter not later than one month before the start of the new subscription year. If cancellations are too late, the subscription is automatically extended by one year. Note: Compact is published in Dutch. This is an international issue.

From the editor-in-chief


Last year's enthusiastic response worldwide to our international Compact special was the incentive for us to publish an international edition again this year. To bring everyone up to date, first some background information on Compact. KPMG IT Advisory has been publishing Compact for 39 years now. What started as an internal magazine in the early seventies is now a leading magazine in the Dutch marketplace. As we run a mature IT advisory practice, we wish to share our insights with our worldwide clients, our current and future workforce, and the community. All articles are written personally by KPMG staff and our clients. We continue to be grateful that our staff and our clients are willing to share their expertise in this way.

We observe that the IT domain within organizations is still undergoing major developments. IT is not only regarded as an expense that is consistently under scrutiny; it is also increasingly expected to contribute to enhancing efficiency and optimizing operational processes. This is clearly evident during transformations, but we also encounter it in the annual discussion of the CIO agenda. It is obvious that, with reference to IT, the predominant credo is that success depends more than ever on the capability to adapt to changes in one's environment. We see CFOs, COOs and CIOs struggling with the appropriate positioning of IT; this being the case, our theme of 'What's shaping the CIO agenda?', which has been featured in several issues of Compact, is more topical than ever.

I am very proud to present you with this international edition of Compact, which covers a selection of articles published in the Dutch Compact editions during the last year. In this form it is intended to give you an update on recent developments in some of the fields that we are working on with our clients. Of course, this edition contains only a selection of such developments; it is by no means intended to cover all of our activities. I would like to thank all the authors as well as our editors and editorial secretariat for their contribution to this international edition of Compact. I trust that you will all greatly enjoy reading this publication, and we look forward to your feedback.

Administration of subscriptions
Compact Aweg 4/1 9718 cs Groningen the Netherlands Fax: +31 50 318 20 26 E-mail: info@compact.nl www.compact.nl All (temporary) changes of address must be notified at least 8 weeks before the date of issue.

Hans Donkers
KPMG partner, editor-in-chief of Compact
Photography
Photos in this issue are placed with permission.
Photographers:
www.sxc.hu philipn (cover, p. 1, 2)
www.sxc.hu yenhoon (p. 3)
www.istockphoto Ints Vikmanis (p. 14)
www.sxc.hu Nico1 (p. 22)
www.sxc.hu zahal (p. 30)
www.sxc.hu SailorJohn (p. 37)
www.sxc.hu lizerixt (p. 43)
Image bank KPMG (cover III)


Strategic choices for data centers


Harry Boersen, Mark Butterhoff, Stefan Peekel and Ruben de Wolf

The emergence of cloud computing and the technological and market developments that underlie this trend have prompted organizations to reevaluate their data center strategy. There is no magic formula that clearly points to modernization, redevelopment or outsourcing of data centers. The motivations for these choices range from operational problems in legacy data centers to the promise of cloud computing lowering costs and providing greater flexibility and cost elasticity. This article discusses the technical infrastructure in data centers and recent technological and market developments that have a significant impact on the strategic choices that our clients make about the future of their data centers. The central theme is 'do more with less'. Nonetheless, the consolidation and migration of data centers come with significant costs and risks.

H.J.M. Boersen

is a consultant at KPMG IT Advisory. boersen.harry@kpmg.nl

M.J. Butterhoff

is a senior manager at KPMG IT Advisory. butterhoff.mark@kpmg.nl

Introduction
Data centers are the ganglia of the nervous system of our economy: almost all automated data processing systems are housed in data centers. Government and large enterprises alike are particularly dependent on these data-processing factories. A data center comprises not only the building with the technical installations inside, but also the IT equipment within the building that is used for processing, storing and transporting data. Data centers have a useful life of ten to twenty years, while IT equipment must be replaced about every five years. The investment for a midsize data center is at least one hundred million euros.

S. Peekel

is a manager at KPMG IT Advisory. peekel.stefan@kpmg.nl

R. de Wolf

is a partner at KPMG IT Advisory. dewolf.ruben@kpmg.nl


1 Midsize data center means a data center with a floor area of 5,000 square meters or more that is air conditioned. Large data centers, usually for IT service providers, may have tens of thousands of square meters of floor space that is air conditioned.

In contrast to the long lifetime of a data center, technological developments and business objectives evolve at an extremely high tempo. A data center strategy must therefore focus on future requirements and on the organization's capacity to change, so that it can adapt to these new technologies. This article discusses recent technological and market developments that have a significant impact on the strategic choices that our clients make about the future of their data centers. We will also discuss the challenges encountered on the path to the consolidation and migration of existing data centers.

The sequence of the layers indicates that each layer is necessary for the layer above and that, ideally, technology choices can be made for each layer independently of the other layers. The free market and open standards mean that several technological solutions offering the same functionality are available on the market for each infrastructure layer. Consider, for example, the industry standards that support specific design formats for IT equipment and equipment racks, standard data transport protocols such as Ethernet and TCP/IP on different platforms, storage protocols like CIFS, NFS and iSCSI, and middleware solutions, databases and applications on the various vendor platforms.

A data center includes one or more buildings with technical installations for power and cooling of a framework of network, storage and server equipment. These devices run hundreds to thousands of software applications, such as operating systems, databases, and customized or standard software applications. The data center is connected via fast (fiber) networks to other data centers, office locations or production facilities. With decentralized IT environments, the IT equipment intended for end users or production sites must be close at hand. Given the small size and the decentralized nature of these spaces, we do not refer to them as data centers but as Main and Satellite Equipment Rooms (MERs, SERs).

The technical installations and IT infrastructure in data centers are primarily dependent on the reliable supply of electricity, and further on the provision of water for cooling and fuel for emergency power supplies.

What is going on in the data center?


More than one quarter of the annual IT spending of large organizations is devoted to data centers. These costs break down into the data center building and technical installations for power supply and cooling (together eight percent) and server and storage devices (seventeen percent) ([Kapl08]). The economic crisis has put increasing pressure on IT budgets and investments, so the data center has risen higher on the CIO agenda ([Fria08]).

Figure 1 illustrates a greatly simplified layered model of a technical IT infrastructure. A distinction is made between the IT infrastructure that is physically concentrated in a data center (left), the decentralized IT infrastructure in commercial buildings, such as workplace automation and process automation in industrial environments (right), and the network connections between the data center and the distributed IT environments (middle).
[Figure 1 sketches the IT infrastructure in the data center as a layered stack: applications (information systems, standard packages and customization); monitoring, management and development tools; middleware and databases (directories, messaging systems, service bus, P2P interfaces); operating systems and virtualization (Windows, UNIX, midrange, mainframes); server and storage hardware (blade centers, SANs, hardware virtualization); the data center network (core and storage network, redundant network segmentation); electrical, cooling and air-conditioning technology; the data center building and physical security; and the public infrastructures for electricity, water and fuel. Connection services (WAN services, campus networks and wireless user networks) link the data center to the decentralized IT infrastructure (office automation, local facilities, mail, file and print servers, software distribution).]

Figure 1. Simplified model of an IT infrastructure.
Technological developments
This section discusses some recent technological developments that have a significant impact on the strategic choices that our clients make about the future of their data centers.


Virtualization


The virtualization of server hardware and operating systems has a huge impact on how data centers are designed and managed. Using virtualization, multiple physical servers can be consolidated onto one powerful physical server that runs multiple operating systems, or instances of the same operating system, as parallel logical servers. The motivation for virtualization comes from research showing that, averaged over time, the utilization of servers is only about twenty percent, and of web servers about 7.4 percent ([Barr07], [Meis09]). The crux of virtualization is to greatly increase the utilization of IT equipment, and of servers in particular.


[Figure 2 contrasts two stand-alone physical servers, each running one application on its own operating system and hardware, with a single physical server on which virtualization software hosts both applications on virtual hardware.]

Figure 2. Virtualization makes it possible to consolidate logical servers on one physical platform.

Figure 2 illustrates how two physical servers can be consolidated into one physical server using virtualization techniques. Virtualization greatly reduces the required number of physical servers: depending on the nature of the applications, up to twenty-five logical servers can be virtualized on one physical server. Virtualization can substantially lower data center operational costs, because a factor of five to twenty fewer physical servers means a significantly reduced management effort. However, this requires significant investment and migration efforts. The data center strategy must therefore evaluate the magnitude of the investment in virtualization technology and in the migration of existing servers to virtual servers.

Data storage systems and Storage Area Networks


In recent years, data storage has become fully decoupled from servers through the centralization of storage using a Storage Area Network (SAN). The SAN is a dedicated network between servers and data storage. Data storage systems contain large numbers of hard disks and are equipped with specialized technologies for efficient, redundant data storage. This centralization of data storage is transparent to the IT infrastructure layers above it: the operating system or application is unaware that the data is stored centrally via the SAN (see also the notes to Figure 1). If data storage systems in various data center locations are connected via a SAN, disk writes can be replicated in real time across multiple locations. Centralization of storage systems has considerably increased the utilization of the capacity of these systems.

Combined with server virtualization, SANs not only allow the quick replication of data to multiple locations, but also the simple replication of virtual servers from one location to another. The article 'Business continuity using Storage Area Networks' in this Compact looks at SANs in depth as an alternative to tape-based data backup systems. SANs and central storage equipment are among the most expensive components of the IT infrastructure. A data center strategy should therefore evaluate the investments in data storage systems against the associated qualitative and quantitative advantages.

Cloud computing

Cloud computing refers to a delivery model providing IT infrastructure and application management services via the Internet. Cloud computing is not so much a technological development in itself; it is made possible by a combination of technological developments, including flexible availability of network bandwidth, virtualization and SANs. Its main advantages are the shift from investments in infrastructure to operational costs for the rental of cloud services (from capex to opex), transparency in costs (pay per use), the consumption of IT infrastructure services according to real needs (elasticity), and the high efficiency and speed with which infrastructure services are delivered (rapid deployment through fully automated management processes and self-service portals). Cloud computing differs from traditional IT with respect to the following characteristics ([Herm10]):

- multi-tenancy: IT infrastructure is shared across multiple customers
- rental services: the use of IT resources is separated from the ownership of IT assets
- elasticity: capacity can be immediately scaled up and down as needed
- external storage: data is usually stored externally at the supplier

A cloud computing provider must have sufficient processing, storage and transportation capacity available to handle increasing customer demand as it occurs. In practice, the maximum upscaling is limited to a percentage of the total capacity of the cloud, which places an upper limit on elasticity. Figure 3 illustrates the various forms of cloud services. The main difference between the traditional model of in-house data centers and a private cloud is the flexibility that the private cloud allows.

2 RAID is an abbreviation of Redundant Array of Independent Disks and is the name given to the methodology for physically storing data on hard drives where the data is divided across disks, stored on more than one disk, or both, so as to protect against data loss and boost data retrieval speed. Source: http://nl.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks.


[Figure 3 sketches three architectures: an internal private cloud, in which customer A consumes services from its own internal IT; an external private cloud, in which a provider delivers services to customer A alone over the Internet; and a public cloud, in which a provider delivers services to customers A, B and C from shared infrastructure over the Internet.]
Figure 3. Overview of different types of cloud services ([Herm10]).

The private cloud makes use of standardized hardware platforms with high availability and capacity, virtualization, and flexible software licensing in which operational costs partly depend on the actual use of the IT infrastructure. The private cloud is not shared with other customers and the data is located on site. In addition, access to the private cloud does not have to run via the Internet; the organization's own network infrastructure can be used. According to cloud purists, one cannot speak of cloud computing in this case. The internal private cloud uses the same technologies and delivery models as the external private and public cloud, but without the risk of primary data storage being accessed by a third party. The cost of an internal private cloud may be higher than that of the other types. Nonetheless, for many organizations the need to meet privacy and data protection directives outweighs the potential cost savings of using the external private or public cloud.

The data center strategy should provide direction on when and which IT applications will be deployed via cloud services; capacity for these applications then no longer needs to be reserved in the organization's own data centers.
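As a rough, hypothetical illustration of the capex-to-opex and elasticity reasoning above, the sketch below compares owning capacity sized for peak load with paying per use. All prices and load figures are invented; which model is cheaper depends entirely on how spiky the demand is.

```python
# Invented comparison of owned, peak-sized capacity versus pay-per-use
# cloud capacity. All numbers are assumptions for illustration only.

hourly_load = [10, 10, 10, 10, 10, 200] * 4   # server-hours per period (spiky)
peak = max(hourly_load)

owned_rate = 0.08   # assumed amortized capex + operations per server-hour
cloud_rate = 0.25   # assumed pay-per-use rate per server-hour

owned_total = peak * len(hourly_load) * owned_rate  # pay for peak, always
cloud_total = sum(hourly_load) * cloud_rate         # pay for actual use

print(f"Owned, sized for peak of {peak}: {owned_total:.2f}")
print(f"Cloud, pay per use:             {cloud_total:.2f}")
# With a flat load the owned model usually wins; with a spiky load,
# elasticity pays off - which is why the business case differs per client.
```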


New style of Disaster Recovery


Two-thirds of all organizations have a data center that serves as a backup site for the primary data center in case of a serious IT disaster: a Disaster Recovery site. Half of these organizations own such a data center themselves ([Bala07]). This means that about one-third of all organizations have no alternate location, and that the other organizations have invested in their own facilities or rent them from an IT service provider. The cost of these fall-back facilities is relatively high, primarily because of the extremely low utilization of their capacity.

The previously described technological developments offer cost-effective alternatives for a disaster recovery setup. A high degree of virtualization and a fast fiber-optic network between two data center locations (twin data centers) are the main ingredients for guaranteeing a high level of availability and continuity. Virtualization allows an application to run in parallel without allocating the processing capacity at the backup site that would normally be needed to run it. In a twin data center, synchronization occurs 24/7 for the data and several times a day for the applications. In the event of a disaster, processing capacity must be rapidly ramped up and allocated to the affected application(s) at the backup site, and the users redirected accordingly.

The twin data center concept is not new: IBM's Parallel Sysplex technology has been available for decades. It allows a mainframe to be set up as a cluster of two or more mainframes at sites that are miles apart; the mainframes then operate as a single logical mainframe that synchronizes both the data and the processing between the locations. A twin data center also allows Unix and Windows platforms to be implemented twice without incurring double costs.



Cloud computing providers also offer specific services for disaster recovery purposes. An example of a Disaster Recovery service in the cloud is remote backup: backups are no longer written to tape, but stored at an external location of a cloud provider, and they can be restored at any location where there is an Internet connection. Cost-effective Disaster Recovery is high on the CIO agenda and is thus a strong motivation to invest in data centers and cloud initiatives. Accordingly, a data center strategy should pay appropriate attention to how data center investments address Disaster Recovery.

Data center in a box


The concept of a 'data center in a box' refers to the development in which processing, storage and network equipment is clustered into logical units. Such a cluster is created by linking racks of equipment together with redundant provisions for guaranteeing power and cooling. A data center in a box can also be constructed in existing data centers: the equipment, power and cooling are harmonized such that high-density devices can be placed in old-fashioned data centers. The advantage of this concept is that, after the one-off installation of the cluster technology, no physical changes are required until the maximum processing or storage capacity is reached. This allows most management activities to be carried out entirely remotely.

A fitting example of a data center in a box is container-based computing, where just such a cluster is built into a 20- or 40-foot shipping container. Similar mini data centers have been used for many years by the military as temporary facilities at remote locations. A more recent development is the use of mini data centers in shipping containers as modules in a large, scalable data center; a few years ago, Google even applied for a patent on this method ([Goog10]). A data center strategy should indicate what contribution the data-center-in-a-box concept will make.

High-density devices
Virtualization allows the consolidation of a large number of physical servers on a single (logical) powerful server. The utilization of this powerful server is significantly higher than on separate physical servers (on average eighty percent for a virtual cluster of servers versus twenty percent for a single server). This means that a highly virtualized data center has significantly higher processing capacity per square meter. In recent years, the various hardware vendors have introduced increasingly larger and more powerful servers, such as the IBM Power 795, Oracle Sun M8000/M9000 and HP 9000 Superdome. In the last twenty years, there was a shift from mainframe data processing to more compact servers. It now seems there is a reverse trend toward so-called high-density devices. A direct consequence is a higher energy requirement per square meter, not just to sustain these powerful servers but also to cool them. Existing data centers cannot always provide the higher power and cooling requirements, so the available space is not optimally utilized. In addition, the weight of such systems is such that the bearing capacity of floors in data centers is not always sufficient and it may be necessary to strengthen the raised computer floor. This makes it a challenge for data center operators to balance the increasing density of the physical concentration of IT equipment and virtualization with the available power, cooling and floor capacity. The paradox is that the use of cost-effective virtualization techniques means that the limits of existing data centers are quickly approached and this gives rise to additional costs ([Data]). A data center strategy must allow for the prospect of placing high-density devices in existing or new data centers.
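A back-of-the-envelope calculation can make this power density paradox tangible. The sketch below is purely illustrative: every figure in it is an assumption, chosen only to show how total power can fall while power per rack rises.

```python
# Back-of-the-envelope sketch of the power density paradox.
# Every number below is an invented assumption for illustration.

racks_before, servers_per_rack = 10, 20
watts_per_server = 400                 # assumed draw of a legacy server

before_power = racks_before * servers_per_rack * watts_per_server

# After consolidation: an eighth of the boxes, each a high-density
# machine drawing twice as much, packed into just two racks.
racks_after = 2
boxes_after = racks_before * servers_per_rack // 8
after_power = boxes_after * watts_per_server * 2

print(f"Total power:    {before_power/1000:.0f} kW -> {after_power/1000:.0f} kW")
print(f"Power per rack: {before_power/racks_before/1000:.0f} kW -> "
      f"{after_power/racks_after/1000:.0f} kW")
# Total draw drops (fewer, better-utilized boxes), yet each remaining
# rack needs more power and cooling than the old floor ever provided.
```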

Automation of IT operations processes


A significant portion of the cost of operating a data center is personnel. In addition, extensive automation of deployment processes reduces the completion cycle of IT projects from months to weeks. There is a noticeably strong trend to extensively automate IT operations processes in the data center. This includes traditional management tools (workflow tooling for ITIL administration processes and the CMDB) integrated with tools for modeling the relationships between business processes, applications and the underlying IT infrastructure (business/IT alignment), performance monitoring, automated testing, IT cost and resource planning, IT project and program planning, security testing and much more. An example of such an IT operations tool suite is HP's Business Technology Optimization (HP BTO) ([HPIT]).

3 Information Technology Infrastructure Library, usually abbreviated to ITIL, was developed as a reference framework for setting up management processes within an IT organization. http://nl.wikipedia.org/wiki/Information_Technology_Infrastructure_Library.
4 CMDB: Configuration Management Database, a collection of data in which information relating to the Configuration Items (CIs) is recorded and administered. The CMDB is the fulcrum of the ITIL management processes.



The extensive automation of IT operations processes and the use of central storage and virtualization enable IT organizations to manage data centers with a minimum of personnel. Only the external hardware vendors still need physical access to the computer floors in the data center, and only within tight maintenance windows. Otherwise, the data center floor is unmanned. This is called the lights-out principle, because the absence of personnel in the data center means that the lighting can be practically turned off permanently. Again, this is not a new concept; nonetheless, the use of central storage and virtualization reduces the number of physical operations on the data center floor to a minimum, which brings us a great deal closer to the lights-out principle. The automation of IT operations processes has far-reaching implications for the operational procedures, competencies and staffing of IT departments, and should receive sufficient attention in the data center strategy.
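To illustrate the kind of business/IT relationship modeling these tool suites automate, here is a toy sketch of CMDB-style configuration items linked to the business processes they support. The classes, names and relationships are invented for illustration; real CMDB tooling is far richer.

```python
# Toy sketch of CMDB-style relationship modeling: configuration items
# (CIs) linked upward to the business processes they support.
# All names and relationships are invented.
from dataclasses import dataclass, field

@dataclass
class ConfigurationItem:
    name: str
    ci_type: str                                   # e.g. "server", "application"
    supports: list = field(default_factory=list)   # CIs/processes this item supports

payroll = ConfigurationItem("Payroll process", "business process")
hr_app  = ConfigurationItem("HR suite", "application", supports=[payroll])
hr_db   = ConfigurationItem("HR database", "database", supports=[hr_app])
vm_host = ConfigurationItem("vmhost-03", "server", supports=[hr_db])

def impact_of(ci: ConfigurationItem) -> list:
    """Walk the support chain: what would an outage of `ci` hit?"""
    hit = []
    for parent in ci.supports:
        hit.append(parent.name)
        hit.extend(impact_of(parent))
    return hit

print(f"Outage of {vm_host.name} impacts: {impact_of(vm_host)}")
# -> Outage of vmhost-03 impacts: ['HR database', 'HR suite', 'Payroll process']
```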


Market developments
This section discusses the future of data centers as seen by several trendsetting vendors of IT services and solutions. IT service providers such as Atos Origin define their data center vision so as to enable them to better meet the needs of their customers. Atos Origin defines the following initiatives in its data center vision ([Atos]):

- reduction in costs and faster return on investment
- quicker response to (changing) business requirements (agility)
- availability: the requirement has grown to 24/7, forever
- security and continuity: increased awareness, partly due to terrorist threats
- compliance: satisfying industry and government mandated standards
- increased density requirements: the ability to manage high-density systems with sharply increasing energy consumption and heat production
- increased energy efficiency: utilization of more energy-efficient IT hardware and cooling techniques

Cisco's data center vision ([Cisc]) specifies increased flexibility and operational efficiency and the breaking apart of traditional application silos. Cisco names one prerequisite: improving risk management and compliance processes in data centers to guarantee the integrity and security of data in virtual environments. Cisco outlines a development path for data centers with a highly heterogeneous IT infrastructure going through several stages of consolidation, standardization, automation of administration and self-service, leading to cloud computing.

IBM uses modularity to increase the stability and flexibility of data centers ([IBM10]) ('pay as you grow'). The aim is to bring both investment and operational costs down to a minimum. Reducing energy consumption is also an important theme for IBM, because much of the investment and operational cost of a data center is energy related. IBM estimates that approximately sixty percent of the investment in a data center (particularly the technical installations for cooling and redundant power supplies) and fifty to seventy-five percent of non-personnel operating costs (power consumption by data center and IT equipment) are energy related. According to IBM, the increasing energy demands of IT equipment require data center designs that anticipate a doubling or tripling of energy needs over the lifetime of a data center.

Just like Cisco, Hewlett-Packard (HP) has identified a development path for data centers ([HPDa]), shifting from application-specific IT hardware to shared services based on virtual platforms and automated management, and then on to service-oriented data centers and cloud computing. In this context, HP promotes its Data Center Transformation (DCT) concept as an integrated set of projects for consolidation, virtualization and process automation within data centers.

The common thread in these market developments is the reduction of operational costs and increased flexibility and stability of data center services, achieved by reducing the complexity of the IT infrastructure and a strong commitment to virtualization and energy-efficient technologies. Cloud computing is seen as the logical next step in the consolidation and virtualization of data centers.

Challenges in data center consolidation


Data center consolidation is all about bringing together a multitude of outdated and inefficient data centers and computer rooms into one or a limited number of modern, green data centers. At first glance, this seems like a technical problem involving not much more than an IT relocation.


Nothing is further from the truth. Organizations struggle with questions such as: How do we involve the process owners in making informed decisions? Do we understand our IT infrastructure well enough to carry this out in a planned and controlled manner? How do we limit the risk of disruption during the migration? How large must the new data center be to be ready for the future? Or should we just take the step to the cloud? What are the investment costs and the expected savings of a data center consolidation path? In brief, it is not easy to prove that the benefits of data center consolidation outweigh the costs and risks. In the next sections, we briefly discuss the challenges associated with data center consolidation and the migration of IT applications between data centers.


Data center consolidation risks


Data center consolidation requires a large number of well-managed migrations within a short period of time, while the shop must remain open. This makes these endeavors highly complex and inherently risky:

- The time available for a migration phase is limited and brief. High availability requirements force migrations to be carried out within a limited number of weekends per year.
- Migrating or relocating applications in a way that does not jeopardize data or production requires sophisticated fall-back scenarios. These fall-back scenarios add complexity to the migration plans and usually halve the time in which migrations can be carried out.
- The larger the scale of migrations, the greater the complexity. The complexity of migration scenarios increases with the number of underlying technical components and the number of hardware, application and management services vendors. This increases the risk of losing oversight and of making outright mistakes.

In the following sections, we look at mitigation measures within the migration method and the project organization that reduce the risks of data center migrations to a manageable level.

Reducing migration risks


There are different methods for migrating applications and technical infrastructure. Each of these methods is illustrated in Figure 4, along with a brief listing of its advantages and disadvantages.

A physical move, the lift and shift method, carries the inherent risk that hardware failures arise during deactivation, transport and reactivation. If these hardware failures cannot be resolved quickly, there is no fall-back scenario to rely on.

In a physical migration (P2P), an equivalent IT infrastructure is built at site B and the data and copies of the system configurations are transferred via the network. The advantage of this method is the relative ease of migration; the disadvantage is that there is no technological progress and thus no efficiency gains such as higher utilization of servers and storage systems.

In the virtualization approach (P2V), a virtualization platform is built at the new location B and the applications are virtualized and tested, after which the actual data is migrated over the network. The disadvantage of this scenario is the uncertainty introduced because all applications must be virtualized; changes to a production application at location A must also be applied in the virtualized environment at site B. The advantage is that a significant efficiency improvement can be achieved, because the same applications will need significantly less hardware after the migration.




Physical relocation (lift and shift): the existing hardware is physically relocated from location A to location B.
+ Cost-efficient approach
- No fall-back scenario
- Risk of damage through cooling off, (dis)assembly and transport

Physical migration, physical to physical (P2P): equivalent hardware is built at location B; applications and data are transferred over the network. Fall-back scenario: revert to the old environment.
+ Fall-back scenario
- No technological progress

Virtualization, physical to virtual (P2V): a virtualization platform is built at location B; the data and the virtualized applications are transferred over the network. Fall-back scenario: revert to the old environment.
+ Fall-back scenario
+ Technological progress
+ Simple migration
- Virtualization involves huge effort
- Relatively costly approach

Virtual migration, virtual to virtual (V2V): an equivalent virtualization platform is built at location B; virtual applications and data are transferred. Fall-back scenario: revert to the old environment.
+ Fall-back scenario
+ Extremely simple migration
- IT infrastructure is never 100% virtualized
Figure 4. Data center migration methods, advantages and disadvantages.

The virtual migration (V2V) assumes a high degree of virtualization at location A, so that it is fairly simple to transfer data and applications to a similar virtualization platform at location B. This migration approach is similar to the way a twin data center replicates applications and data across several sites. The disadvantage of this method is that not all applications are virtualized. In practice, a combination of these migration methods is used, depending on the nature of the platforms to be rehoused, as the sketch below illustrates.
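The sketch below is one naive way to picture how such a mix could be assigned per platform, based on the methods in Figure 4. The decision rules and the example estate are simplified assumptions, not a recommendation.

```python
# Naive illustration of mixing the migration methods from Figure 4.
# Decision rules and the example estate are simplified assumptions.

def migration_method(virtualized: bool, virtualizable: bool,
                     fallback_required: bool) -> str:
    if virtualized:
        return "V2V"             # already virtual: extremely simple move
    if virtualizable:
        return "P2V"             # virtualize first, then migrate the data
    if fallback_required:
        return "P2P"             # rebuild equivalent hardware at location B
    return "lift and shift"      # physical move, accepting no fall-back

estate = [  # (platform, virtualized, virtualizable, fall-back required)
    ("mainframe", False, False, True),
    ("unix-erp",  False, True,  True),
    ("win-farm",  True,  True,  True),
    ("test-rack", False, False, False),
]
for name, virt, virtable, fallback in estate:
    print(f"{name:10s} -> {migration_method(virt, virtable, fallback)}")
```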
Reducing project risks

The complexity of a data center migration makes it critical that the migration project be set up in a structured manner to reduce risk. The goal is to identify risks in a continuous, proactive and uniform way during the project, weigh them in a consistent manner, and proactively manage them through the mitigation measures put in place.

A typical data center migration project consists of a thorough analysis of the environment to be migrated, thorough preparation in which the IT infrastructure is broken down into logical infrastructure components that will each be migrated as a whole, and subprojects for the migration of each of these components. Each migration subproject requires the development of migration plans and fall-back scenarios, the performance of automated tests, and comprehensive testing of each scenario. In fact, comprehensive testing and dry runs of the migration plans in advance significantly reduce the likelihood of needing a fall-back during the migration. Minute-to-minute plans must be drawn up, because it is essential that all actions, such as deactivating and reactivating hardware and software components, are performed in the correct sequence or simultaneously. The scale and complexity of these plans require support from automated tools that resemble the management of real-time processes in a factory; the sketch below gives a minimal impression of such a plan.
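As a minimal sketch of such tool support, the following models a runbook of ordered steps with planned start times and a fall-back deadline. All step names and timings are invented.

```python
# Minimal sketch of a minute-to-minute migration runbook: ordered steps
# with planned start times, checked against a fall-back deadline.
# All step names and timings are invented assumptions.
from datetime import datetime, timedelta

cutover_start = datetime(2012, 6, 2, 22, 0)             # assumed weekend window
fallback_deadline = cutover_start + timedelta(hours=6)  # last moment to revert

runbook = [  # (offset in minutes, action) - the order matters
    (0,   "Freeze changes; final sync of SAN replica"),
    (15,  "Deactivate applications at location A"),
    (45,  "Promote storage replica at location B"),
    (75,  "Reactivate applications at location B"),
    (150, "Run automated smoke tests"),
    (240, "Go/no-go decision: confirm cutover or fall back"),
]

for offset, action in runbook:
    planned = cutover_start + timedelta(minutes=offset)
    if planned > fallback_deadline:
        raise ValueError(f"Step '{action}' falls outside the window")
    print(f"{planned:%a %H:%M}  {action}")
```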
Cost-benefit assessments
Choosing the right mix of migration methods requires a balance between migration costs and risks. Heavily reducing the migration risks could lead to a final outcome in which the same technical standards are used as before the migration, which limits the possibility of achieving cost and efficiency benefits from technological advances. Ideally, the technical architecture of the environment after the migration aligns well with the technical standards of the IT management organization.



If data center management is outsourced, alignment should be sought with the factory standards of the IT service provider. Managing too strictly on reducing migration risks will lead to disappointment about the operational cost savings after the migration, because there is insufficient alignment with the service provider's standards. The migration scenario is thus a trade-off between an acceptable migration risk, the requirements dictated by an application, such as its CIA classification (Confidentiality, Integrity, Availability), and the costs involved in the migration itself and in the operational phase afterwards.

Cost considerations also play a significant role in the choice to construct a new data center. Although utilizing the housing and hosting services of third parties seems financially attractive at first, organizations always want to convert recurring monthly expenses into revenue. This is especially true if the return on investment is greater when doing it yourself than when utilizing housing and hosting. Green IT is a major development that affects the choice to construct a new data center, especially for large-scale utilization of data center facilities. For many organizations, constructing and owning a data center is more efficient and cost effective than utilizing a provider.

What data center strategy is suitable for your organization?


The technological and market developments described in this article may lead to a reevaluation of the existing data center strategy. One can construct a new data center, redevelop the existing data center, host the IT infrastructure partially or entirely with a third party, or combine hosting of infrastructure with the use of cloud computing services. By way of illustration, we discuss three possible choices of data center strategy.

1. Constructing your own data center


Constructing large new data centers is a trend that is particularly noticeable among contemporary Internet giants such as Apple, Google and Facebook. Even though renting space from IT providers is relatively simple, the trend of constructing one's own data centers continues. Enterprises no longer want to be constrained by restrictions that may result from placing IT equipment with a third party. In addition, organizations no longer want to be dependent on service agreements, hidden limitations in the services provided, or the 'everything at additional cost' formula.

Another consideration when building your own data center is that organizations still want to keep their own data close at hand. This is apparent not just from the popularity of private clouds, but also from the fact that many organizations are still struggling with concerns about security and control over the underlying infrastructure. This is why organizations that predominantly earn their revenue by providing web services or IT support services would rather remain the owner of the entire IT infrastructure, including the data center.

2. Redevelopment of an existing data center


Although the redevelopment of an existing data center may at first appear to be cheaper, redevelopment can quickly turn into a hugely complex project and eventually cost millions more than new construction. The complexity arises mainly because the IT infrastructure must remain available while the redevelopment of the data center space takes place. Work often takes place close to expensive hardware that is sensitive to vibration, dust and temperature fluctuations. In addition, staff of one or more contractors have access to the data center where the confidential information of the organization is stored, and this gives rise to additional security risks.

Nevertheless, the redevelopment of an existing data center also has advantages. Redevelopment does not require a detailed migration plan for moving hardware from location A to location B. And sometimes decisions go beyond cost considerations and technology motivations: if the management of an organization believes it maintains a competitive advantage by keeping the data center at the headquarters location, it will be considerably less likely to build a new data center elsewhere.

3. Outsourcing (parts of) the IT infrastructure or using cloud services


Outsourcing (parts of) the IT infrastructure can also be a consideration in avoiding new construction or redevelopment costs. However, outsourcing IT can cost just as much, if not more. Many organizations consider cloud services from third parties because they believe there will be significant cost savings. Indeed, the time-to-market is relatively short because there is no need for hardware selection and installation projects.


However, recent research shows that outsourcing in which new technology is used does not necessarily reduce costs or deliver flexibility, in contrast to the construction or redevelopment of your own data center ([Koss10]).


Conclusions
Our experience shows that there is no magic formula that clearly points to modernization, redevelopment or outsourcing of data centers. The principles of a good data center strategy should be aligned with the business objectives, investment opportunities and risk appetite of the organization. The technological and market developments described in this article make long-term decisions necessary. The central theme is 'do more with less'. With less, in the sense of consolidating data centers and server farms through server virtualization, which also means that the same processing capacity requires less energy. Do more, in the sense of more processing capacity for the same money and new opportunities to accommodate Disaster Recovery in existing data centers.

These innovations require large-scale migration within and between data centers, coupled with significant investment, costs and migration risks. To reduce these risks to an acceptable level, proper assessments must be made of the costs and risks incurred during the migration and during the operational phase after migration. The article draws from experience and provides a few examples of data center strategies, namely the construction of a new data center, the redevelopment of an existing data center, and the outsourcing of data center activities.

Examples of data center strategies


Data center strategy within the National Government
In the letter Minister Donner sent to the House on 14 February 2011 ([Rijk]), he announced that within the scope of the Government Reduction Program, the number of data centers of the central government would be drastically reduced from more than sixty to four or five. Such a large-scale consolidation of data centers had not previously been carried out in the Netherlands. It involved many departments and benefits agencies and a large number of data centers working with European or international standards: a singular challenge. Edgar Heijmans, the program manager of Consolidation Datacenters, states ([Heijm]) that this is a necessary step toward the use of cloud services within the national government. In the long-term plan for the chosen approach, he identified the following steps: common data center housing, common data center hosting and, finally, the sharing of an application store in a government cloud. KPMG has been involved both in preparing the business case for data center consolidation for the government and in a comprehensive analysis of the opportunities and risks of cloud computing within the state.

International bank and insurer
An international bank-insurer combination had a data center strategy in which about fifteen data centers in the Benelux would be consolidated into three modern, newly constructed data centers. Some years ago, when this strategy was formed, it was not yet known that the crisis in the financial sector would force growth projections to be revised downwards, or that, in 2010, the banking and insurance activities would be split into two separate companies. The crisis and the split had a significant impact on the business case for the planned data center consolidation. KPMG was involved with an international team in the reassessment of the data center strategy and the underlying business case.

European insurer
A few years back, when this major insurance company outsourced its IT infrastructure management activities to a number of providers, it was already known that its data centers were outdated. The insurer had experienced all sorts of technical problems, from leaky cooling systems to weekly power outages. The strategy of this insurer was to accommodate the entire IT infrastructure in the data centers of the provider in the Netherlands and Germany. The migration of such a complex IT infrastructure, however, required a detailed understanding of the relationships between the critical business chains, applications and underlying technological infrastructure. At the time of the release of this Compact, the insurer is completing the project that will empty its existing data centers and move everything to its data center provider. It has chosen to virtualize existing systems and to carry out the virtual relocation of the systems and associated data in a limited number of weekends. KPMG was brought into this project to set up the risk management process.


References
[Atos] http://www.atosorigin.com/en-us/services/solutions/atos_tm_infrastructure_solutions/data_center_strategy/default.htm.
[Bala07] Balaouras and Schreck, Maximizing Data Center Investments for Disaster Recovery and Business Resiliency, Forrester Research, October 2007.
[Barr07] L.A. Barroso and U. Hölzle, The Case for Energy-Proportional Computing, Google, IEEE Computer Society, December 2007.
[Cisc] Cisco Cloud Computing: Data Center Strategy, Architecture and Solutions, http://www.cisco.com/web/strategy/docs/gov/CiscoCloudComputing_WP.pdf.
[Data] Data Center Optimization: Beware of the Power Density Paradox, http://www.transitionaldata.com/insights/TDS_DC_Optimization_Power_Density_Paradox_White_Paper.pdf.
[Fria08] Friar, Covello and Bingham, Goldman Sachs IT Spend Survey 2008, Goldman Sachs Global Investment Research.
[Goog10] Google Patents Tower of Containers, Data Center Knowledge, June 18th, 2010, http://www.datacenterknowledge.com/archives/2010/06/18/google-patents-tower-of-containers/.
[Heijm] http://www.digitaalbestuurcongres.nl/Uploads/Files/T05_20-_20Heijmans_20_28BZK_29_20-_20Consolidatie_20Datacenters.pdf.
[Herm10] J.A.M. Hermans, W.S. Chung and W.A. Guensberg, De overheid in de wolken? De plaats van cloud computing in de publieke sector (Government in the clouds? The place of cloud computing in the public sector), Compact 2010/4.
[HPDa] HP Data Center Transformation strategies and solutions: Go from managing unpredictability to making the most of it, http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-6781ENW.pdf.
[HPIT] http://en.wikipedia.org/wiki/HP_IT_Management_Software.
[IBM10] Modular data centers: providing operational dexterity for an increasingly complex world, IBM Global Technology Services, November 2010, ftp://public.dhe.ibm.com/common/ssi/ecm/en/gtw03022usen/GTW03022USEN.PDF.
[Kapl08] Kaplan, Forrest and Kindler, Revolutionizing Data Center Energy Efficiency, McKinsey & Company, July 2008, http://www.mckinsey.com/clientservice/bto/pointofview/pdf/Revolutionizing_Data_Center_Efficiency.pdf.
[Koss10] D. Kossmann, T. Kraska and S. Loesing, An evaluation of alternative architectures for transaction processing in the cloud, ETH Zurich, June 2010.


[Meis09] D. Meisner, B.T. Gold and T.F. Wenisch, PowerNap: Eliminating Server Idle Power, ASPLOS '09, Washington DC, USA, March 2009.
[Rijk] http://www.rijksoverheid.nl/bestanden/documenten-en-publicaties/kamerstukken/2011/02/14/kamerbrief-uitvoeringsprogramma-compacte-rijksdienst/1-brief-aan-tk-compacte-rijksdienst.pdf.

About the authors


H.J.M. Boersen is a consultant in the Infrastructure and Architecture service line of KPMG Advisory. He is involved in consulting assignments and audits related to IT infrastructure. He recently completed assignments within the context of data center consolidations, relocations, audits and stability problems in large IT landscapes.

M.J. Butterhoff is a senior manager in the Infrastructure and Architecture service line of KPMG Advisory. His responsibilities include advisory assignments in the area of IT organizations, IT infrastructure, data centers and IT management processes.

S. Peekel is a manager in the Infrastructure and Architecture service line of KPMG Advisory. He is often involved in audits and consulting assignments in the context of performance and stability issues within IT infrastructures, management of IT organizations and environments, and preparing various types of IT business cases.

R. de Wolf is a partner at KPMG responsible for services in the area of IT infrastructure and Enterprise Architecture. As a lecturer, he is involved in the Executive Master's program specializing in IT Auditing (EMITA) at the University of Amsterdam.


Access to the cloud

Identity and Access Management for cloud computing


Edwin Sturrus, Jules Steevens and Willem Guensberg

E. Sturrus

works as a consultant at KPMG IT Advisory. sturrus.edwin@kpmg.nl

J.J.C. Steevens

works as a consultant at KPMG IT Advisory. steevens.jules@kpmg.nl

W.A. Guensberg

is a partner at Label A. willem@labela.nl

Cloud computing is maturing past the hype stage and is considered by many organizations to be the successor to much of the traditional on-premise IT infrastructure. However, recent research among numerous organizations indicates that the security of cloud computing, and the lack of trust therein, are the biggest obstacles to adoption. Managing access rights to applications and data is increasingly important, especially as the number and complexity of laws and regulations grow. Control of access rights plays a unique role in cloud computing, because the data is no longer stored on devices managed by the organizations owning the data. This article investigates and outlines the challenges and opportunities arising from Identity and Access Management (IAM) in a cloud computing environment.


Introduction
In recent years, cloud computing has evolved from relatively simple web applications, like Hotmail and Gmail, into commercial propositions such as SalesForce.com and Microsoft Office 365. Research shows that most organizations currently see cloud computing as the IT model of the future; the security of cloud computing and the lack of trust in existing cloud security levels appear to be the greatest obstacles to adoption ([Chun10]).

The growing amount of data, users and roles within modern organizations, and the stricter rules and legislation concerning data storage in recent years, have made the management of access rights to applications and data increasingly important and difficult. The control of access rights plays a unique role in cloud computing, because data stored in the cloud demands new, often different security measures from the organizations owning the data. Organizations must change how identities and access rights are managed with cloud computing. For example, many organizations have limited experience with the management and storage of identity data outside the organization. Robust Identity & Access Management (IAM) is required to minimize the security risks of cloud computing ([Gopa09]). This article describes the challenges and opportunities arising from Identity & Access Management in cloud computing environments.


What is cloud computing?


Although much has been published on the topic of cloud computing, it remains difficult to form a precise definition for this term. One of the commonly used definitions is the following ([NIST11]): Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. KPMG has taken this definition as a starting point and has narrowed it somewhat to the perspective of a recipient of cloud services ([Herm10]):

Cloud computing, from the perspective of the user, is the usage of centralized computing resources on the Internet. Cloud computing differs from traditional IT via the following characteristics:

Multi-tenancy. Unlike traditional IT, the IT resources in the cloud are shared across multiple users.
Paid services. The user only pays for the use of cloud services and does not invest in additional hardware and software.
Elasticity. Capacity can be scaled up or down at any time.
Internet dependent. The primary network for cloud services is the Internet.
On-demand services. Unlike the greater part of traditional IT, cloud services can be utilized practically immediately.


Figure 1. Forms of cloud computing.

Different types of cloud services are available. First and foremost is Software-as-a-Service (SaaS) where software is provided as a cloud service. There is also Platform-as-a-Service (PaaS) where a platform (operating system, application framework, etc.) is offered as a cloud service. Finally, there is Infrastructure-as-a-Service (IaaS) where an IT infrastructure or part thereof (storage, memory, processing power, network capacity, etc.) is offered as a cloud service.

What is Identity & Access Management?


Broadly speaking, IAM is the consolidated management of users and corresponding authorizations via a centralized identity register. IAM allows an organization to control who gets access to what and by what means. KPMG uses the following definition of IAM ([Herm05]): The policies, processes and support systems to manage which users have access to information, IT applications and physical resources and what each user is authorized to do with it. IAM is categorized as follows ([KPMG09]):

User management: The activities related to managing end-users within the user administration.
Authentication management: The activities related to the management of data and the allocation (and deallocation) of resources needed to validate the identity of a person.
Authorization management: The activities related to defining and managing the access rights that can be assigned to users.
Access management: The actual identification, authentication and authorization of end users for utilizing the target system.
Provisioning: The propagation of identities and authorization properties to IT systems.
Monitoring and auditing: The activities required to achieve monitoring, auditing and reporting goals.
Federation: The system of protocols, standards and technologies that make it possible for identities to be transferable and interchangeable between different autonomous domains.

Figure 2. IAM reference architecture.

The next section elaborates on the various challenges related to the components of the IAM architecture in a cloud computing environment.

IAM challenges in a cloud computing environment


The existing challenges in managing users and access to information are compounded by new challenges that come with cloud computing. Originally, the organization itself was responsible for all aspects of IAM. This ranged from maintenance of the user administration to the propagation of user rights in the target systems and checking usage based on logging and monitoring. The introduction of cloud computing has made these activities more complex. The boundaries are blurring between which user and IT resources belong to the customer and which belong to the cloud provider. Who owns what resource and carries the accountability that goes with it? What is the difference between accountability and liability? This section summarizes some of the challenges of IAM.

IAM plays a major role in securing IT resources. IAM faces many challenges when cloud computing is used. IAM processes, such as adding a user, are managed by the cloud provider instead of the organization owning the data. It is difficult for the organization using the cloud service to verify whether a modification has been completed successfully within the administration of the cloud provider. Furthermore, it is harder to check whether the data stored by the cloud provider is only accessible to authorized users.


User management
User management deals with the policies and activities within the scope of administering the entire lifecycle of users in the appropriate registers (initial registration, modification and deletion). For example, this could be the HR system for the employees of an organization. The HR system records the recruitment, promotions and dismissal of employees. In addition, user management controls the policies and activities related to granting authorizations to the users registered in the HR database. An organization that utilizes cloud services may be faced with challenges in user management that are new compared to the traditional on-premise situation. Managing the user life cycle is already a challenge in a traditional IT environment; it is even more so in a cloud environment. The organization cannot always maintain control over user administration via its own HR system (or other centralized resource). The cloud provider usually also maintains a user administration system. What happens when users update their information via the cloud provider? How are the managers of the cloud services and their attributes kept up to date? Which laws and regulations (possibly outside the organization's own jurisdiction) apply to the storing of personal information? All these issues have to be dealt with again in a cloud computing environment. The allocation of authorizations is also a part of user management. The customer and cloud provider must agree on who is responsible for granting and revoking user rights.

Authentication management
Authentication management includes the processes and procedures for administering the authentication of users. If particular data is very sensitive, stringent authentication may be required to access this data (for example, by using a smart card). Defining and recording these requirements within objects in the form of policies and guidelines is part of authentication management. Authentication management also deals with the issuing and revocation of authentication means (for example, username and password and smart cards). The following challenges in authentication management are new compared to the traditional on-premise situation. The authentication means for different cloud providers may vary, and a cloud provider may only support mechanisms that do not match the (security) technical requirements of the customer. It can also be complicated to implement the level of authentication in a uniform way. In addition, synchronization of passwords can be a challenge, especially in environments where the user administration changes quickly or where users must change their own passwords. Finally, a working Single Sign-On (SSO) environment must be maintained for technical integration with the cloud provider. SSO is a collection of technologies that allow the user to authenticate once as a particular user, after which access to the other services is granted.

Authorization management
Authorization management deals with the policies and activities in relation to defining and administering authorizations. This allows authorizations to be grouped into a single role (based on so-called authorization groups). After granting this role to a user, that user can carry out a particular task or sub-task on certain objects. When a manager welcomes a new team member, he has to grant the appropriate role to the new user. Once the association is made, the authorizations that belong to this role are available to the new user. As previously described, the granting of these predefined roles to users is carried out via user management. Likewise, authorization management faces new challenges when the organization utilizes cloud services. The cloud provider and the customer must agree upon where the authorizations and/or roles are managed. The IAM system must be capable of exchanging (automated) messages with the means of authentication that the cloud provider uses. In many cases, the cloud provider and customer use conflicting role models, and the maturity of the role models differs. For example, the cloud provider may have switched over to centrally organized Role-Based Access Control (RBAC), while the customer still uses direct end-user authorizations administered in a decentralized manner. In accordance with user management principles, it is necessary to maintain a trusted relationship on authorization management that is supported by contractual agreements.
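To make the role-model discussion concrete, the sketch below shows a minimal role-based access check in Python. All role, permission and user names are invented for illustration; real RBAC products and cloud provider authorization interfaces differ per vendor.

```python
# Minimal RBAC sketch: roles group authorizations, users receive roles.
# All role, permission, and user names below are illustrative only.

ROLE_PERMISSIONS = {
    "finance_clerk": {"invoice.read", "invoice.create"},
    "finance_manager": {"invoice.read", "invoice.create", "invoice.approve"},
}

USER_ROLES = {
    "jdoe": {"finance_clerk"},
    "asmith": {"finance_manager"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Return True if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert is_authorized("asmith", "invoice.approve")
assert not is_authorized("jdoe", "invoice.approve")
```

In the decentralized alternative mentioned above, permissions would be attached to users directly instead of via roles, which is exactly what makes reconciling the two models between a customer and a cloud provider laborious.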

Access management
Access management deals with the (operational) processes that ensure that access to IT resources is only granted in conformance with the requirements of the information security policies and based on the access rights associated with the users. The domain of access management has the following new challenges compared to the traditional on-premise situation. Access management requires agreements to be made between the cloud provider, third parties and the customer on how to appropriately organize access to the target systems. For example, the exchange of authorization data (user names, passwords, rights, and roles) must be fast enough to grant or deny access instantly. The customer and the cloud provider can decide to establish a trusted relationship supported by certificates and/or a Public Key Infrastructure (PKI).
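As a minimal illustration of such a certificate-based trusted relationship, the Python sketch below only completes a connection if the cloud provider presents a certificate issued under a CA certificate that the customer has pinned. The host name and file name are hypothetical placeholders, not a real provider endpoint.

```python
# Sketch: only trust the cloud provider if its certificate chains to a
# CA certificate the customer has pinned. Host and file names are
# hypothetical examples.
import socket
import ssl

context = ssl.create_default_context(cafile="trusted_provider_ca.pem")
context.check_hostname = True

with socket.create_connection(("iam.example-csp.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="iam.example-csp.com") as tls:
        # If the handshake succeeds, the provider proved possession of a
        # certificate issued under the pinned CA.
        print(tls.getpeercert()["subject"])
```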


The utilization of cloud services creates new challenges for authorization management
Provisioning
IAM must ensure that after a role is granted to a user, the user is created in the relevant objects, and that this user is then granted the appropriate authorizations for the corresponding objects. Within IAM, this process is called provisioning. Provisioning deals with the manual and/or automatic propagation of user and authorization data to objects. In other words, provisioning consists of creating a user and assigning authorizations to the user objects. Manual provisioning means that a system manager creates a user with authorizations on request. Automatic provisioning means that the system automatically processes these requests without any intervention by a system manager. When a role is revoked from a user, deprovisioning has to take place, which means that the authorizations are revoked from the user. Provisioning in a cloud environment has the following challenges. The propagation of accounts within the organization and also within the cloud provider is challenging, since technologies and standards often differ per cloud provider. As more cloud providers deliver services to an organization, it becomes exponentially more complex for the customer to implement provisioning. The creation and modification of accounts and rights on target systems is generally driven by business need. However, less attention is often given to deletion, because it serves limited business need and it is believed that the security risk does not outweigh the additional effort required to follow through on this deprovisioning process effectively. With respect to the contract with the cloud provider, customers often forget to give sufficient attention to the ending of the relationship. It is then unclear what happens to the data and user rights when the cloud provider no longer provides paid services to the customer.
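Provisioning standards have emerged to ease this propagation problem; SCIM is a well-known example. The sketch below shows what a SCIM-style request to create a user at a provider could look like. The endpoint, token and schema identifier are illustrative assumptions; an actual provider's interface must be checked against its documentation.

```python
# Sketch of a SCIM-style provisioning request (create a user at the
# provider). Endpoint, token and schema details are illustrative.
import json
import urllib.request

new_user = {
    "schemas": ["urn:scim:schemas:core:1.0"],  # illustrative schema identifier
    "userName": "jdoe@example.org",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "active": True,
}

request = urllib.request.Request(
    url="https://iam.example-csp.com/scim/v1/Users",  # hypothetical endpoint
    data=json.dumps(new_user).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <provisioning-token>",  # placeholder
    },
    method="POST",
)

with urllib.request.urlopen(request) as response:
    print(response.status)  # 201 Created would indicate successful provisioning
```

A corresponding DELETE request on the same resource would implement the deprovisioning step that, as noted above, is easily neglected.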

Monitoring and auditing
The final piece in the IAM architecture is the monitoring and auditing process. This process focuses on checking compliance with the policies utilized within IAM. It consists of continuously monitoring and auditing the systems and processes. In the area of monitoring and auditing, the following issues are new compared to the traditional situation. It is a challenge for many customers to set up monitoring and auditing of compliance with the requirements of the applicable information security policies. One reason for this is that the customer often does not have insight into what resources the cloud provider utilizes to manage and monitor the IT resources. A consequence of this lack of transparency is that it may be difficult for a customer to achieve full compliance. In particular, the use of accounts with high-level privileges is difficult to monitor.

Options for IAM in a cloud environment

Several options are available for managing identity and access to cloud services ([Blak09], [Cser10]). The ownership of the various IAM processes and the attendant monitoring is different for each model. This has a significant impact on the relevant risks and challenges. The preferred option thus depends on the requirements of the organization and the level of cloud service adoption in the organization.

Traditional model
If an organization consumes part of its IT needs as a cloud service, the components of the IAM framework must work together with the cloud provider. This may be achieved by linking the existing IAM with the cloud provider (see Figure 3). In this case, the organization manages identities and access rights locally and then propagates these to the various cloud providers. For each cloud provider, the authorized users must be added to the directory of the cloud provider. There are several packages on the market that automate the processes of creation, modification and deletion by synchronizing the local directory with the cloud. However, the connector that enables the synchronization must be separately developed and maintained for each cloud provider. A drawback is the added complexity in management when there are multiple cloud providers. Identification and authentication for cloud services occur with the cloud provider; handing these processes over requires strong confidence in the provider and its policies. There are tools on the market that make it possible to link with local SSO applications, so that the user needs fewer identities to access services. This option is already actively used by a large Dutch retailer that has linked its local IAM infrastructure to its cloud provider of email and calendar services.

Figure 3. Connecting with the cloud provider.
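The core of such a synchronization connector is a diff between the authoritative local directory and the provider's user administration. The Python sketch below illustrates the idea with simplified data; production connectors must additionally handle attribute mapping, paging, conflicts and error handling.

```python
# Sketch: derive provisioning actions by diffing the local directory
# against the cloud provider's user administration. Both inputs are
# simplified dictionaries keyed by user ID.

def plan_sync(local: dict, cloud: dict) -> dict:
    """Return the create/update/delete actions needed to make the
    cloud directory match the local (authoritative) directory."""
    return {
        "create": [uid for uid in local if uid not in cloud],
        "update": [uid for uid in local if uid in cloud and local[uid] != cloud[uid]],
        "delete": [uid for uid in cloud if uid not in local],
    }

local_dir = {"jdoe": {"dept": "finance"}, "asmith": {"dept": "hr"}}
cloud_dir = {"jdoe": {"dept": "sales"}, "old_user": {"dept": "it"}}

print(plan_sync(local_dir, cloud_dir))
# {'create': ['asmith'], 'update': ['jdoe'], 'delete': ['old_user']}
```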


Trusted-relationship model
Another option is to have the cloud provider support the IAM of the customer (see Figure 4). The customer manages the local identities and access rights. The users are stored locally in a directory, and an access request for a cloud service is authenticated locally. The cloud provider checks the authorizations and validates these using the directory of the customer. The cloud provider thus trusts the IAM of the customer, and it is on that basis that the users can utilize the services. Thus, in most cases, duplication of accounts is unnecessary (unless for auditing purposes). If this option is used, the customer may continue to use the existing access methods to manage the user activities. A disadvantage of this option is that, when there are a large number of cloud providers, it is necessary to make agreements with each cloud provider about the confidentiality of the customer's local IAM. In addition, for many cloud providers, it is impossible to maintain trustworthy and appropriate monitoring of the IAM of all customers.

Figure 4. Federated cooperation between customer and provider.
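A minimal sketch of the trust check in this federated model: the cloud provider accepts an assertion only if it verifies against key material agreed with the customer. Real federations typically exchange signed SAML assertions; the HMAC construction below is a simplified stand-in, and the key value is a placeholder.

```python
# Sketch of the trust check in the federated model: the cloud provider
# accepts an assertion issued by the customer's IAM only if its
# signature verifies against a key agreed with that customer.
import hashlib
import hmac

SHARED_KEY = b"key-agreed-with-customer"  # placeholder per-customer trust anchor

def sign_assertion(payload: bytes, key: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def provider_accepts(payload: bytes, signature: bytes, key: bytes) -> bool:
    return hmac.compare_digest(sign_assertion(payload, key), signature)

# Customer side: assert that jdoe authenticated locally.
assertion = b"user=jdoe;authenticated=true"
signature = sign_assertion(assertion, SHARED_KEY)

# Provider side: grant access only if the assertion verifies.
print(provider_accepts(assertion, signature, SHARED_KEY))  # True
```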


The use of cloud services requires changes in the IAM domain


Identity service provider model
A third option for cooperation between the local IAM of the customer and the cloud provider is using an Identity Service Provider (IdSP) (see Figure 5). The IdSP is a provider of identity services. As in the previous models, the customer manages the local identities and access rights. The IdSP is responsible for validating the identity of the user. Both the cloud provider and the customer rely on this third party to validate the identity of users. DigiD and Facebook are examples of organizations that may act as an IdSP and be able to verify digital identities in the future. There are tools on the market that use a third party to manage and validate identities.

Figure 5. Utilization of an identity service provider.
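The sketch below illustrates the division of roles in this model with an in-process stand-in for the IdSP: the relying parties never see the user's credentials and only ask the IdSP whether a presented token is valid. All names are illustrative; real identity providers use standardized protocols such as SAML or OpenID rather than this simplified scheme.

```python
# Sketch of the identity-service-provider model: both the customer and
# the cloud provider delegate identity validation to a trusted third
# party. The IdSP here is an in-process stand-in for a real service.
import secrets

class IdentityServiceProvider:
    def __init__(self):
        self._sessions = {}  # token -> user

    def authenticate(self, user, password):
        # A real IdSP verifies credentials properly; accepted here for brevity.
        token = secrets.token_hex(16)
        self._sessions[token] = user
        return token

    def validate(self, token):
        """Called by relying parties (customer or cloud provider)."""
        return self._sessions.get(token)

idsp = IdentityServiceProvider()
token = idsp.authenticate("jdoe", "secret")

# The cloud provider never sees the password; it only asks the IdSP
# whether the presented token corresponds to a known identity.
print(idsp.validate(token))     # 'jdoe'
print(idsp.validate("forged"))  # None
```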

All-in-the-cloud model
The last option is to outsource the entire IAM and utilize it as a cloud service (see Figure 6). In this case, the organization delegates all IAM systems and processes to a third party operating in the cloud itself. The link with all cloud providers is managed and controlled by this third party. Access to local IT resources is also conducted via the IAM service. Effectively, all management and control of IAM are outsourced to the cloud. High trust in the IAM service is required, and it is difficult for the customer to monitor the status of the processes for either local or cloud services. Currently, there are no fully functioning IAM cloud services available, but it is possible that large IAM providers (for example, IBM, Microsoft and Oracle) will enter this market in the coming years.

Figure 6. IAM as a cloud service.

Conclusion
By using (public) cloud services, the organization will need to revise its control measures to maintain the required level of security. Whilst security risks may well decrease by transferring selected services to the cloud, risks are likely to increase in certain areas such as IAM. To minimize the risk it is necessary to properly set up the IAM framework. The implementation of the necessary changes to IAM in a cloud environment is critical to providing an adequate level of confidence and guaranteeing security. The fact that some of the IT resources are no longer contained in the organization itself raises several questions in the IAM domain. Even though liability remains with the organization utilizing the services, keeping control of the IAM processes is more difficult because these are often part of the cloud provider's domain. For user management, it is important that organizations verify whether changes to user data are taken over by the cloud provider. Organizations must comply with company, national and international laws and regulations with regard to personal information. When considering authentication, it is important that the authentication methods and requirements used match those of the cloud provider. Furthermore, the authorization models should align, so that the correct rights are granted to the authenticated users. The processing of both authorizations and authentications must be timely and accurate in order for the partnering organizations to have confidence in the actual use of cloud services. Finally, it is essential that the monitoring and auditing processes meet the requirements of the applicable security policies.

Several options are available for managing identity and access to cloud services. Firstly, the IAM framework can be connected with the cloud provider. The customer itself manages the users and their rights and propagates these to the cloud provider; it may be possible to automate this process. Identification and authentication occur in the cloud provider domain. A second option is to allow the cloud provider to support the customer's IAM framework. The use of this trusted relationship makes it unnecessary to propagate users to all cloud providers. In addition, identification and authentication occur locally. A third option is to use an IdSP: a third party which is trusted by both customers and cloud service providers and validates the identity of users. The last option is to outsource the entire IAM stack and consume IAM as a cloud service altogether. Which option is the most suitable depends on the IAM requirements of the organization and on the type and number of cloud services consumed. The IAM framework should be properly established before cloud services are utilized to minimize risk exposure. Furthermore, it is very important to align the IAM framework with the cloud landscape to allow effective cooperation with the cloud provider and adequate security safeguards.



References
[Blak09] B. Blakley, The Business of Identity Services, Midvale, Burton Group, 2009.
[Chun10] M. Chung and J. Hermans, From Hype to Future: KPMG's 2010 Cloud Computing Survey, KPMG Advisory, Amstelveen, 2010.
[Cser10] A. Cser, S. Balaouras and N.M. Hayes, Are You Ready For Cloud-Based IAM?, Cambridge, Forrester Research, 2010.
[Gopa09] A. Gopalakrishnan, Cloud Computing Identity Management, SETLabs Briefings 7 (7), p. 45-54, 2009.
[Herm05] J. Hermans and J. ter Hart, Identity & Access Management: operational excellence or in control?, Compact 2005/3, p. 47-53.
[Herm10] J. Hermans, M. Chung and W. Guensberg, De overheid in de wolken? (The government in the clouds), Compact 2010/4, p. 11-20.
[KPMG09] KPMG, IAM Methodology, KPMG International, 2009.
[NIST11] NIST, The NIST Definition of Cloud Computing, Information Technology Laboratory, Gaithersburg, National Institute of Standards and Technology, 2011.

About the authors


E. Sturrus works as a consultant at KPMG IT Advisory. He gives advice in the domain of Identity and Access Management and is involved in IT Audit assignments. He recently completed his research at Erasmus University Rotterdam about Identity and Access Management and cloud computing.

J. J. C. Steevens works as a consultant at KPMG IT Advisory. He specializes in advising on Identity & Access Management in which authentication and authorization management plays a major role. In addition to advising, he carries out IT audits on Public Key Infrastructures (PKI) and the like.

W.A. Guensberg is a partner at Label A. Label A develops practical and sexy apps and sites. Dummy-proof and high-tech, with a focus on mobile and cloud technologies. Willem was a consultant at KPMG IT Advisory from early 2007 to late 2011. He gave advice in the domain of Identity and Access Management (IAM) and cloud computing. He completed his IT audit course at Vrije Universiteit and received his CISA accreditation in 2009. In 2010, he worked for six months as a cloud computing consultant at KPMG in Boston (US).


Social engineering: the art of deception


Matthieu Paques

In a typical penetration test (hacker test), attempts are made to gain unauthorized access to systems or data by exploiting technical vulnerabilities. The weakest link in the information security chain is often overlooked in these tests: users. It appears that this link has increasingly become the target of attackers. The media have reported a large number of incidents involving this type of attack ([security.nl]). This is reason enough to also put this link to the test within the scope of an audit or security test. This act of hacking people is called social engineering. This article describes how social engineering tests are performed, provides some real-life examples, and discusses what measures can be taken against such attacks.

What is a social engineering test?


KPMG IT Advisory has performed social engineering assignments for a large number of clients. The purpose of such tests is twofold:

M.B. Paques

is a manager at KPMG IT Advisory. paques.matthieu@kpmg.nl

identify the risks to the organization being evaluated
make employees aware of these risks (training)

During the tests, attempts are made to manipulate employees so that unauthorized access to confidential information is obtained. These attempts vary from a simple phone call test in which employees are tricked into disclosing passwords, or a so-called phishing attack (in which the attacker uses forged emails and/or websites), to a physical attack where a client's premises are entered by a tester undercover using counterfeited access badges (or sometimes disguised as a pizza delivery person or fireman) to gather confidential information from the inside. The findings are usually quite remarkable. To name just a few, unauthorized access has been gained to safes in banks, heavily secured government areas and large data centers. In several of these cases, the assignment also included a penetration test. In these combined tests, also known as red teaming (Figure 1), the team first has to gain unauthorized physical access to the building, then has to hack internal systems, and eventually has to leave with confidential information without being caught. The main difference between a penetration test, where you can attempt to access systems multiple times, and a social engineering test is that in the latter the tester usually has only one chance of success. There are no try-outs; it must be successful the very first time. The tester has to be prepared for unforeseen situations and must have a made-up story (the pretext) ready in case his presence is being questioned. If his story is not credible, there is a risk of being taken away in handcuffs.

The employees of the organization for which the test is performed are generally not informed in advance about the test. Often, only a few executives are aware of the test, and even they do not know exactly when the test will be carried out. Security staff are not meant to be put on the alert and take extra precautions. This approach makes it possible to obtain a realistic impression of the risks. As a result of this approach, security personnel may take drastic measures if the tester is unmasked as an intruder (especially when he has a stack of confidential documents in his possession).

The ingredients of a successful attack


There are two decisive factors that determine the success of a social engineering attack: information and timing. Thorough preparation is crucial. In such a test, as much information as possible is assembled about the target prior to the actual attack. About 90% of the time is spent on research and preparations for the actual hit. Information is gathered not only about the organization in scope (e.g., via the corporate homepage, Google Maps, search engines, newsgroups or job vacancy websites) but also about the organization's employees, their hobbies, address and contact info (Facebook, Hyves, LinkedIn, etc. are very useful). After this step the tester usually makes several telephone calls to the company's general telephone number and the phone numbers of employees found through publicly available sources. Large organizations often use series of telephone numbers. The known numbers in a series allow other numbers in the series to be determined and called. When an employee answers, they are told that it must be a wrong number as it is Mister X who is needed (Mister X being a name that was found in the earlier research, for example on LinkedIn). The correct number of Mister X and their name, job title, and department are then verified. Information obtained in this manner can then be used to extract further information. All information that is gained is potentially interesting. For a test on a highly secure data center, we went there months before the test and photographed the building from all sides with a camera with a 500mm lens to determine the location of all the cameras and entrances, to observe how employees were dressed, what time they went home, etc. Information like this is used for elaborating a detailed attack scenario. We determine who we will impersonate, what time we must arrive (to walk in with the rush of the daily crowd), the best clothing to wear, and what route to take once we are in to avoid as many risks as possible (e.g., cameras and security guards).

Figure 1. Red teaming is a test approach where different attack techniques are combined to simulate an actual attack.

The timing of an attack is also very important. Often, the help of an employee is required to get past a gate, fence, reception or other secured entrance. The exact moment that a suitable employee is present may be a matter of seconds. With a good story, improvisation skills for unanticipated situations, the ability to make contact easily and sometimes nerves of steel, an attacker might even be able to penetrate the most secure environments. During an attack it is useful to know which people to approach and who to avoid. For example, secretaries often know a lot about what is happening in a company. Their knowledge can be of tremendous value. However, because they know a lot about what is happening in the company, a good story that is well supported is a prerequisite when you approach them. Complete improvisation may be like a game of Russian roulette and result in a premature and undesired end of the test. Case study 1 describes a test case in which the individuals approached were specifically selected to make the chance of success as high as possible.

Employee training
An important aspect of a social engineering test is to make the employees aware of the risks. Nevertheless, the attack scenarios should be selected in such a way that the impact experienced by employees is kept to the absolute minimum required. Therefore, we do not give the client any details (insofar as possible) about which employees played a role in the tests (for example, which employees provided their password). Details are anonymized as much as possible. The least sensible thing a client can do (and, of course, highly undesirable, but not inconceivable) is to take disciplinary action against these employees. The outcome of such an action is that employees who do become victims of a real social engineering attack may not report it for fear of reprisals, and the organization does not become aware of the attack until it is too late and has to deal with the consequences. A good follow-up to a social engineering test is to present the results back to all employees so that the test can be a learning experience and they are better prepared against a real attack. In practice, we find that most untrained employees are susceptible to a social engineering attack and that employees can be misled at every level of the organization.

Case study 1
A colleague and I carried out an advanced phishing attack on one of our clients. My colleague placed himself at the entrance of the client office building and selectively asked employees who entered whether they wanted to take part in a survey about the upcoming Christmas activity. We focused our selection of employees on the younger female employees to minimize the risk of accidentally speaking with managers or IT staff. (They would know whether such a survey existed and thus figure out quite quickly that there was an attack underway.) Beforehand, we had examined the LinkedIn and Facebook profiles of key people in the organization so we could recognize and avoid these risky people. Participants would be included in a raffle for an iPod Touch. The employees who wanted to participate were given a sealed envelope containing a letter explaining the activity and a link to our forged web page with the survey that we set up beforehand. After logging in with their credentials, the employees were presented with ten questions about their ideas for the perfect Christmas activity. They could also supplement these with their own suggestions. After submitting their responses, they were thanked for their participation. Of course, we were not at all interested in the employees party ideas, but just in their login details. I had taken position around the corner to keep an eye through the window to see whether anything suspect happened inside. If it became necessary, I could warn my colleague via our two-way radio transceiver and inform him that it was time to take to his heels. At the same time I watched my smartphone that provided real-time updates on the number of users that logged in on the web page. In a matter of minutes several users had entered their passwords on our web page already. After about 35 minutes, we both left the location in different directions. We estimated that this was the minimum amount of time it would take to be detected. In the discussion with the client afterwards we discovered that only a few minutes passed after we left until two alarmed people came outside to demand an explanation.

Psychological tricks
For each test the attack scenario is completely different because it is tailored to the clients specific circumstances. Nonetheless, some fundamental psychological principles or tricks are regularly used:

Making a personal connection: mentioning a common problem or interest is typical. Social media can be a valuable source of information. Indicating that you have worked for the same company or play the same sport builds trust. You can also say you have a friend or acquaintance in common. After the connection is made, it is harder for the victim to refuse a request. Time pressure: create a situation where the victim does not have enough time to make a proper decision because circumstances are described in such a way that a quick decision must be made. The Windows operating system often shows the name of the last user that logged in (but not the password). Sitting at a users (locked) PC, you can usually block that users account by entering the wrong password five times. After blocking the account, you can call the help desk and say that you must give an important presentation within five minutes and need to get into your blocked account. Due to the time pressure the help desk employee (after checking that the account

24

Social engineering: the art of deception

Figure 2. Security badge costing a few dollars that a social engineer can use to exude authority.

is actually blocked) may issue a temporary password and give it over the telephone. Now, you have access to the system. Referring to a senior person in the organization (authority). This trick often works very effectively combined with the time pressure element. Indicate that the victim is hindering the actions of a high ranking person in the organization and that the victim must immediately assist with the request. A variation of this is using clothing and accessories that exude authority (see also Figure 2). Wearing a suit and tie makes it sometimes much easier to get into a building without being questioned than wearing jeans and a T-shirt. I once entered a bank in a soaking construction workers jacket announcing that there was a leak on the floor above. I said something like: I just want to take a quick look to see if any water is coming through the ceiling. The staff were happy that they had been warned in time and without asking questions allowed me access to the restricted areas in the building that should only be accessible by bank staff. Asking for help: for example, ask someone to print a file from a USB memory stick that is infected with malware that infects the pc of the victim as soon as the file on the stick is accessed, or borrow an access badge because you left yours on your desk. A request made by a man (the tester) to a woman (the victim) and vice versa is usually fulfilled easier than when the gender is the same. Using recognizable items related to the organization that is being evaluated. Employees may believe they are dealing with a co-worker because you have an access badge (possibly forged), similar style of clothing, business cards, jargon, knowledge of work methods or names of information systems or colleagues (name dropping). All are less likely to prompt critical questions. If the name on the (fake) badge also has a LinkedIn or Facebook profile that refers to the company being

evaluated, even the most suspicious people may be convinced that they are dealing with a co-worker. Another method is to request one employee to give information to another employee (for example, communicate with an internal department to have them forward a wrongly addressed email). Using these internal reference points increases credibility. Another example is recording the hold music that companies use when callers are put on hold. You can call an employee, then say after a few minutes: Wait a minute please, I have to get the other line. You then put the victim on hold and play the hold music that you previously recorded making the victim unconsciously think: Hey, thats our music, he must work for our company. Indicating that all colleagues of the victim have acted the same way so that it makes the request seem completely normal. People are inclined to believe something is correct when others have made the same choice. A variation of this is the gradual escalation of requests (for information). If someone has already fulfilled a number of requests (for example, they looked up trivial information) it is then more difficult to refuse a request for confidential information. Creating the need to return a favor. Giving people something creates an emotional obligation where they feel they owe you something back. This makes it easier than usual to get someone to fulfill a request. When you have done something for someone (even when they did not ask for it), it becomes more difficult for that person to refuse a request. Creating the impression that the actual request already is a concession. When all that is needed is five minutes inside, it can be useful to request a tour on the premises. If this is refused, insist that it will only take five minutes to have a quick look around. Offering something that leads to a personal benefit. For example, send a phishing email with a code to receive a personal Christmas packet. Creating unexpected situations so that employees (especially security guards) are no longer able to follow their usual routine. We once dressed up as Sinterklaas (a traditional Winter holiday figure celebrated in the Netherlands) and his helper and have even penetrated a high security data center in this manner (Figure 3). The data center was at a secluded location and surrounded by high fences with barbed wire, dozens of cameras and an earthen wall that hid the building from view. We called security on the phone a week in advance and pretended to be from the HR department. We told them that we were calling about the Sinterklaas activities at the different locations. To get onto the premises, we first had to get through a checkpoint where a security guard behind bullet-proof glass consulted his colleagues inside the building when we showed up. Somewhat to

Compact_ 2012 0

Information security

25

Figure 4. A button camera that surreptitiously films security sensitive actions such as password keystrokes.

Figure 3. The Sinterklaas and helper who managed to penetrate the data center.

of the guards with a chocolate letter, they allowed us access. We made a tour through the building and we left again with no problems. Using distraction such as bringing along that attractive female colleague with a short skirt and high heels.

our surprise, we were allowed to enter the premises and the door was locked again behind us. When we arrived in the data center itself, we walked straight up to a glassed-in security area with five security guards. A quick peak in our heavy bag of pepernoten (traditional Sinterklaas cookies) would have sufficed to reveal the recording equipment of the spy camera (Figure 4) and unmask us. Hello! Well, here we are then!!, we called out, and instead of putting identification into the tray filled it with pepernoten. After bribing one

As mentioned before, for each social engineering test specific attack scenarios are elaborated depending on the specific situation of the client. These scenarios often use one or more of the aforementioned techniques. In Case study 2, a personal connection was made with the victim, recognition was induced by referring to internal departments, a personal benefit was offered (not losing data) and a compromise was agreed upon (last paragraph). The fact that help had previously been given also created an obligation to return the favor.

Case study 2
In a test case where the goal was to gain unauthorized access to a system, I called an employee to report that there was probably a problem with her system as it was causing an enormous amount of traffic on the network. I said that it would eventually crash her system and in the worst case prevent access to existing data. When I asked whether her laptop was very slow lately, I did indeed receive an affirmative answer (of course). After some random tapping on my keyboard, I said that I had found the problem, emphasized how very difficult it was to solve, but that I was working on it. I hung up and called again after half an hour to indicate that the problem was solved. After she had thanked me emphatically, I hung up.
Figure 5. The relationship between methods, tricks, and attack scenario.

Two days later, I called again and said that, unfortunately, it turned out that the problem was still present and it appeared that changes needed to be made to her laptop. I asked her whether she could bring her laptop along to the local IT department (which I had already called earlier to determine how the process worked and to verify that there actually was a local service point) to give the impression that I actually worked within her company. The employee said she was very busy and it was very bad timing. I said that we could make an exception and that I could try to solve the problem remotely. I said that we, for security reasons, never asked users for their passwords over the phone, and therefore I asked her to temporarily change her password to "welcome123" so that I could fix the problem remotely. Two minutes later I was able to log in to the laptop and I had access to the confidential data that I wanted.


Figure 6. Audio bug with which one can listen in via cell phone calls.

Figure 7. Key logger that collects all keystrokes.

Methods
Some common methods that are used in a social engineering attack are presented below. These methods partly rely on the previously described psychological tricks. The combination of methods constitutes the attack scenario.

Phishing: an attack method using forged email messages or web pages that appear to be legitimate, such as those of the employer, but which in reality are controlled by the attacker. These email messages and pages are often aimed at collecting employee data (for example, passwords).

Dumpster diving: searching for valuable information by looking through garbage bins, bins by copiers, or containers outside an organization's premises.

Pretexting: obtaining information under false pretenses (the pretext). For example, calling an employee and pretending you are a colleague.

Tailgating: hitching along with an employee through a secured entry gate to get physical access to a secured location.

Reverse social engineering: a method in which the victim is manipulated so that they ask the social engineer for help. The social engineer creates a problem for the victim and then makes himself known as an expert who can solve the problem. The social engineer then waits for the victim to make a request. Trust is more likely because the victim takes the initiative.

Shoulder surfing: watching when someone enters a password or PIN code. You do not actually have to watch in person. In several tests we used miniature spy cameras, such as a button camera (Figure 4) that replaces one of the buttons on your jacket. After the entry of a password has been recorded, it can be played back later.

Placement of listening devices (bugs), wireless access points or key loggers: once access is gained to a building, it is often easy to place listening devices. Modern listening equipment is available at low cost. For instance, such a device can dial a previously programmed cell phone number when sound is detected, so that the attacker can listen along via the phone (Figure 6). Alternatively, a key logger can be installed (Figure 7). This device can be plugged in between the keyboard and the computer in a few seconds and will then record all keystrokes that are typed in. Current versions of key loggers can then automatically send an email with captured keystrokes to the attacker through a wireless network. Hiding an access point inside a building may also be useful (for example, by hiding it behind a radiator). After it is connected to the network, the attacker can leave the building. On the outside, say in a car, the attacker then connects to the newly installed access point and accesses the internal network with little chance of being detected and arrested.

Malware: malicious software that, for example, collects and forwards passwords to the email address of the attacker. Malware can be installed on systems by, for example, using an infected PDF file ([Paqu01]). The PDF file can be circulated in different ways, for example, by leaving a USB memory stick containing files titled "2011 payroll" or "fraud investigations in 2011" or similar. Ideal places to leave these sticks are in the restrooms or by the coffee machine. When the victim opens the PDF, the malware runs in the background automatically.

In Case study 3, some of the above methods are used. This example shows, amongst other things, how information obtained from one attack can be used in another attack to get even more information.

Case study 3
It was just after eight o'clock in the morning when I parked my car a few hundred feet from the building of one of our clients. I had earlier determined that most employees came to work with their car and parked behind the head office in the private parking lot. It seemed best to mimic this habit because walking through the car park would probably draw attention to my presence. In my car mirror, I kept an eye out for employees driving up to the lot. After about ten minutes, a gray car appeared. Once the car passed me, I merged and followed closely behind. Unfortunately, the car drove past the building of today's target and I was forced to circle back to my starting position. The second time, I had more luck and after the employee used his access badge to open the gate I could follow closely behind to get into the private car park behind the building. I waited until the employee left his car and entered through the staff entrance at the rear of the building. I walked to the smoking area near the entrance. I grabbed a new pack of cigarettes out of my pocket and lit one. Fortunately, there were no cameras on this side of the building, so I could just quietly wait until an unsuspecting employee joined this non-smoker who was flaunting a cigarette for the occasion. A woman wanting a smoke appeared after a little while. We talked a little and walked back together through the door opened with her employee badge into the building. I was inside! I immediately decided to follow her up the stairwell because it appeared that this client had placed card readers on the doors of each floor. I followed her to the fourth floor and entered the office; once again she politely opened the door for both of us. Luckily, there was a coffee machine so I could stay there for a while and observe the floor without walking myself into a dead-end part of the building. A little further away, I could see some rooms set up for meetings. I took my coffee with me to a meeting room, removed the cable from the VoIP phone and inserted it into my laptop. While my laptop booted up, I cast a glance at the stack of paper that I had grabbed from the bin near the printer while walking by. It included emails with a lot of addresses of employees in the To and CC fields. Perfect! These would be the victims in my next attack.

After my laptop booted, I performed a port scan on port 80 on nearby IP addresses to look for internal web pages. I also used my web browser to try to open a few obvious URLs like intranet.clientname.com, intraweb.clientname.com, search.clientname.com, directory.clientname.com, and so on. It did not take me long to find an internal web page. I copied the page and adjusted some text, and after fifteen minutes I had put together an "employee of the month" voting page that looked exactly like the company web pages, including logos and colors. Then, I started a web server on my laptop so that the newly created page could be accessed via the internal network. A second limited port scan allowed me to identify an internal mail server that had mail relaying enabled (allowing anonymous email to be sent out). At that moment, I had been in the building for at least twenty minutes and had not been questioned by anyone about what I was doing there. Then, I focused again on the victims. First, I sent an email via the identified mail server, containing the content of an email that I had copied from my spam folder, to some of the addresses in the printed emails. I hoped that this email would trigger an out-of-office message from one of the employees. When I then received just such an email, I copied the signature from it and changed the name and function to fictional ones. I now had a web page and an email message that looked exactly like those used in the organization. Then, I created an email with a reminder for the invitation to vote for the employee of the month. The message indicated that a random selection of employees could nominate their colleagues for this award. This could be done via an internal web page included in the link at the bottom of the email. Naturally, logging in was required to prevent people from making duplicate votes. The reminder indicated that those who missed the first mail still had the chance to enter their vote up until 12:00 the same day. I switched to a second window and calmly waited until the passwords of the first enthusiastic employees appeared in the second window. This took exactly two minutes after sending out the reminder email. By logging in at the site, the employees, in addition to their password and username, also automatically left behind their IP address. This was all the information that I needed. I started Metasploit (a hacker toolkit), which allowed me to remotely log in to the PC of the first survey participant. Meanwhile, I had also found the user in the internal online telephone directory. Unfortunately, it turned out that the first employee worked in the finance department. At this stage, I was really looking for an IT administrator because they often have privileges to access a large number of systems. I decided to dump the local password hashes on the user's system. Using the hash of the local administrator account, I tried to authenticate against the system of an arbitrary user on the network. This trick has worked at several client sites and was now also successful. Since all (or at least a lot of) desktops were installed from the very same image, the passwords for the local accounts were also identical. At this point, I had been inside for about three quarters of an hour without anyone noticing and I had already taken full control of two systems. Unfortunately, the password hash did not work on the domain controller, so I decided to keep logging into desktop systems until I found a system with a user (or process) that was running with the highest privileges (for example, the IT administrator). After twenty minutes, I found a system where an IT administrator was logged on. The freeware Metasploit tool has a built-in feature allowing you to take over the identity of a user and with it all his privileges. After I took over the identity of the IT administrator, I had domain administrator rights and full access to all Windows systems and the data present on the network, including all servers with financial administration and the mailboxes of the board of directors. I made some screenshots and decided that it was time for a second cup of coffee.
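The internal reconnaissance in Case study 3 starts with nothing more exotic than a TCP connect attempt per address. The Python sketch below shows the idea for a single /24 range; the network prefix is a placeholder, and scans like this should of course only ever be run with explicit authorization.

```python
# Sketch of the simple TCP connect scan used in Case study 3 to find
# internal web servers on port 80. The address range is a placeholder;
# run scans like this only with explicit authorization.
import socket

def hosts_with_open_port(network_prefix, port=80, timeout=0.3):
    """Try a TCP connection to every host in a /24 range; return responders."""
    open_hosts = []
    for last_octet in range(1, 255):
        host = f"{network_prefix}.{last_octet}"
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_hosts.append(host)
        except OSError:
            pass  # closed, filtered, or unreachable
    return open_hosts

print(hosts_with_open_port("10.0.0"))  # placeholder internal range
```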


Knowing about possible attack techniques and the weaknesses of the target builds real awareness
Case study 3 shows that it is not always important how many employees are tricked by social engineers. In this particular situation, it was enough for an outsider to deceive only two employees to compromise the entire IT environment.

Countermeasures
Awareness
The keyword in countering social engineering attacks is awareness. More specifically, it is what the targets know about possible attack techniques and their own weaknesses. In one of my assignments, in addition to the usual paper bins alongside printers, the client also placed large enclosed bins for any paper containing confidential information. Nonetheless, the bin for ordinary waste paper provided a huge stack of confidential documents (reports of security incidents, HR information, passwords, and so on). Why? It was probably too much trouble to push the piles of paper through the small slot in the bin for confidential paper, and it was just easier to throw it all away in one go. When clients hear how a trick works at a presentation or training, people often say things like: "You have to be really naive to fall for that, it would never work on me." Our test results show otherwise. Therefore, it is useful to perform a test and confront employees with the results within their organization to really raise awareness. It usually shows that people are not as ready for such an attack as they think they are. It is this that leads to real awareness. In addition to promoting awareness, a test is also quite useful in identifying risks.

Guidelines
Alongside awareness, it is essential to draw up guidelines and to keep checking compliance with them. Consider drawing up ten rules for information security. An example is as follows:
1. Never reveal your passwords to others (including IT employees).
2. Do not share internal information with outsiders.
3. Adhere to the clean desk and whiteboard policy.
4. Lock your computer when you leave your workstation.
5. Do not leave any information behind at the printer.
6. Use secure waste bins for confidential information.
7. Verify the identity of the caller when asked for confidential information. (For example, in case of a telephone request, ask the caller to call back on a specific number.)
8. Never save confidential information locally or on a private PC or device (drive, USB stick).
9. Immediately alert the security officer about any suspicious activities.
10. Keep your access badge visible and request colleagues to wear their badge. Any unknown person without a badge should be escorted out of the building and handed over to the reception and/or security.
To ensure that such rules are followed, it is necessary to monitor whether employees are actually complying. The outcome of the monitoring (both positive and negative) should be given as feedback to the relevant employees.

Conclusion
After reading this article, you may doubt whether the cases described ever happened and whether such incidents can succeed in real life. Unfortunately, the reality is that these and similar attacks occur every day, despite the various security measures. Security personnel, barbed wire fences, access cards, CCTV, alarm systems, and so on, are not enough. Social engineers know how to penetrate into the heart of an organization. Performing a social engineering test can be a good way to identify risks in an organization and raise employee awareness.

References
[Hadg01] Christopher Hadnagy, Social Engineering: The Art of Human Hacking, 2010.
[Mitn01] Kevin D. Mitnick and William L. Simon, The Art of Deception, 2002.
[Paqu01] Matthieu Paques, Hacking with PDF files, http://www.compact.nl/artikelen/C-2009-4-Paques.htm.
[security.nl] Articles concerning social engineering attacks, http://www.security.nl/tag/social%20engineering.

About the author


M.B. Paques is a manager in the ICT Security and Control team for KPMG IT Advisory. He has experience with security testing, social engineering, technical security reviews and the security of new technologies.


Toward a successful Target Operating Model


Application of demand-supply concepts in the design of a Target Operating Model for IT outsourcing

Gerard Wijers, Rudolf Liefers and Oscar Halfhide

The quality of the internal organization is becoming increasingly crucial to the continued existence and eventual success of an enterprise. At the same time, the organization must have the potential to own or develop the capacities, knowledge and dynamic competencies needed to become a distinctive organization and to maintain a sustainable competitive advantage. Modern organizations need sophisticated leadership and entrepreneurship to steer the organization and to maintain its course. A balanced, well-designed and flexible governance model, also called an operating model, is essential for such organizations. A governance model is a compass that not only supports the steering but also enhances the decision-making that proper steering requires. This article portrays various possible forms of a Target Operating Model for an IT organization in an outsourcing situation, as seen from the demand-supply management perspective. A variety of designs are elaborated, based on different fundamental starting points.

G.M. Wijers is a partner at KPMG Advisory. wijers.gerard@kpmg.nl
R.J. Liefers is a senior manager at KPMG Advisory. liefers.rudolf@kpmg.nl
O. Halfhide is a senior manager at KPMG Advisory. halfhide.oscar@kpmg.nl

Introduction

Organizations today operate in interesting, fast-paced, but also uncertain times. A few examples are globalization, shifts within and between markets, horizontal and vertical integration of operations in value chains, declining customer loyalty and the changing nature of competition. There are countless variables in the continuously changing and shifting playing field. Organizations are constantly working on increasing internal efficiency, improving external effectiveness and lowering costs. The concept of sourcing plays an important role in the optimization, rationalization and innovation of value chains within and between organizations. In this context, we examine services, processes and business functions and focus on how to achieve sustainable competitive advantages. In addition, we examine whether there is added value in taking a different perspective concerning quality, time and costs, or in reorganizing or outsourcing part(s) of the value chain. Examples are concentration-deconcentration, centralization-decentralization, and insourcing-outsourcing. The risks with outsourcing are diverse, but usually relate to overspending or even unpredictability of costs, service degradation, and loss of critical knowledge and expertise. Governance plays an important role in the repositioning or rearranging of activities within the value chain.

Outsourcing requires changes in governance


In recent decades, organizations have undergone changes from both a social and an organizational structure perspective. In some cases, this has been accompanied by a transformation where the organization has undergone a true metamorphosis both internally and externally. Such internal changes or transformations to organizations require adjustments in governance. However, such adjustments do not always occur. If governance, operations, organizational structure and objectives are not aligned then this leads to

suboptimal results, even in the most favorable situation. Companies that outsource IT have exactly the same problem ([Beul10]). The annual vendor performance study carried out by KPMG EquaTerra showed that 63 percent of the outsourcing IT organizations in the Netherlands characterized the quality of management as weak or average ([KPMG11]); see Figure 1. One aspect of the 2010 Pulse study by KPMG EquaTerra ([KPMG10]) focused on the question of how organizations perform their sourcing transition. This study revealed that realigning and redesigning the retained IT organization, needed for an adequate connection between business demand and IT supply, often seems an undervalued element of a sourcing transition (see Figure 2). The role of the business itself, as well as well-applied demand management and supply management concepts, is crucial. When organizations choose to outsource, the internal management of organizations is often not adequately co-developed and adjusted. The internal governance structures often remain unchanged and traditional, fragmented and not very goal oriented. In an outsourcing situation, this hinders effective cooperation between parties in the demand and supply chain and leads to a partial or complete failure to meet sourcing and company objectives. The question arises as to what a modern governance model should look like: one that can be utilized flexibly and adapted easily to any given outsourcing situation.

Figure 1. The quality of governance for IT organizations in the Netherlands in 2011. (Pie chart: Excellent 4%, Good 33%, Average 47%, Weak 16%.)

Figure 2. Changing the own (retained) IT organization is not dealt with sufficiently during the transition. (Bar chart, 0-100%, comparing responses of advisors and service providers on the most frequently ignored aspects: re-aligning the retained organization; effectively managing the change; adequately resourcing the transition project; ensuring adequate involvement of the client business units in the transition; establishing an effective project oversight committee and joint governance framework; focusing on the process transformation required for the parties to effectively work together; ensuring checks and balances are in place to validate go-live readiness; validating the contract scope of services.)

The role of a Target Operating Model


In the case of a reorganization, a design is (usually) drawn up for the future organization model. Previously, the development of an organogram was deemed adequate. Today, companies are analyzed, developed and optimized from an added-value perspective. Accordingly, reasoning from the perspective of the value chain(s) upon which the organization is built, added-value activities are to be bundled into logical units. Elaboration of added-value units makes it possible to outline a Target Operating Model, creating a clear picture of how a company or organization wants to serve its markets, what services will be offered, and how to make use of the available subcontractor market. The Target Operating Model serves as the foundation on which further detailed designs can be elaborated for processes, workflow, organizational structure, roles, responsibilities and so on (see also Figure 3).

A Target Operating Model will thus show the most distinctive strategic design choices that an organization makes to achieve its objectives. It shows the relationships between components in the value chain, the organizational units and business functions, and the corresponding governance structures. A Target Operating Model allows senior management to explain what the organization looks like and how it should function. The nature of a Target Operating Model also makes it a powerful communication tool for reorganizations.



Figure 3. Using the TOM to go from strategy to detailed design. (Diagram: determining the strategy, that is, which customers, which products, which channels and which differentiating factor, feeds the high-level Target Operating Model, covering macroprocesses, people, governance mechanisms, structure, culture and locations (geography). The TOM is in turn operationalized in the detailed design: workflow, microstructure, job descriptions, duties and responsibilities, mechanisms for cooperation and coordination, governance and KPIs, and technology and data.)

Outsourcing as a necessity for a new TOM for IT


In recent decades, it has become clear that the use of economies of scale and a focus on core competencies can lead to large cost savings, and that external service providers play an important supporting role. Providers work on a larger scale and allow the customer to focus on other important issues. However, the success or failure of outsourcing is largely determined by close cooperation between customer and supplier. Many external IT outsourcing suppliers are large multinational companies that must have clearly formulated requirements for new or upgraded IT facilities to be optimally effective. This allows the management of such supplier organizations to be successful and to become an added-value activity. Also, creating well-organized connections between business demand and IT supply is deemed a value-adding activity and hence deserves a place in the Target Operating Model (see Figure 4).

Designing a Target Operating Model


When constructing a Target Operating Model for an IT organization within a full outsourcing context, three main building blocks are important:


The first building block, demand management, is focused on the formulation of needs (the what). Demand management is customer-facing; it ensures that the demand is well defined and that the supply conforms to the demand. The value strategy is Customer Intimacy.

The second building block, supply management, is focused on attaining the right services for these requirements (the how). Supply management is supplier-facing (internal/external) and ensures that the required services are provided. The value strategy is Operational Excellence.

Figure 4. Sourcing strategy as a factor influencing the Target Operating Model of the IT value chain. (Diagram: strategic assumptions, namely markets, services, sourcing strategy, customers and suppliers, shape the Target Operating Model, which in turn drives the process design and the organizational structure.)

The third building block, delivery, is focused on the actual delivery of the service(s). This building block may be internal (within the organization) or external (a supplier). The delivery building block deals with the development of IT solutions (project oriented) and the management of the solutions (management oriented). Management includes IT infrastructure, application management and database management. Figure 5 illustrates this structure.


Figure 5. The building blocks of a Target Operating Model for IT. (Diagram: the business formulates the why and what, that is, business processes, functionality, information and requirements, toward demand management, which focuses on effectiveness: doing the right things. Supply management determines the which and who, that is, applications, IT infrastructure, data and the IT solution, and focuses on efficiency: doing things right. Delivery, internal or external, actually provides the services.)
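This demarcation can be made concrete in a toy model. The sketch below is purely our illustration, not a formal part of the TOM method; all unit and supplier names are invented, and it merely shows how the three building blocks and their relationships could be captured in a simple data structure:

# Toy sketch of the three TOM building blocks (our illustration; names are
# invented for the example, not taken from the article's method).
from dataclasses import dataclass, field

@dataclass
class DemandManagement:              # customer-facing: defines the "what"
    business_unit: str
    requirements: list = field(default_factory=list)

@dataclass
class SupplyManagement:              # supplier-facing: arranges the "how"
    domain: str                      # e.g. "applications" or "infrastructure"
    suppliers: list = field(default_factory=list)

@dataclass
class Delivery:                      # actual delivery, internal or external
    provider: str
    external: bool

@dataclass
class OperatingModel:
    demand: list
    supply: list
    delivery: list

tom = OperatingModel(
    demand=[DemandManagement("Retail", ["order portal uptime 99.9%"])],
    supply=[SupplyManagement("applications", ["AppVendor X"])],  # assumed name
    delivery=[Delivery("AppVendor X", external=True)],
)
print(tom)

Even in this toy form, the separation makes explicit who formulates needs, who arranges services against those needs, and who actually delivers them.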

Case study
As a result of distinguishing between business demand management, supply management, and internal and/or external delivery, a clear demarcation of responsibilities becomes visible, in such a way that everyone knows which part they should play in the demand-supply value chain.

In Figure 6 we show an example of the Target Operating Model for an IT organization that has a number of delivery units delivering business-unit-specific IT services. The demand management (business information management) is organized within each business unit. The supply management is mainly grouped into realizing business applications on the one hand and providing infrastructure on the other. The business application teams are application oriented and identifiably aligned with the business: each team focuses on an application landscape that supports either an enterprise domain or a business domain. These teams focus especially on supply management (management, specifications and testing) and manage the application suppliers. There are three infrastructure service teams: the own service desk, supply management for hosting and networking, and supply management for workplaces and telephony. The actual management of this infrastructure is outsourced.

Figure 6. Demand and supply management units located in the IT value chain. (Diagram: information management per business unit on the demand side; a CIO Office; Business Application Services and Infrastructure Services units performing supply management; external delivery by an application service provider for application development & maintenance and by an infrastructure service provider.)

Playing with building blocks

Playing with the various building blocks reveals the first outline of the desired value chain for the IT services of an organization, and thus of the Target Operating Model. The choices made here are mainly concerned with:

the products and services delivered by the organization
the complexity and specificity of the application landscape
the chosen sourcing strategy

The above building blocks can be used to derive strategic design possibilities for a future Operating Model. In our experience, the following options are possible:
1. demand management per business domain and/or generic use
2. supply management aligned to service/technology domain and/or generic use
3. internal delivery combined with supply management
4. combining demand management and supply management
5. strategic management processes per business domain and/or generic use (enterprise level)
Each option is briefly described below.


1. Demand management per business domain and/or generic use Demand management focuses on the customer. Organizations that have very dissimilar business units, each with their own strategy, will benefit from demand management that directly supports each business domain. Organizations that are focused on global and straightforward processes and products will benefit from a central/generic demand management organization. See Figure 7.

Figure 7. Generic demand management (enterprise wide) and/or by business domain. (Diagram: demand management organized generically for the business group as a whole, organized per business domain, or a combination of both.)

2. Supply management aligned to service/technology domain and/or generic use Supply management can govern many dissimilar services. The nature of these services can be so different that we need to make a distinction between the possible types of supply management. In practice, it is categorized into the following three IT service delivery domains: 1) development, 2) application management, and 3) infrastructure management. See Figure 8.

Figure 8. Supply management per service or for several services together. (Diagram: a separate supply management function per IT service delivery domain, or one supply management function covering delivery domains 1 through N.)


3. Internal delivery combined with supply management When the provision of IT services is outsourced, it becomes necessary to identify the components of supply management. For internal delivery, it is somewhat less clear-cut. It is quite possible to combine the responsibilities for supply management and IT delivery. See Figure 9.

Figure 9. Internal or separate delivery with supply management. (Diagram: supply management and internal service delivery as separate units, or combined into one internal supply management & service delivery unit.)

4. Combine demand management and supply management For specific business domains and application services, it can in some cases be advantageous to combine demand and supply management. For example, it is convenient in business domains that use their own specific application suite. See Figure 10.

Figure 10. Demand and supply management separate or together. (Diagram: demand management and supply management as separate units, or combined into one demand & supply management unit.)

5. Strategic management processes per business domain and/or generic use (enterprise level) Fulfilling this option results from identifying three levels of management, which can lead to further refinement of the Target Operating Model for IT organizations:

The strategic processes determine the path of the enterprise in the medium and long term and also define the scope. Consider strategy and policy, compliance, portfolio management, architecture and the annual budgeting processes.
The tactical processes concern acquiring, maintaining and allocating assets (money, people, means of production and support services) so that business objectives can be met. This may include project portfolio management, financial management, contract management and so on.
The operational processes actually make use of the business assets for realizing the services, where one part can be performed by the organization itself and the other part by the supplier.

Strategy and policy, compliance, portfolio management, architecture and the annual budgeting cycle can be set up for each business domain or for the whole company (and sometimes at both levels). This choice is largely determined by the prevailing business governance. See Figure 11.

Figure 11. The different types of strategic management layers. (Diagram: strategic management in a CIO Office at business group level, strategic demand management per business domain, or a combination of both.)


Lessons learned from designing a TOM for IT


Design the management organization in a timely manner
During the outsourcing process, insufficient attention is often given to managing IT demand, and this is then reflected in the design of the demand & supply management model: the design of the demand-supply management organization turns out to be insufficient. This leads to the business case and the keenly negotiated contract being undermined by ill-defined management of demand and ill-defined specification of the IT services that the supplier should deliver. This leads to frustration not only for the customer, but also for the supplier. In addition, the combination of an unbalanced governance model with an immature demand-supply management organization makes the outsourcing an extremely risky proposition from a business perspective. As a result, the business benefits calculated in the original business case will not materialize, and this will cause irritations and escalations that jeopardize the collaborative relationship.

Allow for the optimal sizing of the demand-supply management organization
There is an optimum size for demand and supply management activities. An organization that is too small often leads to ill-defined specification of services, which may lead to uncontrollable throughput, characterized by a large number of small contracts and unmanageable hourly rates. A demand-supply management organization that is too large leads to overly divided responsibilities and thus to superfluous internal discussions that impede momentum and sink productivity. This means that problems in governance and management cannot be solved by simply hiring additional people. Benchmark-based research shows that the cost of governance and management of demand and supply after outsourcing of processes and/or services is extremely dependent on the type of work that is outsourced. For outsourcing in the IT domain, for example, it lies between 12 and 24 percent of the total IT expenditure (contract value plus management costs), as the sketch below illustrates.
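To make the 12-24 percent benchmark concrete: if the ratio r is defined as management costs over total IT expenditure (contract value plus management costs), then the implied management budget for a contract value C follows from M = r / (1 - r) * C. A minimal Python sketch; the EUR 10 million contract value is purely an assumption for the example:

# Worked example of the 12-24% governance benchmark mentioned above.
# All monetary figures are assumptions for illustration only.

def governance_budget(contract_value: float, ratio: float) -> float:
    """Management costs M implied by benchmark ratio r, where
    r = M / (contract_value + M), hence M = r / (1 - r) * contract_value."""
    return ratio / (1.0 - ratio) * contract_value

if __name__ == "__main__":
    contract_value = 10_000_000  # assumed annual contract value in euros
    for ratio in (0.12, 0.24):
        m = governance_budget(contract_value, ratio)
        print(f"benchmark {ratio:.0%}: management costs ~ EUR {m:,.0f} "
              f"on total IT spend of EUR {contract_value + m:,.0f}")

For the assumed contract value this yields roughly EUR 1.4 million to EUR 3.2 million per year for the demand-supply management organization.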

Conclusion

A well-designed Target Operating Model is essential for successful outsourcing

Organizations that rely heavily on IT with respect to their service or product cannot be effective when the IT value chain is not set up effectively. This is especially true with respect to governing demand and supply in IT outsourcing situations. When internal governance is not in order, it endangers the creation of added value, the achievement of the outsourcing objectives and the service performance itself. If the value chain and Target Operating Model are not clear, it is senseless to have discussions with suppliers about the effectiveness of the collaboration. Thus, it is extremely important to have an effective Target Operating Model for IT. This TOM must include a clear, results-oriented governance structure with corresponding processes, responsibilities, roles, jobs, competencies and an appropriate organizational sizing. The precise design of the TOM is determined, among other things, by the business strategy and the sourcing strategy. A good TOM ensures that the demand is driven by the business in consultation with the suppliers and safeguards the collaboration at all levels in the IT value chain. In a successful outsourcing engagement, the design of the demand-supply management structure is already under way when the outsourcing selection process starts. So when outsourcing must be managed (more) effectively, it is an important precondition that sufficient attention is given to applying demand-supply concepts.

References
[Beul10] Erik Beulen et al., Managing IT Outsourcing, Routledge, 2010.
[KPMG10] KPMG EquaTerra Pulse, Q3 2010.
[KPMG11] Dutch Strategic Outsourcing Study 2011, KPMG EquaTerra, 2011.
[Wije10] Gerard Wijers, Oscar Halfhide and Erik Cazemier, De regieorganisatie op maat (bespoke management), Outsource Magazine, June 2010.

About the authors


G.M. Wijers is a partner at KPMG EquaTerra. He has over twenty years of experience in consulting on IT strategy, IT sourcing, IT governance and customer-supplier relationships within IT services. He works both at home and abroad in various market sectors. He is also affiliated with the post-graduate information management programs of Delft TopTech and Nyenrode. He received his Ph.D. (cum laude) from Delft University of Technology.

R.J. Liefers is a senior manager at KPMG EquaTerra. He has over fifteen years of experience in IT management and IT & business alignment topics and focuses on designing, implementing and improving IT governance and the demand-supply position of customer organizations. He studied Business Information Systems at Saxion University and received his executive master's degree in Information Management from TiasNimbas Business School.

O. Halfhide is a senior manager at KPMG EquaTerra. He is an experienced, enterprising consultant and management trainer who has worked for more than twenty years in the field of national and international outsourcing. He is active as a facilitator, speaker and author of articles within several professional networks. He received his executive MBA from RSM/Erasmus University and studied Business Engineering at Saxion University.



Adaptive IT service providers: fact or fiction?


Albert Plugge

Many companies and organizations find that outsourcing the IT function is an effective way to implement their business strategy. Enterprises that outsource expect their service providers to provide high-quality IT services. Nonetheless, experience shows that not all service providers succeed in providing the consistent performance agreed upon for IT services. The multidisciplinary nature of IT outsourcing leads to increased complexity, and this impacts the realization of consistent performance. The purpose of this article is to provide insight into a methodology that will enable service providers to achieve consistent performance for their customers. The methodology we developed and the corresponding measuring tools are also discussed.

A.G. Plugge

is a senior manager at KPMG Advisory. plugge.albert@kpmg.nl

Introduction
In the past 15 years, globalization, deregulation and consolidation have played a significant role in how companies develop a business strategy. The IT strategy derived from the business strategy invariably raises the question: which activities should we perform ourselves and which should we outsource? Examples of these IT activities include IT infrastructure, business applications, communication networks, and so on. In recent decades, the number of companies choosing to outsource all or part of their IT environment has increased significantly across the globe. In 2010, the IT outsourcing market was worth US$ 270 billion, with an annual growth of between 7 and 10% ([IDC10]). Enterprises that outsource IT activities expect their service providers to provide high-quality IT services that satisfy the agreed upon service level agreements. Factors that affect the quality of service include relationship building, contract management ([Beul11]), insight into hidden costs ([Bart01]) and change management ([Plug09]). The multidisciplinary nature of IT outsourcing leads to increased complexity for service providers, and this impacts the realization of consistent performance. Both scientific and market research ([Feen05]) have shown that many service providers are deficient in or incapable of providing consistent performance for their customers during the period of the outsourcing contract. By consistent performance we mean that the IT services delivered satisfy the agreed upon service level agreement. This article describes a work method for IT service providers that is based on adaptivity. First, the necessary background is covered that will throw some light on providing consistent performance. This is followed by an elaboration of the adaptivity concept. Next, the work method to achieve consistent performance is explained, and then the corresponding measuring tools.

Background
IT service provider performance that does not meet the agreed upon requirements of the outsourcer often has a direct impact on the primary business processes. In practice, deficiencies in consistent performance give rise to onerous (financial) discussions between the outsourcer and service provider that put extreme pressure on the relationship. There is good reason for the sharp increase in the number of outsourcing mediation cases in the last five years. Lowered service provider performance also leads to lowered customer satisfaction and an attendant lowering of the recommendation rating: the degree to which an outsourcer recommends a service provider to other enterprises.

Inconsistent performance is strongly related to the sourcing expertise (capabilities) of a provider and the manner in which that expertise is organized. The sourcing capabilities can be seen as the relationship between knowledge, experience, processes and procedures that support the development and delivery of IT services. This involves capacities that are both tangible (hardware, software) and intangible (attitude, behavior). Interestingly, enterprises that are motivated to outsource make the assumption that providers actually have sufficient sourcing capabilities. When sourcing capabilities are further elaborated, this elicits the well-known IT processes with relation to information management, service management and change management. A sourcing capability model is a convenient way of forming an impression of which sourcing capabilities are important ([Feen05]). The model (see Figure 1) describes a dozen capabilities divided into three areas of competency: Relationship, Delivery and Transformation. Providers must have a sufficiency in these sourcing capabilities to be capable of delivering quality IT services. The sourcing capabilities partially make use of IT processes. Thus, a relationship arises that is supported by the internal information services within the organization of the outsourcer. This affects the domain of IT auditing with regard to the specific monitoring of IT risks and management of IT processes.

In addition, the question arises as to how these sourcing capabilities are organized within the organization. Is it clear where these capabilities are available in the organization, and are they easy to gain access to? When sourcing capabilities must be made available internationally, this increases the complexity of organizing them. Moreover, dimensions that play an important role are decision making, hierarchy, communication, horizontal integration (specific or generic knowledge) and the degree of formalization. Developments on the customer side also appear to have an impact on the sourcing capabilities and organizational structure of providers ([Plug09]). Examples of changing customer circumstances include changes in the sourcing strategy of the customer (from single sourcing to multivendor sourcing), the need for innovation and the need for flexible provision of manpower and resources. These business needs call for constant monitoring by providers and assessment of the impact on their own capabilities and organizational structure.

Organizing IT services brings various orientations together, including organizational structure, IT processes, competencies, HR, laws and regulations, and, of course, information technology. In a word, the delivery of IT services is multidisciplinary. Remarkably, many providers manage changes only in specific knowledge areas, not in all areas as a whole. In fact, the different disciplines are interdependent, and this complexity means that they can no longer be managed separately. This increasing complexity obligates service providers to pursue an interdisciplinary approach within the said orientations. Changes on the side of the outsourcer may mean that existing sourcing capabilities and organizational structures need to be adjusted. This demands a high degree of adaptability from the board and senior management of the providers. Given that many providers base the delivery of IT services on the value discipline called operational excellence, adapting to changes in the customer's circumstances leads to internal conflicts. Offering tailor-made solutions is always completely at odds with delivering IT services at the lowest possible cost. The key to resolving the conflict can be found in achieving a balance between sourcing capabilities and the manner in which these are organized. This balance will lead to the realization of consistent performance.

Figure 1. Sourcing capability model ([Feen05]). (Diagram: twelve capabilities, namely planning & contracting, organizational design, governance, customer development, leadership, business management, program management, behavior management, sourcing, process re-engineering, technology exploitation and domain expertise, divided over three competency areas: Relationship, Delivery and Transformation.)

Adaptivity
The delivery of consistent IT performance requires the ability to adapt. Two factors play an important role here. The first is the willingness of providers to adapt themselves. It is not a given that this will occur automatically. Other influences within the organization can affect the willingness to adapt. Examples include a re-evaluation of the business strategy, shrinkage of market share, or loss of revenue in a specific market segment. After all, adaptation costs time and money. This also requires that management work to effect the changes and to ensure these are realized. In addition to willingness to change, the second significant factor in achieving consistent performance is the ability to actually implement these adaptations. An organization must have the right people and resources (processes, systems and tools) to be able to implement changes. The combination of willingness and ability to cope with change determines the degree of adaptivity.

Work method

A specific work method (Provider Performance Approach) was developed to assist providers in the change process required to achieve consistent performance ([Plug11]). The provider audience can be divided into two subgroups: external service providers and shared service center organizations within an enterprise. Both situations involve the delivery of IT services to end users (customers). The work method developed (see Figure 2) is a phased design and consists of four phases.

The first phase focuses on monitoring and discussing customer developments in the relationship with the outsourcer. Developments occur on the customer side during the contract period of an outsourcing agreement that may affect the provider organization. Examples of these developments are the decision to become active in other markets and adjustments to the portfolio. The regular monitoring and discussion of customer developments may seem trivial. In practice, however, many providers focus more on operational activities, such as resolving incidents and implementing changes in IT infrastructure and applications. This attitude takes away from the task of mapping developments that will occur in the medium and long term, often leads to problems being recognized too late, and delays the implementation of necessary changes. This gives rise to additional costs in the long run when catching up on those changes.

During the second phase, the identified developments are assessed for their impact in relation to sourcing capabilities and organizational structure. An impact analysis shows which specific sourcing capabilities must be boosted or built from scratch. Additionally, an assessment is made of whether the organizational structure oriented toward delivering IT services to the customer must be adjusted. The analysis also includes a substantive review of the agreed upon service level agreements to determine whether, and if so, what changes should be made. The outcome of the analysis is then presented to the board or senior management. This step makes it possible to assess the impact of changes on the customer side on your own organization, and decisions can now be made within a much broader context.

The third phase focuses on developing improvement initiatives. The impact analysis is a basis for developing focused initiatives that strengthen the sourcing capabilities and safeguard any changes to the organizational structure. Discussing these improvement initiatives with the customer positively influences their perception of the provider. Managing expectations helps restore the relationship between customer and provider and their trust in each other.

The fourth and final phase is the implementation of the proposed improvement initiatives. Experience shows that people get caught up in day-to-day issues that regularly prevent improvement initiatives from being implemented. This means that supervision of the actual implementation of the changes is very important. Setting up programs or projects is an effective way of ensuring improvements are realized. In particular, this is the responsibility of senior management within the provider organization. The adaptive ability of a provider begins with the willingness of management to deliver consistent IT performance. After improvements are implemented, the monitoring of changes begins again, which accounts for the cyclical design of the methodology.

Figure 2. Phased approach ([Plug11]). (Diagram: a cycle of four phases around the balance between sourcing capabilities and organizational structure. 1 Monitor: mapping changes, discussions on customer-side developments, performance measurement; 2 Evaluate: impact analysis, testing sourcing capabilities and organizational structure, diagnosis; 3 Improve: change management, drawing up plans for processes and capabilities, transition and transformation plan; 4 Implement: initiating the plans, education, coaching and training, testing the solution.)

Service providers should pursue an interdisciplinary approach



Measuring tools
Each phase of the developed methodology is translated into appropriate methods. This provides a measuring tool that traverses the entire cycle of the methodology. Working in this manner, we move step by step toward the realization of consistent IT performance. The work method and associated measuring tools are used by different service providers (national and international). The material developed for this purpose is available in both Dutch and English. The methods for each phase, and experiences with them, are explained sequentially.

Phase 1: Monitor
To gain insight into the changing customer circumstances, the developments for each market segment are investigated. The reason is that developments in a specific market segment, say the retail sector, can have an unusual effect on the sourcing capabilities and organizational structure of the provider, and thus also on their performance. The most significant developments in a market segment are added to a checklist. This checklist is used for each customer team within a market segment. The checklist can be supplemented with customer-specific developments. This provides a good snapshot of the different types of developments. Subsequently, the results and conclusions with respect to the changing customer circumstances are discussed with the customer. This is partly for verification and partly to create client awareness that the provider is taking the changes seriously and is dealing with them.

A second instrument is a questionnaire that is used to measure the performance of the provider. The questionnaire (via the web) is used to map the perceptions of the provider employees with respect to four themes: customer developments, sourcing capabilities, organizational structure, and performance. An example of some of the results of the questionnaire is shown in Figure 3. In the example, the previously described competencies are worked out, namely relationship, transformation and delivery. For each group, each statement is given a score (under X) and a number of respondents (under N). In addition to the specific snapshot per group, the table also gives an overall perception of the performance (Grand Total). This provides insight into each group and whether there are any bottlenecks. The questionnaire is completed by employees who are selected because they are actively involved in the IT outsourcing contracts. A distinction is made between three groups: relationships (sales), transformation and delivery. The reason is that employees in these different groups often have a different perspective on the above themes.

Statement | Relationship X (N) | Transformation X (N) | Delivery X (N) | Grand Total X (N)
K01 The provider regularly monitors changing customer circumstances | 3.0 (5) | 3.2 (17) | 2.8 (17) | 3.0 (39)
K02 The provider explicitly identifies customer changes | 3.4 (5) | 3.4 (19) | 3.3 (9) | 3.4 (33)
K03 Customer developments are assessed for the impact within the organization | 3.3 (1) | 2.6 (17) | 2.2 (4) | 2.6 (22)
K04 Customer developments affect the sourcing capabilities | 3.5 (2) | 2.7 (16) | 2.9 (6) | 2.8 (24)
K05 Customer developments affect the organizational structure | 3.2 (3) | 3.2 (12) | 3.3 (13) | 3.2 (28)
K06 Customer developments affect performance | 2.3 (7) | 2.6 (15) | 1.8 (10) | 2.3 (32)
K07 The business strategy is based on customer intimacy | 3.2 (9) | 3.1 (11) | 3.0 (14) | 3.1 (34)
K08 The customer sourcing strategy impacts sourcing capabilities | 2.5 (6) | 4.0 (20) | 3.2 (12) | 3.5 (38)
K09 The customer requirement for innovative solutions impacts sourcing capabilities | 2.4 (4) | 3.4 (20) | 2.4 (10) | 3.0 (33)
K10 The customer requirement for flexible deployment of staff impacts sourcing capabilities | 2.9 (5) | 3.7 (20) | 2.7 (10) | 3.3 (34)

Figure 3. Example (part of a completed questionnaire).
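The Grand Total column in Figure 3 can be reproduced as the respondent-weighted average of the three group scores. A minimal sketch (our illustration, not the authors' tool), using the K01 and K02 rows from the figure:

# Reproducing the Grand Total scores of Figure 3 as the N-weighted
# average of the group scores. Data is taken from the figure above.

scores = {
    # statement: [(group score X, respondents N), ...] for the
    # relationship, transformation and delivery groups respectively
    "K01": [(3.0, 5), (3.2, 17), (2.8, 17)],
    "K02": [(3.4, 5), (3.4, 19), (3.3, 9)],
}

for statement, groups in scores.items():
    total_n = sum(n for _, n in groups)
    weighted = sum(x * n for x, n in groups) / total_n
    print(f"{statement}: grand total X = {weighted:.1f} (N = {total_n})")
    # K01 -> 3.0 (N = 39), K02 -> 3.4 (N = 33), matching Figure 3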


Phase 2: Evaluate
During the evaluation phase, the information collected in the previous phase is analyzed. The identified customer developments and the results of the questionnaire are evaluated with respect to their impact on the organization (impact analysis). This is both a qualitative and a quantitative evaluation. It is possible to compare the results by selecting different target groups (sales, transformation and delivery). Experience shows that this comparison provides surprising insights into how different groups look at current IT performance. Bottlenecks are identified based on this first analysis. In-depth interviews are used to discover the reasons for the bottlenecks. These interviews should be conducted with participants working within the previously mentioned groups. Supplementary interviews are held with representatives of customers who receive services from the provider. This allows the outcome of the impact analysis by the provider to be tested against the perceptions of the customer regarding the IT services delivered.


Phase 3: Improve
The bottlenecks identified in the impact analysis are, in the third phase, translated into a number of improvement initiatives. This requires interaction with the most important stakeholders, including senior management, application experts and technical managers. To support this process, a workshop is developed in collaboration with the responsible stakeholders to discuss the outcome of the analysis. The bottlenecks are then iteratively reworked a few times into improvement initiatives. For each improvement initiative, it is determined what specific activities must occur and who is responsible for them. Examples of improvement initiatives include the design or re-design of IT governance processes, the design or re-design of organizational structures, the strengthening of specific sourcing capabilities and the development of an adaptation process.

Phase 4: Implement
In the last phase, the established improvement initiatives are actually implemented in the organization. An appropriate activity is sought for each type of improvement initiative, e.g. workshops, training and coaching. In particular, attention is given to the soft side of change. Experience shows that change invokes resistance. This is certainly true for changes in sourcing capabilities and organizational structures. The delivery of IT services based on an outsourcing contract is pre-eminently a people business. It is crucial to include employees who are affected by the changes. Indeed, employees are the key to realizing change. After a certain period, the outcome of the implementation is checked against the proposed improvement initiatives and, if necessary, adjustments are made. This allows improvements to be embedded within the organization. A case study follows that shows the work method and tools in action.

Case study: Regional IT service provider

The case study describes a service provider based in India that is specifically active in Europe and provides IT services to customers via delivery centers across the globe (development and management). As of 2001, the provider is active in IT outsourcing with a focus on target companies that have between 8,000 and 50,000 employees. The business strategy of the provider is based on three pillars: customer intimacy, supporting a limited number of market segments, and the pursuit of a cultural fit with its customers. The portfolio of IT outsourcing services focused specifically on IT infrastructure, workplace services, and application management.

Work method
As part of the first step of the work method, a questionnaire was disseminated among employees of the provider actively involved in outsourcing contracts. Subsequently, in-depth interviews were held with employees actively involved in a specific customer relationship. The customer was an international insurance company with headquarters in the Netherlands and operating globally. In addition, the customer relationship was investigated over a five year period with respect to the delivered performance. The analysis (Step 2) revealed four key events that had affected the performance of the provider. These events were related to: the transition phase, the transformation to an eService organization (online insurances), the safeguarding of IT continuity, and the need for more flexibility regarding the use of resources (FTE). The analysis showed that on the provider side, two significant causes played a role in the events. The first was a lack of adequate sourcing capabilities (knowledge, skills, support processes) to adequately deliver IT services. The second was that the organizational structure was not aligned with the organization of the customer. During the third step of the work method, the events and bottlenecks were translated into a number of solutions. During the transition phase (first event), it appeared that the provider did not have the necessary sourcing knowledge and experience available to carry out or complete actions


in the appropriate manner. The transfer of people and resources (assets), translating the contract into workable procedures, and redesigning the IT landscape required senior program managers and project team members with extensive knowledge and experience; in practice, these proved to be lacking. The senior managers who replaced some team members brought more structure to the approach, and this led to increased performance.

In the transformation toward an eService organization (second event), there was a need to start developing applications for projects quickly. The dilemma was that the customer could not take advantage of developments quickly enough (new eServices) because the provider did not have sufficient IT resources. This resulted in long lead times and difficult discussions between customer and provider. To solve this problem, the provider developed a resource and capacity tool (forecasting) based on certain attributes (initial work activities, type of application, required skills) to obtain an estimate of how many resources were needed. By incorporating experiences with the application projects into the tool, it was possible to substantially increase its predictive capability. This made it possible to enter changes during a project, such as enhancement work, directly into the tool and have it automatically adjust the planning and resource usage, resulting in a significant reduction in turnaround time when developing applications. (A sketch of such an estimator follows below.)

A phenomenon that many providers, especially in India, have to deal with is frequent employee turnover (third event). The downside of the low-cost development of IT services is that workers can rapidly develop their knowledge and experience and then change employer. This development puts pressure on continuity in the delivery of IT services to the customer. The problem was solved by deploying so-called shadow resources: by staffing an extra 30% above the existing workforce in the onsite and onshore team, it is possible to deal with the turnover. The extra employees fulfill tasks that broadly cover the activities oriented toward the customer, which provides better safeguards in terms of continuity.

The need of the customer for more flexibility (fourth event) was translated into a change in the functional organization. Here, a model was developed that ensures the physical support of provider employees both at the customer side and at the onshore location in the Netherlands. Thirty provider employees are now permanently located at the customer site. This group of employees is mainly involved with defining functionality (specifications) for applications. In addition, there are about 40 employees present at the onshore location in the Netherlands. This group is focused on translating new requirements for IT services into the development of solutions and managing colleagues working in offshore locations. The rapid scaling up of resources with specific knowledge and experience was an important requirement.
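The resource and capacity tool itself is not publicly documented; the following is purely our sketch of how an attribute-based estimator with feedback from project actuals could work. All attribute names, base figures, weights and the update rule are assumptions for illustration:

# Hypothetical sketch of an attribute-based resource estimator in the spirit
# of the forecasting tool described above; not the provider's actual tool.

BASE_FTE = {"web_app": 4.0, "batch": 2.5, "integration": 6.0}   # assumed bases
SKILL_FACTOR = {"standard": 1.0, "scarce": 1.4}                 # assumed factors

def estimate_fte(app_type: str, work_items: int, skills: str,
                 calibration: float = 1.0) -> float:
    """Estimate required FTE: a base per application type, scaled by the
    number of initial work activities and the scarcity of required skills."""
    return BASE_FTE[app_type] * (1 + 0.1 * work_items) \
        * SKILL_FACTOR[skills] * calibration

def recalibrate(calibration: float, estimated: float, actual: float) -> float:
    """Nudge the calibration factor toward observed project actuals,
    mimicking how the tool 'learned' from completed application projects."""
    return calibration * (0.8 + 0.2 * actual / estimated)

cal = 1.0
est = estimate_fte("web_app", work_items=5, skills="scarce", calibration=cal)
cal = recalibrate(cal, estimated=est, actual=est * 1.2)  # project ran 20% over
print(f"first estimate: {est:.1f} FTE; recalibrated factor: {cal:.2f}")

Feeding enhancement work back in as extra work items would, in the same way, automatically raise the estimate and thus adjust the planning.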

The work method, as applied during these events, made a demonstrable contribution to improving provider performance. Another outcome was that customer satisfaction with the provider's performance increased significantly. Finally, it is worth mentioning that the proactive attitude of the provider resulted in gaining market share (additional assignments and activities) at the expense of a competitor. This shows that the adaptive ability of this provider played a crucial role in the customer awarding them additional projects.

Conclusion
Despite the fact that so much has been written about the importance of the adaptive ability of providers, experience in sourcing shows that its existence is regularly a fiction. Service providers must develop themselves beyond their current practices so that they have the ability to adapt to changing customer circumstances. This will put them in a much better position to actually safeguard the agreed upon performance levels for IT services. Adaptivity particularly requires active management involvement. Given that IT outsourcing is multidisciplinary, managers of service providers must take an interdisciplinary perspective and act accordingly when adjusting sourcing capabilities and organizational structure. The work method and measuring tools developed for the Provider Performance Approach have provided demonstrable results for both national and international service providers. The deliberate monitoring and evaluating of changing customer circumstances and the subsequent adjustments to sourcing capabilities and organizational structure increase the adaptive ability of service providers. Adaptivity is no longer a fiction but a fact!

References
[Bart01] J. Barthélemy, The hidden cost of IT outsourcing, Sloan Management Review, 42 (3), 2001, p. 60-69.
[Beul11] E. Beulen, P. Ribbers and J. Roos, Managing IT Outsourcing: Governance in Global Partnerships, 2nd edition, Routledge, London, 2011.
[Feen05] D. Feeny, M.C. Lacity and L.P. Willcocks, Taking the measure of outsourcing providers, Sloan Management Review, 46 (3), 2005, p. 41-48.
[IDC10] IDC's second quarterly revision for 2010, Western European IT services, 2010.
[Plug09] A.G. Plugge and M.F.W.H.A. Janssen, Managing change in IT outsourcing arrangements: an offshore service provider perspective on adaptability, Strategic Outsourcing, 2, 2009, p. 257-274.
[Plug11] A.G. Plugge, Managing change in IT outsourcing arrangements: Towards a dynamic fit model, Unpublished Dissertation, Delft University of Technology, Delft, 2011.

About the author


A.G. Plugge is a senior manager within the sourcing unit of KPMG Advisory. He is responsible for the design and implementation of Shared Service Centers and the redesign of IT service provider organizations. These assignments include the design and implementation of demand and supply management, the setting up of IT governance processes and improving the performance of IT delivery departments. Besides his work at KPMG, he is a Senior Research Fellow at Delft University of Technology, where he conducts international research in the field of IT outsourcing. Furthermore, he regularly lectures at Delft University of Technology, Nyenrode Business University and the University of Tilburg.


A closer look at business transformations
Guido Dieperink and Jeroen Tegelaar

Companies, now more than ever, are under continuous pressure to demonstrate improved performance. Government regulations, price pressure, lack of trust, critical shareholders and strategic objectives are increasingly triggering the need for structural changes in daily business operations. The transformation is born! This article is about transformation, its shapes and forms. We look at transformation through the eyes of midsize and large corporations, and analyze the journey these companies must go through to achieve the objectives of the transformation. If enterprises succeed in prevailing over the many bumps and setbacks on this journey, the transformation will lead to a new and stronger position in the marketplace in which they operate. In this article, we provide a definition of transformation and describe the various types that exist. We also outline which dimensions of the organization are impacted by transformational change. Finally, we support our viewpoint with practical examples.

Transformation: a definition
Ask ten people to describe a business transformation and you will get ten different answers. Most describe it in terms of business-related change, but what it exactly encompasses is not clearly defined. It is not strange that the term is so commonly misused, even though the need for transformation is greater now than it has been for years. Many examples of true business transformations can be given. In all cases, however, the relationship between change and transformation should be delineated. Let's first identify what cannot be classified as a transformation. A change arising bottom-up, initiated because the daily business is inefficient or needs to be improved, is not a transformation but rather an optimization.

G.H. Dieperink

is a director at KPMG IT Advisory. dieperink.guido@kpmg.nl

J.A.C. Tegelaar

is a senior manager at KPMG IT Advisory. tegelaar.jeroen@kpmg.nl


An implementation of an IT system is, in itself, not a business transformation, because it only affects part of the organization and only yields a limited strategic advantage. In our opinion, a transformation should always be initiated upon a strategic (burning) platform. This means that a transformation will only be partially successful if implemented from the bottom up or when relevant for only a part of the organization. We view the following definition as most suitable:

A large-scale business transformation is characterized by an intervention from senior management, driven by situational factors, technological or internal changes that impact all dimensions of the organization, with the long-term goal of increasing the performance of the entire company.

Figure 1. Complexity of transformation. (Diagram: strategic value plotted against the dimensions of the organization that are impacted; complexity increases from optimization via restructuring to transformation.)

The most striking transformations often stretch over several decades, as was the case with IBM. The company originally focused on clocks and typewriters in the early twentieth century. As the years passed, it responded so well to the rise of the computer that it is still considered in the present age a leader in the domain of business machines. Nokia is another good example. Formerly known for paper, rubber and cables, it gradually transformed itself into a mobile communications giant. These organizations are continuously adapting, driven by a keen vision and a clear business strategy, to (re-)create their future through structural transformation.

If transformation can be characterized, as outlined above, as a (continuous) intervention by senior management, it is always strategically significant because the performance of the company will progress to a higher level. Such interventions may be imposed via external causes such as a government mandate. A good example is the separation of a large international financial institution in Europe into a separate bank and an insurer: a self-initiated, but EU-determined strategic change where the company wants to reposition itself in the market.

This brings us to the question of what change actually is and, when serious change is involved, whether or not it qualifies as a transformation. In our opinion, a change can be characterized as structural modifications that cannot be reversed without cost. A change becomes a transformation when all dimensions of the business are impacted and the change has a significant strategic objective (see Figure 1). A transformation is further characterized by a high level of ambition and a substantial gap that must be bridged between the current and future business state. It represents a fundamental discontinuity in the current business operations. Although we are aware of some exaggeration, a business transformation can be seen as the mother of all projects. Companies that are involved in a business transformation consider it to have the highest priority, and it is their main focus besides the normal going-concern activities. The company has one key strategic theme to focus on, and that is the transformation.

In addition to the elements mentioned in the transformation definition and the characteristics above, there are some significant factors that determine how large and complex the transformation will be:

Geographic: The more countries and time zones that are involved, the greater the complexity of the transformation will be in terms of requirements, communication and time differences.
Scope: The number of business units and employees involved will influence the complexity as well.
Stakeholders: The number of stakeholders, each one bringing a particular interest to the table.
Third parties: The number of (external) third parties involved in the transformation, such as shareholders, product suppliers and consultants.
Knowledge: The number of disciplines that must be mobilized to realize change and the experience needed for the change to take form.
Culture: Differences in corporate culture(s) that affect the transformation and lead to additional challenges during implementation in terms of core values, cooperation and behavior.
Duration: The duration of the transformation and the necessity to continuously reinvigorate those involved in the transformation to stay committed.
Technology: The number of technologies and innovations being implemented, which often causes unexpected problems and setbacks.

44

A closer look at business transformations

Co m an Tr n io at m or sf st Re ru ct ur tim Op tio iza n g in

pl ex i

ty

Change is not by definition transformation

Culture: Differences in corporate culture(s) that affect the transformation and lead to additional challenges during implementation in terms of core values, cooperation and behavior. Duration: The duration of the transformation and the necessity to continuously reinvigorate those involved in the transformation to stay committed. Technology: The number of technologies and innovations being implemented often causes unexpected problems and setbacks. financial processes has an impact on all business units supplying information and thus far beyond just the finance department. 4. An IT-enabled transformation is triggered by major investments in technology. Usually, technologically driven transformation impacts different business processes and thus goes far beyond just the IT department. The sponsor is usually the CIO, but depending on the impact, it may also be the CEO or CFO. It should be clear that these types of transformations are very different from each other. Nevertheless, all of these can be described in a generic context in which corresponding organizational dimensions are affected by the transformation. Let us first examine the generic context and then move on to an example for each type of transformation.
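Purely as an illustration of how these factors could be applied, the sketch below combines them into a rough complexity indicator. The factor names come from the list above, but the 1-to-5 scale, the equal weighting and the code itself are our own assumptions, not a method prescribed in this article.

# Illustrative only: a rough transformation-complexity indicator based on
# the eight factors listed above. The 1-5 scale and the equal weights are
# hypothetical assumptions, not a prescribed assessment method.

FACTORS = ["geographic", "scope", "stakeholders", "third_parties",
           "knowledge", "culture", "duration", "technology"]

def complexity_indicator(scores: dict[str, int]) -> float:
    """Average the 1 (simple) to 5 (very complex) factor scores."""
    missing = set(FACTORS) - scores.keys()
    if missing:
        raise ValueError(f"missing factor scores: {sorted(missing)}")
    return sum(scores[f] for f in FACTORS) / len(FACTORS)

# Example: a multi-country separation with many stakeholders.
example = {"geographic": 5, "scope": 4, "stakeholders": 5, "third_parties": 4,
           "knowledge": 3, "culture": 4, "duration": 4, "technology": 3}
print(complexity_indicator(example))  # 4.0 on a 1-5 scale

A simple average keeps the illustration readable; in practice, the weighting of the factors would differ per transformation.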

Complex transformation journeys are usually divided into phases or plateaus with distinct milestones or stages to achieve. This has the advantage that the complexity is divided into smaller, more manageable and logically related parts. At the end of each plateau, the organization sees clear results and benefits, and can learn from these steps. Usually, the planning of plateaus starts with the relatively easy chunks of work and ends with the more complicated challenges.

Types of transformations

Every business transformation is unique, but it has generic aspects that are similar across most transformations. There are also specific aspects, mostly related to the domain of the transformation. We have identified four types of business transformations, each triggered by a different starting point (see Figure 2).

1. The integration/separation transformation is addressed when a company merges with another company or is separated from one, triggering a variety of implications for the company and its employees. The CEO is the key sponsor of this type of transformation.
2. An operations transformation involves a fundamental change of all core processes of the company and the resources deployed within those processes. An example is the transition from a supply-oriented organization to a customer-focused organization. The COO is the key sponsor.
3. A finance transformation involves substantial changes in the structure of the financial processes (from the source through to reporting), the organization, and the systems within the enterprise. The CFO acts as the main sponsor. The fundamental restructuring of financial processes has an impact on all business units supplying information, and thus reaches far beyond the finance department alone.
4. An IT-enabled transformation is triggered by major investments in technology. Usually, technology-driven transformation impacts different business processes and thus goes far beyond the IT department alone. The sponsor is usually the CIO but, depending on the impact, it may also be the CEO or CFO.

It should be clear that these types of transformations are very different from each other. Nevertheless, all of them can be described in a generic context in which the same organizational dimensions are affected by the transformation. Let us first examine the generic context and then move on to an example for each type of transformation.

Figure 2. Four types of transformation with their sponsors: integration/separation (CEO), operations (COO), finance (CFO) and IT-enabled (CIO).

A transformation impacts all business dimensions!

A transformation journey is never easy, and it is filled with opportunities and hazards that can arise within all the dimensions of the company. Given the nature and objective of transformations, significant (internal) resistance can be expected. To transform successfully, management must dare to take risks and be persistent and consistent in its execution. The ability of management to overcome hurdles is one of the key success factors. The complexity of the transformation lies in the interdependencies between the business dimensions shown in Figure 3. Each of these components will change individually and in combination with the other dimensions. As mentioned earlier, the complexity lies in the interdependencies and in the relationship with the going concern. In other words, implementing the change-world alongside the run-world is a challenging effort that requires a lot of management attention.

Figure 3. Dimensions of an organization: strategy, organization, people, business processes, applications, infrastructure, information, products and data.

Clear, consistent and frequent communication is essential and should also be considered the responsibility of management. In particular, management must communicate the why and how of the transformation, and must be able to clearly explain the strategy, the future situation and the roadmap. Regardless of the type of transformation and the domain, the generic part of the transformation involves changes across each of these dimensions. Figure 4 lists topics that can be relevant for each of these dimensions. The transformation is aimed at achieving the target operational situation for each of the dimensions in a coherent way, leading to higher levels of performance.

Business Transformation in practice

Now that we have determined the context and content of a (business) transformation, we will illustrate each type of transformation with a client example or case study.

Breaking up a large international financial institution into a bank and an insurer

Before the credit crisis, an international bank and insurer combination benefited from scale and a stable revenue stream. During the crisis in 2009, the European Commission mandated the restructuring of the financial institution. This was a complete turnaround for a company that had long been focused on integrating brands and other companies. The executive board decided on the separation of the banking and insurance activities and began a transformation that had a huge impact on all its operations throughout the world, including the operational, legal, technological, contractual and HR domains. The complexity of this separation is illustrated by its scope of 60 countries and 150 business units, and by the time constraints under which it had to take place (about two years). A multidisciplinary team was formed to accomplish this complex operation, with the help of many stakeholders around the globe and guided by external advisors.

The "License" of a mid-sized retail bank

This case involves a company that is a retail and investment bank for wealthy individuals. To position its new strategy, the executive board selected two themes: License to Grow and License to Operate. The combination of these two themes had a huge impact on the entire organization and brought about large-scale changes from top to bottom. In particular, it involved changes in the responsibilities of account managers and administrative personnel. In addition to cultural and behavioral changes, the operations transformation resulted in widespread changes to the core banking processes, the IT systems and the corresponding organizational departments. Within the IT landscape, applications and infrastructure were outsourced and IT services were professionalized. The transformation was phased over a period of four years and is now in its final phase.

Financial transformation within the international division of a large retail bank

Ten years ago, an IT-based project was started to establish a global financial technological infrastructure. About seven years ago, the CFRO engaged an external advisor to review the program in terms of ambition and objectives. The aim was to gain a common perspective on the steps that had to be taken, but this time with Finance in the lead. Developing a vision and setting the right goals and the appropriate level of ambition for a global roll-out of a common standard now became a priority. One of those goals was to bring all the regional CFOs into line as quickly as possible. As a result, the finance function was changed: financial processes were changed in all countries, and new systems were purchased to replace old ones. Finally, it resulted in a very different set of competences among employees, both centrally within Finance and in the local countries. IT thereby regained its previous role, that of a facilitator.

Figure 4. Potentially relevant issues per dimension during a transformation.

Strategy. Change: The new strategy sets the tone for the change journey and provides the framework within which this will be achieved. Examples: increase in revenue; operational efficiency.

Organization. Change: Translate the strategy in terms of a new business model, resulting in a Target Operating Model (TOM). Examples: Target Operating Model; core values; formation of organization.

People. Change: The impact on personnel in terms of required knowledge and skills. Examples: culture and competencies; customer focused; cooperation.

Information. Change: External and internal information requirements are defined in the context of the new strategy, laws and legislation. Examples: improved management reporting; KPI reporting and management.

Business processes. Change: Structural change in the design of business processes aimed at improving efficiency, better quality and greater customer focus. Examples: lean process change; Straight Through Processing (STP); operational excellence.

Products. Change: The product range is revised, old products are made obsolete, and new ones are introduced. Examples: product rationalization; product standardization and customization.

Applications. Change: Reduction in complexity of the application landscape, replacement of legacy systems, and internal and external integration are commonly occurring themes in large-scale transformations. Examples: application rationalization; from custom built to package-based solutions.

Data. Change: Data storage, processing and quality are key issues that require attention during a transformation. Examples: standardization of data; data management; data quality and data cleaning.

Infrastructure. Change: Replacement and rationalization of infrastructure and technologies are a consequence of the IT strategy and direction. Examples: virtualization; rationalization.

IT-driven transformation of a large lottery company


A large lottery organization had chosen to outsource its operational processes and entire IT landscape. This was prompted by problems in the existing complex infrastructure and by the new opportunities offered by Software as a Service (SaaS). The transformation was divided into two parts: first, setting up the new logistical processes and the SAP-based logistics system at the service provider; second, modifying the lottery organization and its processes to align with the new situation in which the service provider plays a role. The IT-enabled transformation led to a dramatic change in organizational structure and a different type of IT service and support. This change affected not just the organization of the lottery itself: changes were also necessary in the processes and systems of the 8,000 distribution points.

Think before starting the transformation

The moment before starting a transformation is extremely important. For many companies, creating the right prerequisites before embarking on a significant transformation is key. Failure to thoroughly consider transformation readiness drastically reduces the chances of success. Experience shows that during the transition there is limited opportunity to fill in missing prerequisites; at the very least, this leads to significant delays in most cases.

Tip 1. Do we understand the transformation?

Transformations are characterized by a strategic context and focus on the structural improvement of all organizational components. Determine the type of transformation and create insight into its complexity. An impact assessment contributes to understanding the required change within each dimension of the organization.

Tip 2. Is the goal clear enough?

Ensure that the future (To-Be) situation is clear to all stakeholders. Confusion about where the company is going rarely results in the right course of action and outcome. It is important to outline and describe the future and to make it sufficiently concrete so that those involved can work towards achieving it and can answer the question: "What's in it for me?".

Creating the prerequisites is the key to successful transformation

Tip 3. Are we ready?

The most important question at the start of the transformation process is: are all prerequisites addressed and in place? This is a very different question from: are the risks identified? The first question is often confused with the second, which results in a large chance of failure. The prerequisites are the crucial starting points required for successful change. Prerequisites can be viewed from different perspectives; some examples include:

Is it clear who the key stakeholders are? Are they involved?
Is the governance set up properly? Is it clear to everyone how the transformation will take place and what their role is?
Are the expectations sufficiently clear in terms of results, finances and completion time?
Does the organization have sufficient capacity to change, and does it have the relevant experience? To what extent is the organization mature and familiar enough with large-scale and complex change programs? In other words, does it have the right skills to complete the tasks? For an organization that is not sufficiently mature, it is advisable to seek external assistance on both the business side and the IT side.
Is a mature architecture discipline being used to guide the change process and to manage the complexity and risks?
Has a readiness assessment been conducted? Most transformations that fail only provide an after-the-fact evaluation from which lessons can be learned. Such evaluations are commendable, but it makes more sense to perform a readiness assessment beforehand to determine how ready the organization is to start the transformation.
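As an illustration, these prerequisite questions can be captured in a simple checklist that reports the gaps before the start. The sketch below is our own minimal Python illustration; the question texts are paraphrased from the list above, and the all-or-nothing decision rule is an assumption rather than a formal assessment instrument.

# Illustrative readiness checklist based on the prerequisite questions above.
# The all-or-nothing rule below is an assumption for illustration purposes.

PREREQUISITES = [
    "Key stakeholders identified and involved",
    "Governance set up; everyone knows how the transformation will take place",
    "Expectations clear in terms of results, finances and completion time",
    "Sufficient capacity and experience with large-scale change programs",
    "Mature architecture discipline guiding the change process",
    "Readiness assessment conducted before the start",
]

def ready_to_start(answers: dict[str, bool]) -> bool:
    """Report missing prerequisites; only start when every one is in place."""
    gaps = [item for item in PREREQUISITES if not answers.get(item, False)]
    for gap in gaps:
        print(f"Not in place: {gap}")
    return not gaps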

Tip 4. Sell it to all involved!

If the time is ripe for the transformation to start, the objectives and the journey ahead need to be communicated explicitly throughout the entire organization. Assigning proper roles and responsibilities is of great importance. This should preferably be done in combination with a concrete view of the future state, so that people relate to it and feel committed.

Conclusion

Not every change is a transformation. A transformation is characterized by its scope, complexity and strategic value, but mainly by the fact that it impacts the entire company. Only then is a change also a transformation! Each transformation takes place in a context where the elements that are changing are similar; nonetheless, every transformation stands on its own. Fortunately, transformations can be classified into four types: the integration/separation transformation, the operations transformation, the finance transformation and, finally, the IT-enabled transformation. Make it explicit what the transformation is trying to achieve, and do so in a way that gives everyone a sufficiently clear picture of the future situation. Before starting any transformation, it is important to understand the complexity it entails and what the impact will be on the various business units and components of the organization. Ensure that the prerequisites are properly addressed so that the transformation has a higher chance of succeeding. If the conclusion is that the prerequisites are insufficiently in place, it is recommended to make addressing them the highest priority before the transformation begins. Transformations that are done right can bring major benefits to the organization, repositioning it in the marketplace, securing a solid future and market share, and sometimes even making the company a market leader and an example for others to follow.

About the authors

G.H. Dieperink is a director with IT Advisory at KPMG and has extensive experience in leading complex transformations in the financial sector. As program manager, he has guided several major Dutch and international companies in transforming their businesses. He conducts program readiness scans for new programs and health checks of existing programs, mainly in the financial sector.

J.A.C. Tegelaar is a senior manager at KPMG IT Advisory and has been involved in various roles in the implementation of IT-enabled transformations in different sectors. He was a Risk and Data Security manager, sharing responsibility for managing data-related risks for a bank. He is also a program manager at the forefront of digitization and change management projects.


IT a meaningful factor in evolving health care sector

Stan Aldenhoven and Jan de Boer

The health care sector faces some serious challenges: rising demand for care, increasing personnel shortages, more privatization and merit pay. At the same time, the quality of health care has been put under a microscope, and everyone agrees that care can and must improve, not only in quality but also in affordability. How can health care institutions deal with these challenges in times of budget cuts? One thing is certain: IT is the dominant factor in all solutions.

C.G.R.M. Aldenhoven is a director at KPMG IT Advisory. aldenhoven.stan@kpmg.nl

J.C. de Boer is a partner at KPMG IT Advisory. deboer.janjc@kpmg.nl

IT was first used in health care as a means to improve the management of the organization. The objective was to gain insight into production agreements, operating results, occupancy rates, sickness absence, cost trends, waiting-list data, and so on. In the past, IT played a less central role in the health care process itself. International research carried out by KPMG shows that the most successful and sustainable changes in health care arose from examining the care process from the perspective of the patient. This perspective should also take center stage in the strategy of the health provider, and the organization should use it as a basis for formulating a vision for information services.

Health care institutions are information-processing organizations, so it is essential that information services run smoothly. It is the task of the administrators to completely integrate the vision for information services into the entire strategy of the institution. IT is a determining factor in all domains, from health care innovation and collaboration with other health care providers to e-health and new construction. Providers would therefore do well to devote time to IT in board meetings and put it on the weekly agenda. If there is no strategy in place, investing in IT means "doing things better" rather than "doing better things". In that case, IT is not much more than the selection of vendor products, when it could be a resource that is strategically deployed to achieve objectives in terms of suitability and quality.

Towards Health 2.0


Patients expect quality and transparency from health care institutions. They are increasingly better informed and want control over their own health. This is called patient empowerment or self-management. Patients gain insight into the quality of institutions and health care providers via rankings by, and communities of, their fellow sufferers. Patients are also increasingly willing to travel for quality, which means they may choose a highly regarded specialist abroad for a specific treatment rather than a local specialist.

The doctor becomes the advisor of the patient, who is well informed via the Internet and his family caregivers. They all work together in determining a diagnosis and the subsequent treatment plan. The patient is part of the treatment team: Health 2.0. Furthermore, patients control their own medical records: a doctor receives access to what the patient believes is relevant. Over time, almost everyone will have a personal health record (PHR), which will be used to exchange information with the family doctor's EHR and the hospital's EHR. The PHR will also contain regularly uploaded information about weight, blood pressure, heart rate, and so on. From cradle to grave.

In the future, the emphasis will not just be on illness. There will be a focus on "wellness": health and remaining healthy. Understanding health improves lifestyle, and prevention is the area where the most can be gained. The word patient is really no longer appropriate: these are consumers who (sometimes) utilize health care. There are already some vendors that offer a PHR. However, a trusted party is needed to protect privacy, and there are always commercial interests in the market waiting to pounce on profitable opportunities, such as the use of profiling for targeted sales.
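The PHR described above is essentially a patient-owned record with patient-controlled access and regularly uploaded measurements. Purely as an illustration, the minimal Python sketch below shows what such a record could look like; the class and field names are hypothetical and are not based on any specific PHR product or standard.

# Hypothetical sketch of a personal health record (PHR): the patient owns
# the record, uploads measurements and decides who is granted access.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Measurement:
    kind: str            # e.g. "weight", "blood_pressure", "heart_rate"
    value: str
    taken_at: datetime

@dataclass
class PersonalHealthRecord:
    patient_id: str
    measurements: list[Measurement] = field(default_factory=list)
    authorized_caregivers: set[str] = field(default_factory=set)

    def upload(self, kind: str, value: str) -> None:
        self.measurements.append(Measurement(kind, value, datetime.now()))

    def grant_access(self, caregiver_id: str) -> None:
        # The patient, not the institution, controls who may read the record.
        self.authorized_caregivers.add(caregiver_id)

    def read_as(self, caregiver_id: str) -> list[Measurement]:
        if caregiver_id not in self.authorized_caregivers:
            raise PermissionError("patient has not granted access")
        return self.measurements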

The emergence of E-health

E-health is an emerging development in health care. It is the health sector equivalent of e-business, e-commerce and e-government. Innovation today utilizes IT and especially the Internet. E-health reflects the increasing desire of patients to be in charge of their own health. Research conducted in the United States shows that about two out of three patients want to use the Internet to communicate with their doctor or hospital. This relates to personal medical information, making appointments, viewing examination results, e-mailing the doctor, and the ability to carry out measurements at home and transmit these electronically to the medical file. The potential for IT, and especially the Internet, in health care is limitless. E-health will lead to dramatic changes in health care.

A number of serious roadblocks must be overcome before it is possible to reap the benefits of implementing e-health on a large scale. Current legislation is unclear about physicians' liability when they are involved in e-health activities. In addition, the current costs of health care constitute a barrier: the use of e-health applications is inadequately stimulated at this time, and only cautious initiatives are being taken around the globe. At the local level, patients may have the option of using the Internet to consult with a health professional or make an appointment. At the national level, health care providers and patient organizations collaborate to stimulate the growth of communities for fellow sufferers. These are all good examples. Overall, however, these developments are taking too long. Recent research by KPMG in the Netherlands shows that online "health convenience services" have a direct positive effect on patient self-management. These services include online registration with a health care institution, entering case history, making appointments, ordering (repeat) prescriptions, and consulting with a health care provider. Government and relevant parties in the health care domain would be wise to focus initially on these seemingly simple e-health applications.

EHR as imperative

The Electronic Health care Record (EHR) can make all the difference for the information services of a health care institution. It establishes the degree of accessibility and interchangeability of medical information and the transparency of a hospital. The EHR is a crucial IT system in the development of the information strategy. Health care institutions will do well to differentiate between the EHR system and the Health care Information System (HIS) in the IT systems landscape. Currently, IT vendors consider these systems to be so interwoven that they are sold as a single package. It is presented as if the client will get the best of both worlds. Unfortunately, nothing could be further from the truth. There is indeed some overlap, but the EHR and HIS are substantially different systems.

The HIS must be considered together with the Enterprise Resource Planning (ERP) system; both are logistical support systems. The HIS is focused especially on the health care domain, while the ERP system focuses on general business activities. The HIS is the IT system that predominantly supports the health care logistics and administrative process. It is focused on the efficient planning and logistics of (health care) resources and the registration of patient information, the processing of Diagnosis-related groups (DRGs), and the planning of admissions/surgeries and appointments. The ERP can be described as the IT system that supports the logistical and administrative back-office functions (finance and control, human resource management, purchasing and warehousing). These workflow-supporting systems focus primarily upon operational excellence: they are aimed especially at gains in efficiency and less at gains in the quality of health care.

The EHR supports the creation of the digital medical file for the patient (including clinical documentation and medication data). It is a portal for the exchange of information between parties in the health care chain: patient, referring physician, pharmacy, and so on. An EHR allows an institution to focus predominantly on customer intimacy and less on efficiency. The EHR does not reduce, or only minimally reduces, caregiver workload: more information must be recorded to provide transparency and quality of care. Practice shows that the need for information in a digital world always increases, because data recorded digitally is much easier to exchange than data on paper.

Uniformity of language

Much more than is currently the case, HIS, EHR, ERP and departmental IT systems must become a smoothly running whole and offer the option to be browsed through. This increases transparency and patient safety. In the future, if an event such as a medical complication occurs, all relevant information will have to be available, from the patient's screening through to the maintenance history of the infusion pump used. This information can serve not only to record activities but also for accountability purposes. For example, it can be used to trace the origin of a medical complication: the cause might be that the physical examination was not thorough enough, or that the infusion pump used was not connected by qualified staff or not properly maintained, and so on. All of this type of information already exists in most institutions, except that it is not in a standardized format or linked together in any way. Indeed, there is still no adequate technological solution for this matter. Service Oriented Architecture (SOA), a sort of multiple-socket software framework where you can plug in different systems, is still under development. When integration of systems and devices is no longer an issue, institutions will need protocols, standardization and uniformity much more than is the case now. However, standardization will be required beyond the internal operations of institutions. An unambiguous framework of concepts and definitions is inevitable for communication among all partners in the health care chain, just as it is for research and training. Uniformity of language is a prerequisite for effective use of IT in health care and the subsequent conversion from paper to digital.
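To make the "multiple socket" idea behind SOA concrete, the minimal Python sketch below shows one shared interface into which an EHR or a departmental system could be plugged, so that a single query can browse across systems. The interface and all names are our own hypothetical illustration, not an existing health care integration standard such as HL7.

# Hypothetical sketch of the "multiple socket" idea: systems plug into one
# shared interface so patient information can be browsed across all of them.
from abc import ABC, abstractmethod

class PatientInfoSocket(ABC):
    """Common contract that every plugged-in system must fulfil."""

    @abstractmethod
    def find_events(self, patient_id: str) -> list[str]: ...

class EHRSystem(PatientInfoSocket):
    def find_events(self, patient_id: str) -> list[str]:
        return [f"clinical note for {patient_id}"]

class DeviceRegistry(PatientInfoSocket):
    def find_events(self, patient_id: str) -> list[str]:
        return [f"maintenance history of infusion pump used for {patient_id}"]

def browse(sockets: list[PatientInfoSocket], patient_id: str) -> list[str]:
    # One query fans out over all plugged-in systems.
    events: list[str] = []
    for socket in sockets:
        events.extend(socket.find_events(patient_id))
    return events

print(browse([EHRSystem(), DeviceRegistry()], "patient-123"))

Note that such an interface only solves the technical plumbing; the uniformity of concepts and definitions discussed above is what makes the exchanged information actually interpretable.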

Privacy discussion
Research by KPMG shows that many people are worried about privacy and security on the Internet and the attendant risks linked to its use. The concern that confidentiality of data on the Internet is not guaranteed is increasing. Conversely, the willingness to exchange confidential data via the Internet increases when there is added value involved, and this is the case in the health care sector. The security of electronic patient data will always be an issue and cannot be guaranteed 100%. This is of course also true for paper files, although data is more easily accessible in the digital domain than on paper. It is important that technology be used in the most optimal manner to protect the information, and the patient must be able to authorize who can access their information. But that is easier said than done.

Education and research


For health providers with an academic emphasis, the impact of IT choices goes beyond the domain of information services for patient care. It also largely determines the quality of, and the options for, information transfer to the other core competencies: medical research and academic education. These institutions need to work towards a cohesive IT environment to which all specialists, nurses, doctors and nurses in training, medical students and researchers can have authorized access. This requires real-time browsing through the EHR and department-specific systems. Training could not be more realistic, and the research data could not be more trustworthy. Obviously, approval from the patient will be needed, the data must be anonymized and privacy guaranteed. Standardization is also a prerequisite for this vision of the future. This applies particularly to research, because international cooperation between research institutes is a prerequisite if a country wants to remain meaningful on the international scene.

International players

There are few IT systems currently on the shelf that are adequately equipped to meet the innovation needs of health care. A comforting thought is that the needs of most health professionals still do not vary much from the functionality already offered. And vendors are indeed increasingly capitalizing on the changing needs. Institutions that are taking steps forward in IT must be careful to keep the door open for future developments. The health care market still operates from a "replace" perspective and hardly at all from a "change" perspective; this means that there is more "following" than "renovating". When selecting vendors, health care institutions are well advised to evaluate whether the vendors also demonstrate commitment to innovation and the attendant best practices. The market for EHR systems is becoming more international. Why should we in the Netherlands know better and do things differently than in the U.S.? The HIS market is dominated by national and international players who know how to incorporate localization in their systems and focus on timely compliance with changes in national legislation. The national or local health care market is too small. Selecting IT solutions that work only locally rather than internationally can lead to isolation and the inability to capitalize on international developments and innovations. The future belongs to systems that are based on an international perspective on health care, that have an international market, and that unfailingly deal with national and international changes in a timely manner. This development makes extensive standardization inevitable.

Funding problem

Health care institutions will make considerable investments in IT in the coming years. Most health care providers are facing shrinking budgets, which means that institutions must seek innovative funding options. It is "hot" to invest in the health care sector, so there are opportunities. Solutions are conceivable where IT is no longer purchased but leased. There are already hospitals where a single vendor brings in all medical equipment and ensures that all of it is functional and up to date. Such a hospital can always keep pace with innovation at reasonable management costs. A future trend is that ownership of IT, just as with medical equipment, is not imperative for a health care institution: it is the actual utilization of the equipment that sets the health provider apart. Considered from this perspective, it is conceivable that institutions will arise whose only assets are the employees themselves. Property, workplaces, medical equipment and IT may all be leased for a fixed amount per month based on yield. In any case, a large part will be outsourced to other parties. A consequence will be that those parties become more involved in the day-to-day operations.

Dealing with technology

A health provider that improves the care process from the perspective of the patient will also educate its caregivers differently. Patients will enter their own case history at home; a doctor in training should not simply repeat this registration, but focus instead on the essence of the examination. Gaining the requisite experience without placing an additional burden on the patient means that simulation programs will gain greater prominence. The training of doctors and nurse practitioners requires that they learn to work with medical technology and IT resources much more than is the case now. And they must learn to record information in a manner that achieves real knowledge transfer, without it being interpreted in any way other than the writer intended. It is equally important that future health workers learn to deal with very well-informed patients who may know more about their illness than the health workers themselves. In other words, they must be trained as health workers 2.0.

About the authors

C.G.R.M. Aldenhoven is a director at KPMG and is jointly responsible for Health IT within the KPMG practice. He manages the international IT consultancy practice of KPMG in health care and also advises clients on strategic and complex issues that impact both IT and the primary health care processes.

J.C. de Boer is a partner at KPMG International and is responsible for Health IT. In the Netherlands, he advises a large number of hospitals, health insurers and governments in the area of strategic IT issues.

Read the full article on www.compact.nl/artikelen/C-2012-0Aldenhoven.htm or scan the QR code.

This article on www.compact.nl is an adaptation of Chapter 2 of the Dutch book ICT in de zorg. Probaat middel, maar lees voor gebruik de bijsluiter! (Health IT: good medicine, but read the instructions before use!). This book explores visions of IT developments in health care. It covers nine strategic themes, including the Electronic Health care Record (EHR), information security, Health 2.0, project realization and IT investment. The international version of this book will be published in the spring of 2012.
