
IT infrastructure refresh planning for enterprises: a business process perspective

Tugrul U. Daim
Portland State University, Portland, Oregon, USA

Matthew Letts, Mark Krampits, Rabah Khamis and Pranabesh Dash
Intel Corp, Portland, Oregon, USA

Mitali Monalisa
Intel Corp, Hillsboro, Oregon, USA, and

Jay Justice
Motorola Mobility, Beaverton, Oregon, USA
Abstract
Purpose – This paper aims to research literature to describe the business processes used when planning IT infrastructure refreshes.
Design/methodology/approach – The paper uses the analytical hierarchical process (AHP) and pairwise comparisons to model and quantify the decision process for IT infrastructure refreshes.
Findings – The research found that most companies keep their refresh processes private and very little academic research is available on this topic. While supportability, manageability, compatibility, cost, and scalability are important factors to large organizations, performance and availability of the systems are important for smaller organizations.
Originality/value – AHP has not previously been used to evaluate refresh planning. The paper demonstrates that it would be a very useful tool.
Keywords Business process re-engineering, Information technology, Analytical hierarchy process
Paper type Literature review

Business Process Management Journal, Vol. 17 No. 3, 2011, pp. 510-525
© Emerald Group Publishing Limited, 1463-7154
DOI 10.1108/14637151111136397

Introduction
The rapid growth of computing technologies such as cloud computing and
virtualization is driving companies to refresh their IT infrastructure, sometimes
sooner than planned, in order to support these new technologies (Gartner Inc., 2009). In
mid to large enterprises with a complex IT infrastructure, a refresh can be costly and
require careful planning in order to get the largest return on investment over time. The
authors of this paper researched literature to describe the business processes used
when planning IT infrastructure refreshes. Our research found that most companies
keep their refresh processes private and very little academic research is available on
this topic.
This paper uses the analytical hierarchical process (AHP) decision-making model
to help companies plan their IT infrastructure refresh. The model is intended to have the
flexibility for any IT organization regardless of its size or strategic alignment. The model
uses pairwise comparison in order to tailor the criteria to the specific IT organization.

When implemented, the model will recommend one of seven possible strategies to
refresh IT infrastructure.
Literature review
There are two important aspects to this paper: the AHP model and applying it to an IT
infrastructure refresh. In researching these two areas, we found plenty of literature on
decision-making models, specifically AHP, but very little academic research has been
conducted on the processes used by businesses to update their IT hardware. Our
literature review will start with an overview of AHP and conclude with the research
found on refreshing IT infrastructures.
Saaty and Kearns describe the AHP as a decision-making framework for dealing
with problems with multiple criteria to be considered (Saaty and Kearns, 1985).
The AHP model consists of four stages (Roper-Lowe and Sharp, 1990):
• The first stage, building the decision hierarchy, is the most important stage as it sets the framework for the rest of the process. At the highest level is the goal to accomplish. To reach that goal, a number of criteria need to be met; these are represented in the second tier. If necessary, the criteria can be further divided into sub-criteria. At the bottom of the hierarchy are the possible alternatives to be considered to meet the goal at the top.
• In the second stage, values are assigned to each criterion. This weighting can be done one of two ways: using quantitative engineering requirements, or using a qualitative method that can turn preferences into values, such as pairwise comparison (Kocaoglu, 1983).
• The third stage is scoring each alternative with respect to each criterion. This requires finding people qualified to make judgments, i.e. a finance person could score the alternatives with relation to cost but an engineer would be needed to judge the technical specifications (Roper-Lowe and Sharp, 1990).
• The final stage is applying the weights to the alternatives to obtain an overall score and determine the preferred option to reach the goal.
In stages two and three above, a technique called pairwise comparison is used to find the
weighting of criteria and alternatives. There are several models that can be used to do the
comparisons but typically an expert panel is used. The expert panel method is
commonly used when there are intangible criteria to be considered in the decision
(Roper-Lowe and Sharp, 1990). One of the disadvantages of using the weights derived from pairwise comparison is that it assumes a linear relationship between the value and the user preferences. Gerdsri and Kocaoglu (2007) suggest that desirability curves, which present a higher resolution of the users' preferences, could solve this problem. However, creating the curves requires a detailed dataset for each criterion and, in the interest of time, linear weights were used in this model.
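
The survey in Appendix 1 collects constant-sum judgments, where each pair of criteria splits 100 points, and these responses must be converted into a single weight per criterion. The paper does not spell out the aggregation step, so the sketch below uses the geometric-mean approximation of the AHP priority vector, a common choice; the function name and the illustrative point allocations are our own assumptions.

```python
import math

def weights_from_constant_sum(pairs, criteria):
    """Derive criterion weights from constant-sum pairwise responses.

    `pairs` maps (a, b) -> (points_a, points_b), where the two point
    allocations sum to 100 as in the Appendix 1 survey. Uses the
    geometric-mean approximation of the AHP priority vector (an
    assumption; the paper does not specify its aggregation method).
    """
    n = len(criteria)
    idx = {c: i for i, c in enumerate(criteria)}
    ratio = [[1.0] * n for _ in range(n)]      # ratio[i][j] estimates w_i / w_j
    for (a, b), (pa, pb) in pairs.items():
        ratio[idx[a]][idx[b]] = pa / pb
        ratio[idx[b]][idx[a]] = pb / pa
    gm = [math.prod(row) ** (1.0 / n) for row in ratio]  # row geometric means
    total = sum(gm)
    return {c: gm[idx[c]] / total for c in criteria}

# Example: the six criterion pairs from the survey (illustrative numbers).
pairs = {
    ("RAS", "Performance"): (60, 40),
    ("RAS", "TCO"): (50, 50),
    ("RAS", "Roadmap"): (55, 45),
    ("Performance", "TCO"): (45, 55),
    ("Performance", "Roadmap"): (50, 50),
    ("TCO", "Roadmap"): (60, 40),
}
print(weights_from_constant_sum(pairs, ["RAS", "Performance", "TCO", "Roadmap"]))
```

With perfectly consistent judgments this reproduces the implied weights exactly; with small inconsistencies, such as those the expert panel reported, the geometric mean smooths them out.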
Interactions between criteria and sub-criteria can change depending on the
application of the AHP. In some situations, it makes sense to weigh all criteria against
each other but in others, direct comparisons would not make sense. In order to
accommodate this, AHP models have the flexibility to handle both situations. Gerdsri
and Kocaoglu (2007) show in Figure 1 the application of a three-level decision model
where criteria are selected such that they are independent of each other and have no

interaction with another group. This simplification makes the model very practical to use, as judgment quantification experts are not asked to compare two sub-criteria that belong to different groups and would be difficult to evaluate.

Figure 1. AHP model with independent criteria. Source: Gerdsri and Kocaoglu (2007)
AHP is a very effective and robust tool as evidenced by its widespread use (Chen and Kocaoglu, 2008). The following list is provided as a sample of AHP applications:
• Sevkli et al. (2008) – supplier selection.
• Hafeez and Essmail (2007) – evaluating core competences.
• Bhagwat and Sharma (2007) – performance measurement.
• Al-Subhi and Al-Harbi (2001) – project management.
• Karami (2006) – adoption of irrigation methods.
• Leung et al. (1998) – evaluating fisheries management.
• Wong and Li (2008) – intelligent building systems.
In researching the strategies companies use to refresh their IT infrastructure, specifically server systems, we found that not many academic studies are available. A common theme in the articles was a need for better planning with regard to IT. Management finds it difficult to devote resources to improving infrastructure because it is hard to correlate to
business needs and often considered immeasurable (Duncan, 1995). Weill (1992)
presented studies that show a correlation between an increase in business performance
and investment in IT infrastructure; meaning that investment in IT could be a critical
factor in competing with other companies.
However, spending resources on new IT infrastructure to keep a competitive
advantage without a plan could lead to high costs in the future. Brill (2007) found that
less than 20 percent of 100 data center operators surveyed felt that their infrastructure
planning procedures were above average. In his report, Brill found that the
infrastructure costs of power, cooling, and space will exceed the initial cost of the IT
hardware in a three-year period. This means that any new IT deployments that only look
at the cost of hardware could be completely inaccurate. One of the solutions Brill (2007)

suggests is virtualizing older, inefficient hardware onto a single more powerful server,
thereby reducing the power requirements.
Our research showed that the AHP is a useful tool when making decisions with
tangible and intangible criteria to consider. In order to keep the intangible and tangible
factors separate from each other, we will create a four-level decision hierarchy with
independent criteria and use pairwise comparison to obtain the weighting of our criteria.
We have also shown that implementing an efficient and flexible IT infrastructure
can be a strategic advantage over competitors but there are many factors to consider in
order to get the best return on investment.
Methodology
This section explains the research stages used in this paper. The first step was to
develop the hierarchy. A set of industry experts was used for this. Then, in parallel
stages, the model was applied to two cases.
Expert panel
An expert panel made up of the authors was used to determine the criteria and
sub-criteria required to choose a strategy to refresh the servers in an IT infrastructure.
The expert panel consisted of seven individuals from the high-tech industry with
experience ranging from five to 27 years. The job roles of the experts included: senior
database engineers, enterprise technical marketing, and validation engineers. The
choices of criteria and pairwise comparisons only had small inconsistencies between
the experts and were deemed negligible. If the inconsistencies were large then a more
advanced method such as Delphi would have been used to reach a consensus.
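
The paper does not state how inconsistency between experts was measured, so the following is only a minimal sketch of one plausible check: compute the spread of each criterion's weight across experts and flag anything above a tolerance. The 0.10 threshold and the sample weights are illustrative assumptions, not values from the study.

```python
from statistics import pstdev

def disagreement(expert_weights, threshold=0.10):
    """Flag criteria where the experts' weights diverge.

    `expert_weights` is a list of dicts (one per expert) mapping a
    criterion to its weight. The threshold is illustrative; the paper
    does not state the cutoff used to deem inconsistencies negligible.
    """
    criteria = expert_weights[0].keys()
    return {c: pstdev([w[c] for w in expert_weights]) > threshold
            for c in criteria}

experts = [
    {"RAS": 0.30, "Performance": 0.25, "TCO": 0.25, "Roadmap": 0.20},
    {"RAS": 0.34, "Performance": 0.22, "TCO": 0.26, "Roadmap": 0.18},
]
print(disagreement(experts))  # all False here, so consensus holds
```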
Decision hierarchy
The primary requirement used when creating the hierarchy was to keep it flexible
enough to be applicable from small to large businesses. Based on experiences from the
expert panel, we found that, regardless of size, businesses consider the same criteria
when updating their IT infrastructure. Two of the experts used were involved in server refresh decision making in the past and helped validate our criteria.
Once the criteria were defined, the expert panel identified possible strategies to
evaluate. Determining the possible alternatives for the lower level of the hierarchy
turned out to be the most difficult part. There are too many possible strategies to
refresh servers so the expert panel narrowed the scope to seven possible alternatives
that would represent the majority of the strategies.
Application
The decision hierarchy should be applied on a regular basis to determine if a server refresh would have a positive return on investment and, if so, what strategy would best align with company preferences. In order to apply the model, weights representing the company's preferences need to be applied to the criteria. One of the ways to get a weight is to take a survey of employees in relevant decision-making positions. In a small business, this could be an IT manager and database administrator, while in a large company there could be multiple decision makers from groups such as management, engineering, or finance. In order to reduce the time needed to compile the results of a survey, it is recommended to create an electronic version so the results can be calculated automatically. This also gives participants instant feedback on how their responses compare to others, similar to the Delphi model. Once the weighting has been applied, the company can evaluate the possible alternatives and determine which is the preferred option to refresh their servers.

Model development
As discussed previously, the decision hierarchy contains four levels, the goal, criteria,
sub-criteria, and the seven alternative strategies. Starting at the top is the goal to
choose a server refresh strategy. All pairwise comparisons of the criteria and
alternatives are done with respect to this goal.
The second level of the hierarchy is the criteria. Each of the criteria in this level requires a weight that factors into scoring the alternatives. The values of these weights are found through pairwise comparisons of each criterion by an expert panel in the company considering a server refresh. The four criteria identified in this model are: reliability, availability, serviceability (RAS); performance; total cost of ownership (TCO); and roadmap.
The third level in the hierarchy contains the sub-criteria for each criterion. The
sub-criteria under each criterion are independent from sub-criteria under another
criterion, i.e. the sub-criteria under performance cannot be compared to the sub-criteria
under roadmap. The total contribution of any factor for any alternative is calculated as the product of the criterion weight and the sub-criterion weight. For example, if the pairwise comparison at the criteria level for RAS indicates a 20 percent weight and the pairwise comparison for serviceability indicates a 30 percent weight, the final weight for serviceability will be 20 percent × 30 percent = 6 percent. Descriptions of the sub-criteria can be found in Appendix 1. The complete model is shown in Figure 2.
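
As a quick arithmetic check of the weighting scheme just described (a sketch; the dictionary names are ours):

```python
# Global weight of a sub-criterion = criterion weight x local sub-criterion weight.
criterion_weights = {"RAS": 0.20}            # from the criteria-level pairwise comparison
ras_sub_weights = {"serviceability": 0.30}   # from the RAS-level pairwise comparison
global_weight = criterion_weights["RAS"] * ras_sub_weights["serviceability"]
assert abs(global_weight - 0.06) < 1e-9      # 20 percent x 30 percent = 6 percent
```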
The final layer of the model is the actions, which include the possible strategic alternatives. In order to derive the preference scores for each alternative, several assumptions were made. In a real-world application of this model, the actual specifications can be found through research such as benchmarks or retail pricing.

Figure 2. Server refresh decision hierarchy. Goal: evaluating server refresh strategies. Criteria: reliability availability serviceability (RAS), performance, total cost of ownership (TCO), and roadmap, each with the sub-criteria listed in Appendix 1. Actions: full replacement, hardware upgrade, partial replacement, new addition, virtualization, software upgrade, and delay refresh.

The following are the assumptions made in this paper in order to test the model:
• virtualization has not yet been implemented;
• current servers have one year left of service life, while any new servers will have a three-year service life;
• new servers are twice as energy efficient as old servers;
• new servers have four times the processing power and memory of old servers;
• no previous software or hardware upgrades have been performed on the existing servers; and
• the new servers will only take up half the rack space of the old servers.
Based on the assumptions above, the following seven strategic alternatives were identified:
(1) Full replacement. Replace all existing servers with new hardware.
(2) Partial replacement. Replace about 50 percent of the existing servers.
(3) Hardware upgrades. Replace and/or add additional components in the existing servers. The assumption is that any addition will only increase performance by 20 percent.
(4) New addition. Addition of 10 percent more servers to the existing infrastructure.
(5) Software upgrade. Installation of operating systems and/or applications that
lead to a more efficient computing environment.
(6) Virtualization. Consolidate several servers onto one server using virtual machines.
This model assumes a consolidation ratio of five old servers to one new server.
(7) Delay refresh. Do nothing because the value added of the other alternatives does
not outweigh the upfront cost factor.
Each of the alternatives has a score from zero to one for each of the sub-criteria. For example, the delay refresh alternative could have a very high preference score for upfront cost since its initial cost is zero, but will have lower preference scores for performance since newer servers will outperform the current hardware. In order to score each alternative, the alternatives are compared against the sub-criteria. If an alternative is the best option for a given sub-criterion, it is given a score of one. The other six alternatives are weighed against the best to derive a value that is some percentage from zero to one. Tangible sub-criteria can be found through specification research. Expert opinion can be used when insufficient data are available to support specification comparisons.
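
A minimal sketch of this scoring rule, assuming a simple proportional scale (the paper gives the rule in words but no formula, so the inversion for cost-like metrics below is our assumption): the best alternative on a sub-criterion gets 1.0 and the rest are scaled against it.

```python
def normalize(values, higher_is_better=True):
    """Scale raw specification values so the best alternative scores 1.0.

    For benefit metrics (e.g. transactions/sec), score = value / best value.
    For cost metrics, score = best (lowest) value / value; a zero cost, as
    with delaying the refresh, scores 1.0 and pushes the others toward 0,
    matching the upfront-cost pattern described in the text.
    """
    if higher_is_better:
        best = max(values.values())
        return {alt: v / best for alt, v in values.items()}
    best = min(values.values())
    return {alt: (1.0 if v == best else best / v) for alt, v in values.items()}

# Illustrative benchmark and pricing numbers, not taken from the paper:
print(normalize({"full replacement": 4000, "delay refresh": 1000}))
# -> {'full replacement': 1.0, 'delay refresh': 0.25}
print(normalize({"full replacement": 500000, "delay refresh": 0}, higher_is_better=False))
# -> {'full replacement': 0.0, 'delay refresh': 1.0}
```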
After all the pairwise comparisons for the criteria are calculated and the preferences for the alternatives have been identified, a score can be calculated for each alternative by multiplying the alternative score by each sub-criterion weight and summing the products. The alternative with the highest sum is an indication of which strategy is most in line with the company preferences.
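
Putting the pieces together, here is a minimal sketch of the final aggregation under the stated rule. The weights and scores below are a small illustrative subset (upfront/recurring scores from Table I, TCO sub-weights from Table AV); the criterion weights and function name are our own assumptions, not the paper's full model.

```python
def rank_alternatives(criterion_w, sub_w, scores):
    """Score each alternative as the sum over sub-criteria of
    (criterion weight x sub-criterion weight x alternative score),
    then rank the alternatives by total score, highest first."""
    totals = {}
    for alt, per_sub in scores.items():
        totals[alt] = sum(
            criterion_w[c] * sub_w[c][f] * per_sub[(c, f)]
            for c in criterion_w
            for f in sub_w[c]
        )
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

criterion_w = {"TCO": 0.25, "Performance": 0.75}        # illustrative weights
sub_w = {
    "TCO": {"upfront": 0.54, "recurring": 0.46},         # Table AV, large business
    "Performance": {"transactions": 1.0},
}
scores = {  # preference scores in [0, 1]; best option per sub-criterion = 1.0
    "full replacement": {("TCO", "upfront"): 0.0, ("TCO", "recurring"): 0.8,
                         ("Performance", "transactions"): 1.0},
    "delay refresh":    {("TCO", "upfront"): 1.0, ("TCO", "recurring"): 0.5,
                         ("Performance", "transactions"): 0.25},
}
print(rank_alternatives(criterion_w, sub_w, scores))
```

The highest total indicates the strategy most in line with the stated preferences, mirroring how virtualization came out on top for both case studies in Table II.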
Applying the model: small vs large businesses
We conducted two case studies to verify our model: one large IT environment and one small IT environment. Two case studies were done to compare and contrast the differences and determine if the model could work in different situations.


A panel of six IT professionals (Appendix 2) evaluated the seven alternatives based on the descriptions and assumptions discussed above. The resulting scores are shown in Table I.
The pairwise comparison survey was given to two sets of IT professionals: small IT deployments and large IT deployments. The roles in the large IT environment organization included server support personnel, a data center manager, and other key IT decision makers. In the small IT environment, the interviewees were the server administrator and his manager. The survey can be seen in Appendix 1. The surveys were distributed either in person or via e-mail and the results were compiled in Appendix 3.
The large IT environment serves multiple groups of people all over the world performing different workloads. As customer demands change over time and servers become dated, the servers are considered for refresh. Supportability, manageability, compatibility, and scalability are important factors to this organization. Cost is also very important, and careful consideration is given when factoring upfront and recurring costs.
The small IT environment consists of a single server administrator and his manager maintaining three servers for a group of 20-30 people. Like the large IT environment, the group is spread out around the world so the server needs to be accessible at all times. The factors that are important are the performance and availability of the systems.
The results of the surveys were applied to the scores of the seven alternatives.
The results are shown in Table II.
Table I. Preference scores for seven alternatives

                         Full         Hardware  Partial      New       Software                  Do
                         replacement  upgrade   replacement  addition  upgrade   Virtualization  nothing
RAS
F1 serviceability        1.00         0.25      0.63         0.32      0.45      1.00            0.25
F2 reliability           1.00         0.50      0.63         0.32      0.50      1.00            0.25
F3 manageability         0.75         0.25      0.15         0.25      0.75      1.00            0.25
F4 availability          0.90         0.85      0.82         0.85      0.80      1.00            0.80
Performance
F1 energy efficiency     0.20         0.05      0.15         0.10      0.10      1.00            0.10
F2 transactions/sec      1.00         0.30      0.63         0.32      0.30      0.25            0.25
F3 density               0.40         0.20      0.30         0.15      0.30      1.00            0.20
TCO
F1 recurring cost        0.80         0.50      0.65         0.53      0.50      1.00            0.50
F2 upfront cost          0.00         0.60      0.50         0.90      0.80      0.80            1.00
Roadmap
F1 scalability           1.00         0.30      0.60         0.27      0.30      0.30            0.20
F2 legacy compatibility  0.30         0.90      0.50         0.80      0.50      0.20            1.00
F3 upgradability         1.00         0.10      0.60         0.27      0.10      1.00            0.20
F4 service life          1.00         0.20      0.60         0.27      0.30      1.00            0.20

Table II. Case study results

                     Small environment  Large environment
Virtualization       0.26               0.21
Full replacement     0.20               0.20
Partial replacement  0.15               0.15
Software upgrade     0.108              0.12
New addition         0.106              0.113
Delay refresh        0.105              0.110
Hardware upgrade     0.096              0.10

Discussion and conclusions


This paper presented a decision-making framework using the AHP model that could be
used by businesses of different sizes. AHP was chosen because it is flexible enough to
handle the changing requirements over time and easily implemented through
electronic surveys and consulting experts within the companies. There are many
examples of the AHP used for decision-making especially in multi-criteria situations
(Vaidya and Kumar, 2006). The model's output is a ranking by preference of possible
server refresh strategies. When analyzing the results, it is important to remember that
this is not a final decision, only a structured way of looking at the decision (Roper-Lowe
and Sharp, 1990).
Two cases were applied to this model in order to test its validity.
Surveys were sent to two businesses, one with small and one with large server
environments. We expected to see different strategy preferences between the small and
large businesses but the results showed that both environments ranked the
alternatives in the same order, with minor differences in scores.
There are a few possible reasons why the results were so close. In the interest of
time, an expert panel made up of the authors was used to make assumptions in order to
generate scores for the alternatives. It is possible, the assumptions for the alternatives
were not truly representative of the costs or benefits of each alternative. It could also be
that the number of surveys was too few to get a significant difference in results.
Alternatively, both groups may have coincidentally had similar organizational
objectives or goals and testing the model with different companies might yield
different results. We recommend that future implementations of the model use actual
values for the refresh strategies in order to get a true representation of the preferences.
Even though the results did not match our expectations, the model itself worked
well. The survey was simple for users to complete and feedback on the decision
hierarchy said that we covered the required criteria for this type of decision. The model
presented is expected to be used by decision makers every time a decision for a server
group needs to be made. If the goals of the company change, some of the criteria might
be scored differently. One advantage of working through the model is that it documents the preferences at the time and can be reviewed if a particular decision is questioned in the future.
The growing need for efficient IT infrastructure means companies need plans in
place for refreshing that infrastructure. This paper provides a framework for a decision
hierarchy that can rank possible server refresh alternatives. We recommend that the next step for this model is to test its validity in a real-world scenario and verify that the top-ranking preference is truly feasible for the company. Another test of validity is to apply
the model to a refresh that is already complete and compare the results with the actions
taken by the company. The final stage of testing would be to implement this model in
companies across multiple industries to test the flexibility of the model.
References
Al-Subhi, K.M. and Al-Harbi, A.-S. (2001), Application of the AHP in project management,
International Journal of Project Management, Vol. 19 No. 1, pp. 19-27.
Bhagwat, R. and Sharma, M.K. (2007), Performance measurement of supply chain management
using the analytical hierarchy process, Production Planning & Control, Vol. 18 No. 8,
pp. 666-80.

IT infrastructure
refresh planning

517

BPMJ
17,3

518

Brill, K.G. (2007), The Invisible Crisis in the Data Center: The Economic Meltdown of Moore's Law, Uptime Institute, Santa Fe, NM.
Chen, H. and Kocaoglu, D.F. (2008), A sensitivity analysis algorithm for hierarchical
decision models, European Journal of Operational Research, Vol. 185 No. 1, pp. 266-88.
Duncan, N.B. (1995), Capturing flexibility of information technology infrastructure: a study of
resource characteristics and their measure, Journal of Management Information Systems,
Vol. 12 No. 2, pp. 37-57.
Gartner, Inc. (2009), Gartner Identifies the Top 10 Strategic Technologies for 2010, Gartner,
Orlando, FL.
Gerdsri, N. and Kocaoglu, D.F. (2007), Applying the analytic hierarchy process (AHP) to build a
strategic framework for technology roadmapping, Mathematical and Computer
Modelling, Vol. 46 Nos 7/8, pp. 1071-80.
Hafeez, K. and Essmail, E.A. (2007), Evaluating organisation core competences and associated
personal competencies using analytical hierarchy process, Management Research News,
Vol. 30 No. 8, pp. 530-47.
Karami, E. (2006), Appropriateness of farmers' adoption of irrigation methods: the application of the AHP model, Agricultural Systems, Vol. 87 No. 1, pp. 101-19.
Kocaoglu, D.F. (1983), A participative approach to program evaluation, IEEE Transactions on
Engineering Management, Vol. 30 No. 3, pp. 37-44.
Leung, P., Muraoka, J., Nakamoto, S.T. and Pooley, S. (1998), Evaluating fisheries management
options in Hawaii using analytic hierarchy process (AHP), Fisheries Research, Vol. 36
Nos 2/3, pp. 171-83.
Roper-Lowe, G.C. and Sharp, J.A. (1990), The analytic hierarchy process and its application to an information technology decision, Journal of the Operational Research Society, Vol. 41, pp. 49-59.
Saaty, T.L. and Kearns, K.P. (1985), Analytical Hierarchy Process, Pergamon, New York, NY.
Sevkli, M., Koh, S.C.L., Zaim, S., Demirbag, M. and Tatoglu, E. (2008), Hybrid analytical hierarchy process model for supplier selection, Industrial Management & Data Systems, Vol. 108 No. 1, pp. 122-42.
Vaidya, O.S. and Kumar, S. (2006), Analytic hierarchy process: an overview of applications, European Journal of Operational Research, Vol. 169 No. 1, pp. 1-29.
Weill, P. (1992), The relationship between investment in information technology and firm performance: a study of the valve manufacturing sector, Information Systems Research, Vol. 3, pp. 307-58.
Wong, J.K.W. and Li, H. (2008), Application of the analytic hierarchy process (AHP) in multi-criteria analysis of the selection of intelligent building systems, Building and Environment, Vol. 43 No. 1, pp. 108-25.

Appendix 1. Server refresh survey


SURVEY PARTICIPANT: ______________________________________________
POSITION (as it relates to Server Refreshes): _____________________________
A.1 Criteria and factors

Criteria                                       Factors
Reliability Availability Serviceability (RAS)  Serviceability; Reliability; Manageability; Availability
Performance                                    Energy Efficiency; Transaction/sec; Density
Total Cost of Ownership (TCO)                  Recurring Costs; Upfront Costs
Roadmap                                        Scalability; Legacy Compatibility; Upgradeability; Service Life

A.2 Factor definitions

Factor                Description
Serviceability        The ability for a failure to be serviced while the server remains operational.
Reliability           The history of failure and perceived future failures of the server.
Manageability         The ease of supporting the server hardware and software in the company's IT infrastructure.
Availability          The percentage of uptime designed into the server implementation.
Energy Efficiency     The energy used to perform a set of transactions (use of a benchmark or baseline).
Transaction/sec       The amount of transactions the server can perform in one second (use of a benchmark or baseline).
Density               The amount of server hardware in a given footprint (per sq. ft, per server rack, etc.).
Recurring Costs       The costs incurred over the life of the server, including energy used, cooling costs, maintenance, licensing, ROI.
Upfront Costs         The cost of new server hardware, the loss of depreciation, and the installation labor for the new hardware and software.
Scalability           The ability to incrementally add one or more systems to an existing cluster when the overall load of the cluster exceeds its capabilities.
Legacy Compatibility  The server's ability to integrate with previous software and applications used in the computing environment.
Upgradeability        The server's ability to have its components replaced or additional components added to increase its capabilities.
Service Life          The serviceable life of the server hardware or the application it runs on until it reaches its end of service life (EOL).

A.3 Pairwise comparisons

For each pair, give a weighting to each criterion. The values can be from 1 to 99 and the total between the two needs to add to 100.

Example:
RAS: 60    PERFORMANCE: 40

Criteria comparison:
RAS vs PERFORMANCE
ROADMAP vs RAS
TCO vs RAS
TCO vs PERFORMANCE
PERFORMANCE vs ROADMAP
TCO vs ROADMAP

Factor comparison, RAS criteria:
Reliability vs Availability
Serviceability vs Manageability
Reliability vs Serviceability
Reliability vs Manageability
Availability vs Serviceability
Availability vs Manageability

Factor comparison, performance criteria:
Energy efficiency vs Transactions/sec
Transactions/sec vs Density
Energy efficiency vs Density

Factor comparison, TCO criteria:
Upfront cost vs Recurring cost

Factor comparison, roadmap criteria:
Scalability vs Legacy compatibility
Scalability vs Upgradability
Scalability vs Service life
Upgradability vs Legacy compatibility
Upgradability vs Service life
Service life vs Legacy compatibility

Thank you for your participation

Appendix 2. Expert panels

Table AI. Expert panels (job title; industry; years of experience)

Expert panel for alternatives:
Expert A – Sr database engineer/information technology manager; microprocessor design and manufacturing; 27
Expert B – Technical marketing engineer; enterprise products design and manufacturing; 5
Expert C – Software engineer; microprocessor architecture; 7
Expert D – Validation engineer; microprocessor architecture; 11
Expert E – Platform validation engineer; processor/platform product development; 7
Expert F – Solutions specialist; telecommunications; 5

Survey respondents, small organization:
Survey respondent 1 – Server administrator; enterprise products design and manufacturing; 10
Survey respondent 2 – IT manager; enterprise products design and manufacturing; 11

Survey respondents, large organization:
Survey respondent 3 – Sr database manager; microprocessor design and manufacturing; 11
Survey respondent 4 – Sr database manager; telecommunications; 10
Survey respondent 5 – Data center manager; microprocessor design and manufacturing; 10
Survey respondent 6 – Data center manager; microprocessor design and manufacturing; 12
Survey respondent 7 – Data center manager; telecommunications

Appendix 3. Survey results

Table AII. Criteria preferences

             Small environment  Large environment
RAS          0.22               0.32
Performance  0.41               0.25
TCO          0.14               0.25
Roadmap      0.22               0.19

Figure A1. Small business criteria preferences
Figure A2. Large business criteria preferences

Table AIII. RAS sub-criteria preferences

                Small environment  Large environment
Serviceability  0.24               0.18
Reliability     0.26               0.27
Manageability   0.08               0.24
Availability    0.42               0.31

Figure A3. Small business RAS sub-criteria preferences
Figure A4. Large business RAS sub-criteria preferences

Table AIV. Performance sub-criteria preferences

                     Small environment  Large environment
Energy efficiency    0.35               0.28
Transactions/second  0.37               0.53
Density              0.38               0.19

Figure A5. Small business performance sub-criteria preferences
Figure A6. Large business performance sub-criteria preferences

Table AV. TCO sub-criteria preferences

                 Small business  Large business
Upfront costs    0.35            0.54
Recurring costs  0.65            0.46

Table AVI. Roadmap sub-criteria preferences

                      Small business  Large business
Scalability           0.23            0.26
Legacy compatibility  0.20            0.18
Upgradeability        0.26            0.20
Service life          0.30            0.36

Figure A7. Small business roadmap sub-criteria preferences
Figure A8. Large business roadmap sub-criteria preferences

(1) TCO (small environment):
• Person 1: 20/80.
• Person 2: 50/50.
(2) TCO (large environment):
• Person 1: 30/70.
• Person 2: 80/20.
• Person 3: 50/50.
• Person 4: 70/30.
• Person 5: 40/60.
Corresponding author
Tugrul U. Daim can be contacted at: tugrul@etm.pdx.edu
