

J. Multi-Crit. Decis. Anal. 13: 65–80 (2005)

Published online in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/mcda.372

Comparison Study of Multi-attribute Decision Analytic Software

Manchester Business School (MBS), The University of Manchester, Booth Street West, Manchester M15 6PB, UK
In this paper, we discuss the functionality and interfaces of five MCDM software packages. We recognize that no single package is appropriate for all decision contexts and processes. Thus our emphasis is not so much to compare the functionality of the packages per se, but to consider their fit with different decision making processes. In doing so, we hope to provide potential users with guidance on selecting a package that is more compatible with their needs. Moreover, we reflect on the further functionality which we believe should be developed and included in MCDM packages. Copyright © 2006 John Wiley & Sons, Ltd.

KEY WORDS: decision processes and culture; multi-attribute decision analytic software

With the advance of modern computing technology, there are more and more software packages available offering to support decision analysis. In OR/MS Today's 2002 biennial survey, 27 software packages were listed; in the 2004 survey there were 45 packages (Maxwell, 2002, 2004). All the packages are seemingly versatile, offer user-friendly graphical interfaces and more than sufficient power to tackle substantial problems. With such choice, it is natural to ask what the differences between the packages are. Are some more powerful, more intuitive or whatever than others? In our case we are concerned with the fit of MCDM packages with the culture and form of the decision process. Some decision analyses are carried out on a desktop for a client, some in a decision support room with a group of decision makers present, and some off-line for an organization with the results being fed back via a report or decision seminar. Belton and Hodgkin (1999) remark similarly on different contexts of use and their requirements implications for decision support software. Are some packages more suited for one or more of these processes than others?

*Correspondence to: Manchester Business School (MBS), The University of Manchester, Booth Street West, Manchester M15 6PB, UK. E-mail: simon.french@mbs.ac.uk; ling.xu@mbs.ac.uk
This study is based upon the second author's MBA dissertation at Manchester Business School.

Moreover, the designer of the decision analytic software will inevitably have his or her own worldview and philosophy, and that may reduce the value of the software for others with different worldviews. This is obviously the case if we compare software emanating from distinct schools of decision analysis such as multi-attribute value analysis (MAVT) (Keeney, 1992; Keeney and Raiffa, 1976), the French outranking approach (Roy, 1996) and the analytic hierarchy process (AHP) (Saaty, 1980). But within schools there are also subtle (and not so subtle!) differences; and there are those who straddle two or more schools.
In this paper we make a small step towards exploring the differences between five software packages, taking a perspective that is a little broader than that of functionality and software design alone. Broadly, our objectives are to:

* compare methods used for problem structuring, value elicitation, sensitivity analysis and presentation within the packages;
* identify their weaknesses and strengths, such as limitations, user friendliness, information handling, and flexibilities, paying regard to their use in different decision making processes;

and thus both to:

* help potential users identify the functionality that they may need in an MCDM package;
* suggest new functionality that may be incorporated into MCDM packages.


However, we recognize that these objectives are hopelessly ambitious given the enormous number of different possible ways in which these packages may be used. Thus we have a more focused objective of contributing to the discussion of methodologies for evaluating decision analytic software for different purposes by providing exemplar comparisons. We should also be clear about what we are not seeking to accomplish. We do not intend to provide a comprehensive survey of the functionality of the packages: the Aiding Insight series of surveys in OR/MS Today are available for that (Maxwell, 2002, 2004). We see our work as complementary to that of Belton and Hodgkin (1999).
The paper is organized as follows. In the next section, we note the variety of contexts for decision making and different decision support processes. We then describe the methodology for comparing the software and the packages themselves. Finally, we turn to the comparison and some conclusions. We do not summarize the underlying theories and technical details of decision analytic methodologies in what follows, referring instead to the literature (Belton and Stewart, 2002; French, 1988; Goodwin and Wright, 2003; Keeney, 1992; Saaty, 1980; Watson and Buede, 1987).

No two situations which call for a decision are ever identical; they differ due to a wide range of factors:

* Problem context: For example, what are the external characteristics of the problem; is it well structured; is uncertainty present; how many options and possibilities need to be considered?
* Social context: For example, what are the characteristics of the social organization in which the decision has to be made; who are the decision makers and how many are there; what are their responsibilities and accountabilities; who are the stakeholders?
* Cognitive factors of the DMs: For example, how intelligent, imaginative, knowledgeable are the decision makers; can they live with risk and uncertainty; which behavioural biases and heuristics do they exhibit?

For discussion of such factors, we refer to, inter alia, Bazerman (2002), Belton and Stewart (2002), French and Geldermann (2005), French and Rios Insua (2000), Kleindorfer et al. (1993) and Watson and Buede (1987).

In addition to the variety in decision contexts, there is only slightly less variety in decision processes. For instance:

* Who are the different players: decision makers, experts, stakeholders and analysts (French and Rios Insua, 2000)? When and how do they become involved? On some occasions not all may be involved, and some individuals may serve in two or more roles.
* Is there a single decision maker to whom ultimately all decision analyses are addressed, and what decision analytic methodologies does her¹ worldview favour? Or are there several with conflicting beliefs, preferences and, perhaps more fundamentally, different worldviews?
* How much time is available for the analysis? It may be anything from a couple of hours to a couple of months or, occasionally in the case of societal decisions, seemingly decades.
* Will the analyses have to be communicated to stakeholders as part of a subsequent communication and implementation process?
* Will the analysis be conducted by the decision maker herself? Will an impartial analyst be involved? Will the analyst and decision makers work together throughout, or will the analyst take the analysis away and report back to the decision makers with a solution? If there is a group of decision makers, how will the analyst interact with them: one to one or in a plenary group?

Given this enormous range of contexts and decision processes, it would be surprising if decision analytic software came in 'one size fits all' packages, although the vendors of the packages might try to persuade you otherwise. Moreover, it would be equally surprising if we could analyse all software packages from all these perspectives.

¹We refer to decision makers in the feminine and decision analysts in the masculine.

Thus we shall be more selective in our exemplar comparisons and consider three context-process pairs which to some extent span the range of possibilities. But only to some extent: we do not explore any serious approach to modelling uncertainty such as found in decision tree and influence diagram packages. We focus on multi-attribute problems with conflicting objectives. Thus we consider the software's suitability for use in:
A. A single-user context in which a single decision maker analyses the problem for herself on her own computer: a self-help or DIY context (Belton and Hodgkin, 1999).
B. A group meeting context in which the decision makers gather with an analyst and analyse the problem in a plenary decision conference (Belton and Hodgkin, 1999; Eden and Radford, 1990; French, 1988; Phillips, 1984), i.e. in which the analyst runs the software and the results are projected for the group to see together.
C. A consultancy role in which the analyst meets with the decision makers (perhaps as a group, perhaps one-to-one) and then analyses the issues off-line, reporting back to the decision makers with recommendations in due course.
As noted in Belton and Hodgkin (1999), these contexts lead to different requirements. For instance, in case A the decision maker may be a non-expert user and will need more support in the form of help and other explanation features than an expert analyst. In case B, the software should project well and offer the audience clear, uncluttered screens without distracting information and options that would only be of use to an expert user; however, since the decision makers do not operate the software nor interpret the analysis themselves, there need be less in the form of help and explanation. In case C, the software may offer complexities to an expert user, but should also offer report writing features and simple plots to enable the insights gained from its use to be conveyed to a wider audience. In case A, the software may need to help the non-expert decision makers structure the problem quickly and effectively; whereas in cases B and C the analyst may use his expertise to structure the problem before inputting it into the model format assumed by the


software. In all cases, the means of inputting judgemental information may need to support the elicitation process itself.
Note that in case B, we have very much in mind a decision conference with plenary analysis of the models. There is also a group meeting context in which the decision makers use networked software to explore the problem both individually and in plenary as a group (Nunamaker et al., 1988). We do not discuss this case because it requires the software to be group-enabled. While some of the packages we consider either have some of this functionality or offer a version which has, we did not have the means to explore their use in such a group context.


We chose to look at five packages: HiView, V.I.S.A, Web-Hipre, Expert Choice and Logical Decisions. Our choice was in part guided by availability and our experience with some of the software. But we also sought to consider two MAVT packages and three hybrids between MAVT and AHP, although we recognize that these distinctions are becoming blurred as the packages develop and functionality is added. We did not look at outranking packages, being aware that the philosophy and perspective behind them was far removed from our own and thus we could not hope to provide a fair evaluation.
To compare the packages in detail, we first explored the functionality of each package using a common example relating to the choice of a motorcycle (Isitt, 1990). As all five tested packages use a weighted sum method for attribute aggregation (albeit with different interpretations) and certain core functionalities, such as ranking, are the same, these will not be discussed further. Instead we concentrate on those aspects in which they differ, in particular:


* problem structuring, such as attribute hierarchy construction and modification;
* weight and value elicitation;
* data presentation and sensitivity analysis.
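The additive weighted-sum aggregation shared by all five packages can be sketched as follows; the attribute names, scores and weights are invented for illustration and do not reflect any package's internals:

```python
# Minimal sketch of the weighted-sum (additive MAVT) model all five
# packages use: overall value = sum over attributes of weight * score.
# All names and numbers below are illustrative.

def weighted_sum(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Aggregate normalized attribute scores (0-1) with normalized weights."""
    total_w = sum(weights.values())
    return sum(weights[a] * scores[a] for a in weights) / total_w

# A motorcycle-choice style example with invented numbers:
weights = {"cost": 0.5, "comfort": 0.3, "style": 0.2}
alternatives = {
    "bike_a": {"cost": 0.9, "comfort": 0.4, "style": 0.6},
    "bike_b": {"cost": 0.5, "comfort": 0.8, "style": 0.9},
}
ranking = sorted(alternatives,
                 key=lambda x: weighted_sum(alternatives[x], weights),
                 reverse=True)
```

The packages differ not in this aggregation step but in how the scores and weights feeding it are structured, elicited and explored.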

Note that by problem structuring we mean the initial stages of an analysis in which the decision makers' perceptions of the issues and concerns facing them are brought to the surface and modelled as a decision table. This process is often supported by a range of soft modelling and brainstorming methods (Pidd, 1996, 2004; Rosenhead and Mingers, 2001). Of course, we recognize that the need for formulation varies from context to context. In some cases, the attributes and alternatives are all but explicit; in others there is just what Ackoff termed a 'mess' (Ackoff, 1974; Pidd, 1996). We would note, however, that when substantial formulation is necessary then we lean towards value-focused methodologies (Keeney, 1992).
We recognize that terminology in the MCDM field is not entirely standard: cf. Belton and Stewart (2002), Goodwin and Wright (2003) and Keeney (1992). We use attribute and criterion essentially synonymously, tending to use the former, to describe characteristics of the alternatives, e.g. cost. An objective is an attribute plus an imperative, e.g. minimize cost. We use the term value tree generally, recognizing that others might use attribute tree or criteria hierarchy.
One final point: our more generic aim of discussing the appropriateness of different MCDM software for providing support in different decision making contexts means that sometimes we describe and discuss a feature in one package at some length, thus giving it apparent emphasis for that package. That feature may be present in several or all of the other packages, but since we do not discuss it in detail again, we may seem to imply that their implementations are less effective. That is not our intention: absence of comment should not be taken as an indication that the feature is absent from a package.
4.1. HiView
HiView is one of the earliest software packages for supporting MCDM. It implements standard MAVT approaches to supporting ranking decisions. It was developed in the 1980s, initially for facilitating decision conferencing, by Larry Phillips and Scott Barclay at the London School of Economics (Barclay, 1984). The original version worked under MS DOS and used special fonts designed for legibility via three-lens data projectors. The functionality and screen design were kept simple in order to focus conference participants' attention on the core analysis needed in a 2-day conference. Simplicity of use, with more complex functionality hidden from immediate view to leave a clean screen, remains very much part of HiView's design, consistent with the underlying requisite modelling methodology championed by Phillips (1984). Nonetheless, some complexity has been added in the development of later versions.² For instance, originally HiView had no pairwise comparison options for eliciting values and weights: values and weights were input numerically or via thermometer scales. The latest version includes the Macbeth qualitative pairwise comparison approach (Bana e Costa and Vansnick, 2000). No networked version of the software is available. HiView does have an easy-to-use reporting function. The generated report has both graphics and text. The report can be viewed through web browsers and will help in developing reports, especially necessary in case C.
4.1.1. Problem structuring. Attributes can be added, moved and linked easily to form a value tree; and the linking may be performed after the attributes have been brainstormed. The finished value tree can be displayed either vertically or horizontally, and the package will re-organize the tree neatly. There is no technical reason why the value tree cannot be very large, but the windows have clearly been designed to display most effectively trees with 5–15 attributes, i.e. the scale of tree that can be built and analysed easily in a 2-day decision conference.
4.1.2. Value and weight elicitation. Relative or absolute values for leaf attributes can be entered numerically; support for their elicitation is provided graphically via a thermometer or histogram, or verbally via the Macbeth module. The attributes can be rescaled subsequently via a non-linear value function: piecewise linear, discrete (for verbal grades) and logarithmic value functions are currently supported. The graphic window for value function definition is simple and straightforward to use. For piecewise linear value functions, new points can be added at any location in the considered attribute value range.
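A piecewise linear value function of the kind just described amounts to linear interpolation between user-set break points; a minimal sketch, with invented price points (not HiView's implementation):

```python
# Sketch of a piecewise linear marginal value function: the user sets
# (attribute level, value) break points and values in between are
# linearly interpolated. The price points below are invented.

def piecewise_value(points: list[tuple[float, float]], x: float) -> float:
    """Interpolate value at level x from (level, value) break points."""
    pts = sorted(points)
    if x <= pts[0][0]:
        return pts[0][1]
    if x >= pts[-1][0]:
        return pts[-1][1]
    for (x0, v0), (x1, v1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return v0 + (v1 - v0) * (x - x0) / (x1 - x0)

# e.g. a price attribute: cheaper is better, with diminishing differences
price_points = [(2000, 1.0), (4000, 0.7), (8000, 0.0)]
v = piecewise_value(price_points, 3000)  # halfway between 1.0 and 0.7
```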
HiView supports numerical weight elicitation by
direct assignment and swing weighting (Figure 1);

²The current third version, HiView 3, is built by IPL Information Processing Ltd., marketed by Catalyze Limited (www.catalyze.co.uk) in association with Enterprise LSE.



Figure 1. Support for swing weighting in HiView.

and the Macbeth module provides support for

verbal pairwise elicitation. The direct weight
assignment is relatively simple and easy to operate.
It considers one family of attributes at a time. The
weights can be assigned either by typing or mouse
drag-dropping in the interactive graphics window.
For a relatively large tree, the swing weight
window does not display all leaf attributes at
once; again there is a presumption that the value
tree will be of the moderate size common in
decision conferences.
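The swing weighting supported here (Figure 1) can be sketched as follows: the decision maker rates the worst-to-best "swing" on each attribute relative to the most valued swing (conventionally rated 100), and the weights are the normalized ratings. The ratings below are invented:

```python
# Sketch of swing weighting: the most valued worst-to-best swing gets a
# rating of 100, other swings are rated relative to it, and weights are
# the normalized ratings. The ratings here are invented.

def swing_weights(swing_ratings: dict[str, float]) -> dict[str, float]:
    total = sum(swing_ratings.values())
    return {attr: r / total for attr, r in swing_ratings.items()}

ratings = {"cost": 100, "comfort": 60, "style": 40}  # cost swing matters most
weights = swing_weights(ratings)  # cost 0.5, comfort 0.3, style 0.2
```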
4.1.3. Sensitivity analysis. HiView provides an overview picture of the local sensitivity of the ranking, showing when changes in the weights of individual attributes will lead to a change in the highest ranked alternative: see Figure 2. This is a unique and very valuable feature of HiView. It identifies sensitive weights and guides further investigation. In the picture, an attribute is coloured red if a change of less than 5% in its weight will cause a change in the alternatives' ranking, amber if 5–15%, and green if above 15%. There is also a sort function which identifies the relative advantages and disadvantages, in terms of weighted or unweighted attribute scores, between any pair of alternatives. While not strictly a sensitivity technique, this can guide exploration and certainly aids understanding of the reasons that certain alternatives rank highly.
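The traffic-light idea can be sketched as a search for the smallest percentage weight change that alters the top alternative, classified with the thresholds quoted above. This is only an illustration of the concept with invented data and a coarse search, not HiView's actual computation:

```python
# Sketch of a traffic-light local weight sensitivity check: find (by a
# coarse search) the smallest percentage change in one attribute's weight
# that changes the top-ranked alternative, then colour-code it using the
# thresholds in the text (red < 5%, amber 5-15%, green otherwise).
# All data and the search granularity are illustrative.

def top_alt(scores, weights):
    total = sum(weights.values())
    return max(scores,
               key=lambda alt: sum(weights[a] * scores[alt][a]
                                   for a in weights) / total)

def weight_colour(scores, weights, attr, step=0.5, max_pct=50.0):
    base = top_alt(scores, weights)
    pct = step
    while pct <= max_pct:
        for sign in (1, -1):                     # try increases and decreases
            w = dict(weights)
            w[attr] = max(0.0, weights[attr] * (1 + sign * pct / 100.0))
            if top_alt(scores, w) != base:
                return "red" if pct < 5 else "amber" if pct <= 15 else "green"
        pct += step
    return "green"

weights = {"cost": 0.5, "comfort": 0.3, "style": 0.2}
scores = {"bike_a": {"cost": 0.9, "comfort": 0.4, "style": 0.6},
          "bike_b": {"cost": 0.5, "comfort": 0.8, "style": 0.9}}
colour = weight_colour(scores, weights, "cost")  # modest change flips the ranking
```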
4.2. V.I.S.A
V.I.S.A stands for Visual Interactive Sensitivity Analysis for Multi-criteria Decision Making. It was developed by Valerie Belton (Belton, 1984; Belton and Vickers, 1988). The first version appeared in 1988; its latest release is V.I.S.A 5.0.³ Originally written for MS DOS, it is now a fully compliant MS Windows program, which implements standard MAVT approaches to supporting decisions. There is also a group version allowing networking and a variety of joint working.

Figure 2. Overview of local sensitivity to weights on all attributes.

³Developed and marketed by Visual Thinking (www.visualt.com).

Figure 3. The main window in V.I.S.A.

In many respects V.I.S.A is more a research and education tool than a widely marketed MCDM package. Valerie Belton and her team at Strathclyde University are continually exploring and evaluating extra functionality in prototype versions.
The opening screen looks busier than some, though there are in truth no more buttons and menus than average, because there is a watermark explaining how to get started and get help, and an empty window for inputting alternatives. Those of us persuaded by value-focused thinking arguments (Keeney, 1992) may be a little discomforted by this initial subliminal lead towards alternative-focused thinking; but it is more than possible to build the value tree first and then look to creating alternatives to evaluate. In use it is both easy and productive to have many sub-windows open, leading to a much busier screen than some of the other packages. Thus more skill is necessary in using V.I.S.A with groups, lest they are distracted by windows which are not the current centre of attention. Furthermore, font sizes tend to be small by default, though there are zoom buttons to resize the tree quickly. For individual use they are fine, but in projection to moderate-sized groups there can be difficulties (see Figure 3). There are no report writing features per se; the user is left to cut and paste plots using the standard Windows clipboard.



Figure 4. Interfaces for entering assessment data in V.I.S.A.

4.2.1. Problem structuring. In its main window, attributes can be added and moved easily. All new attributes join to a nominated attribute, or to one chosen by default earlier, but attributes do not have to be linked into the tree immediately (using Create rather than Add). The alternatives sub-window, which is always on screen (though it can be minimized), allows users to enter or modify information such as alternatives and their values on all the bottom-level attributes. V.I.S.A is probably the easiest package of the five for individuals to learn and use. The quick reference in the help system can help a user to start using the package in a few minutes.
One point is that the value tree is plotted from left to right with the overall attribute leftmost. Other packages offer top-to-bottom plotting, which corresponds with much of the terminology used in decision analysis: e.g. higher-level attributes, breakdown of overall success into contributing factors, top-down or bottom-up construction; indeed the very word 'tree' has vertical associations. While for individuals this may not make a lot of difference, for a facilitator or analyst explaining the analysis to a group there can be a constant dislocation between natural language and the plotted tree.
The close association of the V.I.S.A and COPE/Decision Explorer⁴ teams at the University of Strathclyde has led to an exploration of the interface between cognitive mapping in the early stages of problem formulation and the development of a multi-attribute model (Belton et al., 1997; Eden and Ackermann, 1998). There are now clear methodological guidelines which help in problem structuring.

⁴See www.banxia.com
4.2.2. Value elicitation. V.I.S.A can handle quantitative and qualitative attributes. Grades or scores can be used to assess an alternative on a qualitative attribute. Each of the grades has an associated underlying score which is set at the outset of the assessment. There are two interfaces for entering the assessment data of an alternative on a quantitative attribute, shown in Figure 4(a) and (b), and one on a qualitative attribute, shown in (c). The data can be entered using a mouse to drag and drop an alternative along the thermometer in (a) or the bar corresponding to the alternative in (b), or selected from a drop-down list of a pre-defined set of grades. They can also be typed directly into the alternatives window: see Figure 3. For qualitative attributes, data can be entered only from the alternatives window. The interface for value function definition is very user-friendly and flexible. It provides great control over the shape of the value function. The value function does not have to be monotonic.
Weighting can be elicited in two ways in V.I.S.A: across-tree and within-family. Across-tree is a swing weighting method used to elicit a cumulative or global weight for each bottom-level attribute. All bottom-level attributes are considered from the tree view window. Weights can be changed by dragging a red square along the thin line at the end of each bottom-level attribute, or they can be set in a sub-window using histograms or thermometers. The weights of higher-level attributes are calculated automatically by addition and renormalization. The within-family method breaks the tree into families of attributes, each consisting of a parent and its immediate children. In each family, the relative importance of a child attribute is considered in relation to the parent and a weight assigned accordingly. This process is repeated for all the families. Unlike some other packages, e.g. Expert Choice and HiView, V.I.S.A initially defaults to equal weights. This has the advantage that users can begin exploring the model as soon as a few criteria and alternatives are defined; but there are disadvantages. Firstly, the exploration might distract users before they have finished brainstorming attributes and building the tree. Secondly, to suggest any weights risks anchoring biases (Bazerman, 2002; Kahneman et al., 1982). Thirdly, an exploration process in which the final ranking may be visible as weights are set is open to manipulation, deliberate or subconscious.
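The relationship between within-family and global weights can be sketched as follows: a bottom-level attribute's global weight is the product of the normalized family weights on the path from the root. The tree and numbers are invented, and the packages' internal computations may differ:

```python
# Sketch of within-family weighting: normalized weights are assessed in
# each parent/children family, and the global weight of a bottom-level
# attribute is the product of the family weights down its path.
# The tree structure and weights below are invented.

def global_weights(tree: dict, local: dict[str, float], root: str) -> dict[str, float]:
    """tree maps each parent to its children; local holds within-family weights."""
    out = {}
    def walk(node, w):
        kids = tree.get(node, [])
        if not kids:                      # bottom-level attribute
            out[node] = w
            return
        norm = sum(local[k] for k in kids)   # renormalize within the family
        for k in kids:
            walk(k, w * local[k] / norm)
    walk(root, 1.0)
    return out

tree = {"overall": ["cost", "quality"], "quality": ["comfort", "style"]}
local = {"cost": 0.5, "quality": 0.5, "comfort": 0.6, "style": 0.4}
gw = global_weights(tree, local, "overall")  # cost 0.5, comfort 0.3, style 0.2
```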
4.2.3. Sensitivity analysis. V.I.S.A's strength is its interactive sensitivity analysis functions: they are simple and flexible, yet very powerful. Indeed, the development of V.I.S.A was stimulated by the desire to provide such interactive exploration of sensitivity, arising from Belton's early work (Belton, 1984). For some time this interactivity was the program's unique selling point. Specifically, sensitivity analysis is carried out by displaying performance scores of all alternatives (i) on a selected attribute; (ii) on two selected attributes on a two-dimensional plane; and (iii) on a selected attribute as a function of a selected attribute's weight. The effects of changing any weight or score on a leaf attribute are shown dynamically on the graphs; i.e. the effects of any changes are automatically updated on all current displays, at all levels of the hierarchy. Recent work by Hodgkin et al. (2002) has explored novel sensitivity plots which one hopes will soon be incorporated into the next version of V.I.S.A.
4.3. Web-Hipre
Web-Hipre has been developed by Raimo Hamalainen and programmed by Jyri Mustajoki. It is based upon an earlier MS DOS package, Hipre 3+. It implements both MAVT and AHP methodologies for supporting decisions. As its name suggests, Web-Hipre is a web-based package written in Java and part of the Decisionarium site (Hamalainen, 2003).⁵ As a web-based application, there is no need to download and install any software in order to use it. It can be accessed from the Internet anywhere in the world. Although it has been used in many applications (see, e.g. Mustajoki et al., 2002), the package, as is the case for the whole Decisionarium site, has been built with a strong emphasis as a research and education tool. Web-Hipre's great advantage is that it is deployed via the Web. Anyone with a browser can run it. This alone makes it excellent for teaching. For consultancy and decision conferencing, it is better to buy and run the program locally, as interacting with local files is less straightforward from the website. In one sense, the help function is one of the most substantial available, since the Decisionarium site offers e-learning tools in addition to more conventional help facilities within Web-Hipre itself. There were no report writing features at the time the test was carried out (November 2003); however, some have been built into a version of Web-Hipre used in nuclear emergencies (Bertsch et al., 2005).

4.3.1. Problem structuring. The layout of the value tree construction window looks like a table without gridlines. Each tree element occupies one table cell. Although elements can be moved from one cell to another, the software limits the layout of the tree much more than the other packages we examined. There is no support for problem structuring other than the fact that attributes can be placed onto the screen and linked into the value tree later. The tree has to be built left to right, with the leftmost element always representing the top attribute. Moreover, the alternatives are included as the lowest level (rightmost) nodes in the tree. In this respect, Web-Hipre shares a perspective on decision modelling with AHP. Moreover, the alternatives must each be linked to every leaf attribute, which can make the tree look very busy if not messy (see Figure 5). Interestingly, Web-Hipre allows more than one top goal. This means that Web-Hipre allows two or more value trees to be built on the same screen. Those trees can have some common attributes and alternatives. It is a unique feature not found in the other packages.




Figure 5. The value tree plus linked alternatives in Web-Hipre.

4.3.2. Value elicitation. Attribute scores are input into Web-Hipre via a table and there is little direct support for elicitation. It is possible to assign a marginal value function. The shape of a value function can be linear, piecewise linear or exponential. Value functions can also be non-monotonic. In contrast with some other packages, it is possible to set points on the value function exactly, but this operation is somewhat less intuitive for beginners. Web-Hipre does not allow attributes to be scored with qualitative value judgements, except in the sense of paired comparisons in the AHP screens.
Web-Hipre supports five methods for assigning attribute weights: Direct, SMART, SWING, SMARTER and AHP. It is possible to mix AHP and MAVT elicitation, reflecting the view of Raimo Hamalainen that the two methodologies have more in common than some others may think (Salo and Hamalainen, 1997).
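Of the weighting methods listed, SMARTER needs only a rank ordering of the attributes. A sketch, assuming the rank-order centroid (ROC) weights usually associated with SMARTER, with invented attribute names:

```python
# Sketch of SMARTER-style weighting: attributes are only rank-ordered,
# and rank-order centroid (ROC) weights are assigned:
#   w_k = (1/n) * sum_{i=k}^{n} 1/i
# This assumes the ROC formula usually associated with SMARTER;
# Web-Hipre's exact implementation is not documented here.

def roc_weights(ranked_attrs: list[str]) -> dict[str, float]:
    n = len(ranked_attrs)
    return {a: sum(1.0 / i for i in range(k, n + 1)) / n
            for k, a in enumerate(ranked_attrs, start=1)}

w = roc_weights(["cost", "comfort", "style"])  # most to least important
# cost (1 + 1/2 + 1/3)/3, comfort (1/2 + 1/3)/3, style (1/3)/3
```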
4.3.3. Sensitivity analysis. There is only one type of graph for sensitivity analysis in Web-Hipre. The graph shows the relationship between the scores of the alternatives on a selected attribute and the weight of a selected sub-attribute: see Figure 6 for a similar plot from Expert Choice. The current weight is marked with a fixed vertical line. Clicking the mouse anywhere on the graph will generate another vertical line at the clicked position to simulate the effect of the weight change. However, the package lacks the interactivity of V.I.S.A on the one hand, and the clarity of screen design of HiView on the other. Notably, Web-Hipre is the only one of the five packages not to provide Pareto plot functionality, a tool that many of us use to explore issues such as cost-benefit compromises (see, e.g. French et al., 1992).
4.4. Expert Choice
The first version of Expert Choice⁶ was developed by Professors Forman and Saaty in 1983. They realized that the rate of adoption of AHP, the decision analytic methodology developed by Saaty (1980), would be greatly enhanced by the availability of user-friendly software. They were right in their assessment, and one of the great strengths that the AHP community has had over the years is the availability of excellent software. Thus Expert Choice has a long history and a large user community, which have ensured that it is now a very mature and well designed piece of software. In addition to AHP, Expert Choice now offers support for a wider range of linear weighting methods, including direct weighting and non-linear attribute value functions.
4.4.1. Problem structuring. Expert Choice has two different interfaces for structuring a value tree: the TreeView pane and the ClusterView pane. In both panes, attributes can be easily sorted, automatically organized and moved around into different branches of the tree. It also has a feature not seen in many other packages: the ProCon pane. This pane is used to support the formation of the value tree. Pros and cons can be entered into the ProCon pane at any location and sorted automatically, or moved around manually after being added. Then the pros and cons can be converted into attributes. The conversion is manual. To convert, the ProCon pane and one of the TreeView and ClusterView panes are displayed side by side, and the pros and cons can be drag-dropped into the TreeView or ClusterView pane to become attributes in the value tree. Users are prompted to rename the pro or con into a proper attribute name during the process. Finally, for those of us susceptible to losing data in the highly unstructured process of problem formulation, Expert Choice has excellent data backup functionality to prevent accidental data losses, such as from a program crash, which is an important feature for corporate users.

4.4.2. Value elicitation. There are three different interfaces for facilitating pairwise comparisons: ratio (such as attribute x being three times more important than attribute y), verbal expression (such as attribute x being moderately more important than attribute y), and two sliding bars, one representing the importance of attribute x and the other that of attribute y. For scoring an alternative on a leaf attribute, in addition to pairwise comparison, the user can directly score an alternative on an attribute using a number between 0 and 1, or enter a number in the original attribute unit or a verbal rating using a pre-defined grade. For those attributes assessed by methods other than pairwise comparison and direct scoring, value functions need to be defined, and Expert Choice provides interfaces to do so. These features show that Expert Choice is no longer just an implementation of AHP. For eliciting the attribute weights, the pairwise comparison interface is automatically presented to users, but other methods are also available, such as direct assignment of weights. Throughout, the weights and scores are clearly treated and presented separately.

4.4.3. Sensitivity analysis. Expert Choice provides several graphs for analysing the effects of weight changes on the ranking. Some are standard, see Figure 6. As pioneered by V.I.S.A, some are interactive: e.g. weights, represented by bars or lines, can be dragged to desired levels, while the scores of the alternatives on a selected attribute will follow the weight changes dynamically. There are two further types of graph which also respond to the weight changes. One is for comparing two alternatives head-to-head on a group of attributes. The other is for comparing all alternatives on two attributes in a two-dimensional plane, each dimension representing an attribute, which is ideal for cost–benefit or cost-effectiveness analysis.
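The pairwise comparison judgments elicited through these interfaces are typically assembled into a reciprocal matrix from which weights are derived. As a minimal sketch (this is not Expert Choice's actual code, and the matrix entries are invented for illustration), the common geometric-mean approximation to Saaty's principal-eigenvector weights looks like this:

```python
import math

# Reciprocal pairwise comparison matrix (illustrative values):
# entry [i][j] says how many times more important attribute i is than j.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

# Geometric-mean (logarithmic least squares) approximation to the
# principal eigenvector used for AHP weight derivation.
geo_means = [math.prod(row) ** (1.0 / len(row)) for row in A]
total = sum(geo_means)
weights = [g / total for g in geo_means]

print([round(w, 3) for w in weights])  # normalized weights, summing to 1
```

For consistent matrices the geometric-mean weights coincide with the eigenvector weights; for mildly inconsistent judgments they remain a close and much simpler approximation.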

Figure 6. A sensitivity plot produced in Expert Choice.

4.5. Logical Decisions
Logical Decisions for Windows was one of the first packages to provide support for both the MAVT and AHP methodologies. It was also one of the first to allow for some uncertainty in the attribute values via probabilistic approaches, and it implemented a multi-attribute utility model, including the assessment of interaction weights (French and Rios Insua, 2000; Keeney, 1992; Keeney and Raiffa, 1976). While the other four packages have very strong connections with active research groups in academia, Logical Decisions is essentially a commercial package produced by an independent software house and consultancy.
4.5.1. Problem structuring. The package uses very different terminology from much of the MCDM literature, referring to all bottom-level (leaf) attributes as measures and middle-layer attributes in a value tree as goals. It uses a level of a measure instead of a value of an alternative on an attribute, and it calls a value function a common unit. It is logically clearer than the other packages in distinguishing bottom-level attributes from middle-layer attributes, as they are indeed different and are treated differently in all the packages and underlying decision analytic methodologies.
In structuring a value tree, which is plotted left to right, attributes can be copied and pasted, but cannot be moved by dragging and dropping. Most of the operations related to tree structuring have to be done in other windows, taking more clicks to build the same tree than in the other packages tested. There is support for zooming, which is necessary since the goals and measures are displayed in quite large text boxes, which nevertheless cannot display long attribute names very well, see Figure 7. This makes the tree difficult to view in decision conferences and also makes large trees difficult to include in reports.
4.5.2. Value elicitation. The values of alternatives on bottom-level attributes (measures) are entered through a matrix, similar to the V.I.S.A package. Unlike the other packages, it allows not only a single number as an element of the matrix, but also a probability distribution. If an alternative has such uncertain attribute values, Monte Carlo simulation is used in the evaluation. The shape of a value function in Logical Decisions can be linear (the default setting), piecewise linear or curved, but it has to be monotonic. Value functions can be input through several interfaces, according to the value function elicitation method used, such as direct assessment, the bisection method and AHP. Compared with the other packages, the shape of a value function is less easy to control when using the direct method.
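The Monte Carlo evaluation of uncertain attribute values can be sketched as follows. This is our illustration of the idea, not Logical Decisions' implementation; the attribute names, weights and distribution are all invented:

```python
import random

random.seed(0)  # reproducible illustration

# Additive value model with one uncertain attribute score.
weights = {"cost": 0.6, "quality": 0.4}

def sample_scores():
    return {
        "cost": random.uniform(0.5, 0.9),  # uncertain score: resampled each draw
        "quality": 0.7,                    # known score
    }

N = 10_000
totals = [
    sum(weights[a] * s[a] for a in weights)
    for s in (sample_scores() for _ in range(N))
]

mean = sum(totals) / N
print(round(mean, 2))  # close to the analytic mean 0.6*0.7 + 0.4*0.7 = 0.70
```

The simulated totals give not just an expected overall value but a whole distribution of it, which is what distinguishes this approach from entering single point estimates.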
There are six methods for eliciting attribute weights: Tradeoffs, Direct, Smart, Smarter, Pairwise and AHP. The Tradeoffs method is not seen in any of the other packages. It asks the decision maker how much she would sacrifice on one attribute in order to gain a certain amount on another; it then calculates the weight ratio of the two attributes from the tradeoffs, taking into account the value ranges of the two attributes. It captures the spirit of the value trading approaches. AHP in Logical Decisions is applied differently to the other packages: instead of considering a family of attributes, it considers all bottom-level attributes simultaneously, which makes the comparison matrix very large and much less user-friendly.
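The arithmetic behind such a tradeoff question can be sketched as follows. This is our reading of the method, not the package's actual code; the numbers are invented and linear value functions are assumed:

```python
# The decision maker is indifferent between giving up 2000 on cost
# (attribute range 10000) and gaining 5 points of reliability
# (attribute range 20). Equating the value lost and gained:
#   w_cost * 2000/10000 = w_rel * 5/20
cost_sacrifice, cost_range = 2000.0, 10000.0
rel_gain, rel_range = 5.0, 20.0

ratio = (rel_gain / rel_range) / (cost_sacrifice / cost_range)  # w_cost / w_rel
w_rel = 1.0 / (1.0 + ratio)   # normalize the two weights to sum to 1
w_cost = ratio * w_rel

print(round(ratio, 3), round(w_cost, 3), round(w_rel, 3))
```

Dividing each stated amount by its attribute range is what makes the elicited ratio a ratio of swing weights rather than of raw units, which is the point of the value-trading approach.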
In Logical Decisions, different decision makers are able to have different sets of attribute weights and different value functions. This is useful when there are multiple stakeholders involved in a decision problem. However, the current version does not provide a synthesized group opinion, though there is a full Logical Decisions for Groups version.



Figure 7. A hierarchy in Logical Decisions.

4.5.3. Sensitivity analysis. Logical Decisions provides similar types of graphs for sensitivity analysis to those in Expert Choice, but only one of the types is interactive: it uses interactive bars to represent weights in the left pane and alternative scores in the right. The other types do not allow the user to change the weights interactively.
5. Discussion
Perhaps our most immediate conclusion from exploring all five packages, and a few others besides, is that all provide excellent support for the decision analytic process, beginning with problem formulation and continuing through to evaluation and report writing. Much functionality is common between the packages, and there has been convergence in terms of the methodologies supported, most notably in the joint support of AHP and MAVT analyses. Even HiView, which does not have any AHP functionality per se, now supports pairwise comparisons in the form of the Macbeth approach to elicitation. Only V.I.S.A focuses solely on support for MAVT methodologies; and even that is perhaps an unfair statement, because Valerie Belton and her students have experimented with many extensions to V.I.S.A, including a fuzzy version (Koulouri and Belton, 1998). Nonetheless, there are distinctions between the packages, and their origins still show through in some of their functionality and interfaces. Let us consider the fit with the three cases of decision processes identified in Section 2.
5.1. Case A: analysis conducted by the decision makers themselves
If the decision maker is skilled in decision analysis, each package provides substantial support, and there is little to choose between them. For the untrained decision maker, things are not so clear. The methodology of decision analysis has a sophistication that belies the simplicity of the weighted sum of attribute scores model which lies at its heart. It is too easy to input numbers, obtain a ranking and adopt the highest ranking alternative with only a perfunctory exploration of the sensitivity of the ranking to the inputs. Sensitivity analysis and Pareto plots, indeed all the techniques of decision analysis, should be used to challenge thinking and catalyse creativity (French, 2003; Phillips, 1984). Only Web-Hipre, via the decisionarium website, provides substantial training support for decision makers new to decision analysis. Some of the functionalities offered by, e.g., Web-Hipre or Logical Decisions can be inconsistent because they allow the user to mix MAVT and AHP without checking the theoretical compatibility of the implied operations. Modern object-oriented programming methods should enable the compatibility of the methods used in a decision analysis to be policed effectively (Liu and Stewart, 2003). Then there is the issue of whether the approach should be value focused or alternative focused (Keeney, 1992; Wright and Goodwin, 1999). The opening screens of Expert Choice lead one into a brainstorming session, which may avoid too great an early focus on alternatives. The other packages are effectively neutral in supporting both approaches. In short, all the packages provide excellent support for decision analytic calculations but little in the way of support for the decision making process itself. This is perhaps surprising, because MAUD, one of the earliest MCDM packages, did emphasize process support (Humphreys and McFadden, 1980). Klein (1994) has developed the theory further, providing automated explanation of the implications of decision analytic calculations: see also Papamichail and French (2003). The importance of providing guidance and explanations in decision support software which is driven by the decision makers themselves has been shown by Benbasat and his co-workers (Dhaliwal and Benbasat, 1996; Mao and Benbasat, 2000). Thus we would hope that one of the next areas of functionality to be included in MCDM packages is support for the decision making process itself: see also Papamichail and Robertson (2005).
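The weighted-sum model referred to above, together with the kind of systematic weight sensitivity check we are advocating over a perfunctory one, can be sketched in a few lines (the alternatives, scores and weights are invented for illustration):

```python
# Additive value model with a one-way weight sensitivity check.
weights = [0.5, 0.3, 0.2]        # attribute weights, summing to 1
scores = {                       # 0-1 value scores per attribute
    "A": [0.9, 0.4, 0.5],
    "B": [0.6, 0.8, 0.7],
}

def overall(w, s):
    # weighted sum of attribute value scores
    return sum(wi * si for wi, si in zip(w, s))

def rank(w):
    # alternative with the highest overall value
    return max(scores, key=lambda alt: overall(w, scores[alt]))

print(rank(weights))   # "B" at these base weights, but only just

# One-way sensitivity: push the first weight up, renormalizing the
# others in their original 0.3:0.2 proportion, and the ranking flips.
w1 = 0.6
rest = 1.0 - w1
print(rank([w1, rest * 0.6, rest * 0.4]))   # "A": the ranking is sensitive
```

Here the base ranking rests on a margin of 0.01 and a modest weight change reverses it, which is exactly the kind of insight that a perfunctory glance at the final ranking would miss.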
5.2. Case B: decision conferencing
In the case of decision conferencing, when the analyses are run in plenary by the facilitator/analyst and his team, the requirements on the software are subtly different. The analyst team will have the expertise to run the software, and the facilitation of the process means that the software need not support the process itself. However, the fact that the software is projected for all the group to see does have implications. The plots and text need to be easy to read, and the screens should be clear of distractions. Those of us who have worked with groups and projected complex software know the difficulties caused if the screen becomes cluttered with windows and there are options available behind unexplained buttons. One is forever pointing at the right place on the screen and dragging attention back to the issue at hand. In this respect, HiView still shows its origins as a software tool purpose-designed to support decision conferences, though in fairness the distinction between the five packages in this respect is not as great as it was in previous versions. Requisite modelling involves beginning simply and adding complexity later; thus there is a need for the software to allow the model to be edited and new attributes and alternatives added. These packages meet that requirement to varying degrees, as noted earlier.
It is worth noting that none of the software has really been designed for projection. What happens is that the PC's screen is simply diverted to a data projector, so what the group sees is precisely what the analyst sees. One development that would be very welcome is for specific windows to be sent to the data projector without all the gory details of menus and buttons on the analyst's screen. The analyst could then have complex analyses available to him without distracting the group with all the details, showing them only the plots that will truly inform their understanding. This is eminently possible, but not available in any of the current versions of the software.
5.3. Case C: use in consultancy
As in Case B, the software will be driven by an experienced decision analyst. Thus there is little need either to support the process or to avoid distraction by keeping the screens clear. What is needed is sufficient functionality for the analysis itself and the means to include it in the consultancy report. Some packages, as noted, provide a reporting function; others rely on exporting to Excel spreadsheets or the Windows clipboard to provide the means of developing tables and plots for the report. The route via Excel is useful because Excel offers a range of plots to supplement those in the software itself. The route via the clipboard can be problematic because it often requires far more Mb than are strictly needed, and even in the latest versions of Windows XP the clipboard sometimes loses lines. Web-Hipre has the greatest difficulty in this respect because it is Java based and cannot write to the user's filestore at all: the clipboard provides the only route. Where reporting functions do exist, they are useful but tend to reproduce output available onscreen. It would be useful to ask whether the printed medium, being non-interactive, requires a different range of plots and tables to those used in live analysis.
5.4. Concluding remarks
There are two general areas in which all the packages could do rather more. The first is in problem formulation and structuring. While Expert Choice does provide support for brainstorming a list of entities for the analysis and then building the model, none of the packages really draws upon the growing range of problem formulation methodologies, often referred to as soft OR (Belton and Stewart, 2002; French et al., 1998; Pidd, 1996, 2004; Rosenhead and Mingers, 2001). We have noted that Belton et al. (1997) have investigated the interface between a formulation phase supported primarily by cognitive modelling and an evaluation phase supported by V.I.S.A, but there is potentially much more that can be done in this regard. The second point, which perhaps reflects the particular interests of one of us (French, 2003), is that none of the packages explores the full range of sensitivity analysis that is possible (Hodgkin et al., 2002; Mateos et al., 2003; Rios Insua, 1990, 1999). Perhaps all the packages could move in these directions: certainly it seems to us that identifying potentially optimal alternatives would be a very useful feature.
Four of the packages have strong academic connections: HiView, V.I.S.A, Expert Choice and Web-Hipre. This seems to manifest itself in a certain vibrancy in the way in which new functionality has been introduced at each version. Logical Decisions began over a decade ago with the most advanced functionality, in that it allowed non-independent trade-offs, but it has developed little in functionality since then (to be fair, we should remember that there is a group-enabled version which we have not examined). It seems to us that the next phase of development should be to develop functionality relating to different contexts of use, either marketing different versions of the software or providing different skins that can be adopted according to how the software is being used.
We would advise the potential user of MAVT software to give careful consideration to the intended context of use. There is much more to such software than the calculation of weighted sums and the presentation of attractive plots. Their purpose is to support the process of decision analysis from problem formulation to report writing and implementation. Different contexts require different emphases on the different stages, and thus some software fits better with one context than others. Evaluation of MAVT software requires a much broader perspective than one focused on detailed functionality: see also Belton and Hodgkin (1999). We hope that this paper's value will stem not just from the details above, but from its approach and its recognition that decision analytic software needs to do more than facilitate calculation and analysis. It must also cohere with the decision culture and process. We hope too that our work will contribute to the design of future decision analytic software.

Acknowledgements
We are grateful to many people for discussions: we mention Valerie Belton, Raimo Hamalainen, Larry Phillips, Theo Stewart and Jian-Bo Yang, and thank them along with many others. Referees offered very helpful advice on an earlier version of this paper. However, the interpretations we offer are ours alone, as are the errors. Moreover, we trust that we do not offend the developers of the different packages by our reflections on the motivation and origins of the packages. The packages evaluated were, in some cases, evaluation downloads provided on vendors' websites.

References
Ackoff RL. 1974. Redesigning the Future: A Systems Approach to Societal Planning. Wiley: New York.
Bana e Costa CA, Vansnick J-C. 2000. Cardinal value measurement with Macbeth. In Decision Making: Recent Developments and Worldwide Applications, Zanakis SH, Doukidis G, Zopounidis C (eds). Kluwer: Dordrecht, 317–329.
Barclay S. 1984. Hiview Software Package. Decision Analysis Unit, London School of Economics.
Bazerman M. 2002. Managerial Decision Making. Wiley: New York.
Belton V. 1984. The use of a simple multi-criteria model to assist in selection from a short-list. Journal of the Operational Research Society 36: 265–274.
Belton V, Ackermann F, Shepherd I. 1997. Integrated support from problem structuring through to alternative evaluation using COPE and VISA. Journal of Multi-Criteria Decision Analysis 6: 115–130.
Belton V, Hodgkin J. 1999. Facilitators, decision makers, D.I.Y. users: is intelligent multi-criteria decision support for all feasible or desirable? European Journal of Operational Research 113(2): 247–260.
Belton V, Stewart TJ. 2002. Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Press: Boston.
Belton V, Vickers S. 1988. VISA Software Package. Department of Management Science, University of Strathclyde.
Bertsch V, French S, Geldermann J, Hamalainen RP, Papamichail KN, Rentz O. 2005. Multi-criteria decision support and evaluation of strategies for environmental remediation management. Omega, accepted 2005.
Dhaliwal JS, Benbasat I. 1996. The use and effects of knowledge-based system explanations: theoretical foundations and a framework for empirical evaluation. Information Systems Research 7(3): 342–362.
Eden C, Ackermann F. 1998. Making Strategy: The Journey of Strategic Management. Sage: London.
Eden C, Radford J (eds). 1990. Tackling Strategic Problems: The Role of Group Decision Support. Sage: London.
French S (ed.). 1988. Readings in Decision Analysis. Chapman & Hall: London.
French S. 2003. Modelling, making inferences and making decisions: the roles of sensitivity analysis. TOP 11(2): 229–252.
French S, Geldermann J. 2005. The varied contexts of environmental decision problems and their implications for decision support. Environmental Science and Policy 8: 378–391.
French S, Kelly GN, Morrey M. 1992. Decision conferencing and the International Chernobyl Project. Radiation Protection Dosimetry 12: 17–28.
French S, Rios Insua D. 2000. Statistical Decision Theory. Arnold: London.
French S, Simpson L, Atherton E, Belton V, Dawes R, Edwards W, Hamalainen RP, Larichev O, Lootsma FA, Pearman AD, Vlek C. 1998. Problem formulation for multi-criteria decision analysis: report of a workshop. Journal of Multi-Criteria Decision Analysis 7.
Goodwin P, Wright G. 2003. Decision Analysis for Management Judgement. Wiley: Chichester.
Hamalainen RP. 2003. Decisionarium: aiding decisions, negotiating and collecting opinions. Journal of Multi-Criteria Decision Analysis 12: 101–110.
Hodgkin J, Belton V, Koulouri A. 2002. Intelligent user support for MCDA: a case study. Working Paper, Department of Management Science, University of Strathclyde, Glasgow, UK.
Humphreys PC, McFadden W. 1980. Experiences with MAUD: aiding decision structuring versus bootstrapping the decision maker. Acta Psychologica 45: 51–69.
Isitt T. 1990. The sports tourers. Motor Cycle International 64: 18–27.
Kahneman D, Slovic P, Tversky A (eds). 1982. Judgement under Uncertainty. Cambridge University Press: Cambridge.
Keeney RL. 1992. Value-Focused Thinking: A Path to Creative Decision Making. Harvard University Press: Cambridge, MA.
Keeney RL, Raiffa H. 1976. Decisions with Multiple Objectives: Preferences and Value Trade-offs. Wiley: New York.
Klein DA. 1994. Decision-Analytic Intelligent Systems: Automated Explanation and Knowledge Acquisition. Lawrence Erlbaum Associates: New Jersey.
Kleindorfer PR, Kunreuther HC, Schoemaker P. 1993. Decision Sciences. Cambridge University Press: Cambridge.
Koulouri A, Belton V. 1998. Fuzzy multi-criteria DSS: their applicability and their practical value. Research Reports, Department of Management Science, University of Strathclyde, Glasgow.
Liu D, Stewart TJ. 2003. Integrated object-oriented framework for MCDM and DSS modelling. Decision Support Systems 38: 421–434.
Mao J-Y, Benbasat I. 2000. The use of explanations in knowledge-based systems: cognitive perspectives and a process-tracing analysis. Journal of Management Information Systems 17(2): 153–179.
Mateos A, Jiminez A, Rios Insua S. 2003. Modelling individual and global comparisons for multi-attribute preferences. Journal of Multi-Criteria Decision Analysis 12: 177–190.
Maxwell D. 2002. Aiding Insight VI. OR/MS Today 29(3): 44–51.
Maxwell D. 2004. Aiding Insight VII. OR/MS Today 31(3): 44–55.
Mustajoki J, Hamalainen RP, Marttunen M. 2002. Participatory multi-criteria decision analysis with Web-Hipre: a case of lake regulation policy. Systems Analysis Laboratory, Helsinki University of Technology. http://www.sal.hut.fi/Publications/pdf-files/mmusb.pdf (visited on 28/10/2003).
Nunamaker JF, Applegate LM, Konsynski BR. 1988. Computer-aided deliberation: model management and group decision support. Operations Research 36.
Papamichail KN, French S. 2003. Explaining and justifying the advice of a decision support system: a natural language generation approach. Expert Systems with Applications 24: 35–48.
Papamichail KN, Robertson I. 2005. Integrating decision making and regulation in the management control process. Omega 33(4): 319–332.
Phillips LD. 1984. A theory of requisite decision models. Acta Psychologica 56: 29–48.
Pidd M. 1996. Tools for Thinking: Modelling in Management Science. Wiley: Chichester.
Pidd M (ed.). 2004. Systems Modelling: Theory and Practice. Wiley: Chichester.
Rios Insua D. 1990. Sensitivity Analysis in Multi-Objective Decision Making. Springer: Berlin.
Rios Insua D (ed.). 1999. Sensitivity analysis in MCDA. Journal of Multi-Criteria Decision Analysis 8(3): 117–187.
Rosenhead J, Mingers J (eds). 2001. Rational Analysis for a Problematic World Revisited. Wiley: Chichester.
Roy B. 1996. Multi-Criteria Modelling for Decision Aiding. Kluwer Academic Publishers: Dordrecht.
Saaty TL. 1980. The Analytical Hierarchy Process. McGraw-Hill: New York.
Salo A, Hamalainen RP. 1997. On the measurement of preferences in the analytical hierarchy process (with discussion). Journal of Multi-Criteria Decision Analysis 6(6): 309–343.
Watson SR, Buede DM. 1987. Decision Synthesis: The Principles and Practice of Decision Analysis. Cambridge University Press: Cambridge.
Wright G, Goodwin P. 1999. Rethinking value elicitation for personal consequential decisions (with discussion). Journal of Multi-Criteria Decision Analysis 8(1): 3–30.
