
Introduction

Welcome to version 1.0 of the TPM-Catalogue of Concepts, Theories and Methods!

Below, you will find a list of more than one hundred entries, written by over 75 TPM-
researchers. Perspectives range from quantitative to qualitative, from descriptive to
normative, from engineering to the various branches of the social sciences, and... more.

Before you scroll down to entry A (Accident deviation model?) or perhaps W (Wicked
problems?), let us explain what this catalogue aims to be, and what it doesn't pretend to be.

The catalogue aims at:
- Inspiring new as well as incumbent TPM-researchers, by highlighting the diversity of our
department's knowledge base and research interests, and
- Communicating the TPM-body of knowledge, both internally as well as externally.

This catalogue does not pretend to be:
- A list of ingredients that we want incoming researchers to choose from. On the contrary,
we would rather have PhD-candidates add to this catalogue in the course of their research,
than have them choose a theory or method from what's available in the current version.
- A static and exhaustive overview of TPM-research. On the contrary, this catalogue will be
a living document, and researchers can add entries anytime (see contact information).

Within a few months, we will publish version 2.0, which will take the form of a wiki-like
tool, with some nice extra functionalities. The upgraded catalogue will also feature an attempt
at positioning the different concepts, theories and methods vis-à-vis each other. We hope this
will further enhance the catalogue's relevance for TPM-researchers.

To us, the combination of topics discussed in this catalogue and the way in which they are
discussed are a testimony to the breadth and quality of our department's research
endeavours. It was our pleasure to help create this first version of the TPM-catalogue, and we
expect that as a reader, you will enjoy it as much as we do.

To all contributing sections: a big thank you!
- Economics of Infrastructures
- Energy and Industry
- Policy Analysis
- Policy, Organisation, Law and Gaming
- Systems Engineering
- Safety Science
- Technology Dynamics and Sustainable Development
- Transport and Logistics
We hope to hear from the other sections in the near future, too.

The TPM-catalogue team:
Caspar Chorus (correspondence: c.g.chorus@tudelft.nl)
Bert van Wee
Sjoerd Zwart
Ruihua (Zack) Lu (editorial assistance)
Table of Contents

Concept .................................................................................................................................. 6
ACTOR ...................................................................................................................................... 8
AGENT .................................................................................................................................... 10
ATTITUDE .............................................................................................................................. 12
BOW-TIE ................................................................................................................................. 14
BUSINESS ECOSYSTEM ...................................................................................................... 16
COMPLEXITY ........................................................................................................................ 18
DESIGN PATTERNS .............................................................................................................. 20
ECONOMIC GOVERNANCE ................................................................................................ 23
EMERGENCE ........................................................................................................................... 26
FINDING OF LAW (RECHTSVINDING) ............................................................................... 27
GOVERNANCE1 .................................................................................................................... 28
GOVERNANCE2 .................................................................................................................... 31
HISTORY OF TECHNOLOGY .............................................................................................. 33
HUMAN FACTORS ................................................................................................................ 35
INCREASING RETURNS ...................................................................................................... 37
INDUSTRIAL ECOLOGY ...................................................................................................... 39
INSTITUTION ......................................................................................................................... 41
INTERACTIVE LEARNING .................................................................................................. 44
LEGAL CERTAINTY (RECHTSZEKERHEID) ..................................................................... 46
MULTISOURCE MULTIPRODUCT (ENERGY) SYSTEMS .............................................. 47
OPERATIONAL READINESS (OR) ...................................................................................... 49
(ORGANISATIONAL) SAFETY CULTURE/ SAFETY CLIMATE .................................... 51
POLICY ENTREPRENEURSHIP ........................................................................................... 53
PUBLIC VALUES ................................................................................................................... 55
RESILIENCE ........................................................................................................................... 57
RESPONSIBLE INNOVATION ............................................................................................. 59
SAFE ENVELOPE OF OPERATIONS .................................................................................. 61
SAFETY MANAGEMENT SYSTEM (SMS) ........................................................................ 63
SELF-MANAGEMENT .......................................................................................................... 66
SENSEMAKING ..................................................................................................................... 68
SITUATION AWARENESS ................................................................................................... 70
STATE AID ............................................................................................................................. 72
STRATEGIC BEHAVIOUR ................................................................................................... 74
TECHNICAL ARTIFACT ....................................................................................................... 76
TRANSITION .......................................................................................................................... 78
TRAVEL TIME BUDGETS .................................................................................................... 82
UNCERTAINTY IN MODEL-BASED DECISION SUPPORT ............................................ 84
VERDOORN'S LAW (AT THE FIRM LEVEL) .................................................................... 87
VIRTUAL WORLDS .............................................................................................................. 89
WICKED PROBLEMS ............................................................................................................ 91

Theory .................................................................................................................................. 93
BOUNDED RATIONALITY THEORY ................................................................................. 95
COHERENCE THEORY ......................................................................................................... 97
ENGINEERING SYSTEMS .................................................................................................. 100
EVOLUTIONARY GAME THEORY .................................................................................. 102
EXPECTED UTILITY THEORY ......................................................................................... 104
FUNCTIONS OF INNOVATION SYSTEMS ...................................................................... 106
ICE FUNCTION THEORY ................................................................................................... 108
METAETHICAL THEORIES ............................................................................................... 110
NORMATIVE ETHICAL THEORIES ................................................................................. 113
ORGANISATIONAL LEARNING ....................................................................................... 116
POLICY LEARNING IN MULTI-ACTOR SYSTEMS ....................................................... 119
PRINCIPAL AGENT THEORY ........................................................................................... 121
PROCESS MANAGEMENT ................................................................................................. 123
PUNCTUATED EQUILIBRIUM THEORY IN PUBLIC POLICY ..................................... 126
RANDOM UTILITY MAXIMIZATION-THEORY (RUM-THEORY) .............................. 129
SOCIO-TECHNICAL SYSTEM-BUILDING ...................................................................... 131
SYSTEM AND CONTROL THEORY ................................................................................. 133
SYSTEMS ENGINEERING .................................................................................................. 135
THEORY OF INDUSTRIAL ORGANIZATION (IO) ......................................................... 138
THEORY OF INSTITUTIONAL CHANGE ........................................................................ 140
THEORY OF PLANNED BEHAVIOUR ............................................................................. 142
TRANSACTION COST THEORY/ECONOMICS ............................................................... 144

Method ............................................................................................................................... 146
ACCIDENT DEVIATION MODEL ..................................................................................... 149
ACCIDENT STATISTICS .................................................................................................... 151
ACTOR ANALYSIS .............................................................................................................. 153
ADAPTIVE POLICYMAKING ............................................................................................ 157
AGENT BASED MODEL ..................................................................................................... 161
BACKCASTING ................................................................................................................... 163
BAYESIAN ANALYSIS ....................................................................................................... 166
BAYESIAN BELIEF NETWORKS (BBN) .......................................................................... 168
BOUNCECASTING .............................................................................................................. 171
CAUSAL MODEL ................................................................................................................. 173
COLLABORATION ENGINEERING .................................................................................. 175
COLLABORATIVE STORYTELLING ............................................................................... 178
COST BENEFIT ANALYSIS ............................................................................................... 180
CPI MODELLING APPROACH ........................................................................................... 182
DEMO METHODOLOGY .................................................................................................... 185
DESIGN SCIENCE ................................................................................................................ 187
DISCRETE EVENT SIMULATION (DES) .......................................................................... 190
DISCRETE EVENT SYSTEMS SPECIFICATION (DEVS) ............................................... 192
ECFA+: EVENTS & CONDITIONAL FACTORS ANALYSIS.......................................... 194
(EMPIRICALLY INFORMED) CONCEPTUAL ANALYSIS ............................................ 197
ENGINEERING SYSTEMS DESIGN .................................................................................. 199
ETHNOGRAPHY .................................................................................................................. 202
EVENT TREE ANALYSIS (ETA)........................................................................................ 206
EXPERT JUDGMENT & PAIRED COMPARISON ........................................................... 208
EXPLORATORY MODELLING (EM) ................................................................................ 210
EXPLORATORY MODELLING ANALYSIS (EMA) ........................................................ 213
FAULT TREE ANALYSIS (FTA) ........................................................................................ 216
FINITE-DIMENSIONAL VARIATIONAL INEQUALITY ................................................ 218
GAMING SIMULATION AS A RESEARCH METHOD.................................................... 220
GENEALOGICAL METHOD ............................................................................................... 223
GRAMMATICAL METHOD OF COMMUNICATION...................................................... 225
HAZARD AND OPERABILITY STUDY (HAZOP) ........................................................... 228
HUMAN INFORMATION PROCESSING .......................................................................... 230
IDEF0 - INTEGRATION DEFINITION FOR FUNCTION MODELLING ....................... 232
LINEAR PROGRAMMING ................................................................................................. 234
MODEL PREDICTIVE CONTROL ..................................................................................... 237
MODELLING AND SIMULATION ..................................................................................... 240
MONTE CARLO METHOD ................................................................................................. 242
MORT - MANAGEMENT OVERSIGHT & RISK TREE ................................... 245
MULTI-AGENT SYSTEMS TECHNOLOGY ..................................................................... 248
OREGON SOFTWARE DEVELOPMENT PROCESS (OSDP) .......................................... 250
POLICY ANALYSIS (HEXAGON MODEL OF POLICY ANALYSIS) ............................ 253
POLICY ANALYSIS FRAMEWORK .................................................................................. 258
POLICY SCORECARD ........................................................................................................ 261
Q-METHODOLOGY ............................................................................................................. 264
QUANTITATIVE RISK ANALYSIS (QRA) ....................................................................... 266
REAL OPTIONS ANALYSIS AND DESIGN ..................................................................... 268
SAFETY AUDITING: THE DELFT METHOD ................................................................... 270
SCENARIO APPROACH (FOR DEALING WITH UNCERTAINTY ABOUT THE
FUTURE) ............................................................................................................................... 272
SERIOUS GAMING .............................................................................................................. 275
STATED PREFERENCE EXPERIMENTS .......................................................................... 277
SADT - STRUCTURED ANALYSIS AND DESIGN TECHNIQUE .................................. 279
STRUCTURAL EQUATION MODELLING ....................................................................... 281
SUSTAINABLE DEVELOPMENT FOR (FUTURE) ENGINEERS (IN AN
INTERDISCIPLINARY CONTEXT) ................................................................................... 283
SYSTEMS INTEGRATION .................................................................................................. 286
TECHNOLOGY ASSESSMENT .......................................................................................... 288
TECHNOLOGY TRANSFER & SUPPORT ........................................................................ 290
THE MICROTRAINING METHOD ..................................................................................... 293
TRANSITION MANAGEMENT .......................................................................................... 296
UML (UNIFIED MODELLING LANGUAGE) - CLASS DIAGRAMS ............................. 298
VALIDATION IN MODELLING AND SIMULATION ..................................................... 300
VALUE FREQUENCY MODEL .......................................................................................... 302
VALUE RELATED DEVELOPMENT ANALYSIS ............................................................ 305

Concept

Table of Contents of Concept

ACTOR ...................................................................................................................................... 8
AGENT .................................................................................................................................... 10
ATTITUDE .............................................................................................................................. 12
BOW-TIE ................................................................................................................................. 14
BUSINESS ECOSYSTEM ...................................................................................................... 16
COMPLEXITY ........................................................................................................................ 18
DESIGN PATTERNS .............................................................................................................. 20
ECONOMIC GOVERNANCE ................................................................................................ 23
EMERGENCE ........................................................................................................................... 26
FINDING OF LAW (RECHTSVINDING) ............................................................................... 27
GOVERNANCE1 .................................................................................................................... 28
GOVERNANCE2 .................................................................................................................... 31
HISTORY OF TECHNOLOGY .............................................................................................. 33
HUMAN FACTORS ................................................................................................................ 35
INCREASING RETURNS ...................................................................................................... 37
INDUSTRIAL ECOLOGY ...................................................................................................... 39
INSTITUTION ......................................................................................................................... 41
INTERACTIVE LEARNING .................................................................................................. 44
LEGAL CERTAINTY (RECHTSZEKERHEID) ..................................................................... 46
MULTISOURCE MULTIPRODUCT (ENERGY) SYSTEMS .............................................. 47
OPERATIONAL READINESS (OR) ...................................................................................... 49
(ORGANISATIONAL) SAFETY CULTURE/ SAFETY CLIMATE .................................... 51
POLICY ENTREPRENEURSHIP ........................................................................................... 53
PUBLIC VALUES ................................................................................................................... 55
RESILIENCE ........................................................................................................................... 57
RESPONSIBLE INNOVATION ............................................................................................. 59
SAFE ENVELOPE OF OPERATIONS .................................................................................. 61
SAFETY MANAGEMENT SYSTEM (SMS) ........................................................................ 63
SELF-MANAGEMENT .......................................................................................................... 66
SENSEMAKING ..................................................................................................................... 68
SITUATION AWARENESS ................................................................................................... 70
STATE AID ............................................................................................................................. 72
STRATEGIC BEHAVIOUR ................................................................................................... 74
TECHNICAL ARTIFACT ....................................................................................................... 76
TRANSITION .......................................................................................................................... 78
TRAVEL TIME BUDGETS .................................................................................................... 82
UNCERTAINTY IN MODEL-BASED DECISION SUPPORT ............................................ 84
VERDOORN'S LAW (AT THE FIRM LEVEL) .................................................................... 87
VIRTUAL WORLDS .............................................................................................................. 89
WICKED PROBLEMS ............................................................................................................ 91

ACTOR

Definition

The word "actor" has a Greek / Latin root and means "doer", "player in a play". Action and behavior
are indeed central to the concept of actor. The following examples illustrate that the definition
of actor depends on how actor behavior is explained:

- Behaviorist theories explain actor behavior as a causal mechanism: actors react to their
context as a (deterministic) stimulus-response system.

- Rational actor theories explain actor behavior as a decision-making mechanism: actors
choose one action from a set of alternative actions, based on an assessment of the
expected consequences of these alternative actions in the light of the actors' own
preferences regarding these consequences (see the sketch after this list).

- Some social actor theories explain actor behavior as a rule-following mechanism: actors
are part of a social context and observe the social conventions that prescribe what is
appropriate behavior.
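
To make the rational-actor mechanism concrete, here is a minimal Python sketch of an
actor choosing the action with the highest expected utility. All actions, outcomes,
probabilities and utilities below are hypothetical, chosen purely for illustration.

    # Minimal sketch of a rational actor: choose the action whose expected
    # consequences score best under the actor's own preferences.
    actions = {
        "build_road":   [(0.7, "less_congestion"), (0.3, "induced_demand")],
        "road_pricing": [(0.5, "less_congestion"), (0.5, "public_protest")],
    }
    utility = {"less_congestion": 10, "induced_demand": -2, "public_protest": -5}

    def expected_utility(consequences):
        # Weigh each possible consequence by its subjective probability.
        return sum(p * utility[outcome] for p, outcome in consequences)

    best_action = max(actions, key=lambda a: expected_utility(actions[a]))
    print(best_action)  # -> build_road (expected utility 6.4 versus 2.5)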

These examples show that different theories may assume different actor models, each model
being a simplification of a human as an individual capable of taking action. One particular
social theory that also considers inanimate objects such as texts, graphical representations,
and technical artefacts as actors is Actor Network Theory (Latour, 2005), which is often
confused with policy network theories.

At TPM, actor usually means "actor in a policy network". This concept entails a complex
actor model that assumes that actors behave intentionally, have cognitive and deliberative
capabilities, and a "theory of mind", i.e., a representation of other actors' actor-being.

The term actor can denote an individual person, but also a composite actor as defined by
Scharpf (1997, p. 52) as "an aggregate of individuals, having a capacity for intentional action
at a level above the individuals involved", i.e., organizations. Scharpf furthermore (p. 54)
defines a collective actor as a composite actor that is "dependent on and guided by the
preferences of its members", and a corporate actor as a composite actor that has "a high
degree of autonomy from the ultimate beneficiaries of their action and whose activities are
carried out by staff members whose own private preferences are supposed to be neutralized by
employment contracts". For some analytic purposes, an actor may even represent a group of
individuals that share similar characteristics but are otherwise unorganized.

Applicability

The actor concept is applicable and relevant in virtually all problems studied at TPM.

Pitfalls / debate

As mentioned above, different theories use different definitions of "actor". The underlying
assumptions of these definitions may not be compatible. Researchers using the term should
therefore always clarify the actor model they assume in their work. The validity of different
actor models, notably the predominant rational actor paradigm, is a long-standing topic of
debate; see (Jaeger et al., 2001) for an excellent overview.

Key references

Jaeger, C.C., Renn, O., Rosa, E.A., & Webler, T. (2001). Risk, Uncertainty, and Rational
Action. London: Earthscan Publications Ltd.
Latour, B. (2005). Reassembling the social: An introduction to actor-network-theory. Oxford,
UK: Oxford University Press.
Scharpf, F.W. (1997). Games that real actors play: Actor-centered institutionalism in policy
research. Boulder, CO: Westview Press.


Key articles from TPM-researchers

De Bruijn, J.A., & Herder, P.M. (2009). System and actor perspectives on sociotechnical
systems. IEEE Transactions on Systems, Man and Cybernetics, Part A-Systems and Humans,
39 (5), 981-992.
Hermans, L.M., & Thissen, W.A.H. (2009). Actor analysis methods and their use for public
policy analysts. European Journal of Operational Research, 196 (2), 808-818.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Examples can be found in the related topics specified below.

Related theories / methods

Actor analysis, Agent, Attitude, Bounded Rationality Theory, Expected Utility Theory,
Strategic Behaviour.

Editor

Pieter Bots (p.w.g.bots@tudelft.nl)

AGENT

Definition

An agent is the smallest element in an Agent Based Model. Agents are reactive, proactive,
autonomous and social software entities. Agents are:

1. Clearly identifiable problem-solving entities with well-defined boundaries and interfaces;
2. Situated (embedded) in a particular environment - they receive inputs related to the state
of their environment through sensors, and they act on the environment through effectors;
3. Designed to fulfil a purpose - they have particular objectives (goals);
4. Autonomous - they have control both over their internal state and over their own
behaviour;
5. Capable of exhibiting flexible problem-solving behaviour in pursuit of their design
objectives - they need to be both reactive (able to respond in a timely fashion to changes
that occur in their environment) and proactive (able to act in anticipation of future goals).

Please observe that "agent" is used to denote the model representation of an actor.
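
As a minimal illustration of these five properties, the Python sketch below defines a
trivially simple agent with a boundary, a goal, a sensor and an effector. The thermostat
example and all names are hypothetical, chosen only for illustration.

    # Minimal sketch of a reactive, goal-directed, autonomous agent.
    class Thermostat:
        def __init__(self, target_temp):
            self.target = target_temp   # design objective (goal)
            self.heating = False        # internal state, under the agent's own control

        def sense(self, room_temp):
            return room_temp            # input about the state of the environment

        def act(self, room_temp):
            # Reactive behaviour: respond to the sensed environment in pursuit of the goal.
            self.heating = self.sense(room_temp) < self.target
            return "heat on" if self.heating else "heat off"

    agent = Thermostat(target_temp=20.0)
    for temp in (18.5, 19.9, 21.2):
        print(temp, agent.act(temp))

Real agents in Agent Based Models add proactive and social behaviour on top of this
reactive core, for instance by planning ahead and exchanging messages with other agents.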

Applicability

Agents are relevant to the fields of Artificial Intelligence and Agent Based Modelling, which
have been applied to almost any problem.

Pitfalls / debate

There is a discussion about how "fat" or "rich" an entity has to be to be called an agent. In the
simplest cases, agents are no more than state machines, for example in a Cellular Automaton.
At the other extreme are AI agents that reason over complex coupled ontologies, across the
Internet, while avoiding conflicts with other agents.

Key references

Jennings, N.R. (2000). On agent-based software engineering. Artificial Intelligence, 117(2),
277-296. ISSN 0004-3702.
For more articles, see dedicated journals such as the Journal of Artificial Societies and
Social Simulation (http://jasss.soc.surrey.ac.uk/JASSS.html).

Key articles from TPM-researchers

/

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Whenever an actor in a multi-actor problem is studied, it can be formalized as an agent in a
simulation.

Related theories / methods

Complexity, Complex Adaptive Systems, Agent Based Modelling

Editor

Igor Nikolic (I.nikolic@tudelft.nl)

ATTITUDE

Definition

An attitude is defined as a psychological tendency that is expressed by evaluating a particular
entity with some degree of favour or disfavour (Eagly & Chaiken, 1996). Ajzen (2001) also
stressed the evaluative component of attitudes, stating that this evaluation is measured on
scales like good-bad, harmful-beneficial, pleasant-unpleasant. An attitude could be: "I find it
good that the government implements transport pricing." Or: "I think that mobile phone use
while driving is harmful."

Neurological research shows that evaluative judgments differ in important ways from non-
evaluative judgments. For example, a different part of the brain is involved in distinguishing
positive from negative food items than in distinguishing vegetables from non-vegetables.

Attitudes can differ in strength, which has implications for persistence, resistance, and
attitude-behaviour consistency. Stronger attitudes are thought to be more stable over time,
more resistant to persuasion and better predictors of behaviour (Ajzen, 2001). An attitude is
less influenced by situational factors than are preferences, but is less stable than personality
traits (Ajzen, 1987).
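
Attitudes are commonly measured with bipolar evaluative scales of the kind mentioned above.
The Python sketch below shows one conventional way to compute an overall attitude score from
such items; the items, responses and scoring rule are invented for illustration and are not
taken from a specific TPM study.

    # Hypothetical attitude score from three 7-point bipolar items
    # (1 = bad/harmful/unpleasant, 7 = good/beneficial/pleasant).
    responses = {"good_bad": 6, "beneficial_harmful": 5, "pleasant_unpleasant": 4}
    reverse_keyed = set()  # negatively worded items would be listed here

    def item_score(item, value, scale_max=7):
        # Reverse-keyed items are mirrored so higher always means more favourable.
        return (scale_max + 1 - value) if item in reverse_keyed else value

    attitude = sum(item_score(i, v) for i, v in responses.items()) / len(responses)
    print(attitude)  # 5.0 -> a mildly favourable attitude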

Applicability

The term attitude is specifically used in the field of psychology. For example, the theory of
planned behaviour predicts behaviour from attitudes via the psychological variable "intention
to behave". Attitudes give an indication of how people think about certain objects, which can
explain or predict their behaviour towards the object.

Pitfalls / debate

Attitudes were previously thought to comprise three components: affective, cognitive and
behavioural. More recently, Ajzen (2001) clarified that attitude is different from affect, that
the current preference is to reserve the term affect for general moods and specific emotions,
and that affect contains not only valence but also arousal. Crano and Prislin (2006) say that
an attitude represents an evaluative integration of cognitions and affects experienced in
relation to an object. So attitudes are evaluative judgments that integrate and summarize
these cognitive/affective reactions. Therefore, nowadays, attitudes, cognitions and behaviours
are considered to be different concepts, and attitudes can be considered intermediary between
affects and cognitions on the one hand, and behaviours on the other hand.

Sometimes people think that attitudes should be consistent with each other and with
behaviour. However, with respect to behaviour, it is known that people's attitudes influence
behaviour to some extent, but there are several other factors that influence behaviour, such as
the control that people have over a behaviour, or the influence of what other people think
about it. With respect to the consistency between different attitudes, it has even been found
that people can have two conflicting attitudes at the same time. One attitude could be explicit
while the other could be implicit or habitual. Attitudes can also be ambivalent because of co-
existing positive and negative dispositions towards an attitude object, because either cognitive
components of attitudes are conflicting, or because cognitive components are conflicting with
affective components.

Key references

Ajzen, I. (2001). Nature and operation of attitudes. Annual Review of Psychology, 52, 27-58.
Crano, W.D., & Prislin, P. (2006). Attitudes and persuasion. Annual Review of Psychology,
57, 345-374.

Key articles from TPM-researchers

To my knowledge, no TPM researchers have investigated the concept of attitude itself, but the
concept is widely applied in questionnaire studies. For example, Molin used it for measuring
opinions on hydrogen applications and Huijts used it to measure acceptance of CO2 storage.

Midden, C.J.H., & Huijts, N.M.A. (2009). The role of trust in the affective evaluation of
novel risks: the case of CO2 storage. Risk Analysis, 29, 743-751.
Molin, E. (2005). A causal analysis of hydrogen acceptance. Transportation Research
Record: Journal of the Transportation Research Board, 1941, 115-121.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

For understanding people's acceptance of technology, which influences policy makers'
decisions and the success of the technology implementation, it is important to capture
people's evaluative judgment of the technology and related policies as well as underlying
psychological factors that might influence this evaluative judgment.

Related theories / methods

The theory of planned behaviour makes use of the attitude concept. It predicts that attitudes
influence behaviour via intention to behave. This theory is also discussed in this catalogue.

Editor

Nicole Huijts (N.M.A.Huijts@tudelft.nl)

BOW-TIE

Definition

The Bow-Tie is a concept used in Risk Analysis. It is used to identify systemic risks, calculate
effects, and calculate the effectiveness of safety solutions. The bow-tie couples an accident to
the events leading up to the accident and the aftermath. The Bow-Tie originates from the idea
that Fault Trees, which lead up to the accident, and Event Trees, which describe the
consequences, can be coupled through the accident itself. If the Bow-Tie concept is used in
this way, it is a qualitative risk analysis tool (Ale, 2009). But the Bow-Tie concept is also
often used for quantitative risk analysis and hazard identification.
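
To illustrate the quantitative use of the concept, the short Python sketch below couples a
two-cause fault tree, through the central accident event, to a two-branch event tree. All
probabilities are made up for illustration; a real quantitative risk analysis would derive
them from data or expert judgment.

    # Hypothetical Bow-Tie arithmetic: fault tree on the left, event tree on
    # the right, coupled through the central accident (top) event.

    # Fault tree: the accident occurs if independent cause A OR cause B occurs.
    p_a, p_b = 1e-3, 5e-4
    p_accident = 1 - (1 - p_a) * (1 - p_b)       # OR-gate, approx. 1.5e-3

    # Event tree: given the accident, branch into consequence scenarios.
    branches = {"contained": 0.9, "escalation": 0.1}
    for scenario, p_branch in branches.items():
        print(scenario, p_accident * p_branch)   # probability of each end state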



Applicability

The concept was developed for quantitative risk analysis and is still used as such. That makes
it a very powerful tool for the estimation of risks, their consequences, and the mitigation of
that risk. The qualitative concepts are useful for identifying threats in risk systems, but can
basically be applied to any system that can be threatened by catastrophic events, such as the
banking crisis, errors in hospitals, and pipe breaks in the chemical industry.

Pitfalls / debate

The extremely high face validity of the Bow-Tie is its most important pitfall. The concept is
so generic that it seems to apply anywhere. However, scientifically speaking, it has not been
proven to be true. This may lead to misuse of the Bow-Tie.

The coupling between Fault Trees (or equivalent mathematical methods) and Event Trees (or
equivalents) seems a simple operation. However, it is important to realize that the
mathematical instruments may rest on different basic principles that conflict.

Key references

Ale, B. (2009). Risk: an introduction. Oxon: Routledge.
Groeneweg, J. (1998). Controlling the controllable. Leiden: DSWO press.
Visser, J.P. (1998). Developments in HSE management in oil and gas exploration and
production. In A.R. Hale & M. Baram (Eds.) Safety management, the challenge of change.
Oxford.

Key articles from TPM-researchers

Hale, A. R., Ale, B. J. M., et al. (2007). Modeling accidents for prioritizing prevention.
Reliability Engineering & System Safety, 92(12), 1701-1715.
Kurowicka, D., Cooke, R.M., Goossens,L.H.J., & Ale, B.J.M. (2006). Expert judgement
study for placement ladder bowtie. In C.G. Soares, & E. Zio (Eds.), Safety and reliability for
managing risk (pp. 21-28). London: Taylor and Francis Group.
Ale, B.J.M, Bellamy, L.J., Cooke, R.M., Goossens, L.H.J., Hale, A.R., Roelen, A.C.L., &
Smith, E. (2006). Towards a causal model for air transport safety - an ongoing research
project. Safety Science, 44(8), 657-673.
Ale, B.J.M., Bellamy, L.J., Oh, J.I.H., Whiston, J.H., Mud, M.L., Baksteen, H., Papazoglou,
I.A., Hale, A.H., Bloemhoff, A., & Post, J. (2006). Quantifying occupation risk. Working on
Safety, 12-15.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

The concept is used for the analysis of risk systems. It ties the various constituents of the risk
system together and can be used to evaluate the effectiveness of safety solutions. It can be used
to evaluate the reliability of any technological system, such as the national power grid and
water supply networks.

Related theories / methods

The method combines fault trees and event trees that are methods in their own right. Also,
more complex mathematical methods can be used such as Bayesian Belief Nets.

Editor

Coen van Gulijk (c.vangulijk@tudelft.nl)

BUSINESS ECOSYSTEM

Definition

A business ecosystem is defined as a network of suppliers and customers around a core
technology, who depend on each other for their success and survival.

As a consequence of the importance of technology networks, it is almost impossible for firms
to engage in the competitive battle on their own. We therefore see patterns of competition
emerge that do not match the economic models of perfect competition or even of oligopolistic
or monopolistic competition. Rather, competition takes place between a few large coalitions,
or networks, of firms around a common technological platform. Such networks, consisting of
multiple firms performing different roles, are not unlike biological ecosystems. For such
networks therefore the term business ecosystems is increasingly used (Moore, 1993; 1996;
Iansiti & Levien, 2004a; 2004b).

As economic activity is changing from stand-alone to interconnected economic agents,
forming the network economy of today, research on business strategy is evolving to include
more dimensions to better understand the continuous interaction and behavior of
interconnected organizations. The paradigm of atomistic actors competing against each other
in an impersonal marketplace is becoming less adequate in a world in which firms are
embedded in networks of social, professional, and exchange relationships with other
economic actors. The environment should no longer be seen as faceless, atomistic, and
beyond the influence of the organization, as assumed by the current strategy management
doctrine.

Applicability

- The study of business networks, specifically networks geared toward innovation
- The study of networks around technology platforms

Pitfalls / debate

- Research on business ecosystems is limited and a body of knowledge, even on the core
definition of the concept, is still absent
- The concept overlaps considerably with "business networks" or "alliance constellations";
due to the lack of body of knowledge, it is not always clear what the essential distinctive
characteristics of the business ecosystems concept are and why they are important
- There is debate about the question whether a business ecosystem is a (more or less useful)
metaphor for a business network, or a description of an organizational form that is
bigger than a business network

Key references

Iansiti, M., & Levien, R. (2004a). Strategy as ecology. Harvard Business Review, 82(3), 68.

Iansiti, M., & Levien, R. (2004b). The Keystone Advantage: What the New Dynamics of
Business Ecosystem Mean for Strategy, Innovation, and Sustainability. Boston, Massachusetts:
Harvard Business School Press.

Moore, J. F. (1993). Predators and prey: A new ecology of competition. Harvard Business
Review, 71(3), 75-86.

Moore, J. F. (1996). The Death of Competition: Leadership and Strategy in the Age of
Business Ecosystem. John Wiley & Sons, Ltd.

Key articles from TPM-researchers

Anggraeni, E., Hartigh, E. den, & Zegveld, M.A. (2007). Business ecosystem as a perspective
for studying the relations between firms and their business networks. European
Chaos/Complexity in Organisations Network (ECCON), Conference 19-21 October 2007.

Hartigh, E. den, & Tol, M. (2008). Business ecosystem. In G. Putnik & M.M. Cunha (Eds.),
Encyclopedia of Networked and Virtual Organizations, Vol. I. (pp. 106-111). New York: IGI
Global, Information Science Reference, Hershey.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

When business firms and other actors are cooperating in networks toward a common goal, the
business ecosystem concept provides a unique, evolutionary and ecologically-inspired way of
studying (a) the actors and connections in this network and (b) the performance and dynamics
of this network.

Related theories / methods

- Business networks
- Innovation networks
- Innovation systems
- Strategic alliances

Editors

Erik den Hartigh (e.denhartigh@tudelft.nl)
Elisa Anggraeni (e.anggraeni@tudelft.nl)

COMPLEXITY

Definition

Complexity is "...the property of a real world system that is manifest in the inability of any
one formalism being adequate to capture all its properties. It requires that we find distinctly
different ways of interacting with systems. Distinctly different in the sense that when we
make successful models, the formal systems needed to describe each distinct aspect are not
derivable from each other." (Mikulecky, 2001).

It should, however, be noted that there is no generally accepted definition of complexity.

Applicability

The concept of complexity is applicable in almost any research field.

Pitfalls / debate

The concept of complexity meant here should not be confused with complicatedness. A
system may very well be complicated but not complex.

Key references

Mikulecky, D.C. (2001). The emergence of complexity: Science coming of age or science
growing old? Computers and Chemistry, 25(4), 341-348.
Complexity. http://en.wikipedia.org/wiki/Complexity

Key articles from TPM-researchers

/

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

There is no single typical TPM-problem area: the concept may be useful in virtually any of
them.

Related theories / methods

A diagram indicating the relations between complexity and connected notions can be found
at: <http://en.wikipedia.org/wiki/File:Complexity-map-overview.png>

Editor

Ivo Bouwmans (i.bouwmans@tudelft.nl)

DESIGN PATTERNS

Brief discussion

Design patterns provide a format or template to capture best practices and recurring solutions.
In Alexanders words: a pattern describes a problem which occurs over and over again and
then describes the core of the solution to that problem, in such a way that you can use this
solution a million times over, without ever doing it the same way twice (p. x, Alexander, et
al., 1977) The original pattern concept described by the architect Alexander (Alexander,
1979) has been widely adopted in the software engineering world after introduction by the
gang of four (Gamma et al., 1995).

Applicability

After their introduction in software engineering (Gamma et al., 1995) they have been adopted
in various fields such as knowledge management (May & Taylor, 2003), human-computer
interaction (Borchers, 2001), communication software (Rising, 2001), e-learning (Niegemann
& Domagk, 2005) and pedagogy (Eckstein et al., 2001).

Alexander (1979) suggests a number of different purposes for design patterns and the pattern
languages they comprise. Patterns provide a convenient common language for
communication; they enable users to name and share complex concepts without having to
explain them over and over again in detail. Patterns help to inspire and design new or
improved patterns, and they enable to design larger coherent systems based on individual
patterns. The pattern language typically has embedded values and ethics that are reinforced in
the design patterns. In this way they ensure that designs based on the patterns improve the
quality of life. They help in teaching, capturing, and sharing expert design knowledge, and
can be used to teach practitioners and novices, but they will also offer a valuable library for
experts. We like to see pattern languages as living documents, which further develop in a
design community. In this way they are intended to enable anyone to create with patterns.

A pattern typically describes a problem, its context, the essence of the solution and an
example of that solution. Further the pattern is linked to other patterns, and the relations
between them are explained. For this purpose, patterns are identified with a name, picture and
a summary.
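
As an illustration of what such a captured recurring solution can look like in software, here
is a minimal Python sketch of the classic Observer pattern from the software engineering
pattern catalogue of Gamma et al. (1995). The class and method names are generic choices for
illustration, not quoted from a specific pattern catalogue.

    # Observer pattern: the recurring problem of keeping dependents informed
    # of state changes without coupling the subject to concrete observers.
    class Subject:
        def __init__(self):
            self._observers = []

        def attach(self, observer):
            self._observers.append(observer)

        def notify(self, event):
            # The reusable core of the solution: broadcast to whoever listens.
            for observer in self._observers:
                observer(event)

    sensor = Subject()
    sensor.attach(lambda e: print("logger saw:", e))
    sensor.attach(lambda e: print("alarm saw:", e))
    sensor.notify("temperature high")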

Pitfalls / debate

Shortly after Gamma et al. (1995) introduced design patterns in software engineering,
Alexander was asked to reflect on the use of patterns in software engineering (Alexander,
1996). He expressed two key concerns about the way pattern languages are used for software
engineering. First, he argued that software design patterns are not based on a philosophy of
the quality of human life; they appear to be only a vehicle for communication. Second, he
argued that software design patterns appear not to focus on creating a whole and coherent
system, rather they aim to design an independent object without taking into account how it
should contribute to the larger whole. Patterns as a focus of study, as opposed to tools, show
designers the structure of the solution instead of the objects in the solution. Patterns are
richer, and they help to rise above the details of tools, to focus on the fundamental problems
of process. Design patterns seem especially useful to transfer tacit best practices and
expertise. In the past decade, various design pattern languages have been developed that better
adhere to these concerns.

Key references

Alexander, C. (1979). The Timeless Way of Building. New York: Oxford University Press.
Coplien, J. O. (1997). Idioms and patterns as architectural literature. IEEE Software, 14(1),
36-42.
Erickson, T. (2000). Lingua Francas for design: sacred places and pattern languages.
Proceedings of the conference on Designing interactive systems, ACM Press, 2000, 357-368.

Key articles from TPM-researchers

Design patterns have been used to capture techniques for collaboration as ThinkLets, and to
capture patterns for the design of computer mediated interaction, such as in e.g. social
software.

A ThinkLet is a named, scripted collaborative activity that gives rise to a known pattern of
collaboration among people working together toward a goal. ThinkLets are design patterns for
collaborative work practices (Vreede et al., 2006).

ThinkLets are used by facilitators and collaboration engineers as (1) predictable building
blocks for collaboration process design, (2) transferable knowledge elements to shorten the
learning curve of facilitation techniques, and (3) by researchers as parsimonious, consistent
templates to compare the effects of various technology-supported collaboration practices.

Patterns for Computer Mediated Interaction (Schümmer & Lukosch, 2007) are socio-technical
patterns. To serve as a Lingua Franca for design (Erickson, 2000) and for usage within end-
user centered design processes, they need to be understood by end-users as well as software
developers. For that purpose, patterns for computer-mediated interaction describe the
technology that supports a group process and therefore include a technical and a social aspect.
The current pattern language comprises 71 patterns that can be positioned on a seamless
abstraction scale: the more a pattern discusses technical issues, the lower its level. Low-
level patterns deal with class structures, control flow, or network communication. High-level
patterns focus on human interaction and address computer systems just as tools to support the
human interaction.

Schümmer, T., & Lukosch, S. (2007). Patterns for Computer-Mediated Interaction. West
Sussex, England: John Wiley & Sons Ltd.
Vreede, G. J. d., Briggs, R. O., & Kolfschoten, G. L. (2006). ThinkLets: A pattern language
for facilitated and practitioner-guided collaboration processes. International Journal of
Computer Applications in Technology, 25(2/3), 140-154.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

To resolve problems in a technically complex setting, experts often rely on rules of thumb and
best practices. Capturing these offers researchers a richer perspective on interventions and
solutions in these domains, and enables researchers to find common practices and patterns in
problem-solving approaches. Design patterns are a meta-concept: specific pattern languages
can be used in resolving specific TPM problems. For instance, ThinkLets and Patterns for
Computer-Mediated Interaction will be used in a project to improve the concurrent design of
satellites and other space objects.

Related theories / methods

Patterns are intended to be used for design, and fit in a design approach such as Oregon
Software Development Process (OSDP) and design science. Further, design patterns are often
used in communities of practice and for knowledge management in organizations or
communities.

Editors

Gwendolyn Kolfschoten (G.L.Kolfschoten@tudelft.nl);
Stephan Lukosch (S.G.Lukosch@tudelft.nl)

ECONOMIC GOVERNANCE

Definition

The concept of economic governance suggests that economic activities are governed (steered,
regulated or organized); states play a relevant role in economic governance but other actors
are relevant as well. Societies have created a variety of institutions to govern economic
transactions, reduce their costs and increase their occurrence. Governments are only one of
those institutions.

Economic governance includes two aspects: first, how economic activities are regulated
within a political community, or the formal "rules of the game"; and second, how
economic units like firms and corporations are organized and governed; in this second sense
the term reflects a broad use of the term "corporate governance".

Applicability

The concept is used by different disciplines such as institutional economics, economic history,
and economic sociology. They all challenge the possibility of analyzing political and
economic issues independently, and question the state-market divide.

The concept of economic governance implies that markets are not spontaneous forms of
social orders, but have to be created and maintained by institutions. Institutions are supposed
to provide, monitor and enforce the rules of the game. The latter include, for instance, fixing
property rights, enforcing contracts, protecting competition, and reducing information
asymmetries.

Institutional economics has outlined the existence of forms of economic coordination which
range from pure markets to unitary corporate hierarchies like the firm, which is considered as
a governance structure (Williamson). For Williamson, these structures emerge to minimize
transaction costs. This allows a comparative analysis of different arrangements (Williamson,
1979, p. 247).

The literature on corporate governance examines the forms that the governance of
corporations takes. The ways firms and companies are organized in modern economies has
great consequences since different interests are protected by different structures and processes
of governance.

In a broad sense corporate governance is concerned with the role of stakeholders and their
impact on the welfare of society. Corporate governance, according to the OECD, has two
dimensions. The first refers to the way in which shareholders, managers and employees interact
with each other. The second is related to public policy and considers how different regulatory
frameworks impact on society (Chhotray & Stoker, 2009, 145).

Pitfalls / debate

An ongoing debate involves the evolution of different forms of governance and whether or
not institutional economics is able to provide an account of it.

In this debate the relevance of a third party with the power of enforcing rules is discussed,
and the creation of the state as the enforcing power remains central. As North has suggested,
institutional economics cannot provide an explanation of the origin of the state (North, 1990,
p. 56).

This is related to the study of self-organizing institutions, like those addressed by Ostrom in
her work on the governance of commons (Ostrom, 1990), or to work addressing the
evolutionary dimension of institutional orders using complexity theory (Teisman et al., 2009)
or evolutionary game theory (Aoki, 2001).

Economic governance is used to provide an economic theory of non-market institutions. In
doing so, however, economists move into the terrain of politics without explicitly
acknowledging it. The institutions that are created to reduce transaction costs and promote
efficient outcomes are also the ones that create and implement structures of power.

Key references

Aoki, M. (2001). Toward a comparative institutional analysis. Cambridge, MA: MIT Press.
Chhotray, V., & Stoker, G. (2009). Governance theory and practice: A cross-disciplinary
approach. Basingstoke: Palgrave Macmillan.
Hollingsworth, J.R., & Boyer, R. (Eds.). (1997). Contemporary capitalism: The
embeddedness of institutions. Cambridge: Cambridge University Press.
Hopt, K.J. (Ed.) (1997). Comparative corporate governance: The state of the art and
emerging research. Oxford: Clarendon Press.
North, D.C. (1990). Institutions, institutional change and economic performance. Cambridge:
Cambridge University Press.
North, D.C., & Thomas, R.P. (1973). The rise of the Western world: A new economic history.
Cambridge: Cambridge University Press.
Ostrom, E. (1990). Governing the commons: The evolution of institutions for collective
action. Cambridge, Cambridge University Press.
Shleifer, A., & Vishny, R.W. (1997). A survey of corporate governance. Journal of Finance,
52(2), 737-783.
Teisman, G., Van Buuren, A., & Gerrits, L. (2009). Managing complex governance systems:
Dynamics, self-organization and coevolution in public investments. New York: Routledge.
Williamson, O.E. (1975). Markets and hierarchies: Analysis and anti-trust implications: A
study in the economics of internal organization. New York: Free Press.
Williamson, O. (1979). Transaction-cost economics: Governance of contractual relations.
Journal of Law and Economics, 22(2), 233261.
Williamson, O.E. (1985). The economic institutions of capitalism: Firms, markets, relational
contracting. London: Macmillan.
Williamson, O.E. (1996). The mechanisms of governance. New York/Oxford: Oxford
University Press.

Key articles from TPM-researchers

Finger, M. (2005). Global governance through the institutional lense. In M. Lederer, & P.
Muller (Eds.), Criticizing global governance (pp. 145-159). New York: Palgrave.
Finger, M., Tamiotti, L., & Allouche, J. (Eds.). (2005). The multi-governance of water: Four
case studies (Suny Series in Global Politics). New York: State University of New York Press.
Groenewegen, J. (2004). Who should control the firm? Insights from new and original
institutional economics. Journal of Economic Issues, 38(2), 1-9.
Kunneke, R.W., Groenewegen, J.P.M., & Auger, J.F. (Eds.). (2009). The governance of
network industries. Cheltenham: Edward Elgar Publishing.
Kunneke, R.W., & Finger, M. (2008). Regulatory governance regimes in liberalized network
industries. In P. Herder, P. Heijnen & A. Nauta (Eds.), Proceedings of the International
Conference on Infrastructure Systems, Building Networks for a Brighter Future (pp. 1-10).
Delft: Next Generation Infrastructures Foundation.
Kunneke, R.W., & Finger, M. (2008). Towards a topology of regulatory governance regimes
in the liberalizing network industries. In S. Fadda (Ed.), Proceedings of the 20th Annual
conference of the European Association for Evolutionary Political Economy: Labour,
Institutions, and Growth in a global Knowledge Economy (pp. 1-10). Rome: AEAEPE.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Problems addressed within TPM include the analysis of various structures of governance, the
study of regulatory processes in public policy and the management of complex governance
systems. The concept of economic governance provides the instruments to conceptualize the
transformation of governing practices and to provide advice on reforms.

Related theories / methods

Related entries in this catalogue are Governance, Institutions, Transaction Costs, Theory of
Institutional Change.

Editor

Maria Julia Trombetta (M.J.Trombetta@tudelft.nl)

EMERGENCE

Definition

Emergence refers to stable macroscopic patterns arising from the local interactions of system
components. Emergent high-level properties are interesting, non-obvious consequences
of interactions between low-level properties. They are more easily understood in their own
right than in terms of properties at a lower level.
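
A minimal computational illustration of this idea is an elementary cellular automaton,
sketched below in Python. Each cell interacts only with its two neighbours, yet a stable
macroscopic pattern (a Sierpinski-like triangle) emerges over time. The rule choice and grid
size are arbitrary, picked only for this illustration.

    # Toy illustration of emergence: elementary cellular automaton rule 90.
    width, steps = 31, 15
    cells = [0] * width
    cells[width // 2] = 1                      # a single seed in the middle

    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        # Purely local interaction: each new cell is the XOR of its neighbours.
        cells = [cells[(i - 1) % width] ^ cells[(i + 1) % width]
                 for i in range(width)]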

Applicability

The main relevant field is that of Complex Adaptive Systems. However, emergence is used
across a very wide spectrum of scientific fields.

Pitfalls / debate

The main argument is between the holistic and the reductionist camps. In a strictly
reductionist view, emergence is just an example of epiphenomena, directly following from
interactions of system elements. In a holistic view, emergent properties are not reducible to
just low-level element interactions.

Key references

Kay, J.J. (2002). On complexity theory, exergy and industrial ecology: Some implications for
construction ecology. In C. Kibert, J. Sendzimir, & B. Guy (Eds.), Construction ecology:
Nature as the basis for green buildings (pp.72-107). Spon Press.
Barabási, A., & Albert, R. (1999). Emergence of scaling in random networks. Science,
286(5439), 509-512. [DOI: 10.1126/science.286.5439.509]

Key articles from TPM-researchers

Kroes, P. (2009). Technical artifacts, engineering practice and emergence. In U. Krohs, & P.
Kroes (Eds.), Functions in biological and artificial worlds: Comparative philosophical
perspectives (Vienna Series in Theoretical Biology). Cambridge, MA: MIT Press. ISBN
978-0-262-11321-2.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Almost any multi-actor system displays emergent properties. Sustainability is often seen as an
emergent property of a socio-technical system.

Related theories / methods

Complexity, Complex Adaptive Systems, Agent Based Modelling, Self-organization, Path
dependence.

Editor

Igor Nikolic (I.nikolic@tudelft.nl)

FINDING OF LAW (RECHTSVINDING)

Brief discussion

A court's, scholar's or lawmaker's determination of the law.

Applicability

Law can be determined in different ways, e.g., by systematically analyzing legal sources such
as legislation, case law, academic publications and custom.

Pitfalls / debate

Unlike in empirical academic fields, the methods used to find law have not been made explicit
in a systematic way. Basically, all forms of juridical reasoning can be considered methods to
find law.

Key references

Ehrlich, E., Pound, R., & Ziegert, K.A. (1936). Fundamental principles of the sociology of
law. Cambridge, MA: Harvard University Press.
Cliteur, P.B., & Ellian, A. (2007). Encyclopedie van de rechtswetenschap. Deventer: Kluwer.

Key articles from TPM-researchers

Stoter, W.S.R. (2000). Belangenafweging door de wetgever: Een juridisch onderzoek naar
criteria voor de belangenafweging van de formele wetgever in relatie tot de
belangenafweging op bestuursniveau. Den Haag: Boom Juridische uitgevers.

Stoter, W.S.R. (2004). Wie en wat bepaalt de inhoud van de wet? In R. den Boer et al. (Eds.),
Uit de openbare dienst (pp. 193-198). Den Haag.

Stout, H., & de Jong, M. (2005). Over spreektelegraaf en beeldtelefoon. De rol van de
overheid bij technologische transities in infrastructuurgebonden sectoren. Utrecht: Lemma.

Description of a typical TPM-problem that may be addressed using the theory / method

How can the law deal with new technological innovations? For example, do all rules for trade
outside the internet apply to trade on the internet?

Related theories / methods

Finding of law is such a basic concept that most legal theory is related to it.

Editor

H.E. (Evelien) van Rij (h.e.vanrij@tudelft.nl)


GOVERNANCE (1)

Definition

Governance addresses the problem of steering collective action and seeks to conceptualize
how the process of governing is changing in a globalized world. As Chhotray and Stoker put
it, "Governance is about the rules of collective decision-making in settings where there are a
plurality of actors or organizations and where no formal control system can dictate the terms
of the relationship between these actors and organizations" (Chhotray & Stoker, 2009, p. 3).

- Rules can be both formal and informal.
- The term "collective" suggests a process of mutual influence and control. This implies that
governance arrangements determine the right of some actors to take decisions, and the duty
of all the others to accept those decisions.
- Decision making can occur at various levels, from the micro to the global. In this way
governance is used in a variety of contexts, from the firm to global politics.
- The possibility of a lack of formal control is a crucial aspect of the concept. The recent
use of the term "governance" is different from "government"; as Rosenau puts it, the issue at
stake is exactly how to conceptualize "governance without government" (Rosenau, 1992),
even if in the past the two words were used as synonyms.

Applicability

The concept is used within different disciplines as a rejection of the division between state,
market and civil society. It allows one to grasp the development of institutional arrangements
in which the borders between the public and private sectors have become blurred.

The term governance is used by a variety of disciplines and theoretical frameworks:

- Within Institutional Economics, the use of the term reflects an interest in forms of
economic coordination beyond the market (Williamson, 1975, 1985, 1998) and in the
importance of exploring how economic activities are controlled and governed by formal
and political institutions.

- Within International Relations the use of the term challenges the mainstream realist
paradigm, which considers states the only relevant actors in the international arena,
emphasizes the divide between domestic politics and international relations, and stresses
the relevance of the logic of anarchy for the latter. The literature on global governance
instead considers a multiplicity of actors such as NGOs, MNCs and civil society; addresses
issues at a variety of levels, from local to global (Keohane & Nye, 1977; Rosenau, 1992);
and emphasizes the relevance of institutions in the international arena. Some approaches to
global governance involve the study of international regimes (Krasner, 1983), or sets of
"principles, norms, rules, and decision-making procedures around which actors'
expectations converge in a given issue-area".

- Within political studies the term is used for studying forms of political coordination which
escape the public-private divide as well as forms of complex interdependence between
different functional domains and levels of government.

- Corporate governance examines the forms that the governance of corporations takes and
whose interests are promoted by the different structures and processes of governance
(Shleifer & Vishny, 1997).

Pitfalls / debate

The concept is used by a variety of disciplines and is often considered a way to overcome
disciplinary divides. However, each discipline uses the term governance to overcome a
specific limitation of its own or of other approaches. For instance, within institutional
economics the issue of control of lower layers by upper layers is central, while within
international relations the challenge to the necessity of a sovereign power is crucial. As a
result, on the one hand, the concept remains rather vague; on the other hand, its use within
different disciplines is characterized by specific assumptions and refers to specific meanings
that are not shared by other disciplines.

Key references

Chhotray, V., & Stoker, G. (2009). Governance theory and practice: A cross-disciplinary
approach. Basingstoke/New York: Palgrave Macmillan.
Keohane, R. O., & Nye, J. S. (1977). Power and interdependence: World politics in
transition. New York: Little Brown.
Krasner, S. D. (1983). International regimes. Ithaca: Cornell University Press.
Rosenau, J. N. (1992). Governance, order, and change in world politics. In J. N. Rosenau, &
E. O. Czempiel (Eds.), Governance without government: Order and change in world politics.
Cambridge: Cambridge University Press.
Shleifer, A., & Vishny, R.W. (1997). A survey of corporate governance. Journal of Finance,
52(2), 737-783.
Williamson, O.E. (1975). Markets and hierarchies: Analysis and antitrust implications: A
study in the economics of internal organization. New York: Free Press.
Williamson, O.E. (1985). The economic institutions of capitalism: Firms, markets, relational
contracting. New York: Free Press; London: Collier Macmillan.
Williamson, O.E. (1998). The institutions of governance. The American Economic Review,
88(2), 75-79.

Key articles from TPM-researchers

Finger, M. (2005). Conceptualizing e-governance. European Review of Political
Technologies, 3, 3-7.
Finger, M. (2005). Global governance through the institutional lens. In M. Lederer, & P.
Müller (Eds.), Criticizing global governance (pp. 145-159). New York: Palgrave.
Finger, M., Tamiotti, L., & Allouche, J. (Eds.). (2005). The multi-governance of water: Four
case studies (Suny Series in Global Politics). New York: State University of New York Press.
Kunneke, R.W., Groenewegen, J.P.M. & Auger, J.F. (Eds.). (2009). The governance of
network industries. Cheltenham: Edward Elgar Publishing.
Kunneke, R.W., & Finger, M. (2008). Regulatory governance regimes in liberalized network
industries. In P. Herder, P. Heijnen, & A. Nauta (Eds.), Proceedings of the International
Conference on Infrastructure Systems, Building Networks for a Brighter Future (pp. 1-10).
Delft: Next Generation Infrastructures Foundation.
Kunneke, R.W., & Finger, M. (2008). Towards a typology of regulatory governance regimes
in the liberalizing network industries. In S. Fadda (Ed.), Proceedings of the 20th Annual
Conference of the European Association for Evolutionary Political Economy: Labour,
Institutions, and Growth in a Global Knowledge Economy (pp. 1-10). Rome: EAEPE.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

TPM is interested in the governance of complex socio-technical systems. Within the faculty
the term governance provides a framework to understand and conceptualize the
transformation of organizational practices and to address the issue of institutional and
organizational reforms at various levels.

It is used for a variety of issues, and terms like environmental governance, water governance
and corporate governance are frequently employed within TPM.

A relevant aspect is that, given the multidisciplinary character of the research undertaken by
the faculty, the term is used in its different understandings.

Related theories / methods

The concept of governance is part of the vocabulary of different theoretical approaches;
several of these are used within TPM and are part of this catalogue, such as Transaction Cost
Theory and Economic Governance.

Editor

Maria Julia Trombetta (M.J.Trombetta@tudelft.nl)


















GOVERNANCE (2)

Definition

Governance is the directing of an organizational unit of various sizes and forms, from nation
states to corporations to projects, from transnational bodies to the family. On the one hand, it
focuses on laying the groundwork: building an organizational structure that appropriately
reflects responsibilities, setting rules and regulations for those operating within that structure,
and devising processes and procedures for decision-making and control. As such, governance
can be seen as the building of institutions, a term closely related to governance. On the other
hand, governance is concerned with the activity of governing after the groundwork is laid.
Consequently, governance is concerned with the design and execution of a system of rule. But
the character of governance has changed over the past decades, from hierarchical rule to more
egalitarian forms of rule.

Applicability

The concept is widely applicable and provides a sound backdrop for discussions on how to
design government intervention and involvement in various sectors. It is generally set apart
from institutions (the rules and regulations), governance being seen more as the actions of
government within the institutional framework.

Pitfalls / debate

As said, governance, both as an academic concept and as a governmental activity, has
changed character in recent decades. In its traditional form, the hierarchy of the
organizational unit (the state, the corporation, the project) formed the spine of the governance
structure. Rules and regulations, procedures and processes reflected that hierarchy. The
assumed power distribution allowed strict control from top to bottom.

But from the 1970s the flaws and failures of pure hierarchical governance surfaced. It did not
reflect modern democracy, in which voicing and voting made power as much bottom-up
as top-down. In most democracies power had become decentralized, and the spreading
affluence in the late 20th century further loosened the control potential of government.
Governance became a matter of organizing the interplay between the various actors in the
network. The view on government changed from ruler to coordinator. Hierarchical
governance was supplemented with network governance (cf. Heffen et al., 2000).

At the same time, a second answer to the flaws and failures of pure hierarchical governance
was rediscovered: the market. As the bureaucratic machines of hierarchical governance
proved to be failing with respect to efficiency and innovation, the flexibility and agility of the
private sector seemed attractive (cf. Osborne & Gaebler, 1992). The second answer, under its
original moniker of New Public Management, was not to share power in the network, but
rather to reformulate the role of government as goal-setting and to rely heavily on the market
to execute policy. This required two major steps from government. First, government had to
reduce its foothold in various sectors through deregulation and privatization. Second, goal
formulation had to allow for contracting: clear, controllable goals that one could hold a
contractor accountable for.

Key references

Frederickson, H.G. (2005). Whatever happened to public administration? Governance,
governance everywhere. In E. Ferlie, L. E. Lynn, & C. Pollitt (Eds.), The Oxford Handbook of
Public Management (pp. 282-304). Oxford: Oxford University Press.
Heffen, O. van, Kickert, W.J.M., & Thomassen, J.J.A. (Eds.) (2000). Governance in modern
society: Effects, change and formation of government institutions. Dordrecht: Kluwer
Academic Publishers.
Williamson, O. (1996). The mechanisms of governance. New York: Oxford University Press.
Osborne, D., & Gaebler, T. (1992). Reinventing government: How entrepreneurial
spirit is transforming the public sector. Reading, MA: Addison-Wesley.

Key articles from TPM-researchers

Koppenjan, J.F.M., & Groenewegen, J.P.M. (2005). Institutional design for complex
technological systems. International Journal of Technology, Policy and Management, 5(3),
240-257.
Koppenjan, J.F.M., & Klijn, E.H. (2000). Public management and policy networks:
Foundations of a network approach to governance. Public Management, 2(2), 135-158.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

In typical TPM domains government involvement is substantial. A large part of TPM research
is aimed at optimizing that role, and thereby the performance of the sector concerned. As
such, governance is a key concept.

Related theories / methods

Multi-actor networks, Process management, Institutional design.

Editor

Wijnand Veeneman (W.W.Veeneman@tudelft.nl)















HISTORY OF TECHNOLOGY

Definition

Today's technology has its history. How this history is described and interpreted can differ,
depending on time, place and society. History is not purely a database of facts and artefacts.
It is a never-ending debate about which causes are dominant in a process of change.

For a long time the history of technology was more or less considered a timetable highlighted
by inventions and inventors. As technology is nowadays studied as a function within a
complex system, historians and technicians look at the history of technology from this
perspective: the history of technology is seen as contextual history. Political trends, economic
developments, the state of the art in science, societal visions and the interests of stakeholders
all have to be considered in reconstructing histories of technology.

Nevertheless, all history has to be based on appropriate sources. Machinery or organizations
dating from the seventeenth century have to be described on the basis of contemporary
sources such as patents, drawings, charters and minutes. Modern insights can certainly inspire
new questions about old material, but the interpretation of historical sources has to be done in
their historical context.

Applicability

The main goal of history is a better understanding of today's world. How things, and
especially changes, came about is important for the present. Analyzing processes in the past
can help in forecasting and shaping the future. Of course this has to be done in a careful and
most critical way. Comparison can bring some similarities to the surface or show some
continuity; however, it never provides a model.

A particular field, for which history of technology is relevant, is development cooperation.
The history of Western industrialization shows how development historically took place; this
can improve the understanding of present development processes and help to put development
approaches in perspective.

Many studies on socio-technical systems and innovation systems also take a historical
case-study perspective (e.g., Geels, 2002; Kamp et al., 2004). The aim is to learn lessons from
these historical case studies in order to better understand how new technologies developed in
niches can be successfully implemented and taken up in the general regime.

Pitfalls / debate

Anachronism is a well-known pitfall in studying history, particularly where an implicit link is
made with the present situation; making this link explicit can help a lot. The debate in history
is always about which causal factors are mentioned and which causes are not considered: what
really matters? Many historical debates breathe ideological air. Examples are Whig history
and Marxist history.

Within the history of technology a fruitful tension exists between external and internal
approaches, between technological determinism and social constructivism. Historical cases
are unique in that all possible information is available. They are studied by different people
from different viewpoints, checking and reacting to each other. The past can thus be evaluated
broadly and in depth without the pressure of taking decisions in the short term.

Key references

Hughes, T. P., Bijker, W. E., & Staudenmaier, J. M. (2009, July). SHOT fiftieth anniversary.
Technology and Culture, 50.
Basalla, G. (1988). The evolution of technology. Cambridge etc.: Cambridge University Press.
Bijker, W.E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of
technological systems: New directions in the sociology and history of technology. Cambridge,
Mass./London, England: MIT Press.
Hughes, T. P. (1983). Networks of power: Electrification in western society, 1880-1930.
Baltimore: Johns Hopkins University Press.
Landes, D.S. (1997). The wealth and poverty of nations: Why some are so rich and some so
poor. London: Little, Brown and Company.
Geels, F.W. (2002). Technological transitions as evolutionary reconfiguration processes: A
multi-level perspective and a case-study. Research Policy, 31(8/9), 1257-1274.
Kamp, L., Smits, R., & Andriesse, C. (2004). Notions on learning applied to wind turbine
development in the Netherlands and Denmark. Energy Policy, 32(14), 1625-1637.

Key articles from TPM-researchers

Lintsen, H.W., Schot, J.W., et al. (2003). Techniek in Nederland in de twintigste eeuw.
Zutphen: Walburg Pers.
Ravesteijn, W., & Ten Horn-van Nispen, M. L. (2007). Engineering an empire: The creation
of infrastructural systems in the Netherlands East Indies 1800-1950. Indonesia and the Malay
World, 35(103), 273-292.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

History of infrastructure. Careful examination of the genesis of the technical aspects of
today's complex society and its problems keeps (TPM) researchers alert not to overlook
relevant items.

Related theories / methods

Value-driven / value-sensitive development.

Editors

Frida de Jong (f.dejong@tudelft.nl)
Wim Ravesteijn (w.ravesteijn@tudelft.nl)




HUMAN FACTORS

Definition

Human Factors is "the study of the variables that influence the efficiency with which the
human performer can interact with the inanimate components of a system to accomplish the
system goals" (Proctor & van Zandt, 1994). The field is also called ergonomics or engineering
psychology.

Applicability

Because the efficiency of a system depends on the performance of the operator as well as the
adequacy of the machine, the operator and machine must be considered together as a single
human-machine system. With this view, it then makes sense to analyze the performance
capabilities of the human components in terms that are consistent with those used to describe
the inanimate components of the system. For example, the reliability of human components
can be evaluated in the same way as the reliability of machine components.

Pitfalls / debate

/

Key references

Wickens, C.D., & Hollands, J.G. (1999). Engineering psychology and human performance.
Upper Saddle River, New Jersey: Prentice Hall.
Proctor, R. W., & van Zandt, T. (1994). Human factors in simple and complex systems.
Needham Heights, MA: Allyn and Bacon.
Kirwan, B. (1994). A guide to practical human reliability assessment. CRC Press.

Key articles from TPM-researchers

Wiersma, J.W.F., Butter, R., & van't Padje, W. (2000). A human factors approach to
assessing VTS operator performance. In VTS 2000 Symposium, Singapore, Session 5-12.
Sillem, S., Jagtman, H. M., et al. (2006). Using cell broadcast in citizens' warning:
Characteristics of messages. PSAM8, New Orleans, Louisiana, USA: ASME Press.
Sillem, S., & Wiersma, J. W. F. (2008). Constructing a contextual human information
processing model for warning citizens. PSAM9, Hong Kong.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

- Improving safety in the harbour by applying situation awareness in vessel traffic services
(Wiersma et al., 2000). Situation awareness can be used to determine what issues are
important to make sure that operators maintain situation awareness at all times.
- Determining whether a new warning system can effectively improve people's self-reliance
in an emergency. The Human Information Processing model can be used to determine
what boundary conditions there are for a warning system to be effective (Sillem &
Wiersma, 2008).

Human factors are of interest for any system in which humans are working with inanimate
things, such as machines, warning systems, or even procedures.

Related theories / methods

- Human Information Processing
- Human Reliability Analysis. Human Reliability Analysis involves quantitative predictions
of operator error probability and of successful system performance. Operator error
probability is defined as the number of errors made divided by the number of
opportunities for such errors. Human reliability is thus 1 - (operator error probability)
(Proctor & van Zandt, 1994); see the sketch below. Human Reliability Analysis can be
carried out for both normal and abnormal operating conditions. Under normal operation
the following activities are important: routine control, preventive and corrective
maintenance, calibration and testing, restoration after maintenance and inspection. Under
abnormal conditions, the operator recognizes and detects faults, diagnoses the problem
and makes decisions. Individual human reliability analyses can be combined to predict the
overall performance of the human-machine system.
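
A minimal numerical sketch in Python of the reliability arithmetic described above; the
assumption that tasks are independent and performed in series, and the example numbers, are
illustrative and not part of the entry.

```python
# Minimal sketch of the human-reliability arithmetic described above.
# Assumptions (illustrative only): tasks are independent and performed in
# series, and the error counts are made-up example numbers.

def operator_error_probability(errors: int, opportunities: int) -> float:
    """Errors made divided by opportunities for error (Proctor & van Zandt, 1994)."""
    return errors / opportunities

def human_reliability(error_probability: float) -> float:
    """Human reliability for a single task is 1 - (operator error probability)."""
    return 1.0 - error_probability

def series_system_reliability(error_probabilities: list[float]) -> float:
    """Combine per-task reliabilities by multiplying them, which presumes
    the tasks fail independently (an assumption for illustration)."""
    overall = 1.0
    for p in error_probabilities:
        overall *= human_reliability(p)
    return overall

p = operator_error_probability(errors=3, opportunities=1000)   # 0.003
print(human_reliability(p))                                    # 0.997
print(series_system_reliability([0.003, 0.01, 0.02]))          # ~0.967
```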

Editor

S. Sillem (s.sillem@tudelft.nl)





INCREASING RETURNS

Definition

Increasing returns, i.e., positive feedback effects in the economy, play an important role in
many markets and firms. In markets, they appear for example in the emergence of fashions
and fads and in technology adoption and standardization (Arthur, 1989). In firms, they appear
for example in the production and commercialization of information- and knowledge-intensive
products (Shapiro & Varian, 1998) and in technological process improvements.

Increasing returns mechanisms come in four generic forms:
(1) Social interaction effects: these effects occur when a customer's preference for a product
is dependent upon the opinions or expectations of other (potential) customers.
(2) Network effects: these occur when the economic utility of using a product becomes larger
as its network grows in size. The difference between social interaction effects and
network effects is that while the former is associated with information search and
preference formation, the latter is associated with the economic utility resulting from
actual growth in network size.
(3) Scale effects: these imply that average total cost declines with larger production volumes.
In other words, scale effects are reflected in a downward slope of the average total cost
curve.
(4) Learning effects: these imply that there is a positive dynamic relationship between the
growth of output of a firm and the growth of productivity. Learning results in a more
efficient use of input factors as cumulative output grows: the same output can be produced
with less input, reflected in a downward shift of the average total cost curve.

Of these, social interaction effects and network effects are market-based, while scale effects
and learning effects are firm-based.
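
A minimal numerical sketch in Python of the two firm-based forms; the cost functions (a
fixed-plus-variable cost structure and a power-law learning curve) and all numbers are
assumed, textbook-style illustrations rather than material from the entry.

```python
# Minimal sketch (illustrative assumptions only) contrasting scale effects
# and learning effects, as defined in forms (3) and (4) above.

def average_total_cost(volume: float, fixed: float = 1000.0,
                       variable: float = 2.0) -> float:
    """Scale effect: spreading a fixed cost over a larger production volume
    moves the firm down along the average-total-cost curve."""
    return fixed / volume + variable

def unit_cost_after_learning(first_unit_cost: float, cumulative_output: float,
                             b: float = 0.3) -> float:
    """Learning effect, modelled here with an assumed power-law learning
    curve: unit cost falls with cumulative output, shifting the whole cost
    curve downward."""
    return first_unit_cost * cumulative_output ** (-b)

for q in (100, 1_000, 10_000):
    print(q, round(average_total_cost(q), 2),
          round(unit_cost_after_learning(10.0, q), 2))
```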

Applicability

The reason to study increasing returns is that the phenomenon is becoming more relevant in
today's increasingly information- and knowledge-based business environment. The rising
information and knowledge intensity of the business environment is expressed in the growing
prevalence of the services sector and of information products (e.g., software), and in the
increasing amount of knowledge required to configure and execute business processes. The
economic characteristics of information and knowledge endow them with an inherent
possibility of increasing returns when used as input factors. The interactive characteristics of
many IT-based technologies, goods and services make them prone to network effects and
social interaction effects.

Pitfalls / debate

There is a debate about whether different sectors have a different sensitivity to increasing
returns mechanisms. Theoretically, IT-based technologies, products and services often
possess interactive characteristics that make them more prone to network effects and social
interaction effects in the market, and they often possess cost structures that make them more
prone to scale effects and learning effects in the firm (Arthur, 1996). In our own research,
however, we did not find above-average scores of the IT-based sectors on the increasing
returns measurement scales and models.

Key references

Arthur, W.B. (1989). Competing technologies, increasing returns, and lock-in by historical
events. The Economic Journal, 99(394), 116-131.

Arthur, W.B. (1996). Increasing returns and the new world of business. Harvard Business
Review, July-August, 100-109.

Shapiro, C., & Varian, H.R. (1998). Information rules: A strategic guide to the network
economy. Boston (MA): Harvard Business School Press.

Key articles from TPM-researchers

Hartigh, E. den, & Langerak, F. (2001). Managing increasing returns. European Management
Journal, 19(4), 370-378.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

In multi-actor systems (specifically firms, markets), increasing returns can occur as a result
of:
- direct positive dependence of actors' preferences in a market (network effects, social
interaction effects)
- division of labour between actors in a firm

Related theories / methods

- Positive feedback
- Network externalities
- Verdoorn's law at firm level [ALSO SUBMITTED FOR CATALOGUE]

Editor

Erik den Hartigh (e.denhartigh@tudelft.nl)


INDUSTRIAL ECOLOGY

Definition

Industrial Ecology (IE) is the concept that ecology, economy and technology can be brought
together in a sustainable way enabling sustainable production and consumption systems. The
central idea is the analogy between natural and socio-technical systems. The word 'industrial'
does not only refer to industrial complexes but more generally to how humans use natural
resources in the production of goods and services. Ecology refers to the principle that our
industrial systems should incorporate principles exhibited within natural ecosystems.

Industrial ecology is also both the study and the concept of the flows of materials and energy
in industrial and consumer activities, of the effect of these flows on the environment, and of
the influence of economic, political, regulatory and social factors on the flow, use and
transformation of resources. The objective of industrial ecology is to understand better how
we can integrate environmental concerns into our economic activities. This integration, an
ongoing process, is necessary if we are to address current and future environmental concerns
(White, 1994).

Industrial ecology thus includes the study of industrial and societal material metabolism and
energy flows, as well as the study of the engineering and social aspects of realising sustainable
consumption and production systems. The concept includes working on and towards
sustainable industries and sustainable consumption practices, and on the transitions and
transition management needed to bring these about.

Applicability

The concept is applicable to all industrial and consumption systems in industrialised
countries.

Pitfalls / debate

There is a debate about whether industrial ecology should focus in particular on material and
energy flows, environmental analysis and eco-industrial parks / industrial symbiosis, or
whether it encompasses the full scope mentioned above.

Key references

Allenby, B.R. (1999). Industrial ecology: Policy framework and implementation. Prentice
Hall.
Ayres, R.U., & Ayres, L.W. (Eds.) (2002). A handbook of industrial ecology. Cheltenham,
UK: Edward Elgar.
Ehrenfeld, J. (2004). Industrial ecology: A new field or only a metaphor? Journal of Cleaner
Production, 12, 825-831.

Key articles from TPM-researchers

Dijkema, G. P. J., & L. Basson (2009). Complexity and industrial ecology: Foundations for a
transformation from analysis to action. Journal of Industrial Ecology, 13(2), 157-164.
Moors, E.H.M., & Mulder, K.F. (2002). Industry in sustainable development: The
contribution of regime changes to radical technical innovation in industry. International
Journal of Technology, Policy and Management, 2(4), 434-454.
Moors, E.H.M., Mulder, K.F., & Vergragt, P.J. (2005). Towards cleaner production: Barriers
and strategies in the base metals producing industry. Journal of Cleaner Production, 13, 657-
668.
Mulder, K. F. (1998). Sustainable consumption and production of plastics? Technological
Forecasting and Social Change, 58(1-2), 105-124.
Mulder, K., & Knot, M. (2001). PVC plastic: A history of systems development and
entrenchment. Technology in Society, 23(2), 265-286.
Mulder, K. F. (2007). From environmental management to radical technological change:
Industry in sustainable development. International Journal of Environmental Technology and
Management, 7(5-6), 513-526.
Verhoef, E. V., Van Houwelingen, J. A., Dijkema, G.P., & Reuter, M.A. (2006). Industrial
ecology and waste infrastructure development: A roadmap for the Dutch waste management
system. Technological Forecasting and Social Change, 73(3), 302-315.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Transition towards sustainable industry, Transition towards sustainable energy systems.

Related theories / methods

Transitions & system innovations towards sustainability (submitted), Ecological
modernisation, Industrial symbiosis.

Editor

Jaco Quist (j.n.quist@tudelft.nl)


















INSTITUTION

Definition

In Institutional Economics (IE) institutions are defined as systems of social rules that structure
human behaviour and coordinate human interaction.

- "Social" means man-made (note that an institution can be the result of human action and
not of design).
- "System" refers to layers of institutions and to relations between different domains (social,
political, economical and cultural).
- "Structure behaviour" refers to the agent-structure debate (methodological individualism,
collectivism and interactionism).
- "Coordinate human interaction" is about the need to coordinate all kinds of transactions
between agents; institutions reduce uncertainty and are durable, and in that sense they
contribute to lowering the transaction costs of coordinating behaviour.

Different types of institutions can be identified in the layered system shown in the Figure
below. The elements in the different blocks influence and structure individual behaviour and
the interaction between actors. In other words: in order to understand the behaviour of actors
in socio-technological systems like infrastructures, knowledge of the system of institutions is
required.

The Figure shows the elements one has to investigate from an IE point of view.

1. Values are the standards of judgement in a society, which are specified in norms
(informal rules) about how to behave according to those values.

2. Next, the Figure shows the formal institutions of laws and regulations, which are designed
through the private or public systems of collective decision-making. Private firms can for
instance design private courts of arbitration or standardization committees, whereas
public actors, through the political system, can design and enforce formal laws and
regulations.

3. Informal private rules of behaviour, which can spontaneously emerge in specific groups,
sectors or regions, also belong to the system of institutions.

4. Finally, the governance structures (institutional arrangements) of contracts and diverse
modes of organisation are presented.

Figure: A layered system of institutions (layers, from top to bottom: values; norms; legally
enforceable private and public rules, such as laws; informal private rules of behaviour;
governance structures)

Applicability

The concept of institutions is applied in almost all social sciences, and both the orthodoxy and
the heterodoxy in economics apply the concept. In the more mainstream approaches,
institutions are part of the environment, and as an exogenous variable they explain economic
growth, the level and type of innovations, the nature of the firm, and the like. In New
Institutional Economics (NIE) the focus is on designing efficient and effective formal rules of
the game, like property rights, competition laws, corporate laws, etc. ("Get the institutions
right"). Also in NIE, and especially in Transaction Cost Economics (TCE), the endogenous
variable is the institution of the governance structure: why are there firms, what determines
the boundaries of the firm, and why does a change in technology, as the exogenous variable,
have an impact on the governance structure? By analysing institutions, much of the behaviour
of actors can be explained, as well as the performance of systems like infrastructures.

Institutions can be analysed in a comparative-static way (given institution A, what kind of
behaviour and performance result, compared to institution B?). Such comparative institutional
analysis is the domain of New Institutional Economics. A process analysis of the evolution
and dynamics of institutions is also possible: why do institutions emerge, or why are they
designed? Why do they develop over time along a specific trajectory, or why do they radically
change at other times? Original Institutional Economics is more relevant for understanding
these issues of the dynamics of institutions.

Pitfalls / debate

Because institutions are of different natures and are studied from different disciplines, the
concept needs clear specification. A distinction between the exogenous and endogenous role
of institutions in the different theories is necessary as well. There is no disagreement about
the question of whether institutions matter, but there is a lot of debate on how they can best be
analysed. Is the economic efficiency perspective most appropriate, or the sociological power
perspective? Can institutions best be studied in isolation, or is the interaction between
institutions in different domains (cultural, political, economic, judicial) a necessity to be taken
on board? Much of the debate on the dynamics of institutions is about the role of spontaneous
evolution in contrast to the role of purposeful design.

Key references

Commons, J.R. (1931). Institutional economics. The American Economic Review, 21(4), 648-
657.
Hodgson, G.M. (2006). What are institutions? Journal of Economic Issues, 40(1), 1-25.

Key articles from TPM-researchers

Koppenjan, J., & Groenewegen, J. (2005). Institutional design for complex technological
systems. International Journal of Technology, Policy and Management, 5(3), 240-257.
Groenewegen, J., Spithoven, A., & Van den Berg, A. (2009, forthcoming). Institutional
economics: An introduction. New York: Palgrave.

Description of a typical TPM-problem that may be addressed using the theory / method

Unbundling of vertically integrated firms, regulation of privatised firms, public-private
partnerships, and different types of contracts between actors in infrastructures are just a few
examples of institutions that are central in TPM.

Related theories / methods

Governance, Economic governance, Socio-technological systems

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)






INTERACTIVE LEARNING

Definition

Interactive learning is the transfer of knowledge between actors engaged in the innovation
process. Generally, the literature distinguishes five kinds of learning: learning by searching,
learning by doing, learning by using, learning by interacting, and higher-order learning.
Learning by searching, or R&D, entails the creation of new knowledge at research institutes
or companies. Learning by doing consists of production skills which increase the efficiency of
production operations. Learning by using delivers specific production-related knowledge,
such as feedback on the performance of the system in an actual application, as well as
experience gained with placing or installing the innovation. Learning by interacting entails
the transfer of knowledge between different actors. Higher-order learning is a special form of
learning by interacting: social interaction between actors and negotiations can lead to learning
processes not only on the cognitive level but also with respect to values, attitudes and
underlying convictions.

Applicability

In the context of innovation systems research, and also in the research into transitions and
system innovation, interactive learning is relevant.

Pitfalls / debate

Up till now, no clear indicators for interactive learning have been formulated. It is therefore
very hard to measure directly whether learning has actually taken place. Kamp et al. (2004)
defined a number of conditions for learning that can be used as an indirect measure.

Key references

Garud, R. (1997). On the distinction between know-how, know-why, and know-what.
Advances in Strategic Management, 14, 81-101.
Grin, J., & Van de Graaf, H. (1996). Technology assessment as learning. Science, Technology
& Human Values, 21, 72-99.
Kamp, L., Smits, R., & Andriesse, C. (2004). Notions on learning applied to wind turbine
development in the Netherlands and Denmark. Energy Policy, 32(14), 1625-1637.
Nooteboom, B. (2001). Learning and innovation in organizations and economies. New York:
Oxford University Press.

Key articles from TPM-researchers

Kamp, L., Smits, R., & Andriesse, C. (2004). Notions on learning applied to wind turbine
development in the Netherlands and Denmark. Energy Policy, 32(14), 1625-1637.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Why were the Dutch not successful in transferring R&D knowledge to wind turbine
companies?

Related theories / methods

Functions of innovation systems, Transitions research, System innovation theory.

Editor

Linda Kamp (l.m.kamp@tudelft.nl)








































LEGAL CERTAINTY (RECHTSZEKERHEID)

Definition

Legal certainty is the principle that legal rules be clear and precise, and aims to ensure that
situations and legal relationships remain foreseeable.

Applicability

In all legal research fields (particularly administrative law, EC-Law and legal philosophy)
legal certainty is important.

Pitfalls / debate

Legal certainty does not necessarily imply legal rigidness.

Key references

Raitio, J. (2003). The principle of legal certainty in EC law. Series: Law and Philosophy
Library, 64. New York: Springer.
Van Wijk, H.D., Konijnenbelt, W., & Van Male, R. (2008). Hoofdstukken van bestuursrecht.
Den Haag: Elsevier Juridisch.

Key articles from TPM-researchers

Stoter, W.S.R., & Vermaat, M.F. (2005). Burgers hebben er recht op te weten wat ze van de
overheid kunnen verwachten. SMA, 6, 293-296.
Stout, H.D. (1994). De betekenissen van de wet: Theoretisch-kritische beschouwingen over
het principe van wetmatigheid van bestuur (Dissertatie, UvA). Serie Rechtsstaat en sturing.
Zwolle: W.E.J. Tjeenk Willink.
Van Rij, H.E. (2008). Improving institutions for green landscapes in metropolitan areas
(Dissertation, TU Delft). Delft: IOS press.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

In the case of compulsory purchase for the development of new infrastructures, there can be a
tension between efficiency (cheap acquisition of land) and legal certainty (respecting the
rights of the owner of the land).

Related theories / methods

Institutional economic theory (Williamson) and a wide range of legal concepts.

Editor

H.E. (Evelien) van Rij (h.e.vanrij@tudelft.nl)


MULTISOURCE MULTIPRODUCT (ENERGY) SYSTEMS

Brief discussion

Historically, energy conversion was seen as a one-dimensional system, in the sense that one
form of energy was converted into another form. By-products of the conversion, such as heat,
were disregarded and/or treated as waste. Cogeneration is a first step towards system
improvement, since the waste heat is recovered and used as a valuable product. Tri-generation
systems, which take the concept even further, are proposed for the simultaneous production of
chemicals, power and heat, and are integrated into larger systems, such as chemical plants, to
achieve increased overall performance. In these systems electricity may just be a by-product.
However, co- and tri-generation systems are still characterized by one energy input source.
The concept of multi-source multi-product (MSMP) systems extends this idea further.
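
A minimal numerical sketch in Python of why recovering by-product heat raises overall
conversion efficiency; the energy figures are assumed for illustration and are not taken from
the entry.

```python
# Minimal sketch (made-up illustrative numbers) of the efficiency gain from
# recovering by-product heat, as in the cogeneration step described above.

def overall_efficiency(fuel_in: float, electricity_out: float,
                       useful_heat_out: float) -> float:
    """First-law efficiency: all useful energy outputs divided by the
    energy content of the fuel input (same units throughout)."""
    return (electricity_out + useful_heat_out) / fuel_in

# One-dimensional view: heat is treated as waste, only electricity counts.
print(overall_efficiency(fuel_in=100.0, electricity_out=40.0,
                         useful_heat_out=0.0))    # 0.40
# Cogeneration: part of the by-product heat is recovered as a valuable product.
print(overall_efficiency(fuel_in=100.0, electricity_out=40.0,
                         useful_heat_out=45.0))   # 0.85
```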

Applicability

The concept is predominantly to be applied to energy conversion systems.

Pitfalls / debate

Introducing more inputs to an energy system makes the system more complex; it might also
require more areas of competence and more actors and stakeholders (which makes it all the
more interesting for TPM).

Key references

Hemmes, K. (2009, January-June). (Towards) Multi-Source Multi-Product and other
integrated energy systems. The International Journal of Integrated Energy Systems, 1(1), 1-
15.
Chicco, G., & Mancarella, P. (2009). Distributed multi-generation: A comprehensive view.
Renewable and Sustainable Energy Reviews, 13, 535-551.

Key articles from TPM-researchers

Hemmes, K. (2009, January-June). (Towards) Multi-Source Multi-Product and other
integrated energy systems. The International Journal of Integrated Energy Systems, 1(1), 1-
15.
Hemmes, K. (2007). Chapter 2 and other contributions to Duurzaamheid duurt het langst:
KNAW rapport van de verkenningscommissie duurzame energieconversies (in Dutch; report
of the Royal Academy of Sciences foresight committee on sustainable energy conversions).
Amsterdam. ISBN 978-90-6984-526-5.

Description of a typical TPM-problem that may be addressed using the theory / method

Accelerating the introduction of renewable energy into the existing fossil energy based
society.

Related theories / methods

Systems thinking, Multifunctional systems, Quantitative analysis

Editor

Kas Hemmes (k.hemmes@tudelft.nl)



























OPERATIONAL READINESS (OR)

Definition

Operational Readiness is about creating an organisation that places the right people in the
right places at the right times, working with the right hardware according to the right
procedures and management controls to create an operation that realises a system goal.
Operational Readiness also requires that these elements function in an environment which is
conducive to good performance.

Within the philosophy of operational readiness, the Nertney Wheel provides a simple
representation of the main ideas, see Figure below and Nertney (1987). The outside of the
circle represents the beginning of the development process: at this point none of the
developmental tasks needed to achieve readiness have been started. The segments of the circle
alternate between subsystems and interfaces. The subsystems correspond to the three elements
People, Plant & Equipment, and Procedures & Management (PPP) controls. Each of these
subsystems needs to be developed in step with the others. Each concentric circle represents a
step. For example, the selection and training of personnel needs to be keyed to the procedures
and management controls for the operational tasks. Similarly, the design of procedures and
management controls needs to take account of the characteristics and needs of the people who
will actually use them.


Figure: The Nertney Wheel developmental model of operational readiness

Life-cycle phases, from conceptualisation via design & development to commissioning, are
visible in the Nertney Wheel from the outside inwards. Maintaining Operational Readiness
requires an adequately designed maintenance programme. Unbalanced development and
configuration of the PPP ingredients increases operational risks. Thus, an accident
demonstrates an imbalance that needs to be corrected; and the OR wheel shows that there are
several strategies to regain Operational Readiness.

Applicability

The concept of Operational Readiness is applicable to new operations and also to incident
review. The prospective use in new operations is accomplished by proceeding along the
selection/development lines indicated in the Figure above. The retrospective use in learning
from critical experiences is equally important, as the OR concept provides a sensible answer
to the question of what there is to learn from operational surprises.

Pitfalls / debate

In dealing with complex systems, it is necessary that orderly methods for tracking and
displaying readiness information be established. These cannot always be based on final
product inspection. For example, certain hardware elements must be inspected prior to the
time that they are made physically inaccessible through construction. Similarly, it is not
always possible to determine whether a procedure was "reviewed" or whether an operator was
"trained" by simple observation after the fact. Readiness determination, therefore, must be
based to some degree on records, certification, spot checks and descriptions of processes as
well as on product examination.

Key references

Kingston, J., Frei, R., Koornneef, F., & Schallier, P. (2007). DORI - Defining operational
readiness to investigate. WP1. NRI Foundation.
Nertney, R.J. (1987). Process operational readiness and operational readiness follow-on.
SSDC39. System Safety Development Center, Idaho Falls, USA. Available from nri.eu.com.

Key articles from TPM-researchers

Koornneef, F., Kingston, J., Beauchamp, E., Verburg, R.M., & Akselsson, R. (2008). Key
issues on sharing and transformation of lessons from experiences by actor organisations in the
aviation industry. 9th International Probabilistic Safety Assessment and Management
Conference (PSAM9). Hong Kong.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

A holistic approach to multi-actor systems regarding the assurance of intentional and critical
functioning throughout all phases of a system's life cycle. A framework for design and for
learning from experiences.

Related theories / methods

Life Cycle Analysis, MORT Safety Assurance programme, Organisational Learning.

Editor

Floor Koornneef (f.koornneef@tudelft.nl)
(ORGANISATIONAL) SAFETY CULTURE/ SAFETY CLIMATE

Definition

Safety culture has been defined by Guldenmund (2000) as "those aspects of the
organisational culture which will impact on attitudes and behaviour related to increasing or
decreasing risk".

(Organisational) safety culture is a construct that refers to the deeply held assumptions about
safety in the organisation, and how to deal with it, shared by (most) organisational members
(or various other significant organisational groups). Safety culture is learned through various
socialisation processes and it provides for behavioural norms and facilitates the understanding
of daily working reality.

Safety climate is a construct that is exclusively used in the context of organisations. It refers
to the development, observance and enforcement of safety policies and procedures within
organisations and it is measured exclusively through (self-administered) questionnaires.

Applicability

Safety culture is used both in a descriptive/explanatory and in an instrumental way. That is,
safety culture is often considered to be a given, and possible intervention measures or
strategies are adapted to fit its specific nature. Another take on safety culture considers it more
or less manageable, and organisational intervention measures are undertaken to change it
towards a desirable end.

Safety climate is often used as an explanatory variable, or in a predictive way, i.e. to explain a
current accident rate or to predict a future rate.

Pitfalls / debate

The field of safety culture is not typified by heated debates, although opinions on definition
and assessment differ significantly. It is, however, important to decide beforehand how to
envision the construct of safety culture: either as a given or a more malleable phenomenon.

Safety climate is, compared to safety culture, more clearly delineated theoretically. Also, the
method with which to measure safety climate (self-administered questionnaires) has
crystallised out better.

Key references

Important papers on safety culture and safety climate have been published in:
- Safety Science
- Work and Stress
- Journal of Safety Research
- Journal of Applied Psychology
- Journal of Accident Analysis and Prevention

Key articles from TPM-researchers

Guldenmund, F. W. (2000). The nature of safety culture: A review of theory and research.
Safety Science, 34(1-3), 215-257.
Guldenmund, F. W. (2007). The use of questionnaires in safety culture research: An
evaluation. Safety Science, 45, 723-743.
Guldenmund, F. W. (2008). Safety culture in a service company. Journal of Occupational
Health and Safety - Australia and New Zealand, 24(3), 221-235.
Guldenmund, F.W. (2009). Understanding and exploring safety culture (PhD thesis). Delft
University of Technology.
Guldenmund, F. W., & Swuste, P. H. J. J. (2001). Veiligheidscultuur: Toverwoord of
onderzoeksobject? (Safety culture: Magic word or research object?) (In Dutch). Tijdschrift
voor Toegepaste Arbowetenschap, 14(4), 2-8.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

- Companies that have recurrent safety problems (incidents, accidents) but that have, at first
glance, an excellent safety management system implemented.
- Companies that have trouble getting various safety interventions into effect.
- Possible consequences for safety in case of mergers.

Related theories / methods

Safety management, Survey research, Qualitative research, Behavioural based safety.

Editor

Frank Guldenmund (f.w.guldenmund@tudelft.nl).





















POLICY ENTREPRENEURSHIP

Brief discussion

In the policy sciences the importance of the policy entrepreneur as a source of creativity and
innovation, and as a catalyst for social learning, public-sector renewal, societal transitions and
radical change, has long been recognized (Cohen, 1988; Currie, Humphreys, Ucbasaran, &
McManus, 2008; King & Roberts, 1992; Roberts, 1998). According to the policy literature,
skilful leadership by policy entrepreneurs can help navigate coalitions towards policy
victories (Kingdon, 1984), and public-sector entrepreneurs are able to proactively solve
problems before they become crises (Borins, 2000). Recently, attention to the role of
entrepreneurs in policy innovation has been fuelled by the encouragement of public-sector
entrepreneurship in the New Public Management literature and the related discussion on their
accountability (Bernier & Hafsi, 2007; Borins, 2000; Denhardt & Denhardt, 2000).

In the literature on policy entrepreneurship, observations on policy entrepreneurs acquired a
more personal touch. Mintrom emphasized their networking abilities and political savviness
(Mintrom, 1997). Roberts pointed to their use of rhetoric, symbols and analysis and their
ability to develop strategies and coalitions (Roberts, 1998). The hypothesis that the
personality profiles of policy entrepreneurs matter is bolstered by numerous observations by
prominent policy scientists. For example, when Mintrom describes the role of policy
entrepreneurs and how they operate in policy innovations, he endows them with talents such
as being able to spot problems, being prepared to take risks and having the ability to organize
others (Mintrom, 1997). Kingdon, too, in his treatment of agenda setting, recognizes the skills
entrepreneurs require to bring about actual policy change (Kingdon, 1984).

Applicability

Policy diffusion, stability and change.

Pitfalls / debate

/

Key references

Bernier, L., & Hafsi, T. (2007). The changing nature of public entrepreneurship. Public
Administration Review, 67, 488-503.
Borins, S. (2000). Loose cannons and rule breakers, or enterprising leaders? Some evidence
about innovative public managers. Public Administration Review, 60, 498-507.
Cohen, S. (1988). The effective public manager: Achieving success in government. San
Francisco: Jossey-Bass Publishers.
Currie, G., Humphreys, M., Ucbasaran, D., & McManus, S. (2008). Entrepreneurial
leadership in the English public sector: Paradox or possibility? Public Administration, 86,
987-1008.
Denhardt, R.B., & Denhardt, J.V. (2000). The new public service: Serving rather than
steering. Public Administration Review, 60, 549-559.
King, P.J., & Roberts, N.C. (1992). An investigation into the personality profile of policy
entrepreneurs. Public Productivity & Management Review, 16, 173-190.
Kingdon, J.W. (1984). Agendas, alternatives, and public policies. Boston: Little, Brown.
Levin, M.A., & Sanger, M.B. (1994). Making government work: How entrepreneurial
executives turn bright ideas into real results. San Francisco: Jossey-Bass.
Mintrom, M. (1997). Policy entrepreneurs and the diffusion of innovation. American Journal
of Political Science, 41, 738-770.
Mintrom, M., & Vergari, S. (1996). Advocacy coalitions, policy entrepreneurs, and policy
change. Policy Studies Journal, 24, 420-434.
Roberts, N. C. (1998). Radical change by entrepreneurial design. Acquisition Review
Quarterly.
Roberts, N. C., & King, P. J. (1996). Transforming public policy: Dynamics of policy
entrepreneurship and innovation. San Francisco, Calif.: Jossey-Bass Publishers.

Key articles from TPM-researchers

Timmermans, J., de Haan, H., & Squazzoni, F. (2008). Computational and mathematical
approaches to societal transitions. Computational & Mathematical Organization Theory, 14,
391-414.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

/

Related theories / methods

- Agenda setting

- Punctuated equilibrium theory in public policy

- Transformative leadership

Bass, B.M. (1985). Leadership and performance beyond expectations. New York; London:
Free Press; Collier Macmillan.
Bass, B.M. (1990). From transactional to transformational leadership: Learning to share the
vision. Organizational Dynamics, 18, 19-31.
Bass, B.M. (1997). Does the transactional-transformational leadership paradigm transcend
organizational and national boundaries? American Psychologist, 52, 130-139.
Hater, J.J., & Bass, B.M. (1988). Superiors' evaluations and subordinates' perceptions of
transformational and transactional leadership. Journal of Applied Psychology, 73, 695-702.

Editor

Jos Timmermans (j.s.timmermans@tudelft.nl)

PUBLIC VALUES

Definition

Public values are those values that government seeks to secure in a specific sector. Mostly
used in the context of sectors in which the role of government has recently been reduced,
public values represent what government seeks in a sector when the role of the private sector
grows and market mechanisms are used more strongly. Public values are related to, but not
the same as, public goals, which suggest direct intervention and levels of attainment, whereas
values allow for a less interventionist approach. A closely related synonym is public interest,
mostly used as a more general term.

Applicability

The concept is best applied in discussions of government involvement in various sectors.
Compared to traditional public-private discussions, it pinpoints better what governments
seek from a sector. It diverts the discussion from a stifling black-and-white debate on whether a
sector should be publicly or privately run. The reality is that most sectors show public and
private involvement. The public values concept allows for a more targeted (not necessarily
limited) definition of the public role in a sector. The concept also shows two important
aspects of intervention more clearly: value operationalisation and value conflicts. First, public
values are operationalised when passed on from their original inception in the public debate
(privacy, safety, availability), via the interventions of regulators (data retention rules, health
and safety regulation, downtime malus systems), to the complexity of implementing all those
interventions in the operation. Second, in operation, value conflicts surface, with privacy
asking for different operational behaviour (shed user data for privacy) than security (retain
user data for security).

Pitfalls / debate

An important pitfall is that the concept tends to be used in the same way as public goals:
define, prioritise, and intervene. Using public values as a synonym of public goals reduces the
added value of the concept. Research has shown that public values are generally dynamic,
often ambiguous, and show no fixed priority. Though a short-list of values may be construed
for a specific sector, the interpretations and priorities are often flexible. The concept more
clearly shows the need to develop institutions and governance that can deal with that
flexibility.

Key references

Bozeman, B. (2002). Public value failure and market failure. Public Administration Review,
62(2), 145-161.
Jørgensen, T.B. (2006). Value consciousness and public management. International Journal
of Organization Theory and Behavior, 9(4), 510-536.

Key articles from TPM-researchers

De Bruijn, H., & Dicke, W. (2006). Strategies for safeguarding public values in liberalized
utility sectors. Public Administration, 84(3), 717-735.

Veeneman, W., Dicke, W., & De Bruijne, M. (2009). From clouds to hailstorms: A policy and
administrative science perspective on safeguarding public values in networked infrastructures.
International Journal of Public Policy, 4(5), 414-434.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

In various infrastructure sectors the question has been raised what the best way is to
institutionalize the role of government. From the 1990s allowing a stronger role for the
private sector seemed promising. In recent years, the complexities of the introduction of the
market in various sectors have surfaced. The concept helps understand these complexities and
shows alternative ways forward in securing the public interest. It shows how to define the
public role and shows the need of and possibilities for a more flexible way of securing public
values beyond standard regulatory regimes.

Related theories / methods

/

Editor

Wijnand Veeneman (W.W.Veeneman@tudelft.nl)


























RESILIENCE [1]


Definition

Resilience can be defined as: the capacity of a social-technical system to proactively adapt to
and recover from disturbances that are perceived within the system to fall outside the range of
normal and expected disturbances.

Applicability

The term resilience has many meanings in academic discourse. It is derived from the Latin
word resilio, meaning to jump back. In physics and engineering, resilience refers to the
ability of a material to return to its former shape after a deformation and is considered more or
less synonymous with adaptability or flexibility.

When applied to social(-technical) entities such as societies or organizations that manage
infrastructures, resilience refers to their ability to continue to exist, or to remain more or
less stable, in the face of surprise. Resilience can be considered a decision-making strategy to
deal with risk and uncertainty. It emphasizes the ability of systems or persons to cope with
hazards and provides insights into what makes a system more or less vulnerable.

Pitfalls / debate

In the past decades, research on resilience has been conducted at various levels of analysis
(e.g. the individual, the group, and the organizational or community level) in a wide variety
of disciplines within the social sciences, ranging from psychology, ecology (ecosystem
management), organization and management sciences, and the group/team literature to safety
management and disaster and crisis management.

Resilience has emerged as a key concept in various academic disciplines that have
taken an interest in the capacity of social systems to resist adversity and deal with
uncertainty and change. As a result, resilience has become a multifaceted and multi-
interpretable concept, and because of its widespread use in a wide variety of disciplines,
various types of resilience can be identified (see De Bruijne et al., forthcoming). Two
different types of resilience are of particular importance.

Engineering resilience pertains to the capacity of something or someone to bounce back to a
normal condition following some shock or disturbance. This type of resilience may be
considered a measure of a system's stability: the ability of a system to return to an
equilibrium state after a temporary disturbance (Holling, 1973: 17).

Ecological resilience refers to a measure of the amount of change or disruption that is
required to transform a system from being maintained by one set of mutually reinforcing
processes and structures to a different set of processes and structures (Bennett et al.,
2005: 945). This definition assumes that systems continuously change in structure and
function through time.
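
The difference between the two types can be illustrated with a small numerical sketch
(hypothetical: the double-well system and all numbers are chosen for illustration and do not
come from the cited works). Engineering resilience concerns the return to the same
equilibrium after a small shock; ecological resilience concerns the size of the shock needed
to push the system into a different regime.

# Hypothetical sketch in Python: a double-well system dx/dt = -x(x^2 - 1)
# with two stable equilibria, at x = -1 and x = +1.

def simulate(x0, dt=0.01, steps=5000):
    """Relax the system from state x0 and return its final state."""
    x = x0
    for _ in range(steps):
        x += -x * (x * x - 1.0) * dt
    return x

equilibrium = 1.0

# Engineering resilience: a small disturbance decays back to the same state.
print(simulate(equilibrium + 0.3))   # ~1.0: the system bounces back

# Ecological resilience: a disturbance large enough to cross the basin
# boundary (here at x = 0) flips the system into the other regime.
print(simulate(equilibrium - 1.2))   # ~-1.0: a different set of processes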


[1] This description is based on excerpts from De Bruijne et al. (forthcoming).
Key references

Bennett, E., Cumming, G., & Peterson, G. (2005). A systems model approach to determining
resilience surrogates for case studies. Ecosystems, 8(8), 945-957.
Comfort, L. K., Boin, A., & Demchak, C. C. (Eds.). Designing resilience for extreme events:
Sociotechnical approaches. Pittsburgh: Pittsburgh University Press.
Gunderson, L. H., & Holling, C. S. (Eds.). (2002). Panarchy: Understanding transformations
in human and natural systems. Washington DC: Island Press.
Holling, C. S. (1973). Resilience and stability of ecological systems. Annual Review of
Ecology and Systematics, 4(1), 1-23.
Holling, C. S. (1996). Engineering resilience versus ecological resilience. In P. C. Schulze
(Ed.), Engineering within ecological constraints (pp. 31-44). Washington D.C.: National
Academy Press.
Wildavsky, A. B. (1991). Searching for safety. New Brunswick: Transaction publishers.

Key articles from TPM-researchers

De Bruijne, M.L.C., Boin, A., & Van Eeten, M.J.G. (forthcoming). Resilience: Exploring the
concept and its meaning. In L.K. Comfort, A. Boin & C.C. Demchak (Eds.), Designing
resilience for extreme events: Sociotechnical approaches. Pittsburgh: Pittsburgh University
Press.
Hale, A., & Heijer, T. (2006). Defining resilience. In E. Hollnagel, D. D. Woods, & N.
Leveson (Eds.), Resilience engineering, concepts and precepts (pp. 35-40). Aldershot:
Ashgate.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Because of its widespread use in a variety of disciplines, resilience is a concept that is of
interest to all socio(-technical) systems that are the subject of study at TPM in a variety of
ways. However, over the last years, resilience has particularly emerged as an important
concept in the safety and security management fields (e.g. critical infrastructure protection;
CIP).

Related theories / methods

Wildavsky (1991) contrasts resilience with anticipation.

Editors

Mark de Bruijne (M.L.C.deBruijne@tudelft.nl)
Michel van Eeten (m.j.g.vaneeten@tudelft.nl)






RESPONSIBLE INNOVATION

Definition

Responsible innovation is innovation that minimises unwanted side-effects of the
production and use of innovations and integrates social, environmental and ethical aspects in
the innovation process. Responsible innovation is strongly related (1) to sustainable
innovation, in the sense that both take as a starting point the optimisation of social and
ecological benefits, and (2) to technology assessment, in the sense that it aims to prevent
negative social side-effects, which is especially relevant in the case of emerging technologies
that are likely to have major impacts on society.

Exploring the ethical and societal aspects of science and technology goes back several
decades. Societal aspects have been researched and debated in assessments in various forms,
such as technology assessment (TA, in variants such as constructive and participative
technology assessment) and impact assessments (including societal and economic impact
assessments, risk analyses). In addition, ethical aspects of science and technology have been
mapped out and analysed, for example during research into Ethical, Legal and Social Aspects
(known as ELSA research).

Applicability

The concept applies to all technological developments and innovations for which it is
reasonable to suspect that they will have a large impact (whether positive or negative) on
people and/or society. Those developments and innovations concern new technologies (such
as ICT, nanotechnology, biotechnology and neural sciences) as well as technological systems
in transition (for example agriculture and healthcare). It also applies to sustainable
innovations as the aim here is to realise innovations having a much lower environmental
impact, while meeting social and ethical demands at the same time.

Pitfalls / debate

The debate could be whether and how the concept of responsible innovation relates to the
concept of sustainable innovation.

Key references

Hellström, T. (2003). Systemic innovation and risk: Technology assessment and the
challenge of responsible innovation. Technology in Society, 25, 369-384.
Kemp, R., Rip, A., & Schot, J. (2001). Constructing transition paths through the
management of niches. In R. Garud & P. Karnøe (Eds.), Path dependence and creation
(pp. 269-299). Mahwah NJ / London UK: LEA Publishers.
Callon, M., Law, J., & Rip, A. (Eds.) (1986). Mapping the dynamics of science and
technology: Sociology of science in the real world. London UK: Macmillan Press Ltd.
Rip, A., Misa, T., & Schot, J. (Eds.) (1995). Managing technology in society. London /
New York: Pinter.
Schot, J, & Rip, A. (1996). The past and future of constructive technology assessment.
Technological Forecasting and Social Change, 54, 251-268.
Grin, J., & Grunwald, A. (Eds.). (2000). Vision assessment, shaping technology in the 21st
century: Towards a repertoire for technology assessment. Berlin: Springer Verlag.

Key articles from TPM-researchers

Knot, J.C.M., Van den Ende, J., Mulder, K., & Vergragt, P. (2001). Flexibility strategies
for sustainable technology development. Technovation, 21(6), 335-343.
Van den Ende, J., Mulder, K., Knot, M., Moors, E., & Vergragt, P. (1998). Traditional and
modern technology assessment: Toward a toolkit. Technological Forecasting and Social
Change, 58, 5-21.
Mulder, K. (2001). TA und Wirtschaft: Die Niederlande. In N. Malanowski, C. P. Kruck,
& A. Zweck (Eds.), Technology Assessment und Wirtschaft (pp.131-146). Frankfurt/New
York: Campus Verlag.
Mulder, K. F. (2005). Innovation by disaster: The ozone catastrophe as experiment of
forced innovation. International Journal of Environment and Sustainable Development,
4(1), 88-103.
Mulder, K. F. (2007). Innovation for sustainable development: From environmental design
to transition management. Sustainability Science, 2, 253-263.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Development and introduction of new technologies in society and the actor opinions and
perceptions on that.

Related theories / methods

Technology Assessment, ELSA.

Editor

Jaco Quist (j.n.quist@tudelft.nl)
















SAFE ENVELOPE OF OPERATIONS

Definition

The safe envelope of operations is a concept meant to combine risk control and process
control in one model. It allows accident mechanisms to be found and illustrated by displaying
system states together with several boundaries within which the system state must be kept for
the system to remain controllable, safe and on course towards the desired goal.
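
A rough sketch of the idea (hypothetical: the state dimensions, envelope names and bounds
are invented for illustration and are not taken from the cited papers) treats a system state
as a point in a multi-dimensional space and each envelope as a set of per-dimension
boundaries; analysis then amounts to checking which envelopes still contain the current state.

# Hypothetical Python sketch: a multi-dimensional system state checked
# against nested envelopes, ordered from innermost to outermost.
ENVELOPES = {
    "operational":     {"speed": (0, 120), "brake_temp": (0, 300)},
    "controllability": {"speed": (0, 160), "brake_temp": (0, 450)},
    "viable":          {"speed": (0, 200), "brake_temp": (0, 600)},
}

def within(state, envelope):
    """True if every dimension of the state lies inside the envelope."""
    return all(lo <= state[dim] <= hi for dim, (lo, hi) in envelope.items())

def classify(state):
    """Return the innermost envelope that still contains the state."""
    for name, envelope in ENVELOPES.items():
        if within(state, envelope):
            return name
    return "outside viable envelope (damaged system states)"

print(classify({"speed": 110, "brake_temp": 250}))  # operational
print(classify({"speed": 150, "brake_temp": 250}))  # controllability
print(classify({"speed": 210, "brake_temp": 250}))  # outside viable envelope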



Applicability

The concept was developed for complex dynamic safety systems such as those in aviation and
railways, since it was felt that other models fall short in describing these sufficiently well.
However, the safe envelope principles can also be applied within generic systems theory.
Since the model takes a functional approach, it fits in principle a broad range of fields, and it
connects well to a systems approach and to control theory.

Pitfalls / debate

First applications of the model showed that it is an elaborate model, since it comprises multi-
dimensional system states which each have four boundaries. Applying the concept too quickly
easily leads to certain dimensions or boundaries being overlooked; this oversimplification
means that the goal of finding accident mechanisms cannot be fully achieved.

The concept is not meant to fully replace existing accident and risk control models, but it
provides additional insights into accident mechanisms and brings these in relation to intended
system behaviour.

[Figure: the safe envelope of operations, showing undamaged, controllable, accepted and
damaged system states, bounded by (1) the viable envelope, (2) the controllability envelope,
(3) the operational envelope and (4) the attainability envelope for t = t+Δt, together with
(5) the safety margin, (6) the system state, (7) the goal state and (8) the deviation from
the goal state.]

Key references

Ale, B. (2009). Risk: An introduction. Oxon: Routledge.
Hale, A. R., Ale, B. J. M., et al. (2007). Modeling accidents for prioritizing prevention.
Reliability Engineering & System Safety, 92(12), 1701-1715.
Van den Top, J., & Jagtman, H. M. (2007). Modelling risk control measures in railways.
Paper presented at the ESREL 2007 - Risk, Reliability and Societal Safety, Stavanger,
Norway.
Van den Top, J., Jagtman, H. M., et al. (submitted for publication). Safe envelope of
operations: A new framework to analyse rail safety and model control measures. Safety
Science.

Key articles from TPM-researchers

Ale, B. (2009). Risk: An introduction. Oxon: Routledge.
Hale, A. R., Ale, B. J. M., et al. (2007). Modeling accidents for prioritizing prevention.
Reliability Engineering & System Safety, 92(12), 1701-1715.
Van den Top, J., & Jagtman, H. M. (2007). Modelling risk control measures in railways.
Paper presented at the ESREL 2007 - Risk, Reliability and Societal Safety, Stavanger,
Norway.
Van den Top, J., Jagtman, H. M., et al. (submitted for publication). Safe envelope of
operations: A new framework to analyse rail safety and model control measures. Safety
Science.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

The model has been used during TPM-research for combining the influence of design,
management and operations on both quality of service and safety in railway systems.
However, given the functional approach of the concept, it is applicable to any system.

Related theories / methods

The safe envelope concept combines well with cybernetics or process control theory.
Combination with the functional modelling technique SADT / IDEF0 has been proposed, but
not yet been elaborated further.

Via the broader field of cybernetics, the safe envelope can also be connected to other fields
such as management science (e.g. Viable Systems Model, Plan-Do-Check-Act), Human
Factors (situational awareness) and engineering (process control theory).

Editor

Jaap van den Top (j.vandentop@tudelft.nl)







SAFETY MANAGEMENT SYSTEM (SMS)

Definition
The Dutch safety management model has been developed in a line of projects (I-RISK,
ARAMIS, WORM, CATS) since 1993. The basic idea is to see safety management as a
secondary process that allocates suitable resources and delivers proactive controls to the
primary business functions, governing the way in which these are carried out. The decisions
as to what these resources and controls are, taken at the management level, are governed by
secondary management processes, which we have called the delivery systems. Safety
management is thus conceptualized as a system that ensures that the primary business control
functions perform well during their life cycle.

Figure: Generic safety management system with learning loops

The figure depicts a safety management system at the functional level, in which the business
process forms the heart and process design, learning-process trigger mechanisms and
regulatory influences fit in coherently. The operational business process is represented in
block 1, the controls and barriers put in place to manage operational dependability risks in
block 3, and the risk management system that shapes the operational business processes in
block 4. Several mechanisms trigger organisational learning from operational experiences:
incident review in block 7, inspection and performance monitoring in block 5, and system
auditing in block 6. Lessons to be learned come out of the learning system and need to be
translated into the specific operational contexts, and implemented there, in order to close the
learning process. Learning might also be triggered in society, block 8, resulting in regulatory
requirements that need to be transformed internally in order to adjust the relevant business
processes represented in blocks 4, 3 and ultimately 1.
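
The block structure described above can be sketched as a directed graph in which the
learning loops appear as feedback edges (a hypothetical representation for illustration only;
block numbers follow the figure description, and block 2 is omitted because it is not named
there).

# Hypothetical Python sketch of the generic SMS blocks and learning loops.
BLOCKS = {
    1: "operational business process",
    3: "controls and barriers",
    4: "risk management system",
    5: "inspection and performance monitoring",
    6: "system auditing",
    7: "incident review",
    8: "society / regulatory requirements",
}

# (source, target) edges: monitoring, auditing and incident review feed
# lessons back into risk management; society feeds in regulatory
# requirements; lessons cascade via barriers into operations.
LEARNING_LOOPS = [(5, 3), (6, 4), (7, 4), (8, 4), (4, 3), (3, 1)]

def lessons_reaching(block, edges=LEARNING_LOOPS):
    """All blocks whose lessons eventually reach the given block."""
    sources, frontier = set(), {block}
    while frontier:
        frontier = {s for s, t in edges if t in frontier} - sources
        sources |= frontier
    return sorted(sources)

print(lessons_reaching(1))  # [3, 4, 5, 6, 7, 8]: all loops close on block 1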

Applicability

Safety Management Systems exist or are being implemented in all sectors of safety-critical
activities, including transportation, chemical, nuclear industries, health care, etc.

Pitfalls / debate

From a scientific point of view, some have debated the effectiveness of safety management
systems in controlling risk. Since accident data are usually rare and hard to obtain, and since
management influences on accidents are usually not closely related to them in time and
space, it is difficult for a safety analyst to establish whether accident events are due to the
implementation of the safety management system or to other factors.

Key references

Hale, A.R., & Baram, M. (1998). Safety management: The challenge of change. Oxford:
Pergamon Press.
Glendon, A.I., Clarke, S., & McKenna, E.F. (2006). Human safety and risk management (2nd
ed.). Boca Raton: CRC Press.

Key articles from TPM-researchers

Akselsson, R., Ek, Å., Koornneef, F., Stewart, S., & Ward, M. (2009). Resilience safety
culture. 17th World Congress on Ergonomics IEA2009, 9-14 August, Beijing, CN.
Hale, A.R., Heming, B.H.J., Carthey, J., & Kirwan, B. (1997). Modelling of safety
management systems. Safety Science, 26(1-2), 121-140.
Guldenmund, F., Hale, A., Goossens, L., Betten, J. & Duijm, N. (2006). The development of
an audit technique to assess the quality of safety barrier management. Journal of Hazardous
Materials, 130, 234-241.
Hale, A.R., Goossens, L.H.J., Ale, B.J.M., Bellamy, L.A., Post, J., Oh, J.I.H. & Papazoglou,
I.A. (2004). Managing safety barriers and controls at the workplace. In C. Spitzer, U.
Schmocker & V.N. Dang (Eds.), Probabilistic safety assessment & management (pp. 608-
613). Berlin: Springer Verlag.
Papazoglou, I.A., Bellamy, L.J., Hale, A.R., Aneziris, O.N., Ale, B.J.M., Post, J.G. & Oh,
J.I.H. (2003). I-Risk: Development of an integrated technical and management risk
methodology for chemical installations. Journal of Loss Prevention in the Process Industries,
16, 575-591.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Understanding socio-technical systems that depend on managing dependability risks by
design and in real time, and what it takes to configure such a system in order to assure its
dependability within its hosting global environment. Safety management theory has been
applied in the CATS project (Causal Modelling for Air Transport Safety) to quantify the
influences of different elements (technical, human, management decisions) on risk.

Related theories / methods

SADT, Safe Envelop, CATS, Life Cycle, Organisational Learning, Culture

Editor

Pei-Hui Lin (p.h.lin@tudelft.nl)






























SELF-MANAGEMENT

Brief discussion

Complex systems are becoming increasingly difficult for humans to manage. They often need
to adapt to their dynamic open environments on their own, with little or no human
intervention: to manage themselves, to be capable of self-management. Such systems need to
be designed to create virtual organizations to accomplish a task, reassembling themselves if
something changes or fails. They need to be capable of self-organisation and self-healing.

The concept of self-management (i.e. autonomic systems) relies on local knowledge:
knowledge within individual systems, and/or knowledge within groups of systems acquired as
a result of interaction. Approaches to self-management differ significantly in the amount of
explicit knowledge involved, but are always based on the notion of local interaction within
organizations of systems. Knowledge-based approaches, for example, use explicit knowledge
of individual systems and their interaction to manage a system. For example, in case of
failure a system may self-diagnose the cause, devise one or more potential solutions, and
execute one judged to be best. Emergent approaches, in contrast, self-organize solely based on
numerous local interactions often with minimal information exchanged between systems.
Properties of such systems are more difficult to ascertain/predict.

Applicability

Self-managing systems are becoming increasingly popular in computer systems.
Examples include routing protocols (White, 1997), power management of datacenters
(Kephart, 2007) and self-organizing tree overlays (Pournaras et al., 2009) that can be used for
large scale aggregation or efficient video broadcast. Self-managing systems are especially
applicable in complex and large dynamic environments in which adaptation is mandatory.

Pitfalls / debate

Designing systems to adapt, to manage themselves is a new area of research and development.
To predict, manage and determine the behaviour of distributed systems, and the risks
involved, requires further research and development. The role of the human user,
responsibility and liability are aspects which have yet to be addressed.

Key references

Kephart, J.O., & Chess, D.M. (2003). The vision of autonomic computing. IEEE Computer,
36(1), 41-50.
Ganek, A.G., & Corbi, T.A. (2003). The dawning of the autonomic computing era. IBM
Systems Journal, 42(1), 5-18.
Kramer, J., & Magee, J. (2007). Self-managed systems: An architectural challenge. Future of
Software Engineering (FOSE'07), 259-268.

Key articles from TPM-researchers

Brazier, F.M.T., Kephart, J.O., Huhns, M., & Van Dyke Parunak, H. (2009). Agents and
service-oriented computing for autonomic computing: A research agenda. IEEE Internet
Computing, 13(3), 82-87.
Pournaras, E., Warnier, M., & Brazier, F.M.T. (2009). Adaptive agent-based self-organization
for robust hierarchical topologies. Proceedings of the International Conference on Adaptive
and Intelligent Systems (ICAIS'09), IEEE.
Haydarlou, A.R., Oey, M.A., Overeinder, B.J., & Brazier, F.M.T. (2006). Using semantic web
technology for self-management of distributed object-oriented systems. Proceedings of the
2006 IEEE/WIC/ACM International Conference on Web Intelligence (WI-06).

Description of a typical TPM-problem that may be addressed using the theory / method

A typical example is the coordination of energy consumption among thermostatically
controlled appliances in a specific environment, over time. This is a resource allocation
problem that can be approached using a hierarchical topology of autonomous software agents,
each representing an individual appliance. In this system, agents aggregate information about
their appliances' energy utilization. They individually reason about their needs in relation to
the agents to which they are connected in the hierarchical topology. The global goal of the
system, that is, the stabilization of the total energy consumed in the network, is achieved by
individual agents reasoning within their local context. Both the agents and the hierarchical
topology need to be self-managing for this to work in large-scale dynamic environments.
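
A minimal sketch of this setup (hypothetical: the tree structure and the adjustment rule are
illustrative and are not the algorithm of the cited papers) is the following, in which each
agent aggregates the consumption of the subtree below it and sheds part of any local excess.

# Hypothetical Python sketch: agents in a tree aggregate energy consumption
# and locally adjust it towards a stabilization target.
class Agent:
    def __init__(self, consumption=0.0, children=()):
        self.consumption = consumption   # this agent's own appliance load (kW)
        self.children = list(children)   # agents below it in the hierarchy

    def aggregate(self):
        """Total consumption of the subtree rooted at this agent."""
        return self.consumption + sum(c.aggregate() for c in self.children)

    def rebalance(self, target):
        """Locally shed load so the subtree total moves towards the target."""
        excess = self.aggregate() - target
        self.consumption = max(0.0, self.consumption - 0.5 * excess)

root = Agent(1.0, children=[Agent(1.2), Agent(0.8), Agent(2.0)])
print(root.aggregate())      # 5.0 kW
root.rebalance(target=4.0)   # the root sheds half of the excess locally
print(root.aggregate())      # 4.5 kW: closer to the stabilization goal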

Related theories / methods

Self-management is related to the research area of Autonomic Computing (Kephart & Chess,
2003). It has been proposed as a way to reduce the cost of maintaining complex systems, and
increase the human ability to manage these systems properly. Autonomic computing
introduces a number of autonomic properties, such as self-configuring, self-healing, self-
optimizing, and self-protecting. Extending and enhancing a system with these properties is an
important step towards a self-management system.

Editors

Martijn Warnier (M.E.Warnier@tudelft.nl)
Frances Brazier (F.M.T.Brazier@tudelft.nl)














SENSEMAKING

Definition

Sensemaking can be defined as: a retrospective social process in which a person attributes
meaning to events based on extracted cues from the environment (Weick, 1995). [2]


Applicability

Several influential works on sensemaking exist: in communication (Dervin & Foreman-
Wernet, 2003), in organization science (Weick, 1995), decision making (Klein et al., 2006a,
2006b), and in human-computer interaction (Russell et al., 1993). Although each work
represents a different perspective, they share some common themes. The most important
similarity is that in each work sensemaking is used as a concept to understand how people
make sense of their surroundings. As people make sense of their surroundings all the time, the
concept of sensemaking is applicable to a great many research domains. It is important to
note, however, that sensemaking is mostly applied in situations in which people are
confronted with gaps: ambiguous, equivocal, uncertain, and/or challenging circumstances
that require a person, a group, or an organization to make sense of what is happening.

Pitfalls / debate

Although the concept of sensemaking seems relevant and applicable, a number of issues need
to be considered to decide whether to use it or not. First of all, it is difficult to separate and
understand the difference between the scientific usage and meaning of sensemaking and the
everyday meaning of it. We make sense all of the time, of emails, conversations, and research
results; so, when are we making sense by using sensemaking? This is rather confusing and a
reason for many to opt for alternative words or even concepts.

Another issue relates to its actual usage in science. Apart from the fact that the concept is
dispersed over many disciplines, which causes it to be multifaceted and multi-interpretable,
it is not very widespread amongst the communities in which it is used. It is therefore not a
broadly accepted concept, and few works that have applied it can be cited. Few researchers
apply the concept, even though the works of, in particular, Weick (1995, 2001) and Dervin
(2003) are considered classics.

A possible explanation for its limited use, and the third and final issue, is that the concept is
difficult to research. Sensemaking extends to many facets, in the environment and over time.
This means that researchers have to take a large number of variables into account.
Additionally, sensemaking is difficult to observe: it relates to what happens inside people's
minds. This makes it hard to operationalize the concept and, subsequently, to be confident
that the research results are valid.

Key references

Crandall, B., Klein, G., & Hoffman, R.R. (2006). Working minds: A practitioners guide to
cognitive task analysis. Cambridge, MA: The MIT Press.

[2] This definition is foremost based on the work of Weick (1995).
Dervin, B., & Foreman-Wernet, L. (2003). Sense-making methodology reader: Selected
writings of Brenda Dervin. Cresskill, NJ: Hampton Press Inc.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
Klein, G., Moon, B., & Hoffman, R.R. (2006a). Making sense of sensemaking 1: Alternative
perspectives. IEEE Intelligent Systems, 21(4), 70-73.
Klein, G., Moon, B., & Hoffman, R.R. (2006b). Making sense of sensemaking 2: A
macrocognitive model. IEEE Intelligent Systems, 21(4), 88-92.
Orr, J. (1996). Talking about machines: An ethnography of a modern job. Ithaca, NY: Cornell
University Press.
Russell, D.M., Stefik, M.J., Pirolli, P., & Card, S.K. (1993). The cost structure of
sensemaking. In Proceedings of the INTERCHI '93 Conference on Human Factors in
Computing Systems, Amsterdam, April 24-29, 269-276.
Weick, K.E. (1995). Sensemaking in organizations. Thousand Oaks, CA: Sage.
Weick, K.E. (2001). Making sense of the organization. Oxford, UK: Blackwell Publishers.

Key articles from TPM-researchers

Harteveld, C. (2009). Making sense of studying games: Using sensemaking as a perspective
for game research. Paper presented at the 40th Conference of the International Simulation
and Gaming Association, June 29-July 3, 2009, Singapore.
Harteveld, C., & De Bruijne, M.L.C. (forthcoming). Hoe écht is een virtuele crisis? De rol
van serious gaming in crisis- en rampenbestrijding. Bestuurskunde.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Sensemaking is applicable in any context or domain as long as the focus is on gaining an
understanding of the individual or organizational behaviour (or information processing) of
people. The wide variety of its applications can be noted from the work of Weick (2001) and
Dervin and Foreman-Wernet (2003). Within TPM it is currently mostly applied to get a better
understanding of how inspectors and operators deal with the problems they face. It is also
used to study games (see Harteveld, 2009).

Related theories / methods

It is closely affiliated with (social) cognitive (learning) theories that deal with mental models,
cognitive maps, or schemata. Cognitive task analysis and cognitive mapping are related
methods. See Crandall et al. (2006) for a partial overview of these theories and methods.

Editors

Casper Harteveld (C.Harteveld@tudelft.nl)
Mark de Bruijne (m.l.c.debruijne@tudelft.nl)




SITUATION AWARENESS

Definition

Situation awareness is the awareness of the current and evolving situation within a system or
of the system state. Often, people in dangerous system states do not realize that they are in
them. Clear awareness is necessary to plan or solve problems effectively in dynamic,
changing environments. For example, in planning an efficient driving route across town at
rush hour, it helps to know about the current traffic situation. There is a direct link between
situation awareness and the working memory of an individual. Our awareness of an evolving
situation mostly resides in working memory and therefore degrades as resources are allocated
to competing tasks. According to Endsley (1995), situation awareness consists of three levels:
perception of elements in the current situation, comprehension of the current situation, and
projection of future status. Although situation awareness is defined as a state of knowledge, it
does not include all of the knowledge of an individual. The knowledge referred to by situation
awareness only pertains to the current state of a particular dynamic situation. Thus,
information in long-term memory, such as mental models, influences situation awareness by
directing comprehension, projection and expectations, but it is not explicitly included in the
definition of situation awareness.
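
Endsley's three levels can be illustrated with a small sketch for a driver approaching a
slower car (hypothetical: all names, numbers and thresholds are invented for illustration
and do not come from the cited literature).

# Hypothetical Python sketch of Endsley's (1995) three levels.
def perceive(sensors):
    """Level 1: perception of elements in the current situation."""
    return {"own_speed": sensors["speedometer"],        # m/s
            "gap_m": sensors["distance_to_car_ahead"]}  # m

def comprehend(elements):
    """Level 2: comprehension of what the perceived elements mean."""
    headway = elements["gap_m"] / max(elements["own_speed"], 0.1)
    return {"headway_s": headway, "too_close": headway < 2.0}

def project(assessment, horizon_s=3.0):
    """Level 3: projection of the future status."""
    if assessment["headway_s"] < horizon_s:
        return "collision risk"
    return "situation stable"

sensors = {"speedometer": 20.0, "distance_to_car_ahead": 30.0}
assessment = comprehend(perceive(sensors))
print(assessment)            # headway 1.5 s -> too close
print(project(assessment))   # projected within 3 s: collision risk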

Applicability

Situation awareness is relevant for designing displays and other system control instruments
that support situation awareness, and for understanding the causes of deviations, accidents
and disasters in which situation awareness has been lost. The concept was developed in
military aviation, and has since been applied in numerous fields as diverse as air traffic
control, weather services, and medicine. Situation awareness is especially necessary in
systems in which skill functions are automated: to stay in the loop of the system's
performance and to control such systems, the human operator needs situation awareness.

Pitfalls / debate

Some debate has occurred as to whether situation awareness is a product or a process
(Wickens & Hollands, 1999). Situation awareness is not the same as attention. We may
distinguish between the process of maintaining situation awareness, supported by attention,
working memory, and long-term memory, and the awareness itself.

Key references

Endsley, M.R. (1995). Toward a theory of situation awareness in dynamic-systems. Human
Factors, 37(1), 32-64.
Durso, F., & Gronlund, S. (1999). Situation awareness. In F.T. Durso (Ed.), Handbook of
applied cognition (pp. 283-314). New York: John Wiley & Sons.
Wickens, C. D., & Hollands, J. G. (1999). Engineering psychology and human performance.
Upper Saddle River, New Jersey: Prentice Hall.

Key articles from TPM-researchers

Houtenbos, M. (2008). Expecting the unexpected: A study of interactive driving behaviour at
intersections (PhD thesis). TU Delft.
Wiersma, J.W.F., Butter, R., & van 't Padje, W. (2000). A human factors approach to
assessing VTS operator performance. In VTS 2000 Symposium, Singapore, Session 5-12.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Improving safety in the harbour by applying situation awareness in vessel traffic services
(Wiersma et al., 2000). Situation awareness can be used to determine which issues are
important in making sure that operators maintain situation awareness at all times.

Related theories / methods

Human Information Processing (HIP; included in this catalogue).

Editor

S. Sillem (s.sillem@tudelft.nl)































STATE AID

Definition

In art. 87(1) of the EC Treaty, state aid is defined as any aid granted by a Member State or
through State resources in any form whatsoever, which distorts or threatens to distort
competition by favouring certain undertakings or the production of certain goods and which
affects trade between Member States. All benefits (subsidies, soft loans, favourable prices,
guarantees) from central, regional and local governments, as well as from organizations in
which a government has a decisive influence, to entities that perform economic activities can
come within the definition of state aid. As a basic rule, state aid is prohibited. When a
Member State wants to give aid to an undertaking, it has to notify the European Commission,
and the Commission has to approve the measure. Only after approval may state aid be
granted; otherwise it is illegal.

Applicability

The concept of state aid is relevant whenever there is a transfer of benefits from a
government to an undertaking. Especially when dealing with liberalized sectors and
public-private partnerships, state aid is often present, though it might not be very obvious.
The concept comes from the law of the European Community. A state aid measure does not
belong to a particular type of (national) law; it can be civil law (e.g. a contract) or
administrative law (e.g. a decision).

Pitfalls / debate

Is a particular measure really a state aid measure that has to be notified or can notification be
avoided? If a state aid measure is not notified but the aid is granted, the aid is illegal and has
to be recovered. That is a huge risk to a project or a company.

One of the most discussed questions nowadays is how to tackle the tension in the liberalized
sectors between the prohibition of state aid on the one hand, and the use of state resources to
safeguard public interests on the other.

Key references

Quigley, C. (2009). European state aid law and policy. UK: Hart Publishing.
Vesterdorf, P., & Nielsen, M. (2008). State aid law of the European Union.
Hancher, L., Slot, P.J., & Ottervanger, T.R. (2006). EC state aids.

Key articles from TPM-researchers

Saanen, N. (2009). De selectiviteit van gerichte milieuheffingen in het staatssteunregime.
NTER, 108-112.
Saanen, N. (2008). Vliegveld Eelde: een baanverlenging op de lange baan geschoven. NTER,
356-362.
Saanen, N. (2008). Amsterdam als particuliere marktinvesteerder. NTER, 137-142.
Saanen, N., Stout, H.D. (2007). Breedband in Appingedam: Verloren bij de Commissie,
gewonnen door de inwoners. NTER, 212-218.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

- Maasvlakte II
- New infrastructure on local airports
- Public private partnerships
- Broadband infrastructure

Related theories / methods

Services of general (economic) interest, Public service obligations, Public task, Public private
partnership

Editor

Nienke Saanen (n.saanen@tudelft.nl)

































STRATEGIC BEHAVIOUR

Definition

Strategic behaviour is opportunistic, sly behaviour. Strategists do what they do to serve their
own interest. They do so by camouflaging their intentions.

Applicability

Strategic behaviour is relational and takes shape in interactions. Strategic behaviour thrives in
a constellation in which actors are aware of each other and can react to each other's
behaviour and anticipate it. It is applicable in several contexts:
- Markets: competitors may behave strategically in order to beat their competitor
- Hierarchies: regulatees may behave strategically to avoid the application of rules that
may harm their interests.
- Networks: actors may act strategically to improve their position in inter-organizational
decision making processes.

Pitfalls / debate

Research on strategic behaviour has at least two pitfalls. Firstly, the existence of strategic
behaviour is difficult to prove. Why? Due to its pejorative connotation, subjects under study
may be quite reluctant to collaborate in telling researchers about their strategic behaviour.
They will probably deny any strategic behaviour.

Secondly, there is hardly any room for strategic behaviour in a number of important
perspectives. In a business perspective, strategy has positive connotations: it means that the
strategist has a vision and is prepared for the future. In spite of the strong apparent similarity
between strategy and strategic behaviour, it is fruitful to emphasize the differences.

In a legal perspective behaviour is either lawful or unlawful. If the court has not declared
behaviour to be unlawful, it is permitted. However, in many cases it is uncertain whether
behaviour is permitted or not. Due to technical changes, market changes and legal changes,
new situations emerge that stimulate a kind of behaviour that may or may not be lawful. This
gives room for strategic behaviour.

In the perspective of an economist, a good entrepreneur is supposed to act cleverly, trying to
realize advantages for their enterprise, even if this harms their competitors. The difference
between a commercially sound entrepreneur and a strategically behaving one is that the
former is proud of his behaviour and will not be reluctant to tell the world about his heroic
acting, whereas the same entrepreneur will deny his strategic behaviour, because it is, and
will remain, not done.

Key references

Dixit, A.K., & Nalebuff, B.J. (1993). Thinking strategically: The competitive edge in
business, political and everyday life. New York: W.W. Norton & Company.
Scharpf, F.W. (1997). Games real actors play: Actor-centered institutionalism in policy
research. Oxford: Westview Press.
Schelling, T.C. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
Schmalensee, R., & Willig, R. (Eds.). (1989). Handbook of industrial organization.
Amsterdam: Elsevier Science Publishers.

Key articles from TPM-researchers

Ten Heuvelhof, E., De Jong, M., Kars, M., & Stout, H. (2009). Strategic behaviour in
network industries: A multidisciplinary approach. Edward Elgar.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

- Design and management of decision-making processes
- Design of markets
- Regulation
- Change processes

Editor

Ernst ten Heuvelhof (e.f.tenheuvelhof@tudelft.nl)






























TECHNICAL ARTIFACT

Definition

Technical artifact: physical object intentionally made by humans with a pragmatic function.

Applicability

This concept is particularly relevant in discussions:
- about notions such as (technical) function and engineering design,
- about the distinction between the natural and the artificial and the distinction between the
technical and the social, and
- about how technical artifacts fit into the furniture of the world (ontology).

Pitfalls / debate

The notion of technical artifact may be defined in different ways. The above definition
restricts technical artifacts to the domain of objects involving physical constructions. More
general definitions may also include objects involving intentionally arranged physical
processes, human actions (services) and even social entities (a bureaucracy). The above
definition excludes software from the domain of technical artifacts; software may be
considered to be an incomplete technical artifact; implemented on a platform it becomes part
of a technical artifact. Finally, the above definition may be considered anthropocentric; by
definition physical constructions made by animals fall outside the scope of technical artifacts.

Key references

Hilpinen, R. Artifact. In The Stanford Encyclopedia of Philosophy (Fall 2004 Edition). E. N.
Zalta (Ed.). http://plato.stanford.edu/archives/fall2004/entries/artifact/

Key articles from TPM-researchers

Kroes, P., & Meijers, A. (2006). The dual nature of technical artefacts. Special issue of
Studies in History and Philosophy of Science, 37.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

How can (systems of) technical artifacts be formally represented, in particular in simulations
of socio-technical systems? In contrast to the formal representation of physical systems, the
formal representation of technical artifacts turns out to be problematic, for it requires a
formal representation of functional features. A generally accepted model for formally
representing functions is still lacking.
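
The representational problem can be made concrete with a small sketch (hypothetical: not a
model from the cited literature). The physical structure of an artifact can be encoded as
measurable properties, but its function is ascribed relative to an intended use and cannot be
derived from the physical description alone.

# Hypothetical Python sketch of the 'dual nature' of a technical artifact.
from dataclasses import dataclass, field

@dataclass
class TechnicalArtifact:
    name: str
    physical: dict = field(default_factory=dict)  # measurable structure
    function: str = ""                            # ascribed, use-relative

hammer = TechnicalArtifact(
    name="hammer",
    physical={"mass_kg": 0.6, "material": "steel/wood", "length_m": 0.33},
    function="driving nails",  # stipulated by design and use, not derivable
)

# The same physical object can carry a different ascribed function, which
# is exactly what makes a formal representation of functions problematic.
paperweight = TechnicalArtifact("hammer-as-paperweight",
                                hammer.physical, "holding paper down")
print(hammer.function, "|", paperweight.function)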

Another problem concerns the distinction between technical and social artifacts and their role
in solving (societal) problems.


Related theories / methods

Technical artifacts are a special subclass of artifacts in general, to which also social artifacts
(like money, states etc.) belong. For an interesting theory of how technical and social artifacts
fit into the world, see: Searle, J. (1995). The construction of social reality. See also: ICE
function theory.

Editor

Peter Kroes (p.a.kroes@tudelft.nl)








































TRANSITION

Definition

A transition is a process during which a societal system radically changes the way it meets a
societal need.

Different definitions of a transition have been provided. The most recent and commonly
accepted is that a transition is a process that emerges as a co-evolution of different elements
of the societal system, and that it encompasses radical changes in the system's structure,
culture and practices.

Applicability

Transition studies tend to distinguish between the concept in its own right, often referred to
as transition dynamics, and the concept in a policy/governance context, in which it is referred
to as transition management. The former is concerned with understanding the processes and
phenomena that go on in societal systems in transition; the latter is interested in using this
understanding to arrive at steering principles for real-life, ongoing transitions.

The concept is readily applied in all contexts where change and societal system are used in
the same sentence. More specifically, a societal system often means a sector, an industry,
sometimes a region (cities, harbour areas, etc.), or some more general political or governance
unit.

The adverb radically accompanying change indicates that this change is not incremental,
gradual or superficial. The timescales involved are also large: generations, even. This has
consequences in terms of applicability, since it implies that little can be said about the
outcomes of transitions, due to fundamental uncertainties.

This, in turn, means that when the concept is applied in a transition management context,
there can be no step-by-step or blueprint planning, nor predetermined images of the future of
the societal system.

Transition management proposes tools to radically change the dominant regime/practices.
Two key tools have been developed and practised as transition management tools that aim at
bringing innovation into the regime so as to alter it: (a) the transition arena, and (b)
transition experiments. The transition arena is a collaborative platform to which different
actors are invited so as to take part in an envisioning process. A transition experiment is an
open-ended experiment (less structured than a pilot project) that aims at testing the
conditions (organizational, institutional, economic and societal) under which a societal
innovation can yield change in perceptions and practices.

Typically, the kinds of contexts that call for the transition concept are beyond the grasp of
traditional and even adaptive policy approaches, due to their larger time horizons and the
impossibility of articulating a fixed policy goal.

Pitfalls / debate

There are debates about the possibility of understanding societal transitions at all. Some
argue that finding generic principles of change for all possible transitions is fundamentally
impossible.

The interdisciplinary nature of transition studies is at times celebrated and at times criticised.
Related to that, the scientific basis of transition (dynamics) studies is sometimes disputed,
since the empirical basis is not that large yet, theory is still very much under development,
and unorthodox methodologies are often applied in both empirical and theoretical research.

Similarly, the possibility of managing transitions is disputed. Although transition
management takes a relatively modest stance on the influence one actor (or several) can have
on the course of a transition, this topic regularly re-enters the transition debate.

Pitfalls:
When the concept of transitions, or the concepts developed within the transition approach
(multi-level or multi-phase), are used to describe marginal changes in a societal system, there
is a conceptual pitfall in describing and conceiving the phenomenon. Societal change and a
transition may seem similar phenomena, but they differ in the degree of change and in the
time over which the phenomenon develops. The phenomenon termed transition needs to be
understood in a way that takes into account its inherent complexity and co-evolutionary
nature and, most importantly, that allows policy developers to think about
interventions/policy alternatives that might steer or orient the societal system towards the
transition vision.

Key references

Martens, P., & Rotmans, J. (2005). Transitions in a globalizing world. Futures, 37, 1133-1144.
Loorbach, D. (2007). Transition management: New mode of governance for sustainable
development. Utrecht: International Books.
Loorbach, D., & Rotmans, J. (2006). Managing transitions for sustainable development. In X.
Olshoorn, & A. J. Wieczorek (Eds.), Understanding industrial transformation: Views from
different disciplines. Dordrecht: Springer.
Loorbach, D., Van Bakel, J.C., Whiteman, G., & Rotmans, J. (2009). Business strategies for
transitions towards sustainable systems. Business strategy and the environment, Published
online in Wiley InterScience, 10.1002.
Rotmans, J., Kemp, R., & Van Asselt, M. (2001). More evolution than revolution: Transition
management in public policy. Foresight-The Journal of Future Studies, Strategic Thinking
and Policy, 3(1), 15-31.
Rotmans, J. (2005). Societal innovation: Between dream and reality lies complexity.
Inaugural Addresses Research in Management Series. Rotterdam: ERIM, Erasmus Research
Institute of Management.
Van der Brugge, R., & Rotmans, J. (2007). Towards transition management of European
water resources. Water Resources Management, 21, 249-267.
Van der Brugge, R., & Van Raak, R. (2007). Facing the adaptive management challenge:
Insights from transition management. Ecology and Society, 12(2), 33.

Key articles from TPM-researchers

De Haan, J. (2007). Pillars of change: A theoretical framework for transition models.
Presented at the ESEE 2007 Conference Integrating Natural and Social Sciences for
Sustainability.
Frantzeskaki, N., & De Haan, J. (2009). Transitions: Two steps from theory to policy.
Futures, 41(9).
Frantzeskaki, N., Loorbach, D., & Kooiman, J. (2009). Transitions governance: Towards a
new governance paradigm. 13th Annual International Research Symposium for Public
Management (IRSPM XIII), 6-8 April 2009, Copenhagen Business School, Denmark.
Loorbach, D., Frantzeskaki, N., & Thissen, W.H. (2009). A transition research perspective on
governance for sustainability. Sustainable Development: A challenge for European Research,
EU DG Research, 28-29 May 2009, Brussels, Belgium. (Best Paper Award)

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

The transition approach and transition management are proposed as ways of dealing with a
specific type of wicked problem: persistent problems. An example of a persistent problem is
the fossil-fuel dependency of energy systems. The transition approach is currently being used
by the Ministry of Housing, Spatial Planning and the Environment to draw up environmental
policy in the Netherlands and to formulate action for achieving sustainability.

Related theories / methods

a) (Socio-) technical transitions

Elzen, B., Geels, F.W., & Green, K. (2004). System innovation and the transition to
sustainability. Edward Elgar Publishing.
Elzen, B., & Wieczorek, A. (2005). Transitions towards sustainability through system
innovation. Technological Forecasting and Social Change, 72, 651-661.
Köhler, J., Whitmarsh, L., Nykvist, B., Schilperoord, M., Bergman, N., & Haxeltine, A.
(2009). A transitions model for sustainable mobility. Ecological Economics, 68, 2985-2995.
Geels, F.W. (2004). From sectoral systems of innovation to socio-technical systems, insights
about dynamics and change from sociology and institutional theory. Research Policy, 33,
897-920.
Geels, F.W. (2005). Processes and patterns in transitions and system innovations: Refining the
co-evolutionary multi-level perspective. Technological Forecasting and Social Change, 72,
682.
Geels, F.W., & Schot, J. (2007). Typology of sociotechnical transition pathways. Research
Policy, 36, 399-417.

b) Integrated Assessment

Pahl-Wostl, C., Schlumpf, C., Bussenschutt, M., Schonborn, A., & Burse, J. (2000). Models
at the interface between science and society: Impacts and options. Integrated Assessment, 1,
267-280.
Rotmans, J., & Van Asselt, M.B.A. (2001). Uncertainty management in integrated
assessment modeling: Towards a pluralistic approach. Environmental Monitoring and
Assessment, 69, 101-130.
Rotmans, J., & Van Asselt, M.B.A. (2002). Integrated assessment: Current practices and
challenges for the future. In H. Abaza, & A. Baranzini (Eds.), Implementing sustainable
development: Integrated assessment and participatory decision-making processes (pp. 78-
116). Cheltenham, Northampton: Edward Elgar.

Editors

N. Frantzeskaki (n.frantzeskaki@tbm.tudelft.nl)
J. de Haan (hans.dehaan@tudelft.nl)






































TRAVEL TIME BUDGETS

Definition

Travel time budgets refer to the time (adult) persons on average spend on travel. Normally
they are expressed in minutes per person per day (average). The theory of constant travel
time budgets assumes (based on empirical data) that, on average, travel time budgets are
constant over time and space. This average includes all modes and motives. All over the
world, people in every country (western, non-western) on average spend about 60 to 75
minutes per day on travel.

Applicability

The theory of constant travel time budgets should be applicable worldwide, for the past, the
present and the future. The importance of the theory is that improvements in the transport
system leading to higher travel speeds do not (in the long run) result in a reduction of the
time people spend on travel, but in an increase in passenger kilometres travelled. The theory
can be used for forecasting (see Schafer & Victor, 1997).
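
The forecasting logic can be shown in a short worked sketch (hypothetical numbers, with the
budget chosen within the 60 to 75 minutes per day range mentioned above): under a constant
budget, a gain in average speed translates one-for-one into extra kilometres rather than into
saved time.

# Hypothetical worked example: distance = average speed x (fixed) budget.
BUDGET_MIN_PER_DAY = 70   # within the 60-75 min/day range cited above

def daily_km(avg_speed_kmh, budget_min=BUDGET_MIN_PER_DAY):
    return avg_speed_kmh * budget_min / 60.0

before = daily_km(30.0)   # before a transport improvement
after = daily_km(36.0)    # 20% higher average door-to-door speed
print(before, after)      # 35.0 -> 42.0 km/day
print("induced travel: %.0f%%" % (100 * (after - before) / before))  # 20%
# The 20% speed gain becomes 20% more kilometres, not 20% less travel time.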

Pitfalls / debate

The theory of constant travel time budgets has been under debate for decades, but seems to
be quite robust. Nevertheless, in the Netherlands average travel time recently seems to be
increasing (Van Wee et al., 2006). The main pitfall is that the theory should not be applied at
a disaggregate level (one mode, one motive, a small group of people, and certainly not at the
individual level).

Key references

Mokhtarian, P., & Chen, C. (2004). TTB or not TTB, that is the question: A review and
analysis of the empirical literature on travel time (and money) budgets. Transportation
Research Part A, 38(9-10), 643-675.
Schafer, A., & Victor, D. (1997). The past and future of global mobility. Scientific American,
227(4), 36-39.
Zahavi, Y. (1979). The UMOT-project (Report DOT-RSPA-DPB-2-79-3). Washington, D.C.:
US Department of Transportation.

Key articles from TPM-researchers

Van Wee, B., Rietveld, P., & Meurs, H. (2006). Is average daily travel time expenditure
constant? In search of explanations for an increase in average travel time. Journal of
Transport Geography, 14, 109-122.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

The theory of constant travel time budgets can be helpful in forecasting the long-term
impacts of transport innovations resulting in travel time reductions. For example, the theory
leads to only modest expectations of the role of ICT in reducing travel. This is now generally
accepted in the literature: in the eighties, expectations on the substitution of travel by ICT
were high; now, ICT is seen much more as complementary to travel. Another area of research
is long-distance travel. High-speed rail is not only a substitute for air travel, but also
generates additional travel: the rail line from Paris to the south of France resulted in quite a
few persons commuting between Lyon and Paris.

Related theories / methods

Activity based analysis, Space-time prism, Induced travel demand, CBA of transport
infrastructure projects, Rule of half (rule to estimate benefits of additional travel, for CBA),
Substitution and generation, Access, Accessibility.

Editor

Bert van Wee (g.p.vanwee@tudelft.nl)



































UNCERTAINTY IN MODEL-BASED DECISION SUPPORT

Definition

The notion of uncertainty has taken different meanings and emphases in various fields,
including the physical sciences, engineering, statistics, economics, finance, insurance,
philosophy, and psychology. Following Walker et al. (2003), we define uncertainty as any
departure from the unachievable aim of complete determinism.

Walker et al. (2003) present a framework for defining uncertainty with respect to model-based
decision support. Their classification has three dimensions:
- Location: where the uncertainty manifests itself within the policy analysis framework (in
the external forces (X), the system domain (R), or the weights (W)).

- Level: the magnitude of the uncertainty, ranging from deterministic knowledge to total
ignorance.

- Nature: the nature of an uncertainty can be due to the imperfection of our knowledge (also
called epistemic uncertainty) or to the inherent variability of the phenomena being
described (also called aleatoric uncertainty). Dewulf et al. (2005) add a third nature of
uncertainty: ambiguity, which is defined as the simultaneous presence of multiple
equally valid frames of knowledge.
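
As an illustration only (the field names and the five-point level scale below are our own simplification, not part of the Walker et al. framework), the classification can be made operational as a simple record that forces an analyst to state all three dimensions for every uncertainty identified in a study:

    # Sketch: recording uncertainties along the three dimensions of
    # Walker et al. (2003); the enumerations are a simplification.
    from dataclasses import dataclass
    from enum import Enum

    class Location(Enum):
        EXTERNAL_FORCES = "X"            # forces outside the system
        SYSTEM_DOMAIN = "R"              # the modelled system itself
        WEIGHTS = "W"                    # valuation of outcomes

    class Nature(Enum):
        EPISTEMIC = "imperfect knowledge"
        ALEATORIC = "inherent variability"
        AMBIGUITY = "multiple valid frames"   # addition by Dewulf et al. (2005)

    @dataclass
    class Uncertainty:
        description: str
        location: Location
        level: int                       # e.g. 1 (near-determinism) .. 5 (ignorance)
        nature: Nature

    u = Uncertainty("future fuel price", Location.EXTERNAL_FORCES, 3,
                    Nature.EPISTEMIC)
    print(u)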

Applicability

Uncertainty plays a role in any scientific inquiry. The way it is described in this entry relates
to its role in a policy analysis study or a model-based decision support exercise.

Pitfalls / debate

The notion of uncertainty is closely tied up with notions such as ambiguity, risk, and
probability. Furthermore, the exact meaning of uncertainty differs among scientific
discourses. Even in related scientific disciplines, such as risk analysis and integrated
assessment, the notion is used in different ways. However, the definition offered above is
broad enough to accommodate these divergent notions.

With respect to risk, a classic distinction is the one between decisionmaking under uncertainty
and decisionmaking under risk. In the latter case probabilities are known, while in the former
they are not available (Knight, 1921; Luce & Raiffa, 1957). Knight saw risk and
uncertainty as being disjoint. We prefer to treat risk as one kind (or even one level) of
uncertainty: a 'light' uncertainty that can be quantified by using losses and probabilities.
That is, uncertainty is a broader concept than risk.
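
The distinction can be made concrete with a toy payoff table (figures hypothetical): under risk, known probabilities allow an expected-value ranking of options, whereas under uncertainty no probabilities are available and a probability-free rule such as maximin (Luce & Raiffa, 1957) has to be used instead.

    # Sketch: one payoff table, handled under risk vs. under uncertainty.
    # Rows: policy options; columns: possible futures (payoffs hypothetical).
    payoffs = {"option A": [10, 2, 6],
               "option B": [5, 5, 5]}

    # Under risk: probabilities of the futures are known.
    probs = [0.5, 0.3, 0.2]
    best_risk = max(payoffs,
                    key=lambda o: sum(p * v for p, v in zip(probs, payoffs[o])))

    # Under uncertainty: no probabilities; maximin picks the best worst case.
    best_unc = max(payoffs, key=lambda o: min(payoffs[o]))

    print(best_risk, best_unc)           # -> option A, option B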

Similarly, there are researchers for whom uncertainty and ambiguity are synonymous
(Koppenjan & Klijn, 2004), while others consider ambiguity to be a specific type of
uncertainty (Dewulf et al., 2005).

Also, there is a huge literature on uncertainty and how to deal with it, even within the fields of
policy analysis and modelling, and there is no overall agreement. Walker et al. (2003)
review much of this literature. Norton et al. (2006) criticize the Walker et al. framework on
several points. The authors of the original framework responded to these criticisms (Krayer
von Krauss et al., 2006). They found that most of the issues raised by Norton et al. (2006),
although relevant, had little bearing on the framework's purpose: to promote systematic
reflection on a wide range of types and locations of uncertainty in order to minimize the
chance that relevant key uncertainties are overlooked, and to facilitate better communication
among analysts from different disciplines, as well as between them and policymakers and
stakeholders.

Key references

Brugnach, M., Dewulf, A., Pahl-Wostl, C., & Taillieu, T. (2008). Toward a relational concept
of uncertainty: About knowing too little, knowing too differently, and accepting not to know.
Ecology and Society 13(2), 30.
Dewulf, A., Craps, M., Bouwen, R., Taillieu, T., Pahl-Wostl, C., & von Krauss, K. (2005).
Integrated management of natural resources: Dealing with ambiguous issues, multiple actors
and diverging frames. Water Science and Technology, 52(6), 115-124.
Funtowicz, S. O., & Ravetz, J. R. (1990). Uncertainty and quality in science for policy.
Dordrecht: Kluwer Academic Publishers.
Knight, F. H. (1921). Risk, uncertainty and profit. New York: Houghton Mifflin Company.
(republished in 2006 by Dover Publications, Inc., Mineola, N.Y.)
Koppenjan, J.F.M., & Klijn, E.H. (2004). Managing uncertainties in networks. London:
Routledge.
Luce, R. D., & Raiffa, H. (1957). Games and decisions. New York: Wiley.
Morgan, M.G., & Henrion, M. (1990). Uncertainty: A guide to dealing with uncertainty in
quantitative risk and policy analysis. Cambridge: Cambridge University Press.
Norton, J. P., Brown, J. D., & Mysiak, J. (2006). To what extent, and how, might uncertainty
be defined? Comments engendered by 'Defining uncertainty: A conceptual basis for
uncertainty management in model-based decision support'. Integrated Assessment, 6(1), 82-
88.
Van Asselt, M.B.A. (2000). Perspectives on uncertainty and risk. Dordrecht: Kluwer
Academic Publishers.
Walker, W. E., Harremoës, P., Rotmans, J., Van der Sluijs, J. P., Van Asselt, M. B. A.,
Janssen, P. H. M., & Krayer von Krauss, M. P. (2003). Defining uncertainty: A conceptual
basis for uncertainty management in model-based decision support. Integrated Assessment,
4(1), 5-17.

Key articles from TPM-researchers

Agusdinata, D.B. (2008). Exploratory modeling and analysis: A promising method to deal
with deep uncertainty (PhD thesis). Delft University of Technology, Delft.
Fijnvandraat, M. L. (2008). Shedding light on the black hole: The roll-out of broadband access
networks by private operators (PhD thesis). Faculty of Technology, Policy, and Management,
Delft University of Technology, Delft, the Netherlands.
Meijer, I. S. M. (2008). Uncertainty and entrepreneurial action: The role of uncertainty in the
development of emerging energy technologies (PhD thesis). University of Utrecht, Utrecht.
Walker, W. E., Harremoës, P., Rotmans, J., Van der Sluijs, J. P., Van Asselt, M. B. A.,
Janssen, P. H. M., & Krayer von Krauss, M. P. (2003). Defining uncertainty: A conceptual
basis for uncertainty management in model-based decision support. Integrated Assessment,
4(1), 5-17.

Description of a typical TPM problem in the context of which the concept is particularly
relevant

The concept plays a role in any policy analytic study, so it can be encountered in practically
all TPM problems. Some examples of applications within TPM:
- Strategic planning for airports
- Roll-out of broadband access
- Implementation of transport innovations (e.g., intelligent transportation systems, Maglev,
underground freight transportation, road pricing)

Related theories / methods

- Policy analysis framework
- Probability and statistics
- Stochastic processes
- Risk
- Deep uncertainty
- Adaptive policymaking
- Exploratory modelling

Editors

Warren Walker (W.E.Walker@tudelft.nl)
Vincent Marchau (V.A.W.J.Marchau@tudelft.nl)
Jan Kwakkel (J.H.Kwakkel@tudelft.nl)







VERDOORN'S LAW (AT THE FIRM LEVEL)

Definition

Verdoorn's law states that in the long run there is a linear relationship between the growth of
output and the growth of productivity (Verdoorn, 1949; Kaldor, 1966). This relationship can
be explained by the theory of cumulative causation. In this theory it is primarily the growth of
effective demand that stimulates technological growth, through a larger potential for division
of labour and through learning-by-doing. The resulting productivity increase stimulates higher
output through the extension of existing markets and the opening up of new markets (Kaldor,
1966). This suggests that productivity gains and growth of output are mutually reinforcing
mechanisms. Following this line of reasoning, Verdoorn's law has been empirically observed
across nations, regions, and industries using country-level, region-level, and industry-level
data.

The argument of cumulative causation also applies at the firm level: the firm's growth of
output over time causes productivity improvements through the realization of scale effects
and learning effects. Firms can use these productivity gains to increase their output by
stimulating additional market demand for their products. This can be done by using the cost
advantage to win market share, either by offering products at lower prices than competitors
for equivalent benefits, or by providing customers with unique benefits at higher prices than
competitors. Through these strategies, productivity gains and growth of output become
mutually reinforcing, causing a positive feedback loop (see figure).

[Figure: a positive feedback loop in which growth of output produces scale effects and
learning effects, these drive productivity growth, and productivity growth feeds back into
growth of output via a volume-efficiency or volume-differentiation strategy.]


Applicability

Estimating the presence of increasing returns (economies of scale and learning effects) at the
country, sector, region, or firm level.
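
A minimal sketch of such an estimation (synthetic data; real applications require long-run series and the safeguards listed under the pitfalls below): regress productivity growth on output growth and read the slope as the Verdoorn coefficient.

    # Sketch: estimating a Verdoorn coefficient b on synthetic growth data,
    # productivity_growth = a + b * output_growth (b > 0: increasing returns).
    import numpy as np

    rng = np.random.default_rng(0)
    output_growth = rng.normal(0.03, 0.01, 40)            # 40 synthetic periods
    productivity_growth = (0.01 + 0.5 * output_growth
                           + rng.normal(0.0, 0.003, 40))  # built-in b = 0.5

    b, a = np.polyfit(output_growth, productivity_growth, 1)   # OLS line fit
    print(f"Verdoorn coefficient b = {b:.2f}, intercept a = {a:.3f}")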

Pitfalls / debate

- Verdoorn's law is a long-term relationship; long-run data have to be available
- There is a danger of spurious regression
- Changes in the capital-to-labour and/or capital-to-output ratio can distort the meaning of
the results
- Changes in capacity utilization over the business cycle may create the expected linear
relation without measuring the same phenomenon (this alternative relationship is known
as Okun's law)

Key references

Kaldor, N. (1966). Causes of the Slow Rate of Economic Growth in the United Kingdom. In
F. Targetti, & A.P. Thirlwall (Eds.) (1989), The Essential Kaldor (pp.282-310). Cambridge:
Cambridge University Press.

Verdoorn, P.J. (1949). Fattori che Regolano lo Sviluppo della Produttività del Lavoro.
L'Industria, 1, 3-11. An English translation is available in: McCombie, J., Pugno, M., & Soro,
B. (Eds.) (2002). Productivity Growth and Economic Performance: Essays on Verdoorn's
Law. Houndmills, Basingstoke: Palgrave MacMillan.

Key articles from TPM-researchers

Hartigh, E. den, Langerak, F., & Zegveld, M.A. (2009). The Verdoorn law, firm strategy and
firm performance. In Van Geenhuizen, M., D.M. Trzmielak, D.V. Gibson, & M. Urbaniak
(Eds.), Value-added Partnering and Innovation in a Changing World (pp.358-373). West
Lafayette (IN): Purdue University Press.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Firms, as multi-actor systems, can improve their productivity by creating and managing
division of labour between the actors. Such a division of labour may result in economies of
scale and in faster learning-by-doing. Verdoorn's law at the firm level can be used to measure
the extent to which firms are able to do so.

Related theories / methods

- Productivity
- Increasing returns (economies of scale, learning effects) [ALSO SUBMITTED FOR
CATALOGUE]
- Division of labour / specialization

Editor

Erik den Hartigh (e.denhartigh@tudelft.nl)

VIRTUAL WORLDS

Definition

Although the concept of a 'virtual world' is in widespread use, there is not yet a clear-cut
definition for it. Bartle (2004) takes an almost poetic approach by defining virtual worlds as
'places where the imaginary meets the real', and avoids the much more complicated task of
giving an inclusive definition.

It is easier to convey the idea of virtual worlds by summarizing common attributes. A virtual
world is a virtual environment with, among others, the characteristics of being multi-user,
persistent, real-time, and interactive, adhering to predefined rules, and allowing users to
identify themselves within it.

There are several varieties of virtual worlds, each of which has additional characteristics. For
instance, virtual worlds like Second Life and World of Warcraft can also have three-
dimensional graphics, fictional background stories, user-generated content, and an economic
system for virtual object transactions.

Applicability

Virtual worlds can serve as a supporting environment for a number of research fields,
methods and theories. To name just a few:

- Collaboration engineering: the multi-user environment supports collaborative work;
- Simulation-gaming: experiential learning in a safe environment;
- Rapid prototyping: creating a virtual prototype of a physical object;
- Simulation: social simulations, logistics simulations, etc. can be visualized and studied in a
virtual world.

Pitfalls / debate

- Overemphasis on the virtual-real dichotomy. The rules of physical/social reality are not
necessarily all different in a virtual world. Moreover, even if some rules of physical/social
reality are different, a virtual world's popularity and pervasiveness can make them just as
real.
- Management of expectations. Many people have presuppositions about virtual worlds
such as Second Life, because of media coverage for example, while by our definition there
are literally hundreds of virtual worlds in existence, each with something unique.
- Validation. Many business and educational organizations have yet to find clearly
successful applications for virtual worlds.

Key references

Bartle, R. A. (2004). Designing virtual worlds. Berkeley: New Riders.
Castronova, E. (2005). Synthetic worlds: The business and culture of online games. Chicago:
The University of Chicago Press.

Key articles from TPM-researchers

Harteveld, C., Warmelink, H., Fumarola, M., & Mayer, I. (2008). Bringing concepts alive in
Second Life: A design-based experiment. Paper presented at the GAMES: Virtual Worlds and
Reality, Kaunas, Lithuania.
Warmelink, H., Bekebrede, G., Harteveld, C., & Mayer, I. (2008). Understanding virtual
worlds: An infrastructural perspective. Paper presented at the GAMES: Virtual Worlds and
Reality, Kaunas, Lithuania.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

- Simulation-gaming: developing a multiplayer computer simulation-game using an existing
virtual world as a platform;
- Collaboration engineering: integrating the use of a virtual world in the work processes of
employees of an organization;
- Simulations: simulating an infrastructural problem in a virtual world to make it accessible
to stakeholders.

Related theories / methods

- Simulation-games or gaming/simulations
- Synthetic worlds
- Massively multiplayer online games
- Virtual reality
- Web 2.0

Editors

Michele Fumarola (M.Fumarola@tudelft.nl)
Harald Warmelink (H.J.G.Warmelink@tudelft.nl)



















WICKED PROBLEMS

Definition

A wicked problem is a problem that lacks common ground for the actors involved to define
and solve it.

Applicability

The founding fathers of the concept are Horst Rittel and Melvin Webber (1973), who criticized
the scientific way in which policy analysts try to solve policy problems. They state that policy
problems in pluralistic societies cannot be definitively described, for there is nothing like the
undisputable public good.

The concept is applicable to any policy problem that has to be solved in a multi-actor context.
It provides a framework for evaluating decision-making processes. It also creates
opportunities for reflection on scientific attempts to solve policy problems.

Pitfalls / debate

The concept is attractive to many scholars as an explanation for policy failure or, more
generally, for the way decision-making processes evolve over time. This has resulted in many
alternative definitions. These alternatives do not conflict with the definition constructed above,
but find different explanations for the lack of common ground. All explanations assume a
multi-actor context.

Hisschemöller (1993) defines two dimensions that are relevant for problem solving: the
certainty of knowledge and the consensus about standards. Based on these dimensions, he
characterizes policy problems as follows:

                                       Certainty of knowledge
                                       High                          Low
Consensus about standards   High       Tamed problems                Untamable scientific problems
                            Low        Untamable ethical problems    Untamed political problems

De Bruijn and Ten Heuvelhof (2008) call untamed political problems 'unstructured
problems' or 'wicked problems' and stress the impossibility of objectifying the information
from which these problems originate or which may solve them.

Van Bueren et al. (2003) distinguish three types of uncertainties that contribute to the
wickedness of a problem. Cognitive uncertainty results from a lack of knowledge about the
causes and effects of problems: causal relations are numerous, interrelated, and difficult to
identify. Strategic uncertainty exists because many actors are involved; their strategies to
address the problem are based on their perceptions of the problem and its solutions, which
may differ from the views of others. Institutional uncertainty results from the fact that
decisions are made in different places, in different policy arenas in which actors from various
policy networks participate (Van Bueren et al., 2003).

Conklin and Weil (1997) explicitly hint at the dynamics of wicked problems. A wicked
problem, they say, 'is an evolving set of interlocking issues and constraints. The constraints
on the solution, such as limited resources and political ramifications, change over time.' The
other interpretations also imply that the wickedness of a problem results in dynamic decision-
making processes, because any actor can find legitimizations for contesting the assumptions
behind current and future policies at any moment.

Key references

Conklin, E.J., & Weil, J. (1997). Wicked problems: Naming the pain in organizations. Article
retrieved from leanconstruction.org.
Hisschemöller, M. (1993). Democratie van problemen: De relatie tussen de inhoud van
beleidsproblemen en methoden van politieke besluitvorming (in Dutch). Amsterdam: VU-
Uitgeverij.
Rittel, H.W.J., & Webber, M.M. (1973). Dilemmas in general theories of planning. Policy
Sciences, 4, 155-169.

Key articles from TPM-researchers

De Bruijn, H., & Ten Heuvelhof, E. (2008). Management in networks: On multi-actor
decision making. London: Routledge.
Van Bueren, E., Klijn, E.H., & Koppenjan, J. (2003). Dealing with wicked problems in
networks: Analyzing an environmental debate from a network perspective. Journal of Public
Administration Research and Theory, 13(2), 193-212.
Koppenjan, J., & Klijn, E.H. (2004). Managing uncertainty in networks: A network approach
to problem solving and decision making. London: Routledge.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

For example, we assume that increased levels of carbon dioxide in the atmosphere cause
environmental risks, but we do not know what concentrations of carbon dioxide are
threatening. Some actors also doubt whether humans substantially cause the increase of
carbon dioxide. Decisions are made in arenas ranging from the international to the regional or
local level (example derived from Van Bueren et al., 2003).

Related theories / methods

Network management, Process management, Institutional theory, Social Constructivism

Editor

Haiko van der Voort (H.G.vanderVoort@tudelft.nl)









Theory



































Table of Contents of Theory

BOUNDED RATIONALITY THEORY ................................................................................. 95
COHERENCE THEORY ......................................................................................................... 97
ENGINEERING SYSTEMS .................................................................................................. 100
EVOLUTIONARY GAME THEORY .................................................................................. 102
EXPECTED UTILITY THEORY ......................................................................................... 104
FUNCTIONS OF INNOVATION SYSTEMS ...................................................................... 106
ICE FUNCTION THEORY ................................................................................................... 108
METAETHICAL THEORIES ............................................................................................... 110
NORMATIVE ETHICAL THEORIES ................................................................................. 113
ORGANISATIONAL LEARNING ....................................................................................... 116
POLICY LEARNING IN MULTI-ACTOR SYSTEMS ....................................................... 119
PRINCIPAL AGENT THEORY ........................................................................................... 121
PROCESS MANAGEMENT ................................................................................................. 123
PUNCTUATED EQUILIBRIUM THEORY IN PUBLIC POLICY ..................................... 126
RANDOM UTILITY MAXIMIZATION-THEORY (RUM-THEORY) .............................. 129
SOCIO-TECHNICAL SYSTEM-BUILDING ...................................................................... 131
SYSTEM AND CONTROL THEORY ................................................................................. 133
SYSTEMS ENGINEERING .................................................................................................. 135
THEORY OF INDUSTRIAL ORGANIZATION (IO) ......................................................... 138
THEORY OF INSTITUTIONAL CHANGE ........................................................................ 140
THEORY OF PLANNED BEHAVIOUR ............................................................................. 142
TRANSACTION COST THEORY/ECONOMICS ............................................................... 144


























BOUNDED RATIONALITY THEORY

(The term is largely synonymous with 'limited rationality'. See the other forms of bounded
rationality listed in this entry.)


Brief discussion

Simon (1947) argues that most actors 'satisfice' for most decisions; that is, they make the
best decisions that are possible to them given their time, information, and resources. The
resulting limitations on decision-making may take many possible forms. Contrast this claim
with rational actor theory, which claims that actors know their preferences and will pursue
them fully and consistently.
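
A minimal sketch of the contrast (options, values, and aspiration level all hypothetical): a satisficer stops at the first option that meets an aspiration level, while the rational-actor model inspects every option in search of the maximum.

    # Sketch: satisficing (Simon) vs. maximizing (rational actor theory).
    options = [("plan A", 6), ("plan B", 8), ("plan C", 9)]   # (name, value)
    ASPIRATION = 7                                            # "good enough"

    def satisfice(opts, aspiration):
        """Return the first option whose value meets the aspiration level."""
        for name, value in opts:          # search stops early: limited resources
            if value >= aspiration:
                return name
        return max(opts, key=lambda o: o[1])[0]   # fall back on the best seen

    def maximize(opts):
        """Inspect all options and return the best one."""
        return max(opts, key=lambda o: o[1])[0]

    print(satisfice(options, ASPIRATION))   # -> plan B (good enough, found first)
    print(maximize(options))                # -> plan C (the global optimum)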

March (1978) describes several of these alternative forms of rationality. Limited
rationality is caused by a lack of understanding of alternatives. Contextual rationality
recognizes that decision problems involve multiple, competing factors. Game rationality
recognizes that decision-making occurs in socially contested environments. Process
rationality emphasizes that organizations may prefer defensible decision procedures over
potentially minor or uncertain increases in decision outcomes. Adaptive rationality
recognizes that decision-makers may need to learn their alternatives first, and only then how
these are to be valued. Selected rationality recognizes that decision-making might involve
experimentation, so that sometimes accidental successes become well-established procedures.
Posterior rationality means that preferences are sometimes revised in light of achieved
outcomes.

Applicability

In principle, the theory of bounded rationality is applicable to a broad range of decision-
making settings.

Pitfalls / debate

In practice, bounded rationality requires a lot of effort on the part of the researcher to utilize
successfully. The challenge for bounded rationality theory lies in describing the actual
boundaries to effective decision-making that are encountered by real actors in actual settings.
Generalizing these decision-making boundaries to new actors, institutions, or problem
settings is often a challenge for practitioners of the theory. In empirical applications these
generalizations may entail capturing and characterizing the actual routines or heuristics in use
by people and organizations. Todd and Gigerenzer (2003) demonstrate this approach in their
efforts to identify heuristics used by real experts in challenging and uncertain environments.

Key references

March, J. G. (1978). Bounded rationality, ambiguity, and the engineering of choice. The Bell
Journal of Economics, 9(2), 587-608.
Simon, H. (1947). Administrative behavior: A study of decision-making processes in
administrative organizations. The Free Press.
Todd, P.M., & Gigerenzer, G. (2003). Bounding rationality to the world. Journal of Economic
Psychology, 24(2), 143-165.


Key articles from TPM-researchers

Franssen, M., & Bucciarelli, L. L. (2004). On rationality in engineering design. Journal of
Mechanical Design, 126(6), 945-949.
Franssen, M. (2006). The normativity of artefacts. Studies in History and Philosophy of
Science, 37(1), 42-57.
Roeser, S. (2006). The role of emotions in judging the moral acceptability of risks. Safety
Science, 44(8), 689-700.

Description of a typical TPM-problem that may be addressed using the theory / method

Franssen and colleagues (2004, 2006) use an enriched conceptualization of rationality to think
reflectively about how and why engineering artefacts are designed. Roeser (2006) applies a
limited rationality approach to understanding risk acceptance, enriching rationality
through the inclusion of emotion and moral judgment in a standard decision-making
framework. Together these authors usefully apply specific kinds of bounded rationality to a
reflective and reflexive account of technology and design.

Related theories / methods

Prospect theory entails the experimental verification and generalization of a set of specific
departures from ideal rationality; such theories have been applied to risk perception and
communication. The garbage can model and other theories of organizational decision-making
attempt to describe decision-making processes in organizations. Game theory models may
assume limited rationality to determine the ultimate effects on decision outcomes. Agent-based
models often impose computational resource constraints on agents; these limits may enhance
the descriptive validity of the model. Bounded rationality has entered institutional
economics in the form of costly searches for contractual opportunities. Costly search for
alternatives flies in the face of the assumption of the perfect marketplace, and so in many
economic settings justifies a major revision of the market explanation.

Editor

Scott Cunningham (S.W.Cunningham@tudelft.nl)






COHERENCE THEORY

Brief discussion

The theory of coherence posits that the economic, social, and technical performance of
infrastructures is dependent on the degree of coherence between the technical and the
institutional coordination of system functions in infrastructures. It brings together insights
from the fields of complex technical systems and new institutional economics in order to
analyze in a static comparative way the systemic and organizational dimension of networks
and to assess how the technical functioning of infrastructures is safeguarded. Special attention
is paid to system relevant functions such as interoperability, interconnectivity, capacity
management and system management, which are strongly related to the system
complementarities inherent to infrastructures. These system relevant functions need to be
coordinated technically as well as institutionally in a coherent way in order to guarantee a
satisfactory technical functioning of infrastructures. In the most extreme case, infrastructures
will technically collapse, if these functions are not properly performed. On a very general
level, the degree of coherence between the technical and institutional coordination of these
system relevant functions is related to the coordination mechanisms and their technical and
institutional scope of control. Typical examples of coordination mechanisms include top-
down centralized hierarchy, bottom-up decentralized coordination, and peer-to-peer control.
The following figure illustrates this line of argument.

[Figure: the degree of coherence between the technical and institutional coordination of
system-relevant functions, via coordination mechanisms and their scope of control, shapes
infrastructure performance.]

Although coherence theory mainly focuses on technical system integrity as part of
infrastructure performance, it is acknowledged that economic performance and public
values are other important aspects to be considered. There is probably a trade-off between
these three performance categories.

Applicability

The theory of coherence focuses on network industries through a static comparative
approach. Its application requires a clear focus on a physical network at a given point in time.
Nevertheless, a more dynamic picture can be built from various static snapshots that,
combined, create an overview of changes in technical and institutional organization and of the
accompanying relationship between the degree of coherence and performance. Interesting
questions that arise in this respect are, for example, how and why periods of coherence and
incoherence alternate, and what role efficiency and innovation play as motivations for change.
This links the theory of coherence to studies on the coevolution of institutions and
technologies and to ideas on technical and institutional change. It also endogenizes
technologies into theories of institutional change and vice versa. Finally, having a means to
compare the organization of the institutional and technical dimensions also opens the door to
the use of coherence as a design principle with which one could match one dimension to the
other.

Pitfalls / debate

Coherence theory's main drawback lies in the difficulty of operationalising the degree of
coherence: there exists no fixed scale classifying possible degrees for measurement.
Consequently, the link between various degrees of (in)coherence and performance criteria
remains murky. This is also due to the fact that performance has various aspects. Thus, while
it can be empirically observed that changes in the degree of coherence impact performance,
and these insights can be used for analytical and explanatory purposes, the theory is not able
to predict beforehand exactly how such changes will affect performance.

Key references

Finger, M., Groenewegen, J., & Künneke, R. (2005). The quest for coherence between
technology and institutions in infrastructures. Journal of Network Industries, 6(4), 227-259.
Künneke, R. (2008). Institutional reform and technological practice: The case of electricity.
Industrial and Corporate Change, 17(2), 233-265.
Ménard, C. (2009). From technical integrity to institutional coherence: Regulatory challenges
in the water sector. In C. Ménard, & M. Ghertman (Eds.), Regulation, deregulation and
reregulation: Institutional perspectives. Cheltenham: Edward Elgar.

Key articles from TPM-researchers

Künneke, R., & Finger, M. (2007). Technology matters: The cases of the liberalization of
electricity and railways. Competition and Regulation in Network Industries, 8(3), 301-334.
Künneke, R., Groenewegen, J., & Ménard, C. (2008). Aligning institutions with technology:
Critical transactions in the reform of infrastructures. Discussion paper.

Description of a typical TPM-problem that may be addressed using the theory / method

What are the possibilities and impediments governments face in facilitating an alignment of
institutions (as modes of organization) with the technical changes inherent in an energy
transition (like the hydrogen economy)? How can such a policy be formulated and executed in
a dynamic setting, and when is the market sufficient or governmental intervention required?

What are important drivers for the modernization process of electricity networks in the
Netherlands?

Related theories / methods

Coherence theory is built upon insights from the field of New Institutional Economics on
transactions, institutional change, and Williamson's four-layer scheme on the one hand, and
the literature on (complex) technical systems and technical change on the other.

Editors

Rolf Künneke (r.w.kunneke@tudelft.nl)
Daniel Scholten (d.j.scholten@tudelft.nl)






































ENGINEERING SYSTEMS

(Also known as socio-technical systems.)


Brief discussion

Engineering systems is a new interdisciplinary field of study involving technology,
management and the social sciences. It encompasses activities in the following areas:
- System Engineering
- Technology and Policy
- Engineering Management, Innovation, Entrepreneurship
- System and Decision Analysis, Operations Research
- Manufacturing, Product Development, Industrial Engineering
(Retrieved from: www.cesun.org)

The theory and the ES body of knowledge are rapidly developing, and aim to fully integrate
the various bodies of knowledge listed above. The idea is not to consider the technical
system as the 'lead' system with 'bolted-on' economic and other aspects, but rather to
consider such systems as constellations of hard and soft systems, including economic systems
and actor behaviour. Thus, economic or institutional aspects can and will also influence or
determine the technical possibilities. Conventional thinking would consider this interaction to
be much more one-way (i.e., the technical solution is leading). The theory, under development,
is currently an eclectic mixture of various theories from sub-disciplines (as reflected in the
bulleted list), but the efforts are aimed at creating a single theory that would 'fit' such
systems.

Also see Engineering System Design for more details.

Applicability

The theory is applicable to any large-scale system that incorporates elements from technical,
economic, management and institutional bodies of knowledge. Most prominent examples
used as cases in this theory are infrastructures, the defence industry (although that is still
mainly focused on the hard systems) and the health care sector.

Pitfalls / debate

The engineering systems theory is easily confused with systems engineering. The latter,
however, is generally narrower in focus. Although the world-wide CESUN community also
seems to mix both areas (check the website and the connected programmes!), the general
opinion of the early Engineering Systems leaders is that the two are in fact subtly different.
As a general rule of thumb, one could consider Engineering Systems theory to work with
open systems, i.e., with undefined or dynamic/fluid boundaries for all system aspects
(technical, economic, actors, etc.), whereas Systems Engineering tends to define the
boundaries of the system under consideration.

Key references

See CESUN.org


Key articles from TPM-researchers

De Bruijn, J.A., & Herder, P.M. (2009). System and actor perspectives on sociotechnical
systems. IEEE Transactions on Systems Man and Cybernetics Part A -Systems and Humans,
39(5), 981-992.

Description of a typical TPM-problem that may be addressed using the theory / method

Any large scale system: infrastructures, health care sector

Related theories / methods

Socio-technical systems, Complex systems, Engineering Systems Design

Editor

Paulien Herder (p.m.herder@tudelft.nl)













EVOLUTIONARY GAME THEORY

Brief discussion

The dynamics of institutions are central in institutional economics. Two approaches can be
distinguished: the spontaneous-evolution approach and the design approach. In the former,
institutions emerge and develop as the outcome of the micro-behaviour of actors, which can
be intentional and purposeful, but also rule-following and habitual. The actors do not
purposefully design institutions at a higher level of aggregation (there is no purposeful
collective action). In other words, the institutions emerge out of human action, but are not the
result of human design.

Applicability

This approach is more relevant for informal institutions, such as norms and values, than for
formal institutions, like laws and regulations. In this approach, institutions are conceptualised
as equilibria that emerge because it is rational (efficient) for individual actors to behave
according to the emerging institution, assuming that other actors will do likewise. In this
context, information and coordination among actors with equal power positions are crucial:
the process towards the equilibrium is anonymous and not controlled by any of the actors.
Institutions like money and language are explained in terms of evolutionary game theory.
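
A minimal sketch of this equilibrium logic (payoffs and starting share hypothetical): in a symmetric coordination game between two candidate conventions, simple replicator dynamics drive the whole population towards whichever convention happens to start with a majority, without any actor designing that outcome.

    # Sketch: a convention emerging as an equilibrium under replicator dynamics.
    # Coordinating on the same convention pays 1; miscoordinating pays 0.
    x = 0.55                  # initial share of the population using convention 1
    for _ in range(60):
        f1 = x                # expected payoff of convention 1 users
        f2 = 1 - x            # expected payoff of convention 2 users
        avg = x * f1 + (1 - x) * f2
        x += 0.5 * x * (f1 - avg)        # discrete-time replicator update

    print(f"Share using convention 1 after 60 periods: {x:.2f}")   # close to 1.0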

Also important in this context is the link between the social, economic, judicial, and political
domains: institutions should cohere and show a certain logic.

Pitfalls / debate

Evolutionary game theory cannot explain the variety of institutions and their different paths
of development without taking historical initial conditions into account. This demands an
interdisciplinary approach, which proves to be very complicated (see Greif, 2006).
Consequently, one is tempted to stay in the domain of modelling without reference to
empirics.

Key references

Aoki, M. (2001). Toward a comparative institutional analysis. Cambridge MA: MIT Press.
Greif, A. (2006). Institutions and the path to the modern economy: Lessons from medieval
trade. Cambridge: Cambridge University Press.

Key articles from TPM-researchers

/

Description of a typical TPM-problem that may be addressed using the theory / method

When designing institutions (design is central in the TPM approach), it is important to be
aware of more spontaneous, autonomous developments resulting from micro-behaviour. This
becomes more important as infrastructures are increasingly liberalised.

Related theories / methods

Principal agent theory, Transaction cost economics, Institutional change.

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)











































EXPECTED UTILITY THEORY

(The term is largely synonymous with 'rational preferences', 'perfect rationality', or 'the
theory of rational actors'.)


Brief discussion

Expected utility theory is a theory of rational decision-making under limited uncertainty. The
theory is axiomatic: it first clearly states its assumptions about decision-making, and then
teases out the consequences of these stated assumptions. The theory makes use of the concept
of a lottery to value specific options for making decisions. The 'expected' in expected utility
refers to the probability-weighted valuation of the risky options used in the theory.
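
A minimal sketch (all numbers hypothetical): a lottery is valued by the probability-weighted utility of its outcomes; with a concave utility function the lottery's certainty equivalent falls below its expected monetary value, which is the theory's account of risk aversion.

    # Sketch: valuing a lottery under expected utility theory.
    import math

    lottery = [(0.5, 100.0), (0.5, 0.0)]      # (probability, monetary outcome)

    def utility(x):
        return math.sqrt(x)                   # concave utility -> risk aversion

    eu = sum(p * utility(x) for p, x in lottery)   # expected utility
    ev = sum(p * x for p, x in lottery)            # expected monetary value
    ce = eu ** 2                                   # certainty equivalent (sqrt inverse)

    print(f"EV = {ev:.0f}, certainty equivalent = {ce:.0f}")
    # -> EV = 50, CE = 25: this agent would trade the lottery for 25 for sure.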

Applicability

The theory is widely applicable to those who are seeking to enhance consistency in decision-
making processes. When used in empirical settings the theory has the merit of providing clear
and testable hypotheses. Substantive decisions often appear to follow expected utility
outcomes, although these outcomes are usually subject to considerable amounts of noise and
unexplained variability.

Pitfalls / debate

The theory is not an account of how decisions are actually made, even if it sometimes appears
to summarize the consequences of decision-making activities. In many explanatory settings
the theory is tautological: the theory claims that actors desire to maximize utility, and yet such
utilities can only be measured by asking or observing what the actors actually desire.

The method makes strong assumptions about the nature of uncertainty. The theory assumes
any uncertainties can be objectively quantified using probabilities. This assumption can be
relaxed by supplanting the results of repeated experimentation (objective probabilities) with
belief (or subjective probabilities).

The method makes strong assumptions about desirable decision-making behaviour. The
theory assumes that decision-makers are unwilling to pay a premium to avoid uncertainty, all
things being equal. The method assumes that all options are known, or that new options will
not change the valuation of older choices. The Allais and Ellsberg paradoxes show that
repeatable violations of expected utility behaviour can be seen on the part of real actors.

The theory is not, and cannot be, directly applicable to group decision-making. Without
additional assumptions the theory is unable to resolve issues of the interpersonal distribution
of utility.

Key references

French, S. (1993). Decision theory: An introduction to the mathematics of rationality. New
York: Ellis Horwood.
Keeney, R. L., & Raiffa, H. (1993). Decisions with multiple objectives: Preferences and value
tradeoffs. Cambridge: Cambridge University Press.

Luce, R. D., & Raiffa, H. (1957). Games and decisions: Introduction and critical survey. New
York: Dover.

Key articles from TPM-researchers

Beroggi, G.E.G., & Wallace, W.A. (1995). Operational control of the transportation of
hazardous materials: An assessment of alternative decision models. Management Science,
41(12), 1962-1977.
Chorus, C.G., Molin, E.J.E., & Van Wee, B. (2006). Use and effects of Advanced Traveller
Information Services (ATIS): A review of the literature. Transport Reviews, 26(2), 127-149.

Description of a typical TPM-problem that may be addressed using the theory / method

Beroggi and Wallace (1995) evaluate the feasibility of alternative plans for transporting
hazardous materials; the approach usefully introduces consistent decision-making into a
safety-critical area of policy-making. Chorus et al. (2006) review the effects of travel
information services on driver behaviour; the approach usefully subsumes considerable
uncertainty about the motivations of driver behaviour into a framework that can be subjected
to empirical analysis.

Related theories / methods

Modern decision theory is largely occupied by expected utility theory, although the field also
addresses decision-making under complete uncertainty. The measurement of utility is
achieved by methods of revealed or stated preferences. The approach is related to multi-
criteria decision analysis and to theories of group decision-making. The field is related to
formal models in sociology and political science. Alternative models of decision utility under
deep uncertainty have been created and are known as state-based preferences. The
assumptions of rational behaviour are tested in experimental psychology and experimental
economics, particularly within the heuristics and biases school.

Editor

Scott Cunningham (S.W.Cunningham@tudelft.nl)
















FUNCTIONS OF INNOVATION SYSTEMS

Brief discussion

There are several definitions of innovation systems in the literature, all having the same scope
and derived from one of the first definitions (Freeman, 1987): systems of innovation are
networks of institutions, public or private, whose activities and interactions initiate, import,
modify, and diffuse new technologies.

The idea of the innovation systems approach is that innovation and diffusion are individual as
well as collective acts. This approach seeks to understand technological change through
insight into the dynamics of the innovation system.

A Technological Innovation System (TIS) is not bound by, e.g., a geographical area or an
industrial sector, but by a technology. It is defined as a network or networks of agents
interacting in a specific technology area under a particular institutional infrastructure to
generate, diffuse, and utilise technology.

A well-functioning TIS is a prerequisite for the technology in question to be developed and
widely diffused, but it is difficult to determine whether or not a TIS functions well. Therefore,
the factors that influence the overall function (the development, diffusion, and use of
innovation) need to be identified. Jacobsson and Johnson (2000) developed the concept of
system functions, where a system function is defined as a contribution of a component or a
set of components to a system's performance. They state that a TIS may be described and
analysed in terms of its functional pattern, i.e., how these functions have been served. The
functional pattern is mapped by studying the dynamics of each function separately as well as
the interactions between the functions. The system functions are related to the character of,
and the interaction between, the components of an innovation system, i.e., agents (e.g., firms
and other organisations), networks, and institutions, either specific to one TIS or shared
between a number of different systems.

Different lists of functions exist in the literature. The most frequently used list of functions is:
entrepreneurial activities, knowledge development, knowledge diffusion, guidance of the
search, market formation, resource mobilization, and counteracting resistance to change.

Applicability

This theory is applicable when investigating bottlenecks and facilitators for the development
and implementation of a certain technology, from a broad sociotechnical perspective. It can be
applied to a certain technology in a certain country or region, but also globally.

Pitfalls / debate

The theory is not yet fully developed, which means that the indicators for the functions can
differ between studies and therefore have to be specified in one's own study.
Furthermore, as mentioned above, different lists of functions are used in the literature.

Key references

Freeman, C. (1987). Technology, policy, and economic performance: Lessons from Japan.
London: Frances Pinter Publishers.
Bergek, A., Jacobsson, S., Carlsson, B., Lindmark, S., & Rickne, A. (2008). Analyzing the
functional dynamics of technological innovation systems: A scheme of analysis. Research
Policy, 37, 407-429.
Hekkert, M.P., Suurs, R.A.A., Negro, S.O., Kuhlmann, S., & Smits, R.E.H.M. (2007).
Functions of innovation systems: A new approach for analyzing technological change.
Technological Forecasting & Social Change, 74, 413-432.
Jacobsson, S., & Johnson, A. (2000). The diffusion of renewable energy technology: An
analytical framework and key issues for research. Energy Policy, 28(9), 625-640.

Key articles from TPM-researchers

Kamp, L.M. (2008). Sociotechnical analysis of the introduction of wind power in The
Netherlands and Denmark. International Journal of Environmental Technology and
Management, 9(2/3), 276-293.

Description of a typical TPM-problem that may be addressed using the theory / method

Why has PV technology (photovoltaic solar power) been developed and diffused more
successfully in Japan than in the Netherlands?

Related theories / methods

Strategic niche management

Editor

Linda Kamp (l.m.kamp@tudelft.nl)





















ICE FUNCTION THEORY

Brief discussion

The ICE theory describes the conditions under which agents (= people, in philosophical
language) may justifiably ascribe technical functions to entities like technical artefacts. It
combines elements of Intentional, Causal-role and Evolutionary accounts of technical
functions, explaining its name. The intentional element is that functions are features that are
ascribed by agents relative to their beliefs; functions are not physical properties had by
entities. The evolutionary element is that functions are ascribed relative to use plans of
entities, which have been developed by designers and which are reproduced, with possible
adaptations, by communication from designers to users and from users to fellow users. A use
plan of an entity is a series of actions for realising a goal, where the actions include
manipulations of the entity. The causal-role element is that functions refer to those capacities
of the entities that make the use plans of the entities successful in realising the plans' goals.
The core definition of the ICE theory reads:

An agent a justifiably ascribes the physicochemical capacity to φ as a function to an item x,
relative to a use plan up for x and relative to an account A, iff:
I.  a believes that x has the capacity to φ, and
    a believes that up leads to its goals due to, in part, x's capacity to φ;
C.  a can justify these beliefs on the basis of A; and
E.  a communicated up and testified these beliefs to other agents, or
    a received up and testimony that the designer d has these beliefs.

The ICE theory is the only account of technical functions that is explicitly derived from, and
tested against, desiderata originating from engineering.


Applicability

The ICE function theory is developed for capturing a precise meaning of the concept of
technical functions of engineered artefacts. It presupposes that designing can reasonably be
reconstructed as a process in which agents intentionally develop a use plan and describe (how
to make) the entities that are to be manipulated by the plan. And it presupposes that this use
plan is communicated on and on to other agents who count as potential users. These
presuppositions apply to engineered technical artefacts ranging from user products to
industrial installations. They also apply to artefacts made with less deliberate intentions,
ranging from artisan artefacts to ad-hoc contraptions like natural stones used as paper
weights. But for functional entities that lack clear points at which use plans are designed, the
core definition of the ICE theory is less applicable. For the functional analysis of accidents
(say, the electrical switch that functioned as a detonator in a gas explosion) the ICE theory is
adjusted. For useful material that lacks a well-defined moment of use plan development (say,
water that quenches thirst) a pure evolutionary analysis of functions is successful.

Pitfalls / debate

The ICE theory does not straightforwardly capture the way in which functions are used in
biology: by the ICE theory, functions are features that are ascribed by agents relative to their
beliefs, which contradicts the generally shared idea that biological functions are properties
had by biological items. This means that the ICE theory challenges the idea that there is one
overarching concept of function, and it is in principle open whether the ICE theory applies
outside the domain of technology. Second, the ICE theory is normative in that it gives the
conditions under which agents are justified in ascribing functions; the ICE theory need not
capture how functions are ascribed in practice.

Key references

/

Key articles from TPM-researchers

Vermaas, P.E., & Houkes, W. (2006). Technical functions: A drawbridge between the
intentional and structural nature of technical artefacts. Studies in History and Philosophy of
Science, 37, 5-18.
Houkes, W., & Vermaas, P.E. (2010). Technical functions: On the use and design of artefacts.
Dordrecht: Springer. (forthcoming).

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

The ICE function theory and the use plan analysis can support the analysis of problems about
the management and modeling of the design of artefacts, their use and the development of
their use in society.

Related theories / methods / concepts

Not applicable

Editor

Pieter Vermaas (p.e.vermaas@tudelft.nl)

















METAETHICAL THEORIES

Brief discussion

Metaethical theories try to answer the following questions: how do we know what is morally
right or wrong, good or bad, and are there objective moral truths or is morality just a social
construction? There have been innumerable answers to these questions in the history of
philosophy. Most of these answers can be grouped under two major positions: first, the one
that states that, concerning ethics, we cannot have objective knowledge (Humeanism: non-
cognitivism, expressivism, sentimentalism; anti-realism), and second, the position according
to which we understand objective moral principles through reason (Kantianism: cognitivism,
rationalism; realism).

The problem with the first position is how it can avoid relativism. However, although
relativism is often alluded to in discussions concerning multiculturalism etc., it is rarely
defended by philosophers. This means that even if philosophers are not realists, they assume
that there is a minimal standard that a moral position has to fulfill (e.g. consensus,
intersubjectively agreed values etc.). But if, by hypothesis, this minimal standard itself is not
objective, how then can it be a standard? Alternatively, if the minimal standard itself is taken
to be objective, then the position either collapses into realism after all, or it is a form of
reductionism, i.e. in the case of a non-moral objective standard (naturalism). The latter
position again has a hard time avoiding relativism (by equating 'is' and 'ought', or descriptive
and normative statements). Reductionism faces the problem of what Moore called the
'naturalistic fallacy'.

The second position avoids the pitfall of relativism by explicitly endorsing objective moral
standards. According to Kantians, those standards are constructed by principles of rationality.
Ethical intuitionists instead think that moral truths are not constructed but are part of the
world. They are connected to, but not reducible to natural phenomena. Intuitionism is a
combination of cognitivism (we can have moral beliefs that are either true or false), non-
reductive moral realism (there are objective, irreducible moral facts) and foundationalism (we
can have basic moral beliefs, i.e. beliefs that are not justificatorily based on other beliefs one
has).

Kantians and intuitionists are rationalists; they think that only reason can track objective
moral truths. But such an account does not do justice to our moral experience, which is
inherently emotional. My own version of intuitionism consists of a combination of
intuitionism with a cognitive theory of emotions, which I call 'affectual intuitionism'. We need
affective states to have justified basic moral beliefs. Moral intuitions are paradigmatically
moral emotions. These are, as it were, the basis for our moral perception.

(This text is an amended excerpt of the introduction of my forthcoming monograph: Roeser, S.
(2011). Moral emotions and intuitions. Basingstoke: Palgrave Macmillan.)

Applicability

In applied ethics, one can largely circumvent metaethical questions, but they can play a role in
the justification of normative-ethical approaches. For example, relativism is often seen by
non-philosophers as a justification for democratic procedures. But that is a misunderstanding:
relativism is inadequate for providing a justification of any kind of moral principles, since it
explicitly states that there are no morally better or worse practices. From a relativistic point of
view, democracy is just as arbitrary as appointing a dictator or throwing dice. Hence, one
needs a more substantial metaethical theory in order to justify participatory methods in
applied ethics. In that sense, metaethics is an important discipline in the background of doing
applied or normative ethics.

References for discussions of relativism:

Rachels, J. (1999). The challenge of cultural relativism. In J. Rachels (Ed.), The elements of
moral philosophy (3rd ed.). New York: Random House.
Wellman, C. (1963). The ethical implications of cultural relativity. The Journal of
Philosophy, 60(7), 169-184.
Moser, P. K., & Carson, T. L. (Eds.) (2001). Moral relativism: A reader. New York, Oxford:
Oxford University Press.

Pitfalls / debate

The metaethical debate is very complex and theoretical, with lots of nuances, subpositions,
and strawmen. It can be wise to remain neutral in one's metaethical position and to restrict
claims and discussion to purely normative-ethical or applied-ethical issues.

Key references

Website: http://plato.stanford.edu/entries/metaethics/
Miller, A. (2003). An introduction to contemporary metaethics. Blackwell.
Shafer-Landau, R., & Cuneo, T. (2007). Foundations of ethics: An anthology. Oxford
University Press.

Key articles from TPM-researchers

At this point, I am the only scholar at TPM/philosophy who engages in metaethical research, so
all references are to my work:
Roeser, S. (2009). Reid and moral emotions. Journal of Scottish Philosophy, 7, 177-192.
Roeser, S. (2006). A Particularist Epistemology: Affectual Intuitionism. Acta Analytica, 21
(1), 33-44.
Roeser, S. (2005). Intuitionism, moral truth, and tolerance. The Journal of Value Inquiry, 39,
75-87.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

A typical TPM-problem is how to deal with the risk perceptions of laypeople. I have applied
my theoretical framework of affectual intuitionism to the debate about risk perception and
argued that laypeople's intuitions and emotions about technological risks are not irrational, as
is commonly thought. Rather, they are indispensable sources of ethical insight. Cf. for
example the following articles:
Roeser, S. (2010). Intuitions, emotions and gut feelings in decisions about risks: Towards a
different interpretation of Neuroethics. The Journal of Risk Research, 13(2), 175-190.
Roeser, S. (2007). Ethical intuitions about risks. Safety Science Monitor, 11, 1-30.
Roeser, S. (2006). The role of emotions in judging the moral acceptability of risks. Safety
Science, 44, 689-700.

Related theories / methods / concepts

/

Editor

Sabine Roeser (S.Roeser@tudelft.nl)




































NORMATIVE ETHICAL THEORIES

Brief discussion

Normative ethics is that branch of ethics that examines the question of how we ought to act,
as opposed to Meta-Ethics, which examines the nature of ethical propositions, beliefs, and
language (ontology, epistemology, and semantics), Applied Ethics, which is concerned with
concrete ethical issues such as abortion, euthanasia, or engineering ethics, or Descriptive
Ethics, which looks to case studies, examples, history, and culture to devise descriptions of
what are considered to be ethical norms. There are a number of major, general schools of
normative ethics, including consequentialism, deontological ethics, virtue ethics, intuitionism,
and rights-based contractarian ethical/legal theories. Each of these major schools of thought
has in turn numerous approaches to solving ethical problems.

In Consequentialism, right action can be judged according to the relative increase or decrease
in either general pleasure, or according to rules that generally increase pleasure (Mill). In
Virtue ethics, rather than looking at acts, intentions, or outcomes, the development of
character traits through education is deemed to help build a more stable, just society
(Aristotle). In deontological (duty-based) ethics, our actions are guided by principles such as
Kant's categorical imperative that provide the basis for categories of obligations and duties
that we derive through reason (Kant). In contractarian, rights-based theories (Rawls),
ascribing to ethical norms is part of the implicit bargain struck among members of civil
societies. Intuitionists (Ross) state that there is a plurality of ethical principles that have to be
weighed on a case-by-case basis.

Applicability

Normative ethical theories may underlie attempts both to describe the nature of ethical
decision-making in practice, and to provide guidance for those faced with real-world ethical
dilemmas. In each of the schools mentioned, some over-arching principle is offered to give
guidance for decision-making, and each of these principles rests upon some comprehensible
axiom or set of premises. Deontology and consequentialism are monist theories: they claim
that there is one general principle with which we can solve all moral problems. The other
theories allow for a plurality of morally relevant considerations. Those involved in an ethical
problem may rely upon the reasoning of various normative theories to reach generally
acceptable conclusions.

Pitfalls / debate

All normative ethical theories depend upon certain axioms, which one must accept in order
to abide by the application of that theory to a particular problem. Moreover, most
ethical decisions taken in the real world are not made in strict accordance with any one,
particular ethical theory. Rather, in practice, even trained ethicists use a combination of
theories to make or justify making decisions regarding ethical dilemmas. Hence, our practice
of ethical decision making probably comes closest to virtue ethics and intuitionism.

Key references

Aristotle, Nicomachean ethics. Ross, W. D. (trans.). (1980). Oxford: Oxford University Press.
Kant, I. Groundwork of the Metaphysic of Morals. Paton, H. J. (trans.). New York: Harper
and Row, 1964. Originally in Kant, I. (1911). Gesammelte Werke. Berlin: Akademie Verlag.
Mill, J. S. (1910). Utilitarianism, on liberty and considerations on representative government.
Everyman Library, London: J. M. Dent and Sons.
Rawls, J. (1972). A theory of justice. Oxford: Oxford University Press.
Ross, W.D. (1967) (1930). The Right and the Good. Oxford: Clarendon Press.

Key articles from TPM-researchers

Koepsell, D. (2010). On genies and bottles: Scientists' moral responsibility and dangerous
technology R&D. Science and Engineering Ethics. Science and Engineering Ethics, 16(1),
119-133
Koepsell, D. (2009). Who owns you? The corporate gold rush to patent your genes. UK:
Wiley-Blackwell.
Roeser, S. (2010). Introduction: Thomas Reids moral philosophy. In S. Roeser (Ed.), Reid on
Ethics, Series Philosophers in Depth, Basingstoke: Palgrave Macmillan.
Roeser, S. (2007). Ethical intuitions about risks. Safety Science Monitor, 11, 1-30.
Van de Poel, I. (2001). Investigating ethical issues in engineering design. Science and
Engineering Ethics, 7(3), 429-446.
Van de Poel, I., & Royakkers, L. (2007). The ethical cycle. Journal of Business Ethics, 71(1),
1-13.
Van den Hoven, J., & Weckert, J. (Eds.) (2008). Information technology and moral
philosophy, Cambridge studies in philosophy and public policy. Cambridge: Cambridge
University Press. Available online.
Van den Hoven, M.J. (2007). Nanotechnology and privacy: The instructive case of RFID. In
F. Allhoff, P. Lin, J. Moor, & J. Weckert (Eds.), Nanoethics: The ethical and social
implications of nanotechnology. New York: Wiley.
Zwart, S. D., Van de Poel, I., Van Mil, H., & Brumsen, M. (2006). A network approach for
distinguishing ethical issues in research and development. Science and Engineering Ethics
12(4), 663-684.
Zandvoort, H (2007). Preparing engineers for social responsibility: Report of the TREE
special interest group D6 ethical issues in engineering education. In C. Borri, & F. Maffioli
(Eds.), Re-engineering Engineering Education in Europe. Firenze: Firenze University Press.
ISBN 978-88-8453-675-4. Available online.
Zandvoort, H. (2005). Globalisation, environmental harm, and progress. The role of
consensus and liability. Water Science and Technology, 52(6), 43-50.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

/

Related theories / methods / concepts

/

Editors

David Koepsell (D.R.Koepsell@tudelft.nl)
Sabine Roeser (S.Roeser@tudelft.nl)
Ibo van de Poel (I.R.vandePoel@tudelft.nl)












































ORGANISATIONAL LEARNING

Brief discussion

"It may be said that an organisation will learn
when its members learn for it, carrying out on its behalf a process of inquiry that results in a
learning product... Inquiry does not become organizational unless undertaken by individuals
who function as agents of an organization according to its prevailing roles and rules."
(Argyris & Schön, 1996, p. 11)

Learning by organisations cannot be taken for granted, because organisations can only learn
through people. The figure below depicts the key processes and actions needed to invoke and
maintain organisational learning. Argyris emphasises that learning should be embedded in the
whole organisation as a part of its normal operation; it is not an add-on extra.


Figure: Organisational single- and double-loop learning, modified after Argyris (Koornneef,
2000)

Argyris and Schön have identified three modes of organisational learning: Organisational
Single Loop and Double Loop Learning (OSLL and ODLL, respectively), and deuterolearning,
the latter aimed at improving the OSLL and ODLL processes. Furthermore, they introduced the
Theory of Action, consisting of espoused (explicit) theory and theory-in-use, the latter being the
tacit knowledge in the heads of the humans who run the organisation's operations.

Applicability

Learning is key to an organisation's viability. In terms of safety, there must be an intimate
link between the risk assessment process, which specifies what hazard scenarios there are; the
management process, which establishes control strategies and practices for them; the
operational process, which carries them out; and the learning process, which evaluates,
improves and fine-tunes these controls. For maintaining the organisation's viability, learning
from operations and its operational surprises is essential for fostering learning from the
outside world about threats and challenges, in order to adapt to and survive in the changing
world.

The principles of Organisational Learning apply to any organisation that lasts, or intends to
survive, for a longer period of time. Organisational Learning must be embedded into normal
operations, and its elements and functions need to be designed and implemented.

Pitfalls / debate

Organisational learning is an essentially human process: an organisation learns through
people. Culture is an important aspect and also relates to the maturity of an organisation. The
effectiveness of learning remains low when lessons are retained but cannot be retrieved for
(re-)use in a timely way. Designing an effective OL system requires a 'white box' approach to
make it work; in research programmes, this also implies an action research methodology.

A black box view on organisations may reveal characteristics of learning organisations, e.g.,
as identified by Senge (1990), but these characteristics will not inform the organisation about
what it takes to design and implement its Organisational Learning system.

Key references

Argyris, C., & Schön, D. A. (1996). Organizational learning II: Theory, method, and practice.
Amsterdam: Addison-Wesley.
Bateson, G. (1972). Steps to an ecology of mind. San Francisco, USA: Chandler Publishing
Co.
Beer, S. (1979). The heart of enterprise. Chichester, UK: John Wiley & Sons.
Senge, P.M. (1990). The fifth discipline: The art and practice of the learning organization.
New York, USA: Doubleday.
Stein, E. (1995). Organizational memory: Review of concepts and recommendations for
management. International Journal of Information Management, 15(1), 17-32.
Weick, K.E., & Sutcliffe, K.M. (2007). Managing the unexpected: Resilient performance in
an age of uncertainty (2nd edition). San Francisco, USA: Jossey-Bass.

Key articles from TPM-researchers

Guldenmund, F. (2000). The nature of safety culture: A review of theory and research. Safety
Science, 34(1-3), 215-257.
Koornneef, F. (2000). Organised learning from small-scale incidents (PhD thesis). Delft
University Press.
Koornneef, F., Hale, A.R., & van Dijk, W. (2005). Critical assessment of the organisational
learning system of the fire service in response to fatal accidents to firemen. In ESREL 2005 -
Advances in Safety and Reliability. Tri City, Poland.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Organisational Learning is a core issue in accommodating the growth of air traffic over the
next few decades while enhancing aviation safety at the global sector level.

Related theories / methods / concepts

Organisational Memory, Learning Organisations, Mindfulness (incl. Resilience), Systems
Theory, including the Viable System Model, Inquiry, Theory-of-Action, SOL (audit) model,
Safety Management System (functions, methods), Operational Readiness, MORT.

Editor
Floor Koornneef (f.koornneef@tudelft.nl)







































POLICY LEARNING IN MULTI-ACTOR SYSTEMS
(also known as: policy-oriented learning, participatory policy learning, evaluation in multi-actor policy processes)


Brief discussion

Complexity, uncertainty and the actor-network characteristics of policy making feed an
increasingly felt need for resilience and adaptiveness in policy processes. This also implies a
need for a capability to learn from experience, including policy experiences. What policies
work, what policies do not work, and, most importantly: why? Traditionally, this question has
been asked by evaluators, who have addressed it through notions of theory-based or
realistic evaluation. At the same time, recent work has shown that the involvement of
multiple actors in policy processes poses additional challenges to evaluations. For instance:
who is to participate, why, and who is to define the agenda and scope of an evaluation? Also,
evaluations are used in policy processes for two main purposes: for learning, but also for
accountability (did agencies do what was agreed, and did they do so efficiently?). However,
there is known to be a tension in evaluations between accountability and learning; evaluations
that focus on accountability tend to limit learning potential, and in current practice a focus
on accountability is predominant. Hence there is a need for new analytic approaches that
support policy learning in multi-actor systems.

Applicability

Participatory policy learning is applicable where multiple actors are involved in policy
formulation and implementation, and where policy systems are characterized by uncertainty.
The focus so far has been on environmental policy and water policy.

Pitfalls / debate

The approach for policy learning in multi-actor systems is still under development. It builds
on analytic concepts and methods from theory-based or theory-driven evaluations, actor
analysis and adaptive policy analysis, while it incorporates insights from the literature on
performance management. However, it is not a tested methodology; so far, experience is
limited to a project for a local water board in the Netherlands.

Key references

Key references would be in the various constituting bodies of literature mentioned above.
Some specific examples from non-TPM researchers:

Gysen, J., Bruyninckx, H., & Bachus, K. (2006). The modus narrandi: A methodology for
evaluating effects of environmental policy. Evaluation, 12(1), 95-118.
Patton, M.Q. (1997). Utilization-focused evaluation. (Third edition). Thousand Oaks: SAGE.
Pawson, R., & Tilley, N. (1997). Realistic evaluation. London: SAGE Publications Ltd.
Van der Knaap, P. (2004). Theory-based evaluation and learning: Possibilities and challenges.
Evaluation, 10(1), 16-34.
Van der Meer, F.-B., & Edelenbos, J. (2006). Evaluation in multi-actor policy processes:
Accountability, learning and co-operation. Evaluation, 12(2), 201-218.


Key articles from TPM-researchers

Articles on this specific approach have not yet appeared in international academic journals,
but the papers below are publicly available and have been peer-reviewed at conference level:

Hermans, L.M. (2007). The puzzle of policy learning in water resources management. In N.
van de Giesen, X. Jun, D. Rosjberg, & Y. Fukushima (Eds.), Changes in water resources
systems: Methodologies to maintain water security and ensure integrated management (pp.
276-283). Wallingford: IAHS Publication 315.
Hermans, L.M. (2008). Towards a framework for sustainable development evaluations in
multi-actor systems. EASY-ECO Conference: Governance by Evaluation: Institutional
Capacities and Learning for Sustainable Development, Vienna, 11-14 March 2008, pp. 14
[online] URL: http://www.wu-wien.ac.at/inst/fsnu/vienna/papers/hermans.pdf
Hermans, L.M., & Muluk, C.B. (2008). Facilitating policy-oriented learning about the multi-
actor dimension of water governance. Proceedings of 13th IWRA World Water Congress
2008, Montpellier, 14 September, 2008, pp.15.
Hermans, L.M. (2009). A paradox of policy learning: Evaluation, learning and accountability.
Paper prepared for Workshop 30: the Politics of Policy Appraisal. ECPR Joint Sessions of
workshops. Lisbon, Portugal, 14- 19 April 2009, pp.22.

The articles below concern adjacent areas of policy learning in multi-actor systems
(institutional transplantation and performance management):

De Jong, M., Lalenis, K., & Mamadouh, V. (2002). Institutional transplantation. The
Netherlands: Kluwer Academic publishers.
De Bruijn, H. (2008). Managing performance in the public sector. London: Routledge.

Description of a typical TPM-problem that may be addressed using the theory / method

Currently, the ideas in this approach are being applied to the development of an evaluation
framework for the implementation of the Water Framework Directive in the area of Delfland.
The purpose is to design a framework that will help the various parties involved learn about
the impacts and usefulness of the policy measures, aimed at improving ecosystem status in
the regional water bodies, as they are being implemented; the planning horizon is long,
knowledge about the ecological processes in water bodies is limited, and many different
actors are involved.

Related theories / methods

Policy/program evaluation, Actor analysis, Cognitive mapping, Lesson-drawing, Policy
transplantation and policy extrapolation, Performance management, Adaptive governance.

Editor

Leon Hermans (l.m.hermans@tudelft.nl)


PRINCIPAL AGENT THEORY

Brief discussion

In Principal Agent theory (agency theory: AT), the contractual relationship between the
principal (the owner, authority, or sovereign) and the agent (the actor that executes) is
designed (normative AT) and described (positive AT). In case of information asymmetry and
different objectives, the agency problem arises (the agent does not behave according to the
objectives of the principal), and through contracting (resulting in monitoring, control, and
organisation) the problem can be mitigated. At the level of the individuals in the model, the
notion of maximising individuals is maintained: ex ante, the agents are able to calculate the
optimal bonding and monitoring arrangements which align the objectives of the principal and
the agent.
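
As a minimal numeric illustration of this alignment logic (the quadratic effort-cost function and all parameter values below are assumptions made for the sketch, not taken from the AT literature cited here), consider a risk-neutral agent paid under a linear contract w = a + b*e:

    # Minimal sketch of a linear principal-agent contract (illustrative
    # assumptions: output equals effort, agent's cost of effort is c*e^2,
    # both parties risk-neutral).
    import numpy as np

    c = 0.5  # agent's effort-cost parameter (assumed)

    def agent_effort(b):
        # Agent maximises b*e - c*e**2; first-order condition gives e = b/(2c)
        return b / (2 * c)

    def principal_profit(b):
        # Principal keeps output minus the performance payment: (1 - b) * e
        e = agent_effort(b)
        return (1 - b) * e

    # Grid search over the piece rate b: with no incentive pay (b = 0) the
    # agent exerts zero effort (the agency problem); the optimum is interior.
    grid = np.linspace(0, 1, 101)
    best = grid[np.argmax([principal_profit(b) for b in grid])]
    print(f"piece rate b = {best:.2f}, induced effort = {agent_effort(best):.2f}")

With a fixed wage the agent exerts no effort; an interior piece rate partially aligns the agent's objective with the principal's, which is the contracting solution the theory describes.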

Applicability

AT is applicable to any situation where the issue of an optimal contract arises between
actors with different objectives and asymmetrical information. Because the theory is
constructed in the mainstream economic fashion, its applicability is limited to issues of
optimisation under constraints. When the step is made towards more realistic situations
in which maximising is questionable, more specific conditions of time and place
(contextualisation) are needed.

Pitfalls / debate

AT is an ex ante approach based on methodological individualism. Much debate exists
with ex post approaches, like TCE, in which optimal governance structures are selected
afterwards. Much debate also exists about the operationalisation and contextualisation of the
concepts: to what extent is information asymmetric, and how large is the difference between
the objectives? In general, there is much debate about the applicability of AT to real-world
situations.

Key references

Jensen, M., & Meckling, W. (1976). Theory of the firm: Managerial behavior, agency costs
and capital structure. Journal of Financial Economics, 3(4), 305-360.

Key articles from TPM-researchers

Groenewegen, J., Spithoven, A., & Van den Berg, A. (2010). Institutional economics: An
introduction. New York: Palgrave.

Description of a typical TPM-problem that may be addressed using the theory / method

In public vertically integrated firms there is the AT problem between the ministry and the
state owned enterprise. After privatisation there is the regulation problem between the
regulator and the firm. Inside the firm there is the AT issue between the owners (shareholders)
and the agents (management).

Related theories / methods

Transaction cost economics, Theory of property rights.

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)












































PROCESS MANAGEMENT

Brief discussion

One of TPM's key concepts is multi-actor decision making. Decision making is a process in
which many actors perform. These actors have different interests and different perceptions of
problems and solutions. There is no hierarchical structure; rather, the actors are embedded in a
network-like structure, meaning that there are many interdependencies between them.

As a consequence, many traditional management styles will not work. Hierarchical,
command-and-control-like strategies will not work, because no one is in charge and everyone
depends upon everyone else. Content-oriented strategies will not work, since there is no objective
problem definition: each actor will have its own definition of the real world. A linear, project-
managerial approach will not work, because decision making processes in networks are always
erratic and irregular.

Process management means that a problem owner tries to manage the decision making by
organizing a process of interaction between the main stakeholders. These stakeholders
participate voluntarily (versus command and control), and there is no well-defined and
delineated problem at the start of the process. Problem definitions and solutions are the result
of the process (versus content). There is no strict planning (versus project management), but
there is a set of rules of the game, and the planning will emerge during the process.

Since process management does not rely upon power, expertise or project-managerial tools,
other mechanisms are needed to gain the commitment of the stakeholders, a commitment
to both the process and its results. Key questions that are answered in the theory
of process management are:
- How to make the right selection of stakeholders?
- How to commit them to a process? How to make a process attractive to each of them?
- How to develop the rules of the game of such a process?
- What is the role of a process manager and who should take that role?
- How to set the agenda in a process and how to make decisions in a process?
- How to protect the core values of the actors involved?
- How to speed up a process and prevent it from becoming too sluggish?
- What is the role of experts in a process?

The theory of process management answers these questions. It is embedded in a
broad range of theories, e.g. on interactive decision making, negotiation, and game theory;
it brings notions from these theories together and applies them to real-world situations.

Applicability

Applicable in the following situations:
- multi-actor, multi-level decision making;
- unstructured problems and contested information;
- a dynamic environment.

We apply this theory in engineering contexts, where many decision making models are rather
linear: a decision making process should start with a problem, followed by setting a goal,
gathering information, making a decision and implementing this decision. In the real world,
this linearity does not exist, and process-managerial approaches are much more powerful.

Pitfalls / debate

What are the pitfalls of this theory? First, too strong a focus on the process of decision
making, ignoring the content of the issues at stake. This is what we see in the real world as
well: the pressure to reach consensus or to reach a decision is so strong that actors ignore
information that is key to making a good decision.

Second, a process-managerial approach might be implemented in too static a way. Actors
change, their power and perceptions change, their preferences change, their strategies change,
and they learn to adapt themselves to the changed behaviour of others. This makes it
impossible to capture all the complexity of a multi-actor setting ex ante. Those who try to do
so will probably develop too static an approach.

Third, a process-managerial approach might be an incentive for very slow and sluggish
decision making.

Key references

De Bruijn, H., & Ten Heuvelhof, E. (2009). Management in networks. London: Routledge.
Klijn, E. H., & Koppenjan, J. (2009). Managing uncertainties in networks. London:
Routledge.

Key articles from TPM-researchers

De Bruijn, H., & Ten Heuvelhof, E. (2009). Management in networks. London: Routledge.
Klijn, E. H., & Koppenjan, J. (2009). Managing uncertainties in networks. London:
Routledge.

Description of a typical TPM-problem that may be addressed using the theory / method

Rotterdam wants to construct a new industrial site by reclaiming land from the sea. The total
costs are 5 billion euro. It needs the support of the national government and several regional
actors for both the permits and the funds. Many of these stakeholders do not agree with the
Rotterdam initiative and are trying to block it. Rotterdam has asked external consultancies to
analyze both the economic and the environmental impact of the new site. Many stakeholders
state that they do not agree with the findings of these consultancies: not with the data, the
methods, or the system boundaries, and therefore not with the results.

This results in a stalemate, and the issue then is that a process has to be designed that will
bring these parties together to reach a decision on this project.

Related theories / methods

Interactive decision making, Game theory, Negotiation theory

Editors

Hans de Bruijn (J.A.deBruijn@tudelft.nl)
Joop Koppenjan (J.F.M.Koppenjan@tudelft.nl)















































PUNCTUATED EQUILIBRIUM THEORY IN PUBLIC POLICY

Brief discussion

In the policy sciences in particular, the issue has been raised how 'waves of enthusiasm'
sweep through the political system as political actors become convinced of the value of some
new policy, suggesting punctuated equilibrium dynamics (Bak and Sneppen 1993) in policy
development. Baumgartner and Jones (1993) explain these punctuations as the result of
feedback between policy venue and policy image. Punctuated equilibrium theory in public
policy has much in common with a class of revolutionary change theories based on the
punctuated equilibrium paradigm postulated by Eldredge and Gould (Gould and Eldredge
1977; Eldredge and Gould 1972). The punctuated equilibrium paradigm is used in different
strands of the social sciences. With reference to Eldredge and Gould and explicit use of the
concept of punctuated equilibrium, Castells (1996) analyses the rise of the network society at
the end of the twentieth century as 'one of these rare intervals in history'; similarly,
Baumgartner and Jones (Jones and Baumgartner 2005; Baumgartner and Jones 1993) apply the
concept of punctuated equilibrium to explain the alternation of long periods of stability and
rapid change in American policy domains. Theoretical results from complexity research and
empirical results from social network analysis suggest that social dynamics and network
structure are intimately related (Watts 2002; Whitmeyer and Yeingst 2006). Gersick (1991)
confronts the Darwinian concept of incremental cumulative change with the punctuated
equilibrium paradigm by exploring six empirically derived theories from psychology,
group dynamics, organization science, biology and self-organizing systems.

Pitfalls / debate

Where Gersick infers that punctuated equilibrium dynamics are related to a deep structure
that stabilizes the system in periods of equilibrium and is re-structured in periods of revolutionary
change, Bak and Sneppen relate punctuated equilibrium to self-organized criticality (SOC),
signified by power-law distributions of the outputs of these systems (Bak et al. 1987; Bak and
Sneppen 1993). SOC and power-law distributions have since been related to many natural
phenomena, like earthquakes, sand and rice pile dynamics, superconduction and droplet
formation (Field et al. 1995; Plourde et al. 1993; Held et al. 1990; Jaeger et al. 1989), while
punctuated equilibrium and deep structure approaches mainly continued in the social and
management sciences (Gersick 1991). Because of these diverging traditions, the relation
between SOC and deep structure with respect to punctuated equilibrium has remained
unspecified, except for some recent contributions in social network analysis, where SOC and
power laws are found to characterize empirical and simulated social networks (Whitmeyer
and Yeingst 2006; Albert and Barabási 2002; Watts 2002).
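
To see the punctuated dynamics that Bak and Sneppen describe, the following is a minimal sketch of their evolution model in Python (the number of sites and the run length are assumptions made for the sketch):

    # Minimal sketch of the Bak-Sneppen model (Bak & Sneppen 1993):
    # long quiet periods punctuated by avalanches of change.
    import random

    N = 200                                       # number of sites (assumed)
    fitness = [random.random() for _ in range(N)]
    minima = []                                   # running minimum fitness

    for step in range(100000):
        i = min(range(N), key=lambda k: fitness[k])   # locate least-fit site
        for j in ((i - 1) % N, i, (i + 1) % N):       # renew it and its neighbours
            fitness[j] = random.random()
        minima.append(min(fitness))

After a transient, the running minimum stays below a critical threshold (roughly 0.67 in this one-dimensional version), and activity comes in avalanches whose sizes follow a power law, the SOC signature referred to above.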

Key references

Baumgartner, F., & Jones, B. (1993). Agendas and instability in American politics. Chicago:
The University of Chicago Press.
Baumgartner, F. R., Berry, J. M., Hojnacki, M., Kimball, D. C., & Leech, B. L. (2009).
Lobbying and policy change: Who wins, who loses, and why. Chicago: The University of
Chicago Press.
Baumgartner, F. R., Breunig, C., Green-Pedersen, C., Jones, B. D., Mortensen, P. B.,
Nuytemans, M., & Walgrave, S. (2009). Punctuated equilibrium in comparative perspective.
American Journal of Political Science, 53, 603-620.
Baumgartner, F. R., & Jones, B. D. (2002). Policy dynamics. Chicago: The University of
Chicago Press.
Jones, B. D., & Baumgartner, F. R. (2005). A model of choice for public policy. Journal of
Public Administration Research and Theory, 15, 325-351.
Jones, B. D., & Baumgartner, F. R. (2005). The politics of attention: How government
prioritizes problems. Chicago: The University of Chicago Press.
Jones, B. D., Baumgartner, F. R., & True, J. L. (1998). Policy punctuations: U.S. budget
authority, 1947-1995. The Journal of Politics, 60, 1-33.

Key articles from TPM-researchers

Timmermans, J. (2006). Complex dynamics in a transactional model of societal transitions.
InterJournal.
Timmermans, J. (2008). Punctuated equilibrium in a non-linear system of action.
Computational & Mathematical Organization Theory, 14, 350-375.
Timmermans, J., de Haan, H., & Squazzoni, F. (2008). Computational and mathematical
approaches to societal transitions. Computational & Mathematical Organization Theory, 14,
391-414.

Description of a typical TPM-problem that may be addressed using the theory / method

Policy change, transitions

Related theories / methods

Complex systems literature, especially punctuated equilibrium theory. See for example

Bak, P., & Sneppen, K. (1993). Punctuated equilibrium and criticality in a simple model of
evolution. Physical Review Letters, 71, 4083.
Bak, P. (1996). How nature works: The science of self-Organized criticality. Copernicus
Books.
Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and
biological systems. Cambridge University Press.
Gersick, C. J. G. (1991). Revolutionary change theories: A multilevel exploration of the
punctuated equilibrium paradigm. Academy of Management Review, 16, 10-36.

The agenda setting literature. Especially Kingdon.

Kingdon, J. W. (1984). Agendas, alternatives and public policies. New York: Longman,
imprint of Addison Wesley Longman Inc.
Kingdon, J. W. (1994). Agendas, ideas, and policy change (2nd edn.). New York: Harper
Collins.

The literature on policy entrepreneurship. Especially Mintrom.

Levin, M. A., & Sanger, M.B. (1994). Making government work: How entrepreneurial
executives turn bright ideas into real results. San Francisco: Jossey-Bass.
Mintrom, M. (1997). Policy entrepreneurs and the diffusion of innovation. American Journal
of Political Science, 41, 738-770.
Mintrom, M., & Vergari, S. (1996). Advocacy coalitions, policy entrepreneurs, and policy
change. Policy Studies Journal, 24, 420-434.
Roberts, N. C., & King, P. J. (1996). Transforming public policy: Dynamics of policy
entrepreneurship and innovation. San Francisco, Calif.: Jossey-Bass Publishers.

Editor

Jos Timmermans (j.s.timmermans@tudelft.nl)



































RANDOM UTILITY MAXIMIZATION-THEORY (RUM-THEORY)

Brief discussion

Random Utility Maximization-Theory underlies the majority of Discrete Choice models
(these models in turn are used to estimate preferences of individuals and predict market shares
of products, or of choice options in general). RUM implies that the utility of a choice alternative
is composed of an observed and an unobserved part. The former is mostly specified as a
linear-in-parameters function of observed attributes of the alternatives (the parameters
represent tastes, or the importance of attributes). The unobserved or random part follows a
specific probability distribution. The chosen distribution determines the resulting choice
probabilities for the different alternatives, conditional on a set of parameter values. These
choice probabilities play a crucial role in estimating (calibrating) the model, as well as in
using the model to make predictions of market shares. Model estimation on observed choices
is done by means of Maximum Likelihood Estimation.

The most well-known RUM-based Discrete Choice-model is the Multinomial Logit or MNL-
model, for which Daniel McFadden received the 2000 Nobel Prize in Economic Sciences.
Although recent years have witnessed a large increase in the development of models that relax
the sometimes unrealistic assumptions underlying the probability distribution of the
unobserved utility as postulated by the MNL-model, the MNL model is still the most widely
applied Discrete Choice model, and can be formalized as follows:

We assume that an individual $n$ chooses alternative $i$ from a choice set $C$ by maximizing
random utility $U_i^n$. This random utility is composed of a deterministic part $V_i$ and a
random error $\varepsilon_i^n$. The deterministic utility in turn is a linear function of
parameters (tastes) $\beta_1, \beta_2, \ldots$ and attribute values $x_{i1}, x_{i2}, \ldots$
(e.g. a route's travel time and cost, etc.): $V_i = \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots$.
The random error varies across individuals and alternatives and is Extreme Value Type I
distributed with variance $\pi^2 / 6$ (this distribution resembles the normal distribution but
has slightly fatter tails). In notation:

$U_i^n = V_i + \varepsilon_i^n = \beta_1 x_{i1} + \beta_2 x_{i2} + \ldots + \varepsilon_i^n$.

McFadden (1974) showed that, given these distributional assumptions, the probability $P_i^n$
that an alternative $i$ with a particular deterministic utility $V_i$ is chosen by individual $n$
can be written using the following equation:

$P_i^n = \exp(V_i) / \sum_{j \in C} \exp(V_j)$.

By comparing these predicted choice probabilities with observed choices (mostly done using
the Maximum Likelihood method), values for $\beta_1, \beta_2, \ldots$ can be estimated. These
parameters, and the market share predictions they generate, are crucial for the design of
products and services that are successful in terms of attaining a maximum market share.
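
To make the estimation step concrete, the following is a minimal sketch in Python (synthetic data; the sample size, attributes and 'true' parameter values are assumptions made for illustration, and in practice a dedicated package such as Biogeme would typically be used):

    # Minimal sketch of MNL estimation by maximum likelihood on simulated data.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    rng = np.random.default_rng(0)
    N, J = 1000, 3                       # individuals and alternatives (assumed)
    X = rng.normal(size=(N, J, 2))       # two observed attributes per alternative
    beta_true = np.array([-1.0, 0.5])    # 'true' tastes used to simulate choices

    # Simulate choices: add Extreme Value Type I (Gumbel) errors, pick max utility
    U = X @ beta_true + rng.gumbel(size=(N, J))
    y = U.argmax(axis=1)

    def neg_log_likelihood(beta):
        V = X @ beta                                     # deterministic utilities
        logP = V - logsumexp(V, axis=1, keepdims=True)   # log of the MNL formula
        return -logP[np.arange(N), y].sum()

    result = minimize(neg_log_likelihood, x0=np.zeros(2))
    print(result.x)                      # estimates should be close to beta_true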

Applicability

Any situation where 1) choices of individuals need to be understood, and 2) data on observed
choice-behaviour is available in combination with knowledge concerning the characteristics
of choice-options.

Pitfalls / debate

RUM-theory is often considered to be a normative, rather than a descriptive, theory of
behaviour. As a result, alternative theories such as the editor's Random Regret
Minimization-Theory have been proposed to more closely mimic actual behaviour.

Key references

McFadden, D. (1974). Conditional logit analysis of qualitative choice-behaviour. In P.
Zarembka (Ed.), Frontiers in econometrics (pp.105-142). New York: Academic Press.
Train, K.E. (2003). Discrete choice methods with simulations. Cambridge UK: Cambridge
University Press.

Key articles from TPM-researchers

Chorus, C.G., & Timmermans, H.J.P. (2008). Revealing consumer preferences by observing
information search. Journal of Choice Modelling, 1(1), 3-25.
Chorus, C.G., Arentze, T.A., & Timmermans, H.J.P. (2008). A Random Regret Minimization
model of travel choice. Transportation Research Part B, 42(1), 1-18.

Description of a typical TPM-problem that may be addressed using the theory / method

Take the example of road-pricing: whether or not to introduce road pricing depends crucially
on how travellers respond to different pricing schemes in terms of adapting their
mode/route/departure time-choices. Such responses can be understood and predicted by
means of estimation and simulation of RUM-theory based discrete choice models.

Related theories / methods

Stated Choice, Conjoint analysis, Random Regret Minimization-Theory.

Editor

Caspar Chorus (c.g.chorus@tudelft.nl)





SOCIO-TECHNICAL SYSTEM-BUILDING

Brief discussion

A key concept in understanding the interaction of science, technology and society is that of
the large technological system or socio-technical system. Such a system is understood to
comprise a complex network of technical artefacts and all related social structures, characterised
by internal integration and external adaptation and aiming at fulfilling social needs. Hughes
(1983, 1987; see also his 1998 publication; compare e.g. Summerton, 1994) gives content and
shape to this concept in his study on electrification in Western society. He distinguishes a
number of phases in the building of a large technological system, notably the phases of
invention, development, innovation and expansion, and identifies corresponding types of system
builders: inventors, inventor-entrepreneurs, manager-entrepreneurs and financier-entrepreneurs.
He also formulates certain development mechanisms, one of which relates to critical problems or
'reverse salients' (a term derived from warfare) of a technical or other nature, which have to be
solved before a technical system can grow further. Another notion is the impact of 'momentum',
meaning that the technical and organisational components of a system give it 'mass' or
substance in such a way that it generates its own direction and rate of growth. The momentum
mechanism says something about the relationship between a large technical system and its
environment. Hughes suggests that in the initial period of system building the effect that the
environment has on the system and its technological core is more substantial than the reverse,
while later on the opposite holds true. A system characterised by momentum seems to be
autonomous, but Hughes holds that, ultimately, a socio-technical system is maintained and made
to grow because of the social entities involved.

The large technological system concept has been applied in many technical fields, including state-
controlled activities (see e.g. Schot et al., 1998) and large water-based systems (Ravesteijn et al.,
2002).

Applicability

This theory is applicable in the case of large-scale complexes of technologies, social
institutions and cultural values. Underlying assumption is that technology is embedded in
society, in which agency is the mediating link, and that socio-technical complexes are the
relevant unit of analysis rather than technical artefacts or technical knowledge. This
assumption seems to be plausible in modern society from the end of the 19th century, but
earlier technical development might be considered as such as well. Another assumption is that
systems can be distinguished and system borders defined, which, however, is sometimes quite
arbitrary; the systems concept is a tool of analysis and as such a theoretical construct (not
covering an objective part of reality). In comparison with other theories of development
(technology push, market pull, and social construction of technology), it connects
technological and social developments as well as actors and factors. A drawback might be
that steering possibilities are considered to be restricted, so that responsibility issues arise.

Pitfalls / debate

When it comes to system innovation, the theory focuses on gradual processes of change and
transformation, in which internal dynamics play an important part. Radical changes or
leapfrogging are difficult to deal with in cases of system-building, and the possibilities of
intervening on behalf of radical system innovation, e.g. through transition management, are
considered to be limited. Consequently, system-building might be difficult to reconcile with
radical sustainable development goals.

Key references

Hughes, T.P. (1983). Networks of power: Electrification in western society, 1880-1930.
Baltimore: Johns Hopkins University Press.
Hughes, T.P. (1987). The evolution of large technological systems. In W.E. Bijker, T.P.
Hughes, & T.J. Pinch (Eds.), The social construction of technological systems: New
directions in the sociology and history of technology (pp.51-82). Cambridge, Mass. / London,
England: MIT Press.
Hughes, T.P. (1998). Rescuing prometheus. New York: Pantheon Books.
Schot, J.W., et al. (Eds.) (1998). Techniek in Nederland in de twintigste eeuw. I: Techniek in
ontwikkeling, waterstaat, kantoor en informatietechnologie. Zutphen: Walburg Pers.

Key articles from TPM-researchers

Ravesteijn, W., Hermans, L., & Van der Vleuten, E. (2002). Water systems. Special Issue of
Knowledge, Technology & Policy, 14(4).

Description of a typical TPM-problem that may be addressed using the theory / method

How could socio-technical systems be changed in relation to policy goals, given their
relatively autonomous development dynamics and their embedding in the environment?

Related theories / methods

Social Construction of Technology (SCOT), Strategic Niche Management, Innovation
systems, Backcasting, Transition Management, Creating Multi-functional Systems: multi-
source, multi-product.

Editor

Wim Ravesteijn (w.ravesteijn@tudelft.nl)















SYSTEM AND CONTROL THEORY

Brief discussion

System and Control theory is an interdisciplinary branch of engineering and mathematics
dealing with the behaviour and steering of dynamical systems. It studies the principles
underlying the modelling (mostly by employing differential or difference equations) and
analysis of dynamic systems, in order to control them by influencing the system behaviour so
as to achieve a desired goal.

It includes, among others, studies of controllability, observability and stability, as well as
dynamic feedback, Kalman filtering, etc. Main control strategies include the following (a
minimal feedback sketch follows the list):
- Adaptive control
- Intelligent control (using neural networks, fuzzy logic, evolutionary computation and
genetic algorithms to control a dynamic system)
- Optimal control including Model Predictive Control
- Robust control (dealing explicitly with uncertainty)
- Stochastic control
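
As a minimal illustration of feedback and closed-loop stability (all numbers below are assumptions chosen for the sketch), consider a scalar discrete-time system in Python:

    # Minimal sketch of state feedback stabilising a discrete-time system
    # x[k+1] = a*x[k] + b*u[k] (all values are illustrative assumptions).
    a, b = 1.2, 1.0          # open-loop system is unstable (|a| > 1)
    K = 0.5                  # feedback gain, chosen so that |a - b*K| < 1

    x = 1.0                  # initial state
    for k in range(20):
        u = -K * x           # control law: u = -K*x
        x = a * x + b * u    # closed-loop dynamics: x[k+1] = (a - b*K)*x[k]
        print(k, round(x, 4))

The open-loop system diverges since |a| > 1; the feedback gain places the closed-loop pole at a - b*K = 0.7, so the state decays geometrically, which is precisely the stability check mentioned under Pitfalls below.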

Applicability

Dynamic systems

Pitfalls / debate

Every control system must guarantee the stability of the closed-loop behaviour, so checking
the stability is an important issue.

Key references

Ljung, L. (2000). Control theory. Taylor & Francis Group.

Key articles from TPM-researchers

Negenborn, R.R., Sahin, A., Lukszo, Z., De Schutter, B., & Morari, M. (2009, October).
Cascaded optimization for control of irrigation canals. IEEE SMC09 Conference, San
Antonio.
Lukszo, Z., Weijnen, M.P.C., Negenborn, R.R., & De Schutter, B. (2009). Tackling
challenges in infrastructure operation and control: Cross-sectoral learning for process and
infrastructure engineers. International Journal of Critical Infrastructures, 5(4), 308-322.

Description of a typical TPM-problem that may be addressed using the theory / method

It has found wide application in the process industry, and recently has also started to be used
in the domain of infrastructure operation, e.g., for the control of road traffic networks (Hegyi
et al., 2005), power networks (Hines, 2007; Negenborn, 2007), combined gas and electricity
networks (Arnold et al., 2009), railway networks (van den Boom & De Schutter, 2007) and
water networks (Negenborn et al., 2009).

Hegyi, A., De Schutter, B., & Hellendoorn, J. (2005). Optimal coordination of variable speed
limits to suppress shock waves. IEEE Transactions on Intelligent Transportation Systems,
6(1), 102-112.
Hines, P. (2007). A decentralized approach to reducing the social costs of cascading failures
(PhD dissertation). Carnegie Mellon University.
Negenborn, R.R. (2007). Multi-agent model predictive control with applications to power
networks (PhD thesis). Delft University of Technology, Delft, The Netherlands.
Negenborn, R.R., Sahin, A., Lukszo Z., De Schutter, B., & Morari, M. (2009, October).
Cascaded optimization for control of irrigation canals. IEEE SMC09 Conference, San
Antonio.

Related theories / methods

/

Editor

Z.Lukszo (Z.Lukszo@tudelft.nl)













SYSTEMS ENGINEERING

Brief discussion

In a world of ever-growing system complexity, interdependence among service providers,
stakeholder dynamics, and continuously evolving system environments, the field of
systems engineering gains ever more prominence. Along with this prominence, the field is
challenged along new dimensions, in which humans and artefacts play vital, interacting roles.
The field of Systems Engineering thus becomes the study of complex socio-technical systems,
requiring innovative methods, approaches, and techniques based on profound theoretical and
conceptual foundations, i.e., engineering principles and collaborative, participative, and
flexible ways of working.

Systems engineering is an interdisciplinary field based on the theoretical foundations of social
science, information science, organizational science, management science, engineering, and
many more. These disciplines together constitute a holistic framework for a multi-faceted study
of complex engineering systems (enterprise systems, business systems, transport systems,
industry systems, energy systems, etc.). Systems engineering knowledge is applied to a
specific domain (energy, transport, services, ICT, etc.), which makes systems engineering a
truly multi-disciplinary method. In addition, systems engineering encompasses both qualitative
and quantitative methods that allow the analysis, design and development of complex systems.
As such, systems engineering signifies both an approach for creating a complex artefact (a socio-
technical system) and a field of study. By definition, systems engineering is based on rigorous
principles, methods, models, frameworks, and lifecycles for system analysis, system design,
system implementation, and system operation.

The reality of the modern world is changing, and along with it the field of systems engineering
is evolving. Globalization of the economy, liberalization of markets, outsourcing, and many other
phenomena of the 21st-century economy influence the field of systems engineering. As
systems grow in complexity, new methods, approaches, and theories are needed. Below
are some of the methods and sub-fields that address different challenges of the systems
engineering approach for complex systems design:

Modelling and Simulation:
Systems engineering implies developing alternative designs of an envisioned system, and
comparing and critically analysing them to select the best design prior to development. At the
same time, each design alternative is a product of consensus and shared understanding among
different stakeholders. For both shared understanding and the analysis of alternatives,
Modelling and Simulation plays a prominent role and is widely adopted in complex
engineering projects. Hence, Modelling and Simulation lies at the core of the systems
engineering field, discipline, and practice.

Modelling Tools:
Both modelling and simulation require rigorous modelling languages and methods.
Modelling methods (languages) such as UML (Unified Modelling Language), IDEF0
(Integration Definition for Function Modelling), the DEMO methodology, Petri nets, EPC
(Event-driven Process Chain), and so on, play a central role in the field of systems engineering
and in its practice in complex projects. Similarly, simulation tools and software such as
Arena, DSOL, etc. provide important environments for the automated analysis of the models
(design alternatives). A minimal simulation sketch follows below.
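
As an illustration of evaluating a design alternative by simulation (using the open-source Python library simpy as a stand-in for environments such as Arena or DSOL; the arrival and service rates are assumptions made for the sketch), consider a single-server queue:

    # Minimal discrete-event simulation of a single-server queue (M/M/1).
    import random
    import simpy

    waits = []   # waiting times collected during the run

    def customer(env, counter, service_rate):
        arrive = env.now
        with counter.request() as req:                    # queue for the server
            yield req
            waits.append(env.now - arrive)                # time spent queueing
            yield env.timeout(random.expovariate(service_rate))

    def arrivals(env, counter, arrival_rate, service_rate):
        while True:
            yield env.timeout(random.expovariate(arrival_rate))
            env.process(customer(env, counter, service_rate))

    env = simpy.Environment()
    counter = simpy.Resource(env, capacity=1)             # one server (assumed)
    env.process(arrivals(env, counter, 0.8, 1.0))         # assumed rates
    env.run(until=100000)
    print(sum(waits) / len(waits))                        # average waiting time

For these rates, the simulated average waiting time should approach the analytical M/M/1 value of 4 time units; this is the kind of performance figure used to compare design alternatives.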

Collaboration Support:
When designing and deploying complex systems multiple actors are involved in all phases of
the systems life cycle. Consequently collaboration among experts, users and stakeholders is
critical to the success of Systems Engineering. Examples of highly collaborative systems
engineering approaches are requirements negotiation, collaborative modelling and design,
product reviewing and organizational embedding. Collaboration support research offers a
rich combination of process and technology support for collaborative processes in multi-actor
knowledge intensive contexts. The challenge for collaboration support researchers is to design
sustainable processes and systems within and between organizations that allow people to
collaborate successfully. From a socio-technical perspective a key source of knowledge is
captured in a pattern language for computer mediated interaction. From a process perspective
a key source of knowledge is captured in thinkLets, patterns for collaborative work practices.
Still, the challenge has many dimensions, including technical, behavioural, economic,
and political ones. For this, a multidisciplinary research perspective is required.

Serious Gaming:
As many future systems, solutions, policy implementations, and strategic plans require
rehearsing, serious gaming is emerging as an important sub-field of systems engineering
for rehearsing the future and experimenting with actors' behaviour and attitudes.
Gaining insight into a new complex socio-technical phenomenon, providing training,
learning about certain behaviours, implementing new policies: these are situations where
serious gaming proves to be a valuable tool and method of intervention.

Autonomic Systems:
Complex systems are becoming increasingly difficult for humans to manage. They often
need to adapt to their dynamic, open environments on their own, with little or no human
intervention, i.e., to manage themselves autonomically. Such systems form groups to
accomplish a task and reassemble themselves if something changes or fails. They need to be
capable of self-organisation and self-healing. Adding autonomic components to systems helps
to make them more robust and better able to adapt to their environment.

The above sub-fields address the traditional activities of systems engineering
lifecycles, with each sub-field contributing to some of these activities, as listed below:
- Problem articulation: Modelling, Collaboration
- Solution space investigation: Collaboration, Gaming, Autonomic Systems
- Model development: Modelling, Collaboration, Autonomic Systems
- Model enactment: Simulation and Gaming
- Design: Modelling and Simulation, Autonomic Systems
- Implementation: Training, Gaming, Simulation, Autonomic Systems
- Operation and performance assessment: Analytical modelling, Simulation

Applicability

- Systems engineering is an approach applied in the study of complex systems
- Systems engineering principles are used for multi-actor systems design
- Systems engineering methods are used for analyzing, redesigning and improving existing
systems and enterprise processes
- The systems engineering lifecycles are applied for process design

Pitfalls / debate

- Not applicable

Key references

Sage, A. P. (1992). Systems engineering. Wiley IEEE.
Sage, A.P., & Armstrong, J. E. (2000). An introduction to systems engineering. Wiley-
Interscience.
Maier, M.W., & Rechtin, E. (2002). The art of systems architecting. CRC-press.

Key articles from TPM-researchers

Sol, H. G. (1994). Shifting boundaries in systems engineering, policy analysis and
management. IFIP Congress (3), 394-401.
Van de Kar, E., & Verbraeck, A. (2008). Designing mobile service systems. 2nd Edition.
Amsterdam: IOS Press.
De Vreede, G.-J., Kolfschoten, G. L., & Briggs, R. O. (2006). ThinkLets: A collaboration
engineering pattern language. International Journal of Computer Applications in Technology,
Special Issue on 'Patterns for Collaborative Systems', 25, 140-154.
Schümmer, T., & Lukosch, S. (2007). Patterns for computer-mediated interaction. John
Wiley & Sons, Ltd.
Brazier, F.M.T., Kephart, J.O., Huhns, M., & Van Dyke Parunak, H. (2009). Agents and
service-oriented computing for autonomic computing: A research agenda. IEEE Internet
Computing, 13(3), 82-87.

Description of a typical TPM-problem that may be addressed using the theory / method

- Socio-technical systems
- Enterprise systems / Enterprise engineering
- Multi-Actors Systems
- Infrastructure systems (transport, energy, land, etc)

Related theories / methods

- Multiple theories and methods are related

Editor

Joseph Barjis (J.Barjis@tudelft.nl)



THEORY OF INDUSTRIAL ORGANIZATION (IO)

Brief discussion

The theory of IO deals with the relationship between market structures, the behaviour of
actors in the market (esp. firms), and the performance of the market. It has been (and still is)
important in designing government policies in the area of competition. When the structure is
of a ppc (pure and perfect competition) type, price competition will be the conduct, with the
resulting performance being prices for consumers close to marginal costs.

The figure shows the traditional IO model; the Chicago school and contestable market theory
are variations. Game-theoretic oligopoly theories also belong to the IO family.
The theory is deductive, using case studies and statistical correlations. Its strength is in the
simple causalities of the original model; when feedbacks and more complicated
behaviour are introduced, the theory becomes less rigorous.

Figure: The Structure-Conduct-Performance (SCP) model.
Structure (number of firms, market shares, concentration, barriers to entry) ->
Conduct (pricing decisions, strategies against rivals, advertising activity) ->
Performance (welfare: prices and profits, efficiency, equity in distribution)
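
As a minimal numeric sketch of the structure-conduct-performance logic (the demand and cost parameters below are assumptions made for illustration, not taken from the SCP literature itself), a symmetric Cournot oligopoly in Python shows prices approaching marginal cost as the number of firms grows:

    # Symmetric Cournot oligopoly with linear demand P = a - Q and constant
    # marginal cost c (all parameter values are illustrative assumptions).
    a, c = 100.0, 20.0

    for n in (1, 2, 5, 100):                 # structure: number of firms
        q = (a - c) / (n + 1)                # conduct: equilibrium output per firm
        price = a - n * q                    # performance: market price
        lerner = (price - c) / price         # Lerner index of market power
        print(f"n={n:3d}  price={price:6.2f}  markup={lerner:.2f}")

With one firm (monopoly) the markup is large; as the number of firms grows, the structure approaches ppc and the price converges to marginal cost, as described above.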


Applicability

IO is applicable to different market structures, such as monopoly, monopolistic competition,
oligopoly and perfect competition. It applies concepts of market power, abuse of power,
market shares, etc. It is applicable to situations where the market structure is the most
important explanatory variable. A drawback is the lack of environmental factors:
in most presentations of the SCP model, so-called basic determinants
(including elements of technology and culture) are added, but these do not play a role in the
analysis. Also, the IO models are of a comparative-static nature. The so-called DMT (Dynamic
Market Theory) can be considered an extension of SCP, but even then the feedbacks are of a
very limited nature.

Pitfalls / debate

Assuming the conditions make IO applicable, the danger is that the researcher draws overly
straightforward conclusions concerning competition policy. The pitfall is to assume that ppc
is best. That might be the case for static efficiency, but for dynamic efficiency (innovation)
this is questionable, to say the least.

Key references

Scherer, F., & Ross, D. (1990). Industrial market structure and economic performance.
Boston: Houghton Mifflin Company.
Tirole, J. (1988). The theory of industrial organization. Boston: MIT Press.

Key articles from TPM-researchers

Lemstra, W. (2006). The Internet bubble and the impact on the development path of the
telecommunication sector (PhD dissertation). TPM faculty, Delft University of Technology:
Industry Insights.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

When liberalizing infrastructures, the question of the role of competition in market structures
is crucial, especially concerning investments and innovation.

Related theories / methods

Michael Porter's Competitive Advantage, Theories concerning market penetration, Pricing
strategies.

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)



















THEORY OF INSTITUTIONAL CHANGE

Brief discussion

Changes in institutions can be conceptualised from the efficiency-design or evolutionary
approach (so-called new institutional economics; see TCE and evolutionary game theory), but
also from the perspective known as Original Institutional Economics (OIE). In the latter, the
theory of institutional change explains why institutions emerge and develop as path-dependent
phenomena, or as the purposefully designed instruments of interest groups that protect their
vested interests. The OIE approach is holistic (the system as a whole is more than the sum of
its constituent parts) and dynamic (analysing the process of change is more important than a
comparative-static analysis). The method is abductive (to explain causes), historical, and
based on case studies and so-called pattern modelling. American pragmatism (Dewey, Peirce)
is central, and the methodology is interactionist. The figure shows the elements of the system,
the interactions, and the heterogeneity of the actors. The strength of the approach is its
encompassing model and its relevance for many questions and contexts.



Figure: Elements of the OIE system and their interactions:
- Public and private actors, characterised by procedural rationality, satisficing behaviour,
opportunism, trust, habits, and a specific shared mental map; they have the capacity to learn,
and a power base that can be applied to protect their vested interests.
- Private organizations creating institutions like conventions and codes, as well as
different types of governance structures like market contracts, firms and hybrids
(joint ventures, strategic alliances and networks).
- Public organizations and the political system: state, parliament, ministries,
bureaucracy, regulatory agents, networks of politics, and PPPs (Public-Private
Partnerships).
- Institutions: laws, regulation and state-owned enterprise.
- Informal institutions.
- Technology.
- The natural environment.

Applicability

The OIE is designed to analyse institutional change in an encompassing way: depending on
the questions and the specificities of the context, the researcher decides what is endogenous
and what exogenous, and what the relevant links between the systems are; sometimes the
focus is on the socio-technological system, on other occasions on the economic-ecological
system. Boundaries are not drawn upfront based on disciplines; rather, the question, the
context and the results of the research determine the boundaries of the research.

Pitfalls / debate

OIE runs the danger of storytelling and of conclusions that everything depends on everything.
According to mainstream economics, it lacks theory and rigour.

Key references

Bromley, D.W. (2006). Sufficient reason: Volitional pragmatism and the meaning of
economic institutions. Princeton, NJ: Princeton University Press.
North, D.C. (2005). Understanding the process of economic change. Princeton: Princeton
University Press.
Tool, M.R., & Bush, P.D. (Eds.) (2003). Institutional analysis and economic policy. Boston,
Dordrecht: Kluwer Academic Publishers.

Key articles from TPM-researchers

Groenewegen, J.P.M., & de Jong, W. M. (2008). Assessing the potential of new institutional
economics to explain institutional change: The case of the road management liberalization in
the Nordic countries. Journal of Institutional Economics, 4(1), 51-71.
Groenewegen, J., Spithoven, A., & van den Berg, A. (2010). Institutional economics: An
introduction. Palgrave.

Description of a typical TPM-problem that may be addressed using the theory / method

Typically in infrastructures, the analysis should involve different subsystems and their
interactions. The logic of the system as such, and the way technology, institutions and actors
mutually constitute structures over time, is essential to understanding the dynamics of
infrastructures. Assessing the effectiveness of, for instance, regulation in liberalized sectors
demands insight into the relationships analysed through the lens of OIE.

Related theories / methods

/

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)









THEORY OF PLANNED BEHAVIOUR

Brief discussion

One of the most well-known social psychological theories is the theory of planned behaviour,
described by Ajzen in 1991 as an extension of the theory of reasoned action earlier described
by Ajzen and Fishbein (1980). The theory holds that people's planned behaviour is preceded
by an intention to behave. While behaviour can be observed, intention to behave can only be
elicited by asking people to express it.

Intention to behave is explained by three other concepts: attitude towards the behaviour,
subjective norm and perceived behavioural control. Attitude towards the behaviour refers to
the degree to which a person has a favourable or unfavourable evaluation or appraisal of the
behaviour in question. Subjective norm, often also called social norm, refers to perceived
social pressure to perform or not perform the behaviour. Perceived behavioural control refers
to the perceived ease or difficulty of performing the behaviour (Ajzen, 1991). These three
attitudinal variables are in turn preceded by specific beliefs of the person. The theory of
planned behaviour (TPB) is depicted in the figure below.



Figure: Model of the theory of planned behaviour
(http://people.umass.edu/aizen/tpb.diag.html)
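
To make the structure of the model concrete, a minimal sketch in Python is given below. The
linear-additive form and the weights are hypothetical illustrations only: the theory itself does
not prescribe specific weights, which in practice are estimated (e.g., via regression) from
questionnaire data measured at the same level of specificity.

    # Illustrative sketch only: intention as a weighted sum of the three TPB
    # predictors, each assumed to be measured on a 1-7 scale.
    def intention(attitude, subjective_norm, perceived_control,
                  w_att=0.5, w_norm=0.3, w_pbc=0.2):
        # Hypothetical weights; in applications these are estimated from data.
        return (w_att * attitude + w_norm * subjective_norm
                + w_pbc * perceived_control)

    # Favourable attitude, moderate social pressure, high perceived control:
    print(intention(attitude=6, subjective_norm=4, perceived_control=6))  # 5.4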

Applicability

The theory has been extensively tested, and extensions and changes have been suggested, but
in general it is still acknowledged that the factors put forward in the original model are the
most important ones for predicting behaviour (e.g. Armitage and Conner, 2001). A note to the
theory, however, is
that it is particularly suitable for the prediction and understanding of behaviour that is
deliberated upon, while it is less suitable for the prediction and understanding of behaviours
that have a strong habitual component. The theory of planned behaviour also seems less
suitable for cases where a strong affective base influences the behaviour, for example a fear-
reaction. Finally, the theory can be considered incomplete when one specifically wants to
understand how perceived aspects of the technical system influence technology acceptance
and technology use.

Pitfalls

When applying the theory, it is very important to understand the compatibility principle. This
principle says that the level of measurement needs to be the same for all the factors in the
TPB model. For example, asking people's intention to use the bus to work tomorrow and
measuring the actual use of the bus to work tomorrow are measurements with the same level
of specificity. Predicting people's intention to use the bus to work tomorrow from attitudes
towards bus use in general is an example of a violation of the principle.

Key references

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human
Decision Processes, 50, 179-211.
Armitage, C. J., & Conner, M. (2001). Efficacy of the theory of planned behaviour: A meta-
analytic review. British Journal of Social Psychology, 40, 471-499.

Key articles from TPM-researchers

To my knowledge, this theory has not been explicitly researched by TPM scholars.

Description of a typical TPM-problem that may be addressed using the theory / method

A typical example that is relevant for TPM is understanding and influencing public bus use.
Understanding which psychological factors influence bus use the most provides an entry point
for designing an intervention.

Related theories / methods

Unified theory of acceptance and use of technology (Venkatesh et al., 2003) and value-belief-
norm theory (VBN; Stern et al., 1999); each of these can be considered a competing or
complementary theory.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of
information technology: Toward a unified view. MIS Quarterly, 27, 425-478.
Stern, P. C., Dietz, T., Abel, T., Guagnano, G. A., & Kalof, L. (1999). A value-belief-norm
theory of support for social movements: The case of environmentalism. Research in Human
Ecology, 6, 81-97.

Editor

Nicole Huijts (n.m.a.huijts@tudelft.nl)










TRANSACTION COST THEORY/ECONOMICS

Brief discussion

Transaction Cost Theory/Economics (TCE) is the core of the so-called new institutional
economics (NIE). It is designed to answer the question of which governance structures
coordinate transactions efficiently. The figure demonstrates its methodologically
individualistic nature (actors and their preferences are exogenous and have specific
characteristics and rules of behaviour). The method is deductive and comparative-static.
Central are the concepts of equilibrium and efficiency.

Case studies are used to confirm hypotheses, as is statistical correlation, to demonstrate the
relation between the characteristics of the transaction and the corresponding efficient
governance structure.



The strength of TCE is its rigour in formulating hypotheses. In that sense it belongs to the
paradigm of Neoclassical Economics: construct actors and environment in such a way that
precise hypotheses can be formulated. The testing is limited to finding confirmation and
correlation, which is different from causality.

[Figure: levels of analysis in TCE. Informal institutions; formal institutions; institutional
arrangements governing transactions (T); and individual actors, characterised by bounded
rationality, opportunism and cost-minimizing behavior.]

Applicability

The applicability (or relevance, usefulness, appropriateness) of the theory depends on the
question for which the theory is designed, as well as on the presence of the conditions that
the theory assumes. TCE is designed for optimization questions: what is the most efficient
governance structure under the given constraints? The conditions assumed concern the competitive
environment (selection mechanism), rationality of actors and available information
(uncertainty). Under conditions of homogeneous actors operating in a highly competitive
environment in which modes of governance are very flexible, the theory is applicable.

The drawback of TCE lies in its difficulty in dealing with questions of process, dynamics and
change, and with phenomena such as path dependence, inefficient governance structures,
power, strategic behaviour, routines and habits. In other words: its strengths and weaknesses
are comparable to those of neoclassical economics.

Pitfalls / debate

When TCE is considered applicable, a pitfall is that the researcher looks for confirmations
and ignores falsifications (the competition is not selecting).

Key references

Groenewegen, J. (1996). Transaction cost economics and beyond. Dordrecht, Berlin,
Heidelberg, New York: Springer.
Williamson, O.E. (2000). The new institutional economics: Taking stock, looking ahead.
Journal of Economic Literature, 38(3), 595-613.

Key articles from TPM-researchers

Finger, M., Groenewegen, J., & Künneke, R. (2006). The quest for coherence between
institutions and technologies in infrastructures. Journal of Network Industries, 6(4), 227-260.

Description of a typical TPM-problem that may be addressed using the theory / method

TCE is relevant for questions of unbundling in the infrastructures. When firms are vertically
integrated, that can minimize transaction costs, and legal unbundling could then be inefficient.
Another TPM issue is the identification of critical transactions in infrastructures, and then
raising the question of how to align the governance structure with those transactions.

Related theories / methods

Principal-agent theory, Theory of property rights, Resource-based theory of the firm.

Editor

John Groenewegen (J.P.M.Groenewegen@tudelft.nl)












Method



































Table of Contents of Method

ACCIDENT DEVIATION MODEL ..................................................................................... 149
ACCIDENT STATISTICS .................................................................................................... 151
ACTOR ANALYSIS .............................................................................................................. 153
ADAPTIVE POLICYMAKING ............................................................................................ 157
AGENT BASED MODEL ..................................................................................................... 161
BACKCASTING ................................................................................................................... 163
BAYESIAN ANALYSIS ....................................................................................................... 166
BAYESIAN BELIEF NETWORKS (BBN) .......................................................................... 168
BOUNCECASTING .............................................................................................................. 171
CAUSAL MODEL ................................................................................................................. 173
COLLABORATION ENGINEERING .................................................................................. 175
COLLABORATIVE STORYTELLING ............................................................................... 178
COST BENEFIT ANALYSIS ............................................................................................... 180
CPI MODELLING APPROACH ........................................................................................... 182
DEMO METHODOLOGY .................................................................................................... 185
DESIGN SCIENCE ................................................................................................................ 187
DISCRETE EVENT SIMULATION (DES) .......................................................................... 190
DISCRETE EVENT SYSTEMS SPECIFICATION (DEVS) ............................................... 192
ECFA+: EVENTS & CONDITIONAL FACTORS ANALYSIS.......................................... 194
(EMPIRICALLY INFORMED) CONCEPTUAL ANALYSIS ............................................ 197
ENGINEERING SYSTEMS DESIGN .................................................................................. 199
ETHNOGRAPHY .................................................................................................................. 202
EVENT TREE ANALYSIS (ETA)........................................................................................ 206
EXPERT JUDGMENT & PAIRED COMPARISON ........................................................... 208
EXPLORATORY MODELLING (EM) ................................................................................ 210
EXPLORATORY MODELLING ANALYSIS (EMA) ........................................................ 213
FAULT TREE ANALYSIS (FTA) ........................................................................................ 216
FINITE-DIMENSIONAL VARIATIONAL INEQUALITY ................................................ 218
GAMING SIMULATION AS A RESEARCH METHOD.................................................... 220
GENEALOGICAL METHOD ............................................................................................... 223
GRAMMATICAL METHOD OF COMMUNICATION...................................................... 225
HAZARD AND OPERABILITY STUDY (HAZOP) ........................................................... 228
HUMAN INFORMATION PROCESSING .......................................................................... 230
IDEF0 - INTEGRATION DEFINITION FOR FUNCTION MODELLING ....................... 232
LINEAR PROGRAMMING ................................................................................................. 234
MODEL PREDICTIVE CONTROL ..................................................................................... 237
MODELLING AND SIMULATION ..................................................................................... 240
MONTE CARLO METHOD ................................................................................................. 242
MORT - MANAGEMENT OVERSIGHT & RISK TREE ................................... 245
MULTI-AGENT SYSTEMS TECHNOLOGY ..................................................................... 248
OREGON SOFTWARE DEVELOPMENT PROCESS (OSDP) .......................................... 250
POLICY ANALYSIS (HEXAGON MODEL OF POLICY ANALYSIS) ............................ 253
POLICY ANALYSIS FRAMEWORK .................................................................................. 258
POLICY SCORECARD ........................................................................................................ 261
Q-METHODOLOGY ............................................................................................................. 264
QUANTITATIVE RISK ANALYSIS (QRA) ....................................................................... 266
REAL OPTIONS ANALYSIS AND DESIGN ..................................................................... 268
SAFETY AUDITING: THE DELFT METHOD ................................................................... 270
SCENARIO APPROACH (FOR DEALING WITH UNCERTAINTY ABOUT THE
FUTURE) ............................................................................................................................... 272
SERIOUS GAMING .............................................................................................................. 275
STATED PREFERENCE EXPERIMENTS .......................................................................... 277
SADT - STRUCTURED ANALYSIS AND DESIGN TECHNIQUE .................................. 279
STRUCTURAL EQUATION MODELLING ....................................................................... 281
SUSTAINABLE DEVELOPMENT FOR (FUTURE) ENGINEERS (IN AN
INTERDISCIPLINARY CONTEXT) ................................................................... 283
SYSTEMS INTEGRATION .................................................................................................. 286
TECHNOLOGY ASSESSMENT .......................................................................................... 288
TECHNOLOGY TRANSFER & SUPPORT ........................................................................ 290
THE MICROTRAINING METHOD ..................................................................................... 293
TRANSITION MANAGEMENT .......................................................................................... 296
UML (UNIFIED MODELLING LANGUAGE) - CLASS DIAGRAMS ............................. 298
VALIDATION IN MODELLING AND SIMULATION ..................................................... 300
VALUE FREQUENCY MODEL .......................................................................................... 302
VALUE RELATED DEVELOPMENT ANALYSIS ............................................................ 305

































ACCIDENT DEVIATION MODEL

Definition

The Accident Deviation Model is a concept which specifies a number of safety states in which
a system can be. The first is the normal operating situation, in which hazards are built in and
controlled within certain system boundaries chosen during system design. Dynamic systems
are constantly changing and may at some point get into stages outside the defined normal
process; in other words, a deviation from the normal process. Not all deviations necessarily
result in accidents. Nevertheless, in a stage that is characterized as a deviation, the level of
risk is increased. People or measures that are able to recognize the increased level of risk can
intervene and try to get the system back to its normal operation. If the increased level of risk
is not recognised, or the actions are not fully appropriate, the system is confronted with a loss
of control and the deviation continues into the accident process. At this point an accident
becomes inevitable. Still, the damage process can be influenced to reduce the harm to the
targets, both during the transmission of energy and in the stabilisation phase after the accident
has happened.


Figure: Simplified accident deviation model (based upon Hale & Glendon (1987)). Its stages
run from choice and design of the (sub)system and the normal situation with in-built hazards
(normal process), via deviations from the normal situation and loss of control, i.e. release of
energy plus exposure (incident phase), to transmission of energy, the damage process and
stabilisation (accident phase).
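
As a minimal sketch (an illustration added here, not part of Hale & Glendon's formulation),
the stages of the model can be written as a state-transition map, which makes explicit that a
deviation can either be recovered or escalate:

    # Hypothetical encoding of the model's stages and transitions.
    TRANSITIONS = {
        "normal operation": ("deviation",),
        "deviation": ("normal operation",   # risk recognised: intervention succeeds
                      "loss of control"),   # increased risk not corrected
        "loss of control": ("transmission of energy",),
        "transmission of energy": ("damage process",),
        "damage process": ("stabilisation",),
    }

    def can_reach(state, target, seen=None):
        # Depth-first check whether `target` is reachable from `state`.
        seen = set() if seen is None else seen
        if state == target:
            return True
        seen.add(state)
        return any(can_reach(nxt, target, seen)
                   for nxt in TRANSITIONS.get(state, ()) if nxt not in seen)

    print(can_reach("deviation", "stabilisation"))      # True: escalation possible
    print(can_reach("deviation", "normal operation"))   # True: recovery possible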

Applicability

The concept is used in safety science to understand how danger is built up and released in a
system. Deviation models show how incidents and accidents arise from normal operations as
a by-product. The major advantage of the Accident Deviation Model is that it focuses
attention far more on the point where the deviation starts than on the point where harm
occurs. The concept can basically be applied to any system and can be added to system
analyses focusing on designing, operating and optimizing processes.

Pitfalls / debate

/

Key references

Hale, A.R., & Glendon, A.I. (1987). Individual behaviour in the control of danger.
Amsterdam: Elsevier Science Publication.
MacDonald, G.L. (1972). The involvement of tractor design in accidents (Research report
3/72). St. Lucia: Department of Mechanical Engineering, University of Queensland.
Kjellen, U., & Larsson, T.J. (1981). Investigating accidents and reducing risks: A dynamic
approach. Journal of Occupational Accidents, 3(2), 129-140.

Key articles from TPM-researchers

Hale, A.R., & Glendon, A.I. (1987). Individual behaviour in the control of danger.
Amsterdam: Elsevier Science Publication.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

This concept is useful for system safety analysis in order to improve system design and to set
up a safety management and control system.

Related theories / methods

The Accident Deviation Model can be combined with various barrier analysis techniques to
define safety measures in systems. The concept of various phases before running into an
accident is related to another safety concept, namely the Safe Envelope of Operations.

Editor

Ellen Jagtman (h.m.jagtman@tudelft.nl)




















ACCIDENT STATISTICS

Brief discussion

Accident statistics are gathered in organisations of all kinds to improve the performance of
their systems and processes, particularly to improve safety. Use of accident data can be split
into in-depth analysis of one or a small number of similar accidents, and analysis of large
accident databases to find accident rates and their major contributing factors. For an in-depth
study of one accident, the data used should contain many different factors. To find accident
patterns, on the other hand, many accident cases should be available, so that statistically
significant relations can be found. In-depth studies are more commonly applied in fields
characterized by a relatively small number of accidents, each with large consequences.
Analysing accident patterns is more common in fields in which accidents happen more
frequently, with less severe consequences.

Before analysing accident statistics, we should identify the contributing factors of interest,
for which definitions of risk can be used. Kaplan and Garrick (1981) define risk based on: a
scenario s (what accident can happen), a probability p indicating how likely it is that s will
happen, and the consequences x that arise if s happens. Risk is then the set of all possible
scenarios with their associated probabilities and consequences. In formula:

$R = \{ \langle s_i, p_i, x_i \rangle \}, \quad i = 1, 2, \ldots, N+1$

Other definitions include exposure, which is required to calculate an accident rate. The
following formula shows an example:

$\text{risk}\left[\frac{\text{consequence}}{\text{unit of time}}\right] =
\text{exposure}\left[\frac{\text{events}}{\text{unit of time}}\right] \times
\text{probability}\left[\frac{\text{adverse outcome}}{\text{events}}\right] \times
\text{magnitude}\left[\frac{\text{consequence}}{\text{adverse outcome}}\right]$
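
A small numerical sketch of this rate formula is given below; all numbers are hypothetical
and serve only to show the dimensional bookkeeping.

    # Hypothetical values: expected consequence per unit of time.
    exposure = 1.0e5      # events per unit of time (e.g., vehicle-km driven per year)
    probability = 2.0e-7  # adverse outcomes per event (e.g., accidents per vehicle-km)
    magnitude = 0.8       # consequence per adverse outcome (e.g., casualties per accident)

    risk = exposure * probability * magnitude
    print(risk)  # 0.016, i.e. expected casualties per year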



Applicability

In safety analysis, accident data is used to improve systems, ultimately to prevent similar
accidents from occurring again or otherwise to reduce the severity of the outcomes of such
accidents in the future. Accident statistics can be used for various purposes, amongst others
analysis of a large disaster, finding accident patterns, finding causal relations, validating
models and before/after evaluation of measures. Each type of application sets requirements
for the data, such as the type of detail required about an accident.

Pitfalls / debate

When using accident statistics, one should be aware of data quality. Many accident databases
suffer from under-registration, where how well an accident is registered can depend on its
severity and/or type (e.g. accidents resulting in casualties are normally better registered).
Moreover, the analyst should be aware of the purpose for which the data was registered in the
first place. It may well be that not all factors important for the occurrence of the accident are
included in the database, since these were not relevant for the primary registration purpose
(often liability). Finally, the use of accident statistics requires the object of study to be in use;
otherwise no data is available.

Key references

Henley, E. J., & Kumamoto, H. (1981). Reliability engineering and risk assessment. London:
Prentice-Hall.
Kaplan, S., & Garrick, B. J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1),
11-27.
OECD. (1997). Road safety principles and models: Review of descriptive, predictive, risk and
accident consequence models (OCDE/GD(97)153). Paris: Organisation for Economic
Co-operation and Development.

Key articles from TPM-researchers

Ale, B. (2009). Risk: An introduction: The concepts of risk, danger and chance. Oxon:
Routledge.
Hale, A.R. (2001). Conditions of occurrence of major and minor accidents. Journal of
Institution of Occupational Safety and Health, 5(1), 7-20.

Description of a typical TPM-problem that may be addressed using the theory / method

Accident statistics are available for many transportation systems (e.g., road traffic and
aviation), but also for occupational and industrial accidents. To some extent, access to the
data is possible for anyone. TPM application is of interest for complex systems in which
multiple accidents or incidents occur. Data collection can be improved and analysed as part of
safety management systems.

Related theories / methods

Methods related to accident data are incident methods, such as the Traffic Conflict Technique,
which assume incidents to be a predictor for the occurrence of accidents. There are, however,
debates about the assumed relations.

Accident and incident statistics are moreover used in Quantitative Risk Analysis (QRA)
methods, amongst others FTA, ETA and BBN.

If accident data is unavailable, expert judgement can be used to estimate the required data.

Editor

Ellen Jagtman (h.m.jagtman@tudelft.nl)









ACTOR ANALYSIS
(Also known as: actor network analysis, stakeholder analysis, multi-actor network analysis.)


Brief discussion

Actor analysis refers to the collection of methods that support the analysis and understanding
of multi-actor systems and processes, making them more amenable to the contributions of
policy analysts. The most popular methods for actor analysis are the methods that have come
to be known as stakeholder analysis, rooted in the corporate strategic management literature.
However, there are several other methods available to policy analysts who seek to get a better
understanding of multi-actor policy situations. These include methods for social network
analysis, cognitive mapping, and conflict analysis (see Table). These methods employ
different theoretical perspectives, focus on different aspects of multi-actor processes, and put
different requirements on analysts' expertise and time. Therefore, to understand the possible
uses of the various methods, it is useful to have a conceptual framework for the main
dimensions involved in actor networks and their role in policy processes as provided in the
Figure below.

Table: Theories and methods for the analysis of multi-actor systems (based on Hermans, 2005)

- Main conceptual focus: perceptions and values of actors.
  Theoretical frameworks: advocacy coalition framework; deliberative policy analysis.
  Methods: discourse analysis (argumentative analysis, narrative analysis, Q-methodology);
  comparative cognitive mapping; dynamic actor network analysis.
- Main conceptual focus: resources and objectives of actors.
  Theoretical frameworks and methods: game theory (analysis of options, metagame analysis,
  hypergame analysis, graph model for conflict resolution, confrontation analysis/drama
  theory); social theory and rational choice (transactional process models, expected utility
  model, vote-exchange, dynamic access model); strategic management (stakeholder analysis).
- Main conceptual focus: structural characteristics of actor networks.
  Theoretical frameworks: institutional rational choice; policy network theory.
  Methods: social network analysis; configuration analysis (linking network structures and
  perceptions).
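
As an illustration of one family of methods from the table, the sketch below applies a basic
social network analysis measure (degree centrality) to a small, entirely hypothetical actor
network, using the open-source networkx library:

    import networkx as nx

    # Hypothetical actor network; edges represent relations between actors.
    G = nx.Graph()
    G.add_edges_from([("ministry", "regulator"), ("ministry", "operator"),
                      ("regulator", "operator"), ("operator", "residents")])

    # Degree centrality as a simple indicator of an actor's structural position.
    for actor, score in sorted(nx.degree_centrality(G).items()):
        print(f"{actor}: {score:.2f}")  # e.g., the operator scores highest here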


[Figure: A policy system as a network of actors (Actor 1 to Actor n), each characterised by
resources, values and perceptions, interacting within a policy system governed by rules.]

Applicability

The actor analysis methods in the above table can be used in different ways, ranging from a
quick and dirty scan of the actor networks to a more scientific study. In relation to policy
analysis and policy modelling, they are useful for at least three purposes:

1. To support problem framing, providing insight into the policymaking context of policy
analysis.
2. To support the analysis of a policy system, when these systems are essentially multi-actor
systems, for instance in relation to social, economic or institutional policy measures.
3. To support preparations for the management and facilitation of policy (analysis)
processes.

As for any collection of methods, an important question is their applicability, or the question
of when to use which method. Experiences with these methods in ex-ante policy analyses
suggest that there are as yet no purely theoretical arguments that can indicate which method is
best in which situation. However, there are some questions that seem to be important in
making a choice for a certain method. These include:

- What is it that an analyst wants to learn about? What concepts in relation to actor processes
and actor networks are most important?
- What are the possibilities for data collection?
- What is the role of the actor analysis in the policy analysis process?

Pitfalls / debate

The limitations of actor analysis include:
- Data collection is often a bottleneck: it is a difficult and time-consuming task. Documents
that describe actors and their characteristics are often rare; interviews can be time-
consuming and may run into problems of getting access to actors or reliable key
informants.
- Hidden agendas and other manifestations of ambiguous power structures limit the
validity of any actor analysis.
- Some actors have not taken a clear position (yet), or are internally divided.
- Actors and networks change, but an actor analysis only provides a snapshot.
- Some types of actor analysis may have a polarizing effect, as they point out the differences
and/or the (potential) conflicts between actors.
- There may be a risk of a self-fulfilling prophecy, especially if the actor analysis is used
to identify allies and opponents in a policy process.
- Ethical considerations may be lost when actor analyses identify the critical actors only
with regard to the resources and the potential influence of actors on outcomes of a policy
process.

Key references

NB: Key references would differ depending on the specific method selected. Below are key
references related to the stakeholder analysis approach that is dominant in practice, and an
edited book volume that covers more than one method. Articles from TPM-researchers are
excluded here, but given in the next section.

Brugha, R., & Varvasovszky, Z. (2000). Stakeholder analysis: A review. Health Policy and
Planning, 15(3), 239-246.
Bryson, J.M. (2004). What to do when stakeholders matter: Stakeholder identification and
analysis techniques. Public Management Review, 6(1), 21-53.
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Marshfield,
Massachusetts: Pitman Publishing Inc.
Grimble, R., & Wellard, K. (1997). Stakeholder methodologies in natural resource
management: A review of principles, contexts, experiences and opportunities. Agricultural
Systems, 55(2), 173-193.
Marin, B., & Mayntz, R. (Eds.) (1991). Policy networks: Empirical evidence and theoretical
considerations. Frankfurt am Main: Campus Verlag. / Boulder, Colorado: Westview Press.
Mitroff, I. I. (1983). Stakeholders of the organizational mind: Toward a new view of
organizational policymaking. San Francisco: Jossey-Bass Publishers.

Key articles from TPM-researchers

Bots, P.W.G., Van Twist, M.J.W., & Van Duin, J.H.R. (2000). Automatic pattern detection in
stakeholder networks. In J.F. Nunamaker, R.H. Sprague (Eds.), Proceedings HICSS-33. Los
Alamitos: IEEE Press.
Hermans, L.M. (2005). Actor analysis for water resources management: Putting the promise
into practice. Delft, the Netherlands: Eburon Publishers.
Hermans, L.M., & Thissen, W.A.H. (2009). Actor analysis methods and their use for public
policy analysts. European Journal of Operational Research, 196 (2), 808-818.
Schouten, M.J., Timmermans, J.S., Beroggi, G.E.G., & Douven, W.J.A.M. (2001). Multi-
actor information system for integrated coastal zone management. Environmental Impact
Assessment Review, 21, 271-289.
Timmermans, J. (2008). Interactive actor analysis for rural water management in the
Netherlands: An application of the transactional approach. Water Resources Management
23(6), 1211-1236.
Timmermans, J. S., & Beroggi, G.E.G. (2000). Conflict resolution in sustainable
infrastructure management. Safety Science, 35, 175-192.
Timmermans, J. S., & Beroggi, G.E.G. (2004). An experimental assessment of Coleman's
linear system of action for supporting policy negotiations. Computational & Mathematical
Organization Theory, 10, 267-285.
Timmermans, J. S. (2004). Purposive interaction in multi-actor decision-making. Delft, the
Netherlands: Eburon Publishers
Van der Lei, Telli. (Forthcoming). Safeguarding Public Values in Infrastructure Design
through Actor Analysis Techniques. (PhD thesis).
Van Eeten, M.J.G. (2006). Narrative policy analysis. In F. Fischer, G.J. Miller, & M.S.
Sidney (Eds.), Handbook of public policy analysis: Theory, methods, and politics (pp. 251-
269). Taylor & Francis CRC Press.

Websites:
www.actoranalysis.com
www.dana.tudelft.nl

Experts at TPM (among others):
Pieter Bots (p.w.g.bots@tudelft.nl)
Scott Cunningham (s.cunningham@tudelft.nl)
Leon Hermans (l.m.hermans@tudelft.nl)
Jos Timmermans (j.s.timmermans@tudelft.nl)
Michel Van Eeten (m.j.g.vaneeten@tudelft.nl)

Description of a typical TPM-problem that may be addressed using the theory / method

Actor analysis methods have been applied to different domains, including water resources
management and airport planning. For instance, actor analysis has been used as part of the
preparation of water policies or river basin management plans in Egypt, Turkey and the
Philippines. A review of the actors involved in the water sector in those locations provided
water experts with new insights related to the important questions to be addressed by their
policy analyses, and to the organization of the process. However, the subsequent use of these
insights has been limited (Hermans, 2005).

Related theories / methods

- Methods: Cognitive mapping, Q-methodology;
- Theories: (Evolutionary) game theory, Expected utility theory;
- Concepts: Actor, Belief system, Institution.

Editor

Leon Hermans (l.m.hermans@tudelft.nl)

ADAPTIVE POLICYMAKING

Brief discussion

The first ideas on adaptive policies came early in the 1900s. Dewey (1927) put forth an
argument proposing that policies be treated as experiments, with the aim of promoting
continual learning and adaptation in response to experience over time. A large literature
review point out that the literature relating directly to the topic of adaptive policies is limited
(IISD, 2006). Recently, Walker et al. (2001) have developed a specific, stepwise approach for
adaptive policymaking. This approach allows implementation to begin prior to the resolution
of all major uncertainties, with the policy being adapted over time based on new knowledge.
It enables the implementation of long-term strategic policies to proceed despite deep
uncertainties. The approach makes adaptation explicit at the outset of policy formulation.
Thus, the inevitable policy changes become part of a larger, recognized process and are not
forced to be made repeatedly on an ad-hoc basis. Adaptive policies combine actions that
should be taken right away with those that make important commitments to shape the future,
preserve needed flexibility for the future, and protect the policy from failure.

Under this approach, policy changes would be based on a policy analytic effort that first
identifies system goals, and then identifies policies designed to achieve those goals and ways
of modifying those policies and protecting them from failure as conditions change. Within the
adaptive policy framework, individual actors would carry out their activities as they would
under normal policy conditions. But policymakers, through monitoring and mid-course
corrections, would try to keep the system headed toward the original goals.

Adaptive policymaking involves a number of steps, which are illustrated in the Figure below.
Both the first and the second steps are basically the same steps as are used currently in policy
formulation. The first step constitutes the stage-setting step in the policymaking process. This
step involves the specification of objectives, constraints, and available policy options. This
specification should lead to a definition of success, i.e. the specification of desirable
outcomes. In the next step, a basic policy is assembled, consisting of the selected policy
options and additional policy actions, together with an implementation plan. It involves (a)
the specification of a promising policy and (b) the identification of the conditions needed for
the basic policy to succeed. These conditions should support policymakers by providing an
advance warning in case of failure of policy actions.

In the third step of the adaptive policymaking process, the rest of the policy is specified.
These are the pieces that make the policy dynamic and adaptive. This step is based on
identifying in advance the vulnerabilities of the basic policy (the conditions or events that
could make the policy fail), and specifying actions to be taken in anticipation or in response to
them. Actions are defined related to the type of vulnerability and when the action should be
taken. Both certain and uncertain vulnerabilities can be distinguished. Certain vulnerabilities
can be anticipated by implementing mitigating actions: actions taken in advance to reduce
the certain adverse effects of a policy. Uncertain vulnerabilities are handled by implementing
hedging actions: actions taken in advance to reduce or spread the risk of the possible
uncertain adverse effects of a policy. Shaping actions are actions taken in advance to control
the future as much as possible, i.e., to reduce the chance that an external condition or event
that could make the policy fail will occur. For possible future actions, signposts are defined
and a monitoring system established to determine when actions are needed to guarantee the
progress and success of the policy. In particular, critical values of signpost variables (triggers)
are specified, beyond which actions should be implemented to ensure that the policy
progresses in the right direction and at the proper speed.
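
As a minimal sketch of what such signpost monitoring amounts to computationally (the
signpost names and trigger values below are hypothetical, loosely inspired by the airport
example later in this entry):

    # Hypothetical signposts with trigger values beyond which action is needed.
    triggers = {"annual passengers (millions)": 60.0,
                "noise complaints per year": 10000.0}

    def triggered(observed, triggers):
        # Return the signposts whose observed value has reached the trigger.
        return [name for name, value in observed.items()
                if value >= triggers.get(name, float("inf"))]

    observed = {"annual passengers (millions)": 63.2,
                "noise complaints per year": 8500.0}
    print(triggered(observed, triggers))  # ['annual passengers (millions)']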

Once the above policy is agreed upon, the final step involves implementation. In this step, the
actions to be taken immediately are implemented, a monitoring system is established, signpost
information related to the triggers is collected, and policy actions are started, altered, stopped,
or extended. After implementation of the initial mitigating and hedging actions, the adaptive
policymaking process is suspended until a trigger event is reached. As long as the original
policy objectives and constraints remain in place, the responses to a trigger event have a
defensive or corrective character; that is, they are adjustments to the basic policy that
preserve its benefits or meet outside challenges. Under some circumstances, neither defensive
nor corrective actions might be sufficient. In that case, the entire policy might have to be
reassessed and substantially changed or even abandoned. If so, however, the next policy
deliberations would benefit from the previous experiences. The knowledge gathered in the
initial adaptive policymaking process on outcomes, objectives, measures, preferences of
stakeholders, etc., would be available and would accelerate the new policymaking process.
Figure: The adaptive policymaking process (based on Walker, et al., 2001)

[Figure elements: I. Stage Setting (objectives, constraints, definitions of success, options set);
II. Assembling Basic Policy (policy actions; necessary conditions for success); III. Specifying
Rest of Policy (certain and uncertain vulnerabilities; mitigating, hedging and shaping actions;
signposts and triggers); IV. Implementation Phase (defensive actions, corrective actions,
reassessment, other actions; unforeseen events and changing preferences).]

Applicability

Adaptive policymaking can be used to develop policies for practically any policy domain. It is
particularly useful for long-term strategic policymaking involving deep uncertainties about
the future situation.

Pitfalls / debate

- Identifying vulnerabilities. The success of an adaptive policy is largely dependent on the
quality of the identification of vulnerabilities. If a vulnerability is missed, the adaptive
policy can fail if this vulnerability manifests itself. Often, wildcard/thinking outside the
box/brainstorming techniques are used to identify events that would negatively affect the
basic policy. However, these are external events and are not the same as the policy
vulnerabilities. For example, a vulnerability of Schiphol's current Master Plan is its
assumption that it has a hub carrier. Its current hub carrier (KLM) could disappear for
many reasons (e.g., bankruptcy). But if you have prepared yourself for the disappearance
of a hub carrier, you do not care whether it happens because of bankruptcy or because of a
merger with Air France.

- More generally, there is currently no structured approach for carrying out Steps III and IV
of the adaptive policymaking approach. Researchers at TPM are working on this.

- The implementation of adaptive policymaking will require significant institutional and
governance changes, since these types of policies are currently not supported by laws and
regulations.

Key references / articles from TPM-researchers

Walker, W.E., Rahman, S.A., & Cave, J. (2001). Adaptive policies, policy analysis, and
policy making. European Journal of Operational Research, 128, 282-289.
Van der Pas, J.W.G.M., Agusdinata, D.B., Walker, W.E., & Marchau, V.A.W.J. (2006).
Dealing with uncertainties in transport policy making: A new paradigm and approach. In Joint
International Conference, 11th Meeting of the EURO Working Group on Transportation and
Extra EURO Conference Handling Uncertainty in Transportation: Analyses, New
Paradigms, Applications and Comparisons, Bari (Italy).
Agusdinata, D.B., Marchau, V.A.W.J., & Walker, W.E. (2007). Adaptive policy approach to
implementing Intelligent Speed Adaptation. IET Intelligent Transport Systems, 1, 186-198.
Marchau, V., Walker, W., & Van Duin, R. (2007). An adaptive approach to implementing
innovative urban transport solutions. Transport Policy, 15(6), 405-412.
Marchau, V.A.J.W., Walker, W.E., & Van Wee, G.P. (2007). Innovative long-term transport
policy analysis: From predict and act to monitor and adapt. Proceedings of the European
Transport Conference (http://www.etcproceedings.org/paper/download/3215/), AET,
London.
Kwakkel, J., Walker, W.E., & Marchau, V.A.W.J. (2008). Adaptive airport strategic planning.
In H.J. van Zuylen, & A.J. van Binsbergen (Eds.), 10th International TRAIL Congress,
Selected Papers (pp. 115-142). TRAIL Research School, Delft.
Rahman, S.A., Walker, W.E., & Marchau, V.A.W.J. (2008). Coping with uncertainties about
climate change infrastructure planning: An adaptive policy making approach. Ecorys,
Rotterdam. (Available for download at:
http://www.raadvenw.nl/extdomein/raadvenw/Images/Final%20Report%20V2_tcm107-257701.pdf).

Description of a typical TPM-problem that may be addressed using the theory / method

Adaptive policies are promising for all TPM policy problems that are characterized by deep
uncertainty. Some examples of TPM applications of adaptive policies are in airport strategic
planning, sea-level rise, intelligent speed adaptation, and road pricing.

Please note that the course EPA-2132, which deals with uncertainties in policy analysis,
discusses a wide range of methods for long-term policymaking. Amongst others, a lot of
attention is given to adaptive policymaking and its application to cases in several policy
domains.

Related theories / methods

Uncertainty, Exploratory modelling, Assumption-Based Planning.

Editors:

Warren Walker (W.E.Walker@tudelft.nl)
Vincent Marchau (V.A.W.J.Marchau@tudelft.nl)
Jan Kwakkel (J.H.Kwakkel@tudelft.nl)
Jan-Willem van der Pas (J.W.G.M.vanderPas@tudelft.nl)























AGENT BASED MODEL

Definition

Agent-based models take agents and their interactions as central modelling focus points.
Stuart Kauffman has been quoted as saying that an agent is 'a thing which does things to
things'.

Another definition, by Shalizi: 'An agent is a persistent thing which has some state we find
worth representing, and which interacts with other agents, mutually modifying each other's
states.' The components of an agent-based model are a collection of agents and their states,
the rules governing the interactions of the agents, and the environment within which they live.
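
A minimal sketch of these components is given below (an illustrative imitation model written
for this entry, not taken from the references): twenty agents carry a binary state, and the
interaction rule is that a randomly chosen agent copies the state of another. Consensus, if it
arises, is an emergent outcome of the bottom-up interactions rather than a goal given to any
agent.

    import random

    random.seed(1)                                       # reproducible illustration
    agents = [random.choice([0, 1]) for _ in range(20)]  # the agents and their states

    steps = 0
    while len(set(agents)) > 1:                      # run until consensus emerges
        i, j = random.sample(range(len(agents)), 2)  # two randomly interacting agents
        agents[i] = agents[j]                        # interaction rule: imitate
        steps += 1
    print(f"consensus on state {agents[0]} after {steps} interactions")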

Applicability

Agent-based modelling is a technique that has been used to model anything from biological
ecosystems to innovations, markets and complex socio-technical systems.

Pitfalls / debate

A similar yet distinct field is that of Multi-Agent Systems (MAS). The difference can be stated as:

MAS: How can I make a ...?
Multi-Agent Systems are employed to engineer a certain desired emergence from a system.
MAS acknowledge that a given control problem (e.g. traffic control, agenda synchronization,
etc.) must be solved using discrete, parallel, autonomous components, namely agents. Work
needs to be done to ensure that all types of conflict resolution are performed in order to
achieve the desired outcome, (i.e., no traffic jams, and non-conflicting appointments).

ABM: What happens when ...?
Agent-Based Models are constructed to discover possible emergent properties from a bottom-
up perspective. ABM acknowledges that reality consists of many components acting in
parallel. ABM describes these entities and lets them interact in parallel, observing all possible
interaction models. There is no desired state or task that needs to be achieved, only an
exploration of the system's possible states.

Key references

There are countless examples of agent-based models in the literature. Some of the
earliest and best known are:
Epstein, J.M. (1999). Agent-based computational models and generative social science.
Complexity, 4(5), 41-60.
Epstein, J.M., & Axtell, R. L. (1996). Growing artificial societies: Social science from the
bottom up. MIT Press.

Key articles from TPM-researchers

Nikolic, I. (2009). Co-evolutionary method for modelling large scale socio-technical systems
evolution (PhD Thesis). TU Delft.
Van Dam, K.H. (2009). Capturing socio-technical systems with agent-based modelling (PhD
Thesis). TU Delft.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Any socio-technical system, and especially a multi-actor system, can be captured in an ABM.

Related theories / methods

Complexity, Complex Adaptive Systems, Emergence.

Editor

I. Nikolic (I.nikolic@tudelft.nl)



































BACKCASTING

Brief discussion

Backcasting literally means looking back from the future; it can be seen as the opposite of
forecasting, which is about looking to the future from the present. A more comprehensive
description of backcasting is to first develop a desirable future, before looking back at how
that future could have been achieved and through what pathways or trajectories that could have
happened. This is followed by setting agendas towards that desirable future and defining next
steps.

Key elements of participatory backcasting include: (1) the construction and use of desirable
normative scenarios and visions; (2) broad stakeholder participation and stakeholder learning;
and (3) combining process, participation, analysis and design using a wide range of methods
within the overall backcasting approach. The framework below consists of five steps and a
toolkit containing four groups of methods and tools: (1) participatory tools; (2) design tools;
(3) analytical tools, and; (4) management, coordination and communication tools. The
backcasting approach reflected by the framework is not only inter-disciplinary (combining
and integrating tools, methods and results from different disciplines), but also trans-
disciplinary (through the involvement of stakeholders).


Figure: Methodological framework for backcasting (Quist, 2007)

Applicability

According to Dreborg (1996), backcasting is particularly useful if it concerns highly complex
problems on a societal level, if there is a need for a major change, if dominant trends are part
of the problem, and if scope and time-horizon are wide enough to leave room for very
different choices and development pathways. Sustainability problems are obvious examples
of such problems.

It has been argued that the distinctive features of backcasting make it more appropriate for
sustainability applications than regular foresighting and scenario approaches (Dreborg, 1996).
This has much to do with the idea of taking a desirable (here: sustainable) future, or a range
of sustainable futures, as a starting point for analysing its potential, its feasibility and the
possible ways in which this future can be achieved.

Backcasting can be applied to system innovations, transitions and organisations, though
operationalisation must be adjusted to the level.

Pitfalls / debate

The approach has the usual stakeholder involvement pitfalls. In addition, the quality of the
application of the approach depends strongly on the quality of the developed desirable future
vision. Furthermore, applying backcasting must be aligned to the different types of goals that
can be pursued.

Key references

Dreborg, K. H. (1996). Essence of backcasting. Futures, 28(9), 813-828.
Green, K., & Vergragt, P. (2002). Towards sustainable households: A methodology for
developing sustainable technological and social innovations. Futures, 34, 381-400.
Höjer, M., & Mattsson, L.G. (2000). Determinism and backcasting in future studies.
Futures, 32, 613-634.
Holmberg, J., & Robèrt, K.H. (2000). Backcasting: A framework for strategic planning.
Journal of Sustainable Development and World Ecology, 7, 291-308.
Quist, J., & Vergragt, P. (2006). Past and future of backcasting: The shift to stakeholder
participation and a proposal for a methodological framework. Futures, 38(9), 1027-1045.
Robinson, J. (1990). Futures under glass: A recipe for people who hate to predict. Futures,
22, 820-843.
Robinson, J. (2003). Future subjunctive: Backcasting as social learning. Futures, 35, 839-
856.
Weaver, P., Jansen, L., Van Grootveld, G., Van Spiegel, E., & Vergragt, P. (2000).
Sustainable technology development. Sheffield UK: Greenleaf Publishers.

Key articles from TPM-researchers

Geurs, K., & van Wee, B. (2000). Backcasting as a tool to develop a sustainable transport
scenario assuming emission reductions of 80-90%. Innovation, 13(1), 47-62.
Geurs, K., & van Wee, B. (2004). Backcasting as a tool for sustainable transport policy
making: The environmentally sustainable transport study in the Netherlands. EJTIR, 4, 47-69.
Quist, J. (2007). Backcasting for a sustainable future: The impact after ten years. Eburon
Publishers, Delft NL, ISBN 978-90-5972-175-3.
Quist, J., Knot, M., Young, W., Green, K., & Vergragt, P. (2001). Strategies towards
sustainable households using stakeholder workshops and scenarios. International Journal of
Sustainable Developments (IJSD), 4(1), 75-89.
Quist, J., Rammelt, C., Overschie, M., & De Werk, G. (2006). Backcasting for sustainability
in engineering education: The case of Delft University of Technology. Journal Cleaner
Production, 14, 868-876.

Description of a typical TPM-problem that may be addressed using the theory / method

Any sustainability problem at a societal level, many multi-actor system problems. Good
examples can be found in the references.

Related theories / methods

Transition management, Vision assessment, Socio-technical scenarios.

Editor

Jaco Quist (j.n.quist@tudelft.nl)


































BAYESIAN ANALYSIS

Brief discussion

Relying solely on the most basic laws of probability, Bayes' Theorem provides a logically
consistent and mathematically elegant way to model processes of learning about the
probability of some event (A) under conditions of uncertainty, based on observations (O). For
example, A can denote the event of an infrastructure failure and O can be a (series of)
observation(s) of particular elements of that infrastructure, such as inspections of a particular
bridge that is part of the infrastructure. Formally, Bayes' Theorem can be derived as follows:

$P(A \mid O)\,P(O) = P(O \mid A)\,P(A) \;\Rightarrow\; P(A \mid O) = \frac{P(O \mid A)\,P(A)}{P(O)}.$

Here P(A|O) is the updated perceived probability (or: posterior) of infrastructure failure,
given that a particular observation is made. P(A) represents the initially perceived probability
(or: prior) of infrastructure failure. P(O|A) denotes the ex ante perceived probability of
observation O, given infrastructure failure, and P(O) is a normalizing factor representing the
ex ante perceived probability of making observation O.
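
A small numerical sketch of this updating step follows; the prior and likelihoods are
hypothetical values chosen only for illustration.

    # Hypothetical values: updating the probability of infrastructure failure (A)
    # after an inspection yields a worrying observation (O).
    p_A = 0.01              # prior P(A)
    p_O_given_A = 0.90      # P(O|A): chance of this observation given failure
    p_O_given_not_A = 0.10  # P(O|not A): chance of this observation otherwise

    p_O = p_O_given_A * p_A + p_O_given_not_A * (1 - p_A)  # normalizing factor P(O)
    p_A_given_O = p_O_given_A * p_A / p_O                  # posterior P(A|O)
    print(round(p_A_given_O, 3))  # 0.083: the observation raises the 0.01 prior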

Applicability

Bayes' Theorem has proven to be an enormously useful tool for virtually any analysis that
involves learning from observations. As such, Bayesian analysis has been applied in
practically every field of (social) science and engineering. Roughly, three categories of
Bayesian analysis may be distinguished, based on the goal of the analysis.

First, Bayesian analysis is used to model human behaviour. For example, Bayesian analysis is
very popular in fields such as marketing and microeconomics to model how consumers over
time learn about the quality of products. This type of Bayesian analysis started with
Edwards et al.'s seminal paper (1963).

Second, Bayesian analysis is a branch of statistics and econometrics, being a counterpart of
classical statistics and econometrics that most TPM students and researchers use. A
description of the difference between these two approaches is outside the scope of this piece,
but note that the Bayesian approach is very popular in many fields (it can be conceived as a
kind of statistical version of the Microsoft versus Apple war). See Zellner (2007) for an
excellent overview of this Bayesian perspective on econometrics.

Third, Bayesian analysis is often applied to support decisions of policy makers and planners.
Many policy and planning problems that require decision-making under conditions of
uncertainty and that involve the possibility of doing observations have been tackled by means
of Bayesian analysis. See Raiffa and Schlaifer (1961) for a seminal discussion of the power of
Bayesian analysis for supporting policy-making and planning.

Pitfalls / debate

The use of Bayesian analysis for modelling behaviour has been criticized for providing a
normative, rather than a descriptive account of behaviour. In general, finding suitable priors is
non-trivial when little data is available; this is considered by many a drawback of Bayesian
methods. When enough data is available, however, priors become irrelevant.

Key references

Edwards, W., Lindman, H., & Savage, L.J. (1963). Bayesian statistical inference for
psychological research. Psychological Review, 70, 193-242.
Raiffa, H., & Schlaifer, R. (1961). Applied statistical decision theory. Boston, MA: Harvard
University Press.
Zellner, A. (2007). Some aspects of the history of Bayesian information processing. Journal
of Econometrics, 138, 388-404.

Key articles from TPM-researchers

Chorus, C.G., Arentze, T.A., Molin, E.J.E., Timmermans, H.J.P., & van Wee, G.P. (2006).
The value of travel information: Decision-strategy specific conceptualizations and numerical
examples. Transportation Research Part B, 40(6), 504-519.
Chorus, C.G., Arentze, T.A., & Timmermans, H.J.P. (2008). A general model of Bayesian
travel time learning. Proceedings of the 87th annual meeting of the Transportation Research
Board, Washington, D.C.

Description of a typical TPM-problem that may be addressed using the theory / method

To support transport policy-making, researchers have applied Bayesian analysis to predict
behaviour of individual travellers: for example, in an attempt to understand the economic
value of travel information, the editor of this piece has applied Bayesian analysis to model
how travellers react to messages they receive from information services.

Related theories / methods

Reinforcement learning, Information theory.

Editor

Caspar Chorus (c.g.chorus@tudelft.nl)










BAYESIAN BELIEF NETWORKS (BBN)

Brief discussion

Bayesian Belief Nets (BBNs) are a theory of reasoning from uncertain evidence to uncertain
conclusions, providing a framework for graphical representation of the causal relations
between variables and capturing the uncertain dependencies between them using conditional
probabilities. A BBN consists of nodes (for random discrete or continuous variables) and arcs
(for causal relations between variables). The arcs are directed from the parent (cause) node to
the child (effect) node. The structure of the BBN in the Figure below shows that the variable B
is influenced by variables A and C, and that the variables A and C are independent. Each node
has two possible values, corresponding to the working (OK) and failure (notOK) states. The
probability tables for nodes A and C and the conditional probability table for node B are given
in the Table below.


Figure: Simple example of BBN


Table: (Conditional) Probability Tables (CPTs) for the example given in the Figure above

New information about one variable can be included in a BBN, and the other probabilities can
then be recomputed. The information can be propagated both up and down through the
network, making two types of reasoning possible (a numerical sketch follows below):
- predictive reasoning, when the probability of occurrence of some child nodes is computed
based on evidence in some of the parent nodes, and
- diagnostic reasoning, when the probability of any set of variables that may cause the
evidence is computed.
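
As a numerical sketch for the example network above (the probability values are hypothetical,
since the tables themselves are not reproduced here), the marginal probability of B is obtained
by summing its conditional probability table over the states of the independent parents A and
C; entering evidence amounts to fixing a parent's probability.

    # Hypothetical (conditional) probabilities for the A -> B <- C example.
    p_A_ok, p_C_ok = 0.9, 0.8        # P(A = OK) and P(C = OK)
    p_B_ok = {(True, True): 0.95,    # P(B = OK | A = OK, C = OK)
              (True, False): 0.60,
              (False, True): 0.50,
              (False, False): 0.10}

    # Marginal P(B = OK), summing over the parent states.
    p_B = sum(p_B_ok[(a, c)]
              * (p_A_ok if a else 1 - p_A_ok)
              * (p_C_ok if c else 1 - p_C_ok)
              for a in (True, False) for c in (True, False))
    print(round(p_B, 3))  # 0.834
    # Entering evidence, e.g. observing A = OK, amounts to setting p_A_ok = 1.0.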

Applicability

The causal relations between variables in a BBN are expressed in terms of conditional
probabilities. The model is probabilistic rather than deterministic: parent nodes influence the
frequency of child nodes, but do not determine their occurrence. Fault Trees (FTs) and Event
Trees (ETs) are also considered probabilistic, since they associate a probability with the
occurrence of an event. The relations between events are, however, deterministic: the failure of
all cause components (for an AND gate) or of at least one (for an OR gate) definitely
determines the failure of the effect event (in FTs); or the occurrence or not of an event is
followed by the occurrence or not of another event (in ETs). BBN is therefore more
applicable in problems where the dependence between variables is important.
The BBN method is also able to combine information derived from existing databases of past
accidents with information obtained from experts, using expert judgment exercises.

Pitfalls / debate

The application of BBN has two basic steps: a qualitative step, in which the structure of the
network is defined, and a quantitative one, in which distributions are associated with nodes
and conditional probabilities are associated with arcs. Although the BBN method has been
applied in numerous fields, the literature provides little description of the ways in which
the graphical structure of the network is defined. When databases are available, the structure
of the network can be derived from them. But when a database is missing, the analyst should
pay particular attention to the importance of the qualitative part.

Key references
Dempster, A.P. (1990). Construction and local computation aspects of network belief
functions (with discussion). In R.M. Oliver, & J.Q. Smith (Eds.), Influence diagrams, belief
nets and decision analysis. Chichester: Wiley.
Jensen, F.V. (1996). An introduction to Bayesian networks. Springer.
Jensen, F.V. (2001). Bayesian networks and decision graphs. Springer.

Key articles from TPM-researchers

Ale, B.J.M., Bellamy, L.J., van der Boom, R., Cooke, R.M., Duyvism, M., Kurowicka, D.,
Lin, P.H., Morales, O., Roelen, A., & Spouge, J. (2009). Causal model for air transport
safety. Final report.
Hanea, D.M. (2009, November). Human risk of fires: Building a decision support tool using
Bayesian networks (PhD thesis). Safety Science Group, TPM.

Description of a typical TPM-problem that may be addressed using the theory / method

The BBN method has been applied to air transport safety as well as to fire safety in buildings. Other possible fields of application in the area of safety are studies of the factors which influence safety culture, or studies of the causes of road accidents.

Related theories / methods

BBN is a generalization of Fault Trees (FTs) and Event Trees (ETs). It can be used in combination with expert judgment, when databases are not available. BBN can be used to analyze accident databases, to find causal relations between the factors contributing to an accident. BBN can also be part of Quantitative Risk Analysis (QRA).

Editor

Daniela Hanea (d.m.hanea@tudelft.nl)












































BOUNCECASTING

Brief discussion

Bouncecasting is a scenario-based method that can be used to support long-term
policymaking in the face of deep uncertainty. Traditionally, two types of scenarios have been
used to support long-term policymaking:

- Forecasting scenarios: a range of plausible futures that are relevant for the system of
interest are developed based on current conditions and hypotheses about the external
forces; robust policies are identified that can deal with the policy problems identified in
most of these future worlds.

- Backcasting scenarios: a future dream world is defined and policies are identified that
enable the achievement of this dream; or, a nightmare scenario is defined, and policies
are identified that try to avoid it.

Each of these methods has its disadvantages. Forecasting scenarios typically depend on trends
that are assumed to continue into the future. Trend breaks are often not taken into account.
Backcasting scenarios tend to lock into predetermined endpoints that might change as the
future develops. Bouncecasting combines both approaches in order to overcome the
disadvantages of each of them.

In bouncecasting, one looks both forward from the present to possible futures, and backwards
from these possible futures to the present. Bouncecasting first develops a number of
forecasting scenarios and then backcasts from each of them to see the 'hindsights' (what can
be learned, in case the future turns out this way). By comparing the hindsights among these
futures, we can identify what sort of planning is needed regardless of which future arises, and
what sort of planning is dependent upon the specific future. During the decisionmaking
process, simulation experiments can be used to help understand the consequences of the
decisions taken in terms of the (operational) performance of the various possible changes to
the system.

Steps in conducting a bouncecasting exercise are the following:
1. Forecasting: Construction of a range of futures. This will produce a set of scenarios
spanning the future plausibility space that provide different pictures of the future system.
The different pictures of the future system and their outcomes can be partly quantified
with the help of simulation experiments;

2. Backcasting: Crucial stakeholders in the system analyze the strengths, weaknesses,
opportunities, and threats (SWOT analysis) represented by each of the scenarios. Next,
taking the SWOT analysis into consideration, stakeholders indicate, with hindsight, what
could be done currently in order to be better positioned for the future -- mitigating the
weaknesses and threats inherent in the future situation, while maintaining the strengths.
This part focuses on the question: 'If we had only known that this would happen in the future, what should we have done now?'

3. Bouncing back: Hindsight policies and other actions are used to modify the scenarios. The
revised scenarios are then used in conjunction with additional simulations to identify
robust policies, to draw lessons for present decisionmaking by any and all of the
stakeholders, and to evaluate the decisionmaking on the performance of the system.
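
The comparison step in bouncing back can be illustrated with a toy sketch: hindsight actions are collected per forecasting scenario, and actions that recur in every future are flagged as robust. All scenario names and actions below are invented.

```python
# Toy sketch of the comparison behind 'bouncing back': actions shared
# by all futures are robust; the rest are future-specific. Invented data.

hindsight_actions = {
    "high-growth": {"reserve road capacity", "pilot road pricing", "upgrade ICT"},
    "stagnation":  {"subsidise public transport", "pilot road pricing", "upgrade ICT"},
    "green-shift": {"expand cycling network", "pilot road pricing", "upgrade ICT"},
}

robust = set.intersection(*hindsight_actions.values())
print("Robust (needed regardless of which future arises):", sorted(robust))
for scenario, actions in hindsight_actions.items():
    print(f"Contingent on '{scenario}':", sorted(actions - robust))
```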

Key references

Kahan, J.P. (2004). Bouncecasting: Combining forecasting and backcasting to achieve policy
consensus. Paper presented at the APPAM Fall Research Conference (28-30 October, 2004),
Atlanta, GA.
Botterman, M., Cave, J., Kahan, J.P., Robinson, N., Shoob, R., Thomson, R., & Valeri, L. (2004). Cyber trust and crime prevention: Gaining insight from three different futures. Foresight Directorate, Office of Science and Technology, United Kingdom. (Downloadable at: http://www.foresight.gov.uk/Cyber/Gaining%20insight%20from%20three%20different%20futures.pdf)

Key articles from TPM-researchers

Liu, Y. (2005). Dealing with uncertainties by bouncecasting: Application in ISA implementation (Master of Science dissertation). Faculty of Technology, Policy and Management, Delft University of Technology.

Description of a typical TPM-problem that may be addressed using the theory / method

Long-term policy problems that specifically involve uncertainty about stakeholder
preferences with respect to solutions and actor behaviour in the system (e.g. road pricing,
Intelligent Speed Adaptation).

Please note that the course EPA-2132, dealing with uncertainties in policy analysis, discusses
a wide range of methods for long-term policy analysis, including bouncecasting.

Related theories / methods

Scenario development, Scenario analysis, Uncertainty

Editors

Warren Walker (W.E.Walker@tudelft.nl)
Vincent Marchau (V.A.W.J.Marchau@tudelft.nl)
Jan-Willem van der Pas (J.W.G.M.vanderPas@tudelft.nl)







CAUSAL MODEL

Definition

A causal model is an abstract model that uses cause-and-effect logic to describe the behaviour of a system. According to a formal definition, a causal model is a mathematical object that provides an interpretation and computation of causal queries about the domain (Galles & Pearl, 1998). A causal model can be associated with a directed graph in which each node corresponds to a variable of the model and the directed edges (arrows) point from the variables that have a direct influence to each of the variables that they influence.
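
A minimal computational reading of this definition is given below: each variable is a function of its parents (plus noise), and a causal query is answered by intervening, i.e. overriding a variable's function with do(X=x). The variables and probabilities are invented for illustration.

```python
# Sketch of a three-variable structural causal model with an intervention.
# Structure: maintained -> technical_ok -> incident. Numbers are invented.

import random

def sample(do=None):
    do = do or {}
    maintained = do.get("maintained", random.random() < 0.7)
    technical_ok = do.get("technical_ok",
                          random.random() < (0.99 if maintained else 0.90))
    incident = do.get("incident",
                      random.random() < (0.02 if technical_ok else 0.50))
    return incident

random.seed(0)
n = 100_000
p_obs = sum(sample() for _ in range(n)) / n
p_do = sum(sample(do={"maintained": True}) for _ in range(n)) / n
print(f"P(incident) = {p_obs:.3f}; P(incident | do(maintained)) = {p_do:.3f}")
```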

Applicability

A causal model is useful in the analysis of a (complex) system to determine the part played by multiple factors. It has been widely used to assess risk in critical infrastructures, such as air transport, the chemical industry, and nuclear power plants. It can also be applied to critical infrastructure protection in homeland security, for example in the defence industry, the internet, and telecom. Another field of application is decision making: by manipulating the factors incorporated in the causal model, one can visualize the impact on the other factors, which greatly supports business and organizational decision-making activities.

Pitfalls / debate

Reality is complex. Transparency, model completeness and correctness are often in conflict. A causal model is an abstraction of reality, which in itself is not transparent. If more aspects of reality are represented in the model, it will become less transparent. Balancing transparency against completeness of the model, and putting it to use, is usually difficult. Besides, quantifying a complex system requires a large amount of data. The reliability and the amount of the data available can be a problem when quantifying and validating the model that has been built.

Key references

International journals
- Reliability Engineering & System Safety
- Quality and Reliability Engineering International
Handbooks
- Smith, D. (2005). Reliability, maintainability and risk: Practical methods for engineers.
Butterworth-Heinemann.
- Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University
Press.
- Vesely, W.E. (1987). Fault tree handbook. NUREG.
- Ale, B.J.M., et al. (2008). Causal model for air transport safety - final report. Ministerie van Verkeer en Waterstaat, Den Haag, Netherlands.

Key articles from TPM-researchers

Ale, B.J.M., Bellamy, L.J., Van der Boom, R., Cooper, J., Cooke, R.M., Goossens, L.H.J.,
Hale, A.R., Kurowicka, D., Morales, O., Roelen, A.L.C., & Spouge, J. (2009). Further
development of a Causal model for Air Transport Safety (CATS): Building the mathematical
heart. Reliability Engineering & System Safety, 94(9), 1433-1441.
Ale, B.J.M., Bellamy, L.J., Cooke, R.M., Goossens, L.H.J., Hale, A.R., Roelen, A.L.C., &
Smith, E. (2006). Towards a causal model for air transport safety - an ongoing research
project. Safety Science, 44(8), 657-673.
Roelen, A. (2008). Causal risk models of air transport: Comparison of user needs and model
capabilities (PhD thesis). TU Delft.
Ale, B.J.M., Bellamy, L.J., Van der Boom, R., Cooke, R.M., Goossens, L.H.J., Kurowicka, D.,
Lin, P.H., Roelen, A.L.C., Cooper, H., & Spouge, J. (2008, May). Further development of a
Causal model for Air Transport Safety (CATS): The complete model. PSAM9, Hongkong.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Currently the technique is used in a large international project called CATS (Causal Modelling for Air Transport Safety). The main purpose of this research is to understand which technical failures, human factors, and management influences affect risk in aviation, and to help in their prevention. We use the notion of control functions and the causal modelling technique to formulate occurrences or events that might lead to loss of control in a complex and dynamic system such as aviation. The use of a fully integrated directed acyclic graph made it possible to model much more realistically the complex interactions between the technical and organisational systems that make an aircraft operate and fail.

Related theories / methods

This concept is related to several modelling techniques. For instance, a Bayesian network (a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG)) is a method for representing graphs and probabilities; structural equation modelling is a functional causal modelling approach that is well known in the social sciences; and when quantification of the model is required but sufficient data are not available, different expert judgment techniques may be applied.

Editor

P.H. Lin (p.h.lin@tudelft.nl)














COLLABORATION ENGINEERING

Brief discussion

Collaboration Engineering is an approach to designing collaboration processes and tools that creates sustained collaboration support: collaborative work practices are designed for high-value recurring tasks, and deployed for practitioners to execute for themselves without ongoing support from professionals. Collaboration Engineering has developed into a growing research field. It intends to enable an improvement in the quality of collaboration for a recurring task in an organization, such as requirements negotiation or risk assessment.

Collaboration support consists of process support and technology support. For these two types of support we can distinguish a design task (to design the process and the technology), an application task (to apply the process and to use the technology) and a management task (to manage the implementation and control of the process and to manage the maintenance of the technology). In Collaboration Engineering these tasks are divided among several roles, which enables outsourcing and dividing the workload of collaboration support. The collaboration engineer designs a reusable and predictable collaboration process prescription for a recurring task, and transfers the prescription to practitioners to execute for themselves without the ongoing intervention of group process professionals. These practitioners are domain experts, and are trained to become experts in conducting one specific collaboration process, but are not necessarily experts in designing new processes for themselves or others. They execute the designed collaboration process as part of their regular work. When using collaboration support technology, the technical execution can be performed by a single practitioner, or two practitioners may work together, one moderating the process while the other runs the technology. The skills required for the application roles are very light, focused only on executing a single approach.

Applicability

Initial experiences with Collaboration Engineering are very promising. Examples of
Collaboration Engineering projects include, but are not limited to:

- Integrity assessments at the Dutch Government (Kolfschoten et al., 2008)
- Usability testing of a health emergency system (Fruhling & De Vreede, 2005).
- Software requirements negotiations (Boehm, Gruenbacher, & Briggs, 2001).
- Risk assessment at an international bank (De Vreede & Briggs, 2005).
- End-user reflection (Bragge, Merisalo-Rantanen, & Hallikainen, 2005).

Collaboration Engineering is used for collaborative work practices, or practices that in the future require a collaborative approach. With collaboration we mean tasks that require the joint, highly interactive effort of stakeholders or experts to achieve goals and high quality
outcomes. Collaboration Engineering relies on a pattern language of design patterns called
thinkLets. ThinkLets describe methods to support collaborative activities such as collecting
input from the group, clustering, voting, consensus building and creating shared
understanding. They support a collaborative approach to problems and are therefore less
suitable to settings where a negotiation approach is required. Collaboration Engineering has
emerged from the research on Group Support Systems, but has evolved to become platform
and tool independent.
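
As an illustration of what such a process prescription looks like, the sketch below represents one as an ordered sequence of thinkLets. The thinkLet names are taken from the Collaboration Engineering literature, but this particular risk-assessment sequence is invented and not an official prescription.

```python
# Illustrative sketch: a collaboration process prescription as an ordered
# sequence of thinkLets. The sequence itself is invented.

from dataclasses import dataclass

@dataclass
class ThinkLet:
    name: str        # name of the design pattern
    activity: str    # collaborative activity it supports
    script: str      # what the trained practitioner does

risk_assessment = [
    ThinkLet("FreeBrainstorm", "collect input", "gather candidate risks"),
    ThinkLet("PopcornSort", "clustering", "group related risks"),
    ThinkLet("StrawPoll", "voting", "rate likelihood and impact"),
    ThinkLet("CrowBar", "consensus building", "discuss rating outliers"),
]

for step in risk_assessment:
    print(f"{step.activity:18s} -> {step.name}: {step.script}")
```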

Fruhling, A., & De Vreede, G.J. (2005). Collaborative usability testing to facilitate
stakeholder involvement. In S. Biffl, A. Aurum, B. Boehm, H. Erdogmus, & P. Grünbacher
(Eds.), Value based software engineering (pp. 201-223). Berlin: Springer-Verlag.
Boehm, B., Gruenbacher, P., & Briggs, R.O. (2001). Developing groupware for requirements
negotiation: Lessons learned. IEEE Software, 18(3), 46-55.
De Vreede, G.J., & Briggs, R.O. (2005). Collaboration engineering: Designing repeatable
processes for high-value collaborative tasks. In Hawaii International Conference on System
Science, Los Alamitos.
Bragge, J., Merisalo-Rantanen, H., & Hallikainen, P. (2005). Gathering innovative end-user
feedback for continuous development of information systems: A repeatable and transferable
e-collaboration process. IEEE Transactions on Professional Communication, 48, 55-67.
Kolfschoten, G.L., Kosterbok, J., & Hoekstra, A. (2008). A transferable ThinkLet based
process design for integrity risk assessment in government organizations. Proceedings of the
Group Decision and Negotiation Conference, Coimbra, INESC Coimbra.

Pitfalls / debate

As described, like Group Support Systems, Collaboration Engineering is meant to support
tasks that benefit from a collaborative approach. The approach is less suitable when groups
are unwilling to collaborate, share information and resources and create a joint result.
Collaboration Engineering uses principles that ensure equal and active participation, and that
help to establish democratic principles. These are less suitable when purposeful power
distances are present, or when the intention is to push a solution, rather than developing one in
collaboration with stakeholders.

Key references

De Vreede, G. J., Davison, R., & Briggs, R. O. (2003). How a silver bullet may lose its shine -
learning from failures with group support systems. Communications of the ACM, 46(8), 96-
101.
De Vreede, G. J., & De Bruijn, J. A. (1999). Exploring the boundaries of successful GSS
application: Supporting inter-organizational policy networks. DataBase, 30(3-4), 111-131.
De Vreede, G. J., Briggs, R. O., & Kolfschoten, G. L. (2006). ThinkLets: A pattern language
for facilitated and practitioner-guided collaboration processes. International Journal of
Computer Applications in Technology, 25(2/3), 140-154.
Kolfschoten, G. L., & De Vreede, G. J. (2009). A design approach for collaboration processes: A multi-method design science study in collaboration engineering. Journal of Management Information Systems, 26(1), 225-256.
Briggs, R. O., De Vreede, G. J., & Nunamaker, J. F., Jr. (2003). Collaboration engineering
with ThinkLets to pursue sustained success with group support systems. Journal of
Management Information Systems, 19(4), 31-63.

Key articles from TPM-researchers

Kolfschoten, G. L., Briggs, R. O., De Vreede, G. J., Jacobs, P. H. M., & Appelman, J. H.
(2006). Conceptual foundation of the ThinkLet concept for collaboration engineering.
International Journal of Human Computer Science, 64(7), 611-621.
Kolfschoten, G. L., Den Hengst, M., & De Vreede, G. J. (2007). Issues in the design of
facilitated collaboration processes. Group Decision and Negotiation, 16(4), 347-361.
Kolfschoten, G. L., De Vreede, G. J., Briggs, R. O., & Sol, H. G. (2007). Collaboration
engineerability. Group Decision and Negotiation, 19(3), 301-321.

Description of a typical TPM-problem that may be addressed using the theory / method

In Multi Actor settings, in order to gain support for a solution, participation of stakeholders in
the process that leads to this solution is critical. Therefore, Collaboration Engineering
methods are broadly applicable in multi actor research. Collaboration Engineering has been
applied to design collaborative work practices for, e.g., requirements negotiations (Alaa et al., 2006), crisis response (Appelman & Van Driel, 2005) and e-democracy (Kolfschoten, 2007).

Alaa, G., Appelman, J.H., & Fitzgerald, G. (2006). Process support for agile requirements
modeling and maintenance of e-projects. In Americas Conference on Information Systems,
Acapulco, Mexico.
Appelman, J.H., & Van Driel, J. (2005). Crisis-response in the port of Rotterdam: Can we do
without a facilitator in distributed settings? In Hawaii International Conference on System
Science, Los Alamitos.
Kolfschoten, G.L. (2007). A toolbox for the facilitation of e-democracy. In Group Decision
and Negotiation Conference, Mt. Tremblant.

Related theories / methods

In Collaboration Engineering a collaboration process is designed and deployed for a recurring task. Collaboration Engineering therefore has similarities with process engineering, and the approach for deployment resembles approaches for organizational change. The general steps in the Collaboration Engineering approach to implement collaboration support in an organization are therefore similar to the phases in a process for organizational process change or business process reengineering (BPR). The thinkLet design patterns and patterns for computer mediated interaction (see the entry on Design Patterns) are both used in Collaboration Engineering.

Editors

Gwendolyn Kolfschoten (G.L.Kolfschoten@tudelft.nl)
Stephan Lukosch (S.G.Lukosch@tudelft.nl)








COLLABORATIVE STORYTELLING

Brief discussion

Telling stories is not only a given phenomenon of human practice; it is also purposefully used as a method or a procedure in different areas of application, under the designation 'storytelling'. Collaborative storytelling aims at the development of a common understanding within a group by coordinated narrating activities (each person contributing his or her own knowledge and interpretation of a common experience), in order to make implicit knowledge explicit. In doing so, the social relations, roles, and responsibilities of the people collaboratively telling a story are revealed.

In general, three approaches to collaborative storytelling can be differentiated. Peer-to-peer
approaches focus on equal discourse, e.g. for shared knowledge construction, informal
knowledge management (Lukosch et al., 2008) or in narrative reflection on completed
projects (Perret et al., 2004). Top-down approaches to storytelling address knowledge transfer
or dissemination of information where the storytellers have means to define knowledge
(Snowden, 2000). Bottom-up approaches target participation or qualification for participation
and can serve to disseminate knowledge from experts to laymen.

A special quality in collaborative storytelling arises from the restriction on verbal telling of
stories, i.e. from the restriction on auditory production and perception. Firstly, spoken
language is the basis for telling stories. Secondly, spoken language is an essential and quite
natural part of human communication. The act of telling stories in groups ties to the everyday
experience of discussing collectively remembered episodes. Here, narrative structures do not
develop exclusively as a creation of an autonomously narrating person, but are formed by
demands, additions, references, interpretation offers and much more from listening persons,
who become co-tellers (Ochs & Capps, 2001).

Applicability

The scope of possible areas of application is very broad and spans from knowledge elicitation
and management, collaborative learning and shared knowledge construction, open channels
on local radio to user oriented requirement analysis during product development.

Pitfalls / debate

One major issue in relation to collaborative storytelling is anonymity. Actors might not be
willing to reveal knowledge or stories which can be connected to them and might be used
against them. Thus, the set of people accessing and working with the stories has to be
carefully selected and communicated to the persons participating in a collaborative
storytelling process.

Key references

Ochs, E., & Capps, L. (2001). Living narrative: Creating lives in everyday storytelling.
Harvard University Press.
Perret, R., Borges, M.R.S., & Santoro, F.M. (2004). Applying group storytelling in knowledge management. In Groupware: Design, implementation, and use (pp. 34-41). 10th International Workshop, CRIWG 2004, LNCS 3198. Berlin Heidelberg: Springer-Verlag.
Snowden, D. (2000). The art and science of story or 'are you sitting uncomfortably?' Part 1: Gathering and harvesting the raw material. Business Information Review, 17, 147-156.
Snowden, D. (2000). The art and science of story or 'are you sitting uncomfortably?' Part 2: The weft and the warp of purposeful story. Business Information Review, 17, 215-226.

Key articles from TPM-researchers

Lukosch, S., Klebl, M., & Buttler, T. (2008). Facilitating audio-based collaborative
storytelling for informal knowledge management. In Proceedings of the 14th Collaboration
Researchers' International Workshop on Groupware (CRIWG 2008), 289-304. Berlin
Heidelberg: Springer-Verlag.
Lukosch, S., Klebl, M., Buttler, T., & Hackel, M. (2009). Eliciting requirements with audio-
based collaborative storytelling. Proceedings of the International Conference on Group
Decision and Negotiation (GDN), 99-104.

Description of a typical TPM-problem that may be addressed using the theory / method

In multi-actor settings, collaborative storytelling can be used to elicit tacit knowledge about
existing processes and policies. It can also be applied to reveal requirements for the design of
multi-actor systems. Collaboratively told stories converge from a simple structure and an open moral stance towards a well-structured story with a constant moral stance. Thus, collaborative storytelling can help in identifying best practices.

Related theories / methods

As collaborative storytelling helps in identifying best practices, it is related to the theory of
design patterns. Here, storytelling can help mining patterns. In addition, storytelling is related
to collaboration engineering as it can be one method to support group decision making by
creating a shared understanding.

Editor

Stephan Lukosch (S.G.Lukosch@tudelft.nl)
















COST BENEFIT ANALYSIS

Brief discussion

Cost Benefit Analysis (CBA or social CBA) is a tool to help politicians make choices. A CBA is a systematic overview of all pros (benefits) and cons (costs) of a project or plan, as much as possible quantified and expressed in monetary terms. CBA is based on the neoclassical economic notion of rational individuals, that is, persons who make decisions on the basis of a comparison of benefits and costs. Benefits are based on consumers' willingness to pay for an effect of a project. Costs are what consumers are willing to receive as compensation for giving up resources. In mainstream CBA the measures of benefits and costs are grounded in the concept of economic efficiency. A reallocation of resources - building a road in a scenic landscape, for example - increases economic efficiency if the sum of the benefits to those who gain by that reallocation exceeds the sum of the costs to those who lose. Another way of saying this is that there is an increase in economic efficiency if (in principle) the gainers could compensate the losers without becoming losers themselves. This test is the efficiency criterion (or compensation test).
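
A toy version of the compensation test can make this concrete. The sketch below discounts invented benefit and cost streams of a hypothetical road project and checks the sign of the net present value; all numbers are assumptions.

```python
# Toy compensation-test sketch: discounted benefits to gainers versus
# discounted costs to losers of a hypothetical road project. Invented data.

r = 0.04                  # assumed social discount rate
horizon = range(1, 31)    # 30-year evaluation period
annual_benefits = 12.0e6  # e.g. travel time gains to road users (assumed)
annual_costs = 3.0e6      # e.g. landscape and noise losses (assumed)
investment = 120.0e6      # construction cost in year 0 (assumed)

pv = lambda cashflow: sum(cashflow / (1 + r) ** t for t in horizon)
npv = pv(annual_benefits) - pv(annual_costs) - investment
print(f"NPV = EUR {npv / 1e6:.1f} million")

# Efficiency criterion: a positive NPV means the gainers could, in
# principle, compensate the losers and still be better off.
print("Passes compensation test" if npv > 0 else "Fails compensation test")
```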

Applicability

CBA is a tool that can be used in all policy applications, in both ex post and ex ante evaluations. Conditions for a successful CBA are: a clear project or plan description, good quality reference scenarios (descriptions of the future or past world without the project or plan) and good quality impact studies or model calculations. Since 2000 it has been compulsory in the Netherlands to carry out a CBA prior to large transport infrastructure investment decisions.

Pitfalls / debate

Some point out that it would not be sensible to use highest total utility (benefits of winners outweigh costs of losers) as the only yardstick in decision-making. It is not wise to disregard the presence of tragic choices in politics, as when CBA leads to a choice of course A (many winners), but course A leads to uncompensated losers (a potentially small group whose members may suffer from serious illness and even death). It is advisable to use CBA only as a source of policy-relevant information; CBA should not automatically determine the decision-making. Another pitfall of CBA is its high complexity, which can make the tool and its results incomprehensible for the target user group: the policy-makers.

Key references

Brent, R.J. (1997). Applied cost-benefit analysis. Cheltenham, UK / Lyme, US: Edward Elgar.
Journal of Legal Studies, Vol. XXIX (2000, June). [contains a very interesting and inspiring
special issue on cost-benefit analysis, the issue consists of essays on CBA by several
important Nobel Prize economists]
NEI, & CPB (2000). Evaluation of transport infrastructure: A guide for cost-benefit analysis.
(Only in Dutch, summary in English). ISBN 90 1209 0563, from http://www.cpb.nl

Key articles from TPM-researchers

Annema, J.A., Koopmans, C., & van Wee, G.P. (2007). Evaluating transport infrastructure investments: The Dutch experience with a standardised approach. Transport Reviews, 27(2), 125-150.

Brief description of a typical TPM-problem

CBA is relevant in all TPM-problems related to policy planning. Three examples of recent applications in bachelor's and master's theses and in TPM research:
1. The Dutch government has recently introduced a car scrapping scheme. Do the environmental benefits of this scheme outweigh the costs?
2. The aircraft industry is planning to use bio-fuels in the future. Is the use of bio-fuels in aircraft beneficial from a broad societal perspective?
3. CBA results have been used in the debate on halting or continuing the construction of a new subway in the city of Amsterdam.

Related theories / methods

CBA is related to Multi-criteria Analysis (MCA). Both methods try to present information in a reasoned, consistent and orderly way, open to the interpretation of the decision maker. The methods differ in their objective. The objective of CBA is to facilitate a more efficient allocation of society's resources. MCA can be defined as the study of methods and procedures by which concerns about multiple conflicting criteria can be formally incorporated in a decision-making process.

Editors

Jan Anne Annema (J.A.Annema@tudelft.nl)
Bert van Wee (G.P.Vanwee@tudelft.nl)






















CPI MODELLING APPROACH

Brief discussion

Enterprise modelling is a daunting task to carry out from a single perspective, as it tackles not a single business process, but a complex interaction of actors, business units, and inter-organizational flows.

CPI Modelling implies a collaborative, participative, and interactive modelling approach for enterprise modelling. It is an approach rather than a method, but because of the emphasis on the modelling activity and the importance of the underlying modelling language, it is emerging as a method, with a rigorous theoretical foundation and application guidance under development. The notions of collaboration, participation, and interaction are the three constituents of success for complete, accurate, and fast enterprise modelling. The CPI Modelling approach consolidates three aspects of modelling of complex processes: the expert aspect, the user aspect, and the tool (technology) aspect:

- Collaboration/expert aspect: brings along business analysts, modelling experts, facilitator
(cross disciplinary group) to conduct modelling
- Participation/user aspect: brings along business process owners, managers, stakeholders
that provide input for modelling
- Interaction/tool aspect: emphasizes collaboration enabling technologies and tools.

Below follows a more elaborate discussion of what is meant by CPI Modelling.

1: Collaboration
Collaboration has been proven to result in fast and accurate model development when the modelling of complex phenomena of a socio-technical nature is involved. But collaboration itself is the subject of an engineering approach, in which explicit scenarios and guidelines need to be designed to facilitate collaboration.

Collaboration also requires guidance and orchestration by an experienced facilitator. Facilitators with modelling experience and basic domain knowledge are more successful in leading the participants.

2: Participation
Often, the ultimate deliverables of enterprise modelling are changes in the current practice,
organizational restructuring, or investment in new technology. It will be hard to achieve these
ultimate goals without an extended participation and approval of stakeholders.

3: Interaction
Innovative tools and technologies are needed to furnish the interaction of modellers, analysts, and participants in enterprise modelling. While technologies such as large interactive smart boards that can be shared remotely create an interactive environment, specific tools furnish the success.

Tools used should be intuitive, easy to follow, and powerful enough to capture complex interactions.

Static models are no longer sufficient to create the shared understanding that complex enterprise process models should furnish. Tools that simulate the processes allow capturing the dynamic behaviour of the constructed models.

Applicability

The CPI Modelling approach is especially applicable for complex enterprise business process modelling, when different users might give conflicting accounts of the enterprise operations and business processes, or when a redesign of enterprise processes is needed and the changes require wide support and confirmation by the stakeholders.

An example of application can be found in Barjis (2009).

Pitfalls / debate

- The organization of CPI Modelling sessions requires the commitment of a large number of participants;
- The modelling language should be truly simplified in order to be understood by first-time users.

Key references

Persson, A. (2001). Enterprise modelling in practice: Situational factors and their influence
on adopting a participative approach (PhD dissertation). Stockholm University.
Stirna, J., Persson, A., & Sandkuhl, K. (2007). Participative enterprise modeling: Experiences and recommendations. In J. Krogstie, A.L. Opdahl, & G. Sindre (Eds.), CAiSE 2007 and WES 2007. LNCS, vol. 4495 (pp. 546-560). Heidelberg: Springer.
De Vreede, G.-J. (1996). Participative Modelling for understanding: Facilitating
organizational change with GSS. In: Proceedings of the 29th HICSS.

Key articles from TPM-researchers

Barjis, J. (2009). Collaborative, participative and interactive enterprise modeling. In J. Filipe, & J. Cordeiro (Eds.), ICEIS 2009, LNBIP 24 (pp. 651-662). Berlin Heidelberg: Springer-Verlag.

Description of a typical TPM-problem that may be addressed using the theory / method

- Modelling complex socio-technical systems such as enterprises and organizations;
- Modelling inter-organizational business processes;
- Reengineering existing business processes (e.g., with the purpose of IT alignment);
- Conducting collaborative modelling for consolidation of different views and inputs;
- Modelling interaction of multiple actors.

Related theories / methods

- Communicative action modelling
- Collaboration engineering

Editor

Joseph Barjis (J.Barjis@tudelft.nl)















































DEMO METHODOLOGY

Brief discussion

DEMO is an acronym for Design and Engineering Methodology for Organizations. DEMO is a methodology for the study of enterprises. Its core concept is the Ontological Transaction, which is used for communicative modelling.

According to the DEMO Methodology, social actors in an organization perform two kinds of acts: production acts (P-acts, for short) and coordination acts (C-acts, for short). By engaging in P-acts, the actors bring about new results or facts, e.g., they deliver services or produce goods. Examples of P-acts are: registering a student for a new course; issuing a ticket for a show; making a payment. By engaging in C-acts, the actors enter into communication, negotiation, or commitment towards each other. These two types of acts constitute an ontological transaction (see Figure below).

Figure: An ontological transaction consists of a coordination act (C-Act) and a production act (P-Act)

An ontological transaction is a chain of three phases in which a series of coordination and production acts take place. Coordination acts occur in two phases: prior to the production act, when the actors communicate and agree upon some actions, and after the production act, when the same actors discuss its result and outcome. These three phases constitute a generic pattern in which a transaction is carried out:

- All the coordination acts that take place before a production act is started are referred to as the Order Phase (O-Phase). This phase is a negotiation between two parties for a product (service or good, tangible or intangible) to be delivered.

- All the production acts that follow this phase are referred to as the Execution Phase (E-Phase). During this phase, the requested product is created solely by the executor. This can be a tangible (good or service) or intangible (judgment, decision) product.

- All the coordination acts after the execution phase is completed are referred to as the Result Phase (R-Phase). This phase is the presentation of the result to the actor from whom the request or order originated.

These three phases of an ontological transaction are often indicated as O, E, and R. An
ontological transaction involves two actor roles. The actor role that initiates a transaction is
called initiator. The actor role that carries out a production act is called executor. The generic
process diagram of a transaction is illustrated in Figure below.

Figure: Basic transaction concept (the initiator and executor are linked through the O, E, and R phases)
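
A minimal sketch of this generic pattern is given below. The request/promise/state/accept coordination acts follow the standard transaction pattern; the product itself is invented for illustration.

```python
# Toy sketch of the generic O-E-R transaction pattern between an
# initiator and an executor. The product is invented.

def run_transaction(product):
    return [
        ("O", "initiator", f"request: {product}"),           # C-act
        ("O", "executor",  f"promise: {product}"),           # C-act
        ("E", "executor",  f"produce: {product}"),           # P-act
        ("R", "executor",  f"state: {product} delivered"),   # C-act
        ("R", "initiator", f"accept: {product}"),            # C-act
    ]

for phase, actor, act in run_transaction("membership registration"):
    print(f"{phase}-phase | {actor:9s} | {act}")
```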

Applicability

The DEMO Methodology is applicable to social systems (enterprises, organizations, and business systems). The methodology has a set of graphical notations allowing visual modelling of business processes and of the interactions between human actors. These graphical notations or elements follow a rigorous syntax and form a modelling language developed for this purpose. Examples of application can be found in Barjis (2007; 2008) or on the website of the methodology (www.demo.nl).

Pitfalls / debate

- The main challenge of the DEMO Methodology is its underlying theoretical foundation, which requires thorough training and mastering before the methodology can be successfully applied.
- Lack of computer-based tools to support the modelling environment.

Key references

Dietz, J.L.G. (2006). Enterprise ontology: Theory and methodology. New York: Springer-
Verlag, Inc.

Key articles from TPM-researchers

Barjis, J. (2008). The importance of business process modeling in software systems design.
Journal of the Science of Computer Programming, 71(1), 73-87.
Barjis, J. (2007). Automatic business process analysis and simulation based on DEMO.
Journal of Enterprise Information Systems, 1(4), 365-381.

Description of a typical TPM-problem that may be addressed using the theory / method

- Modelling complex socio-technical systems such as enterprises and organizations;
- Modelling and engineering inter-organizational business processes;
- Conducting collaborative modelling for consolidation of different views and inputs;
- Modelling interaction of multiple actors.

Related theories / methods

- Speech act theory
- Language-action perspective
- Communicative action modelling

Editor

Joseph Barjis (J.Barjis@tudelft.nl)
DESIGN SCIENCE

Brief discussion

Design as an approach and perspective on inquiry has recently evolved into Design Science, in which design is proposed as a research strategy to gain knowledge and understanding about
the object (or artefact) under construction. Design science can be used to research not just
instantiations (prototypes or systems) but also constructs (symbols, vocabulary), models
(abstractions and representations) and methods (algorithms and practices) (Hevner et al.,
2004). Design science provides seven guidelines for research with a design component. These
guidelines call for addressing the innovative artefact that is to be designed, the problem it
solves, the evaluation criteria for success, the research contribution, the rigor of the research
methods used, the design approach itself, and the communication of findings (Hevner et al.,
2004). In design science the idea is to learn from a design effort. For this, the design science
researcher should combine rigor and relevance by firmly grounding the study in the
knowledge base of the field, as well as in the environment.

The knowledge base can be described as accumulated literature, theories and established ways
of conducting research. Using these resources, the designer ensures the rigor of the design.
The environment can be seen as the business world, and the experiences the field has
accumulated over the years. The field is the other facet of design research, and enables the grounding of the research in the reality faced by practitioners in their routines. This ensures the relevance of the research. Both are important for design, as design as a scientific activity is not only problem solving, but is also about applying existing knowledge through a structured methodology to solve an existing problem, as Cross (1993) puts it.

Applicability

Design science is useful when the research strategy is akin to problem solving (Simon, 1996).
The resulting design is thus, first and foremost, a solution to an organizational problem for
which there is as yet no solution available in the knowledge base (in the literature). This makes design science particularly relevant in an engineering context, where the creation/design of artefacts, methods or tools is conducive to a contribution in a particular field of knowledge.
Before the introduction of design science it was difficult to justify and validate the scientific
knowledge gained in engineering contexts. Design science enables a scientific justification
and approach to knowledge gained in engineering studies.

Design science differentiates itself from behavioural science, which focuses on truth and is
guided by the development or justification of theories, with an emphasis on rigor. Design
science is guided by the development and evaluation of an artefact and places emphasis on
utility, attempting to bridge the gap between rigor and relevance (Hevner et al., 2004).

Design science has been very influential recently in the field of information systems,
conceived as an applied research field, where the artefact under development constitutes an
information technology for a particular set of business needs (Peffers et al., 2007). Within this
field, design science can be used for theory development, especially when the objective is to
create a prescriptive design theory for a class of artefacts (Kuechler & Vaishnavi, 2008).

Pitfalls / debate

Given the relatively short history of design science and its origins in design and engineering,
there are still a few issues which have not received attention or which remain as open issues.
These may make design science difficult to publish, validate and generalize. The key
literature on design science is not explicit on the epistemological assumptions underlying the
approach. While the aim is to find a balance between design and behavioural science, there is
no indication of what this means in terms of the philosophical assumptions behind a design
science research project. Along those lines, there is little discussion on how to validate a
design science contribution. While the emphasis is placed on evaluation in terms of utility
(Hevner et al., 2004), there is still a vacuum regarding how exactly the scientific contribution
should be validated and published.

Key references

Hevner, A.R., Ram, S., March, S.T., & Park, J. (2004). Design science in information systems
research. MIS Quarterly, 28(1), 75-105.
Cross, N. (1993). Science and design methodology: A review. Research in Engineering
Design, 5, 63-69.
Kuechler, B., & Vaishnavi, V. (2008). On theory development in design science research:
Anatomy of a research project. European Journal of Information Systems, 17(5), 489-504.
Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science
research methodology for information systems research. Journal of Management Information
Systems, 24(3), 45-77.
Simon, H. A. (1996). The sciences of the artificial (Third edition). Cambridge, MA: MIT
Press.

Key articles from TPM-researchers

Kolfschoten, G. L., & De Vreede, G. J. (2009). A design approach for collaboration
processes: A multi-method design science study in collaboration engineering. Journal of
Management Information Systems, 26(1), 225-256.

Description of a typical TPM-problem that may be addressed using the theory / method

Design science is especially suitable for the development of supporting tools, e.g. methods, tools, and approaches that can be implemented in organizations or systems to improve the ways people work. It is also very well suited to a typical master thesis project at TPM, as such a project requires both a literature analysis and data collection at the organization where the solution has to be applied: stakeholders in the organization are interviewed, and a literature study is performed. When complex solutions have to be designed rigorously, while remaining relevant for stakeholders in an organization, design science offers a suitable framework for research.

Related theories / methods

Competing methods are for instance case study research or action research. However, these
approaches are more focused on social research. Design science is more focused on the
engineering context, where the emphasis is on the 'T' in TPM.

Editors

Gwendolyn Kolfschoten (G.L.Kolfschoten@tudelft.nl);
Rafael Gonzalez (R.A.Gonzalez@tudelft.nl)













































DISCRETE EVENT SIMULATION (DES)

Brief discussion

The field of dynamic modelling and simulation generally accepts a taxonomy of system
representations based on two major characteristics: time base (discrete or continuous) and state change (discrete or continuous). Discrete event systems are systems that have a continuous time base and in which state changes occur at discrete instants on that time base. An
event is defined as an instantaneous change of the state of the system. In discrete event
simulation, the behaviour of the system is abstracted to only consider instants where state
changes occur. This abstraction is particularly suited to complex unstructured problems where
mathematical formulation in terms of equations is difficult if not impossible.

Within the discrete event class of systems, different technical implementations of modelling
and simulation concepts are possible. This justifies the existence of three worldviews in the
discrete event simulation (DES) realm: event scheduling, process interaction, and activity
scanning.

Discrete event simulation is the most widely used technique in engineering, management, and logistics activities. A great majority of discrete event simulation projects are performed with commercial packages, among which the best known are Arena, AutoMod, Simul8, Enterprise Dynamics, and eM-Plant. Packages of this type offer user friendly modelling and simulation interfaces, predefined modelling building blocks, and additional applications for experimentation and for input and output statistical analysis. They are generally satisfactory for the most common simulation studies. For more specific and complex applications, where the analyst wants more freedom in the modelling constructs to use and in-depth control over the simulation, commercial packages reach their limits. Discrete event simulation libraries based on general purpose programming languages (and generally developed in academia) impose fewer constraints on the constructs to use and allow in-depth scrutiny of the simulation process.
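
To make the event-scheduling worldview concrete, the sketch below simulates a single-server queue using only a future event list; it is a minimal illustration, not tied to any of the packages named above, and the arrival and service rates are invented.

```python
# Minimal event-scheduling sketch: a single-server queue driven by a
# future event list (heap). Rates are invented for illustration.

import heapq
import random

random.seed(42)
t_end = 1000.0
arrival_rate, service_rate = 0.8, 1.0   # exponential inter-event times

events = [(random.expovariate(arrival_rate), "arrival")]  # future event list
queue_length, server_busy, served = 0, False, 0

while events:
    t, kind = heapq.heappop(events)   # next imminent event
    if t > t_end:
        break
    if kind == "arrival":
        # schedule the next arrival, then seize the server or join the queue
        heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
        if server_busy:
            queue_length += 1
        else:
            server_busy = True
            heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
    else:  # departure: start the next service or release the server
        served += 1
        if queue_length > 0:
            queue_length -= 1
            heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
        else:
            server_busy = False

print(f"Customers served by t = {t_end}: {served}")
```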

Applicability

DES is generally used in design and reengineering studies to assess the performance of
alternative solutions in the presence of uncertainty. It is well suited for what-if analysis. The
mainstream DES packages adopt a flow based description of reality with such constructs as
entities, processes, queues, resources. This makes DES particularly applicable to
manufacturing and logistics.

Pitfalls / debate

The major pitfall in discrete event simulation is that of validity. Arbitrarily complex models
are built to support critical decision making on a known system under new circumstances or on a completely new system. A good simulation model is capable of bringing new knowledge that
is consistently derived from prior knowledge of the system.

No generally accepted and comprehensive validation methodology exists. The different actors
in a simulation project should conduct sufficient tests to gain sufficient confidence in the
model before using it for decision support.

As another pitfall, DES is a data intensive approach. To conduct a useful simulation, high
quality data is needed from the source system. The quality of the data conditions the quality
of results.

Key references

Banks, J. (Ed.) (1998). Handbook of simulation: Principles, methodology, advances,
applications, and practice. New York: John Wiley.
Carson, J.S. (1993). Modeling and simulation world views. In G.W. Evans, M. Mollaghasemi,
E.C. Russell, & W.E. Biles (Eds.), Proceedings of the 1993 Winter Simulation Conference
(pp. 18-23). Institute of Electrical and Electronics Engineers, Piscataway, NJ.
Law, A.M., & Kelton, W.D. (2000). Simulation modeling and analysis (3rd ed.). New York: McGraw-Hill.

Key articles from TPM-researchers

Jacobs, P.H.M., Lang, N.A., & Verbraeck, A. (2002). D-SOL: A distributed Java based discrete event simulation architecture. In E. Yucesan, C.-H. Chen, J.L. Snowdon, & J.M. Charnes (Eds.), Proceedings of the 2002 Winter Simulation Conference (pp. 793-800). San Diego, California: IEEE.

Description of a typical TPM-problem that may be addressed using the theory / method

To improve the performance of an existing system, say the emergency healthcare
transportation service of a city, a model of the current organization is designed using a DES
Package. Data obtained from the authorities are used as inputs. The model is calibrated to
match the current observations. Experimenting with the model allows identifying the major
bottlenecks in the system. Through what-if analysis, new organizations are tested to suppress
the bottlenecks. The new alternatives are experimented with and compared according to
various performance indicators. A simulation report with all the findings is given to the
authorities to support their decision on the best organization.

Related theories / methods

DES can use the DEVS formalism as a theoretical framework. Another theoretical approach
to DES is Petri nets.

Editor

Mamadou D. Seck (m.d.seck@tudelft.nl)








DISCRETE EVENT SYSTEMS SPECIFICATION (DEVS)

Brief discussion

The discrete event simulation paradigm is one of the preferred methods of inquiry in complex
unstructured problems where mathematical formulation in terms of equations is difficult if not
impossible.

Unlike the continuous modelling and simulation paradigm, whose mathematical underpinnings (differential equations, numerical integration techniques) largely preceded the existence of digital computers, discrete event simulation emerged and flourished after the generalization of computers, with ad-hoc solutions and without any firm theoretical basis.



The DEVS formalism is the most successful attempt to endow discrete event modelling and
simulation with a sound formal basis. It has its roots in Systems Theory and fits within a
theoretical framework for modelling and simulation which distinguishes between system
specification formalisms and levels of system specification.

The DEVS formalism allows expressing the behaviour of any system of the discrete event class as a structure (mathematical logic) with well-defined sets, functions, and relations. It also allows modular and hierarchical composition of systems in a clean way, thanks to its property of closure under coupling. Another important feature of DEVS is that it maintains a clear separation of concerns between the model and its simulator. Models expressed syntactically in the DEVS formalism are interpreted by a DEVS abstract simulator, which provides the operative semantics for the model.
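
For concreteness, the classic atomic DEVS structure (Zeigler et al., 2000) can be written as the tuple

\[ M = \langle X, Y, S, \delta_{\mathrm{ext}}, \delta_{\mathrm{int}}, \lambda, ta \rangle \]

where \(X\) and \(Y\) are the input and output event sets, \(S\) is the state set, \(\delta_{\mathrm{int}}: S \to S\) is the internal transition function, \(\delta_{\mathrm{ext}}: Q \times X \to S\) is the external transition function (with \(Q = \{(s, e) \mid s \in S,\ 0 \le e \le ta(s)\}\) the set of total states), \(\lambda: S \to Y\) is the output function, and \(ta: S \to \mathbb{R}^{+}_{0,\infty}\) is the time advance function. Coupled models connect such components through their input and output sets, and closure under coupling guarantees that the coupled result is again equivalent to an atomic DEVS.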

It has been shown in the modelling and simulation literature that DEVS is a highly expressive
modelling formalism subsuming all other model representations in the discrete event
simulation realm and that it can be a computational basis for the implementation of
behaviours expressed in other paradigms (discrete time, continuous).

Applicability

DEVS is used for the formal specification of the behaviour and the structure of complex
adaptive systems. It has been used for the specification and simulation of a broad array of
systems from the engineering, physical, biological, and human and social sciences.

Its clean formal basis and generality make it a good candidate for model transformation and
interoperability.

Various implementations of DEVS modelling and simulation environments are available in
all major object oriented programming languages.

Pitfalls / debate

The formal approach of DEVS is unusual in the discrete event simulation community and can
create minor communication difficulties. It requires explanation of the formalism before any
useful discussion of the models. Yet, once the formalism is understood, DEVS makes the
model interpretation straightforward.

Key references

Zeigler, B., Praehofer, H., & Kim, T. (2000). Theory of modeling and simulation. (2nd
edition). San Diego, CA, USA: Academic Press.
Vangheluwe, H. (2000). DEVS as a common denominator for multi-formalism hybrid
systems modelling. In A. Varga (Ed.), IEEE International Symposium on Computer-Aided
Control System Design (pp.129-134). Anchorage, Alaska: IEEE Computer Society Press.

Key articles from TPM-researchers

Seck, M.D., & Verbraeck, A. (2009). DEVS in DSOL: Adding DEVS semantics to a generic event scheduling simulation environment. Proceedings of the Summer Computer Simulation Conference (SCSC 2009), Istanbul, Turkey, July 13-17, 2009.

Description of a typical TPM-problem that may be addressed using the theory / method

Container Terminals are complex engineered systems with various components interacting.
To achieve simulation based design of a terminal, it is necessary to have models of the
possible components of a real terminal (cranes, trucks, etc.), and to be able to compose them
hierarchically. A library of components can be built using DEVS atomic models. Different
alternative terminals can be designed with little effort by instantiating model components and
coupling them with each other. Each alternative can be simulated and performance indicators
measured in order to make the final choice.

Related theories / methods

DEVS has strong ties with Cybernetics and Systems Theory in general, and with George Klir's systems epistemology in particular, to which most of the concepts used in DEVS can be traced back. It also has strong links with the theory of sequential machines and timed automata.

Editor

Mamadou D. Seck (m.d.seck@tudelft.nl)


ECFA+: EVENTS & CONDITIONAL FACTORS ANALYSIS

Brief discussion

ECFA+ is a method of producing a multi-linear sequential description of an incident which
accounts for the logical relationships between the facts presented. Using witness narratives,
logs and other sources of evidence, ECFA+ helps an investigator to build an account of the
events that comprise an incident. The events are put into chronological order and linked
together by identifying logical relationships. These links are tested to ensure that each event is
explained satisfactorily. When needed, conditions are identified to ensure the completeness of
these explanations. Every event, condition and logical relationship must be established to the
standard of evidence required by the investigator. ECFA+ is developed from ECFA - Events
and Causal Factors Analysis (Buys & Clark, 1995), and includes a rule set to test the ECF
reconstruction for logic and completeness. ECFA+ building blocks consist of 'events',
enabling 'conditions' and 'queries' to be answered in order to understand a specific condition
or event sequence.
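
A minimal sketch of the bookkeeping behind such a chart is given below: events are linked cause-to-effect, and any event lacking an inbound explanation surfaces as a query. The events are invented, and the representation is a strong simplification of the full ECFA+ rule set.

```python
# Illustrative sketch: an ECF chart as events with directed cause->effect
# links; events without an inbound explanation are flagged as queries.
# The event texts are invented.

events = {
    "E1": "Car catches fire on roadside",
    "E2": "Crew directs water jet into engine compartment",
    "E3": "Smoke and steam obscure the road",
    "E4": "Oncoming car enters plume and hits fire appliance",
}
links = [("E1", "E2"), ("E2", "E3"), ("E3", "E4")]  # cause -> effect

explained = {effect for _cause, effect in links}
for event_id, description in events.items():
    status = "explained" if event_id in explained else "QUERY: needs explanation"
    print(f"{event_id}: {description} [{status}]")
```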

Applicability

ECFA+ will work with virtually any type of occurrence, whether positive or negative. It is
scalable according to the magnitude and complexity of a critical experience, such as an incident
or accident. ECFA+ serves three main purposes in investigations: (1) assists the verification
of causal chains and event sequences; (2) provides a structure for integrating investigation
findings; (3) assists communication both during and on completion of the investigation.

ECFA+ analysis is generally an iterative process that runs in parallel with other investigative activities. New information is added to the evolving ECF chart, and this often raises new topics for further inquiry. If one were to add together the various iterations of work on an ECFA+ analysis, it will seldom take less than one hour, often two hours, and sometimes more than this if the incident is complex. The fact that ECFA+ benefits from a team approach will add to the opportunity cost associated with using the method.


It is useful to start the analysis as early as possible and run it in parallel with other
investigative activities, because ECFA+ has a dual role - structuring what is known and
identifying gaps. As more data become available, they can be transcribed into the evolving
ECF chart. This provides a means for structuring what is known at a given point in the
investigation and, just as importantly, identifying gaps in knowledge and evidence. These
gaps provide the stimulus for further enquiries. The ECFA+ diagram provides an evidence-
based description of the process leading to an operational surprise. The paired method 3CA (Control Change Cause Analysis) is used to perform a systemic Root Cause Analysis of events selected from the ECFA+ reconstruction.

ECFA+ can be taught at skills level in half a day of training.

Pitfalls / debate

Like any tool, ECFA+ should be the servant of the investigator and not the master. Only apply
ECFA+ in situations where you believe that the benefits outweigh the costs in time and effort.

The format, test and stop rules in ECFA+ should be adhered to. It is possible to bring in hypotheses that might explain a specific condition or event in order to put them to the test.

Key references

Buys, J.R., & Clark, J.L. (1995). Events and causal factors analysis (revised). SCIE-DOE-01-
TRAC-14-95, available from www.nri.eu.com.
Hendrick, K., & Benner Jr., L. (1987). Investigating accidents with STEP. New York: Marcel
Dekker.
Hesslow, G. (1988). The problem of causal selection. In D.J. Hilton (Ed.), Contemporary
science and natural explanation: Commonsense conceptions of causality. Harvester Press.

[Figure: worked example of an ECF chart reconstructing a road traffic incident at a vehicle fire. Time-stamped events (e.g. firefighters directing water jets into the burning car's engine compartment, smoke and steam obscuring the road, following cars driving into the smoke plume and colliding) are linked with enabling conditions (e.g. the wind carrying smoke into the road, the road being open to traffic in both directions) and query events (e.g. 'How did the OiC assess the overall situation?'). Each block is tagged with its evidence sources: S = witness statements, P = photographs, Q = query event.]
Kingston, J., Koornneef, F., Frei, R., & Schallier, P. (2008). 3CA FORM B - Control change cause analysis investigator's manual. NRI-5. NRI Foundation.
Frei, R., Kingston, J., Koornneef, F., & Schallier, P. (2003, May 12-13). Investigation tools in
context. JRC/ESReDA Seminar on Safety Investigation of Accidents, Petten, The
Netherlands, available from www.nri.eu.com.

Key articles from TPM-researchers

Kingston, J., Jager, J., Koornneef, F., Frei, R., & Schallier, P. (2007). ECFA+: Events and
conditional factors analysis manual. NRI-4. NRI Foundation.

Description of a typical TPM-problem that may be addressed using the theory / method

An event reconstruction method for Learning Agencies (see Organisational Learning), e.g. to equip patient safety committees in health care. It supports the collection, ordering and filtering of relevant facts and gaps from large amounts of data in accident investigation, structured to identify systemic causal facts, e.g. as in the Enschede (2000) and Volendam (2001) disasters.

Taal, R., Theuws, H.A.J., & Wester, W.J. (2001). Vuurwerkramp Enschede 13 mei 2000 -
Onderzoek naar het brandweeroptreden tot en met de fatale explosie: Van proactie tot en met
repressie. Appendices G & H. Inspectie Brandweerzorg en Rampenbestrijding (IBR).
Retrieved on 18 October 2009 from
www.minbzk.nl/aspx/download.aspx?file=/contents/pages/7298/03_onderzoek_brandweeropt
reden_1-01.pdf
Van Zanten, P.J., Theuws, H.A.J., & Plug, N.E.M.P.M. (2001). Rapport Incident cafébrand
nieuwjaarsnacht 2001. Appendix 2. Retrieved on 18 October 2009 from
www.minbzk.nl/aspx/download.aspx?file=/contents/pages/2408/eindrapport_commissie_cafe
brand_vollendam_het_incident.pdf

Related theories / methods
3CA, MORT, STEP

Editor

Floor Koornneef (f.koornneef@tudelft.nl)













(EMPIRICALLY INFORMED) CONCEPTUAL ANALYSIS

Brief discussion

One of the main methods of philosophy is conceptual analysis, that is, a critical/radical
examination of the basic concepts (conceptual frameworks) underlying our thinking and
doing. Empirically informed conceptual analysis of technology takes as its starting point the
basic concepts (conceptual frameworks) employed by engineers in their professional practice.
It is best practiced in a constant dialogue with engineers both by analyzing their writings and
by actively participating in meetings and conferences. Focal points of interest for conceptual
analysis are more general, theoretical and methodological work by engineers as well as
detailed descriptions of particular cases in engineering.

Applicability

In principle, any conceptual framework may be subjected to this method. Its advantage over more traditional forms of conceptual analysis is its possible relevance for engineering practice. Examples of topics to which this method has been applied: engineering ontologies, design methodology, design, rationality in engineering design, technical failures, the moral status of technical artefacts, and the fact/value distinction in engineering practice.

Pitfalls / debate

One of the main problems with this approach is how to practice conceptual analysis of engineering and technology in such a way that its results may be relevant for engineering practice as well as for (mainstream) philosophy. Another problem concerns the issue of combining descriptive and normative elements in a conceptual analysis in a balanced way.

Key references

/

Key articles from TPM-researchers

Kroes, P., & Meijers, A. (2001). The empirical turn in the philosophy of technology.
Amsterdam, New York: JAI.
Kroes, P. A. (2001). Engineering design and the empirical turn in the philosophy of
technology. In P. A. Kroes, & A. W. M. Meijers. The empirical turn in the philosophy of
technology, 20, 19-43. Amsterdam: JAI (Elsevier Science).

Description of a typical TPM-problem that may be addressed using the theory / method

How to model socio-technical systems? Since we are dealing here with hybrid systems (consisting not only of technical but also of social (intentional) objects), one of the main issues is that we have to combine different (physical/technical and social/intentional) conceptual frameworks. Another one: how to formally model functions and functional decompositions.

Related theories / methods

The naturalistic turn in philosophy.

Editor

Peter Kroes (p.a.kroes@tudelft.nl)














































ENGINEERING SYSTEMS DESIGN

Brief discussion

An actor perspective and a systems perspective are competing perspectives which must be used alongside each other in the design of large-scale socio-technical systems. Full integration would erode this competing character, rendering both perspectives of lesser value. Using both perspectives alongside each other means that complex socio-technical systems need to be designed by engineering systems designers who are able to switch perspectives continuously, and are able to apply both perspectives in a fruitful manner.

Perspectives:

Aspect: Unit
- System perspective: subsystems, dependencies, multiple conflicting objectives
- Actor perspective: actors, different types of interdependency, conflicting interests

Aspect: Unit rationality
- System perspective: subsystems show rational behaviour
- Actor perspective: actors are reflective: they adopt strategic behaviour and they learn; no optimality

Aspect: Constellation of units
- System perspective: complex systems: systems no longer recognisable or impossible to model rigorously
- Actor perspective: overall set of dependencies is not recognisable: information is frequently contested, problems are wicked

Aspect: Unit aggregation and segregation
- System perspective: system decomposition
- Actor perspective: networks of networks

Aspect: Dynamic behaviour
- System perspective: dynamics in subsystems and in the topology of a system
- Actor perspective: dynamics in the constellation of actors and in the topology of the network

Aspect: Design approach
- System perspective: technical-rational, successive, phased approach is useful
- Actor perspective: network and contested information frustrate rational, successive, phased decision-making processes

Aspect: Design results
- System perspective: result / performance is key
- Actor perspective: decision-making is the result of the interaction: knowledge of this process is crucial

For modelling:
- Systems perspective: modelling is an analytical activity. Actor perspective: modelling is a political activity.
- Systems perspective: separation of analysis and intervention is possible. Actor perspective: separation of analysis and intervention is more difficult.
- Systems perspective: the model facilitates a better understanding of the system. Actor perspective: modelling can be an incentive to further strategic behaviour on the part of actors.
- Systems perspective: modelling is a testable activity. Actor perspective: modelling is the result of negotiation.

The use of models for modelling objects in a certain perspective:
- Modelled object: hard, technical systems. Systems perspective: substantive and optimal design of the desired system. Actor perspective: rules of the game for modelling systems.
- Modelled object: actors. Systems perspective: modelling actor behaviour. Actor perspective: process design: negotiated rules of the game.

Applicability

It is applicable to large-scale socio-technical system design, such as infrastructures.

Pitfalls / debate

Engineering systems design is easily confused with systems engineering. The latter, however, is generally narrower in focus. As a general rule of thumb, one could say that Engineering Systems Design designs open systems, i.e. systems with undefined or dynamic/fluid boundaries for all system aspects (technical, economic, actors, etc.), whereas Systems Engineering tends to define the boundaries of the system under consideration before starting the design.

Key references

Maier, M. W., & Rechtin, E. (2002). The art of systems architecting. Boca Raton, FL: CRC
Press.
De Bruijn, J.A., & Herder, P.M. (2009). System and actor perspectives on sociotechnical
systems. IEEE Transactions on Systems Man and Cybernetics Part A-Systems and Humans,
39(5), 981-992.
Bauer, J., & Herder, P.M. (2009). Designing socio-technical systems. In D. Gabbay, P.
Thagard, & J. Woods (Eds.), Handbook of the philosophy of science: Handbook philosophy of
technology and engineering sciences (pp.601-630). Elsevier Publishers.

Key articles from TPM-researchers

De Bruijn, J.A., & Herder, P.M. (2009). System and actor perspectives on sociotechnical
systems. IEEE Transactions on Systems Man and Cybernetics Part A-Systems and Humans,
39(5), 981-992.
Herder, P.M., Bouwmans, I., Dijkema, G.P.J., Stikkelman, R.M., & Weijnen, M.P.C. (2008).
Designing infrastructures from a complex systems perspective. Journal of Design Research,
7(1), 17-34.
Bauer, J., & Herder, P.M. (2009). Designing socio-technical systems. In D. Gabbay, P.
Thagard, & J. Woods (Eds.), Handbook of the philosophy of science: Handbook philosophy of
technology and engineering sciences (pp.601-630). Elsevier Publishers.

Description of a typical TPM-problem that may be addressed using the theory / method

- Design of a Syngas infrastructure
- Design of a CO2 market
- Design of a gas infrastructure
- Design of a heat infrastructure
- etc.

Related theories / methods

Engineering Systems, Complex systems, Socio-technical systems

Editor

Paulien Herder (p.m.herder@tudelft.nl)













































ETHNOGRAPHY (PARTICIPANT OBSERVATION)


Brief discussion

Ethnography is a form of social analysis (a methodology) based on descriptions of communities of people, their practices and their experiences in daily life. An ethnography (literally, the written description of a 'folk') resembles a policy science case study in its holistic approach to real-life situations. In contrast, an ethnographic case is not bounded by a certain institutional setting, but by the culture and practices of the community under study. As the core strategy of anthropology, ethnography is best known for studies of foreign cultures and exotic rituals. In recent decades, new types of 'folk' have been studied ethnographically. These include communities central to the study of technology, policy and management, like public managers, laboratory staff, and social movements. Although a qualitative style clearly stands out, ethnographers are not parochial about methods of data gathering. Most studies are based on in-depth interviewing and up-close participant observation, but they may also employ surveys, conversations, genealogy, biographies, document analysis, more distant observation, etc. Ethnography provides detailed, inspiring insights into the variety of practices in policy and management. It is particularly useful for exploratory studies and theory building on new topics. In established areas of study, it opens up new perspectives as well. A fuller understanding of how management and policy come about in everyday circumstances helps to undercut conventional, all too generic explanations of change, conflicts and governance in our societies and organizations.

Applicability

The ethnographic method applies to the operational activity of actors, for example in real-life
interaction between actors at various meetings, in control rooms, in the field or even in virtual
contexts. A main underlying assumption of ethnography is that unwritten social rules,
structures and mechanisms may underlie everyday practice. Actors may not be fully aware of
them. Consequently, interviews may overlook these rules, structures and mechanisms when
actors state how they think they act in daily practice. Observing these actors in the process
under study may then be required to uncover these implicit practices. Whatever methods the
researcher chooses or is able to use within the social context under study, for them to be
ethnographic, they should enable the researcher to look behind this normative self-image of
actors. A second underlying assumption is that inductive social-scientific research is
important as it allows a researcher to challenge existing theories or create new ones when
none seem available or applicable. Quantitative research approaches have the disadvantage
that researchers have to know what to look for in advance. Ethnographic methods apply when researchers are not yet sure what they will find, except to expect the unexpected.

Pitfalls / debate

Generally acknowledged but surmountable pitfalls of this method are how to gather data systematically, how to remain objective, how to aggregate, and how to acknowledge and account for the effect of the researcher on the subject. Over the years, ethnographers have confronted many of these pitfalls by adopting specific science-philosophical paradigms. For instance, the pitfall of unsystematic data gathering has led many an ethnographer to adopt techniques of grounded theory (Glaser & Strauss, 1967/1995) and to formulate clear handbooks for their research. Also, many an ethnographer adopts an interpretivist epistemological paradigm, arguing that objective knowledge does not exist, but is always constructed and contextual. Thus, over the course of the 20th century, ethnography has been highly influenced by ongoing science-philosophical debates, which leads many ethnographers to consider ethnography more of a methodology than a method.

Key references

Malinowski, B. (1922). Argonauts of the Western Pacific: An Account of Native Enterprise
and Adventure in the Archipelagoes of Melanesian New Guinea. New York: Dutton.

Malinowski is thought to be one of the first anthropologists to challenge the then-existing methods of anthropological research (i.e. colonialist anthropology: visiting an island, observing people briefly, taking artefacts and returning home). In this book, he develops certain methods that would later be considered ethnographic. Not an interesting read for the typical TPM researcher, but still important for ethnography.

Hammersley, M. (1992). What's wrong with ethnography? London: Routledge.

Hammersley is considered an influential theorist when it comes to ethnography. This book
offers many useful insights into the particulars of ethnography, i.e. the act of participant
observation and the ethical dilemmas of ethnographic research. The book goes further than
merely describing problems and possible solutions that these particulars bring forth. It reveals,
discusses and challenges underlying suppositions of such problems and solutions. This book
is not available in the TU Delft Library. However, Hammersley's later book, entitled Ethnography, is available there.

Hine, C. (2000). Virtual ethnography. London: SAGE.

Hine is often quoted as an influential theorist who first applied ethnography to Internet research, specifically on the World Wide Web. This raised, and continues to raise, issues about
how close the researcher can get to the people under study. Moreover, it challenges the idea
that ethnography is always bound by clear borders, a common idea in anthropology. The book
offers insights into the practicalities of doing ethnography on the Internet. Not available in the
TU Delft Library.

O'Reilly, K. (2005). Ethnographic methods. Oxon: Routledge.

In this book, O'Reilly offers clear guidelines for doing ethnography, as well as insights into where these guidelines come from, and what influence ethnography's different interpretations and developments have on them. Not available in the TU Delft Library.

Schwartzman, H. B. (1993). Ethnography in organizations. Newbury Park: SAGE.

When it comes to applying ethnography to an organizational study, Schwartzman's is a good book, as it explains how this can be done and how it differs from anthropological applications of ethnography. Available in the TU Delft Library.

Glaser, B. G., & Strauss, A. L. (1967/1995). The discovery of grounded theory: Strategies for qualitative research. Piscataway: AldineTransaction.

Key articles from TPM-researchers

De Bruijne, M., & Van Eeten, M. (2006). Assuring high reliability of service provision in
critical infrastructures. International Journal of Critical Infrastructures, 2(2/3), 231-246.
Van den Top, J., & Steenhuisen, B. (2009). Understanding ambiguously structured rail traffic
control practices. International Journal of Technology, Policy and Management, 9(2), 148-
161.
Van der Arend, S. (2007). Pleitbezorgers, procesmanagers en participanten. Interactief
beleid en de rolverdeling tussen overheid en burgers in de Nederlandse democratie.
(Dissertatie). Delft: Eburon.

Description of a typical TPM-problem that may be addressed using the theory / method

Ethnography can be applicable when a researcher wishes to analyze and describe the organizational behaviour of a socio-technical system while upholding a critical stance towards existing organization theory. Examples: researching the organizational behaviour of open source communities or of players of massively multiplayer online games.

Ethnography can be useful to give meaning to the current results of restructuring utility sectors. As we do not exactly understand how these technological systems are managed under the current conditions, ethnographic studies within these industries make it possible to get a rich picture of what constitutes the current performance levels. As this method is closely involved with practitioners, the outcomes may often help to explain current strategies, or help practitioners further adjust or develop their strategies.

Ethnography can be applicable to studying implementation processes. Ethnography enables the researcher to go beyond the fairly straightforward rules, criteria or policy. It can provide detailed insight into the processes as they occur in the more complex and dynamic field. (The classic problem: the books versus practice.)

Related theories / methods

Cultural anthropology is the discipline of origin for ethnography. In (western) sociology,
related theories and approaches are manifold: Chicago-school sociology, dramaturgy,
symbolic interactionism, (social) constructivism, interpretive policy analysis, action research,
actor-oriented approach, conversation analysis (discourse analysis).

The case study approach can resemble ethnographic research. It is not unreasonable to consider, for example, an ethnographic account of a socio-technical system that incorporates results of the ethnographer's own experiences as well as interviews or other data as an in-depth single case study. Nevertheless, an ethnographic account will always focus on the people who were studied and explain their behaviour using their terms, which reveals the cultural-anthropological underpinnings of ethnography. A case study does not necessarily uphold the same premise.

Editors

Sonja van der Arend (s.h.vanderarend@tudelft.nl)
Hester Goosensen (h.r.goosensen@tudelft.nl)
Bauke Steenhuisen (b.m.steenhuisen@tudelft.nl)
Harald Warmelink (h.j.g.warmelink@tudelft.nl)












































EVENT TREE ANALYSIS (ETA)

Brief discussion

An Event Tree (ET) is a graphical representation of the sequence of events that occur in a system following an initiating event (IE), such as an initial fire or pushing a wrong button in a control room. The ET analysis is an inductive procedure which starts with an initiating event and shows all the possible outcomes resulting from that initial event, following a series of branches which represent the possible chains of events. The result is the set of possible scenarios or outcomes that can arise from the IE, in terms of the occurrence/non-occurrence of some intermediary events, called pivotal events (see Figure below). Each pivotal event i has a certain probability pi. In most ETAs, the pivotal event splits are binary: a phenomenon either does or does not occur. But the binary character is not strictly necessary; some ETs divide into more than two branches. In that case, the paths should be distinct and mutually exclusive, and quantified as such. Using an ET, the probability of occurrence of one outcome can be computed by multiplying the probabilities associated with the branches that lead to that outcome. For the example given in the Figure below, the probability of occurrence of Outcome 5 is (1 - p1)·p2·p3.



Figure: A generic example of an Event Tree
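The path-multiplication rule described above can be sketched as a simple recursive traversal of an event tree; the pivotal events and probabilities below are invented for illustration:

def outcome_probabilities(tree, p=1.0):
    """Depth-first enumeration of event-tree end states. A pivotal event is a
    tuple (name, p_occur, branch_if_occurs, branch_if_not); a leaf is a string
    naming an outcome. Branch probabilities along a path are multiplied."""
    if isinstance(tree, str):            # leaf: an outcome
        yield tree, p
        return
    name, p_occur, occurs, does_not = tree
    yield from outcome_probabilities(occurs, p * p_occur)
    yield from outcome_probabilities(does_not, p * (1.0 - p_occur))

# Hypothetical tree for an initial fire (all names and probabilities invented)
tree = ('sprinkler works', 0.9,
        ('alarm sounds', 0.95, 'Outcome 1: minor damage', 'Outcome 2: moderate damage'),
        ('alarm sounds', 0.95, 'Outcome 3: major damage', 'Outcome 4: possible fatalities'))

for outcome, prob in outcome_probabilities(tree):
    print(f'{outcome}: {prob:.4f}')      # the four outcome probabilities sum to 1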

Applicability

ETA is particularly applicable when the consequences of an accident are the main interest. It helps to study whether small disturbances, or combinations of them, lead to severe consequences. It has been applied successfully in the nuclear and process industries. Important considerations can be made about the possible barriers and safety measures that can prevent or limit the consequences of an accident.

Pitfalls / debate

Unlike the events in an FTA, the events included in an ET can have not only two states but multiple states. Although this is possible, quantification with multi-state events becomes more complex.

Usually, the consequences in an ETA are expressed in terms of material damage (money),
injuries or deaths, and different categories of consequences in terms of these measures can be
defined. However, the categories of consequences have to cover all possible outcomes of the
accident.

As in FTA, it is assumed that the events occur in a linear causal order; simultaneous influences are not possible. In fact, an ET is a sequence of instantaneous events. The branches of the tree should be mutually exclusive, describing different scenarios, although the type of consequences could be the same.

Another problem with ET is that when the events are dependent, the probability of each event
is a conditional probability, conditional on all predecessors. The use of unconditional
probabilities can lead to wrong results.

Key references

Aven, T. (1992). Reliability and risk analysis. London: Elsevier Applied Science.
Stamatelatos, M. (2002). Probabilistic risk assessment procedures guide for NASA managers and practitioners. Version 1.1. NASA Office of Safety and Mission Assurance.

Key articles from TPM-researchers

Ale, B.J.M., Bellamy, L.J., van der Boom, R., Cooke, R.M., Duyvism, M., Kurowicka, D.,
Lin, P.H., Morales, O., Roelen, A., & Spouge, J. (2009). Causal model for air transport
safety. Final Report.

Description of a typical TPM-problem that may be addressed using the theory / method

ETA can be used for determining the effectiveness of different safety measures. One can study, for example, how the consequences of a fire in a building change when the sprinkler system does not work. ETA also helps to describe the course of events following a human error in a process. It has been applied to air transport safety.

Related theories / methods

ETA is part of Quantitative Risk Analysis (QRA) and together with Fault Tree Analysis
(FTA) is used in the Bow-tie analysis, representing the right hand side of the bow-tie. ETs are
a particular type of Bayesian Belief Nets (BBNs).

Editor

Daniela Hanea (d.m.hanea@tudelft.nl)









EXPERT JUDGMENT & PAIRED COMPARISON

Brief discussion

Knowledge from field experts is very often used in the area of risk assessment. The main reasons are the unavailability of data and the need to assess future risks. Experts can be used both for understanding how the system under analysis works and for quantifying parameters that describe the system but cannot be quantified from data. The expert judgment method can be applied to obtain either quantitative assessments (the classical model) or only qualitative and comparative assessments (the paired comparison model).

In the quantitative assessment, the experts' subjective opinions are used for determining the probability distribution functions of parameters of interest. Experts are usually asked to assign the 5th, 50th and 95th quantiles of the distributions of the parameters according to their beliefs. The individual expert assessments are combined using weights computed on the basis of their answers to a set of seed questions, whose values are known to the analyst but unknown to the experts at the time of the assessment. Two criteria are used to measure experts' performance, and thus to compute weights: calibration and information. Calibration measures, in effect, how right the expert is, while information measures how sure the expert is. An expert who is right and sure is preferred over experts who are right but not sure, or wrong but sure. The calibration score (being right) is considered more important than the information score (being sure).
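To make the weighting idea concrete, the sketch below computes a calibration score in the spirit of Cooke's classical model: the realizations of the seed questions are binned against an expert's 5th/50th/95th quantiles, and the empirical bin frequencies are compared with the expected pattern (0.05, 0.45, 0.45, 0.05) through a likelihood-ratio statistic. The bin counts are invented, and the remainder of the classical model (information scores, the significance cutoff, the actual combination of distributions) is omitted:

from math import erfc, exp, log, pi, sqrt

def chi2_sf_3df(x):
    # Survival function of the chi-square distribution with 3 degrees of
    # freedom (closed form), so no external statistics library is needed.
    return erfc(sqrt(x / 2.0)) + sqrt(2.0 * x / pi) * exp(-x / 2.0)

def calibration(hits, expected=(0.05, 0.45, 0.45, 0.05)):
    """hits counts how many of the seed realizations fell below the expert's
    5th quantile, between the 5th and 50th, between the 50th and 95th, and
    above the 95th. The empirical frequencies are compared with the expected
    ones via a likelihood-ratio statistic, turned into a p-value-style score."""
    n = sum(hits)
    sample = [h / n for h in hits]
    stat = 2.0 * n * sum(s * log(s / e) for s, e in zip(sample, expected) if s > 0)
    return chi2_sf_3df(stat)

# Two hypothetical experts, each judged on 10 seed questions
print(round(calibration([1, 4, 4, 1]), 3))   # near the expected pattern: high score
print(calibration([5, 0, 0, 5]))             # overconfident expert: score near zero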

Paired comparison is a qualitative expert judgment technique which is especially designed for psychological scaling. A range of options is compared pairwise, and this is repeated for each expert. Several statistical tests are used to examine whether there is sufficient consensus among the experts to make a scaling decision over the options.
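As an illustration of such scaling, the sketch below fits a Bradley-Terry model (one of the models mentioned under Related theories / methods) to a hypothetical matrix of pooled expert preferences, using the standard iterative updates to obtain strengths on a ratio scale:

def bradley_terry(wins, iters=500):
    """Fit Bradley-Terry strengths by the standard minorization-maximization
    iteration. wins[i][j] = number of times option i was preferred over
    option j, pooled over all experts. Returns strengths (normalized to 1)."""
    n = len(wins)
    p = [1.0 / n] * n
    for _ in range(iters):
        p_new = []
        for i in range(n):
            w_i = sum(wins[i])                       # total 'wins' of option i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p_new.append(w_i / denom if denom else p[i])
        total = sum(p_new)
        p = [v / total for v in p_new]               # normalize each iteration
    return p

# Hypothetical pooled preferences of a small expert panel over three options
wins = [[0, 7, 9],
        [3, 0, 6],
        [1, 4, 0]]
print([round(v, 3) for v in bradley_terry(wins)])    # ratio-scale strengths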

Applicability

Assigning probability distributions using expert judgment is a widely used method in risk analysis when new designs are analyzed. When data are missing or sparse, expert knowledge can bring valuable information about the values of the important variables. Paired comparison is widely used to help people deal with complex decisions, by constructing ratio scales that are useful in making important decisions.

Pitfalls / debate

Locating suitable experts is a difficult stage of the process. The selection criteria behind the term 'expert' can be ambiguous and biased, and it can be very difficult to be certain when there are no data to validate who is an expert. In consequence, the opinion of the expert group may not be validated either.

Additionally, the choice of the seed questions used to weight the experts in a quantitative assessment is essential and may therefore raise questions. When the available data are limited, the seed questions cover the domains of expertise only partially, which may cause problems: expertise is then assessed not on the full object of study but only on those topics for which seed questions can be formulated.

Key references

Cooke, R.M. (1991). Experts in uncertainty: Opinion and subjective probability in science. New York: Oxford University Press.
Cooke, R.M., & Goossens, L.H.J. (2008). TU Delft expert judgment data base. Reliability Engineering and System Safety, 93, 657-674.
Meyer, M., & Booker, J. (2001). Eliciting and analyzing expert judgment: A practical guide. Statistical Sciences Group, Los Alamos National Laboratory.
Bradley, R.A., & Terry, M.E. (1952). Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39, 324-345.
Thurstone, L.L. (1927). A law of comparative judgment. Psychological Review, 34, 278-286.

Key articles from TPM-researchers

Cooke, R.M., & Goossens, L.H.J. (2002). Procedures guide for structured expert judgment.
Report EUR 18820.
Hanea, D.M. (2009, November). Human risk of fires: Building a decision support tool using
Bayesian networks (PhD thesis). Safety Science Group, TBM.
Ale, B.J.M., Bellamy, L.J., van der Boom, R., Cooke, R.M., Duyvism, M., Kurowicka, D.,
Lin, P.H., Morales, O., Roelen, A., & Spouge, J. (2009). Causal model for air transport
safety. Final Report.

Description of a typical TPM-problem that may be addressed using the theory / method

The quantitative expert judgment assessment has been used for quantifying failure probabilities in the nuclear industry, the occupational field, fire safety and air transport safety.

In the safety science field, this technique can be applied to safety management decisions, for the purpose of evaluating the influence of management actions on the human error probability of executing a specific task. Application of this method in the field of safety management is quite new and is currently being developed within this group.

Related theories / methods

The expert judgment method is one of the techniques used very often for obtaining the data needed for Fault Tree Analysis (FTA), Event Tree Analysis (ETA) and Bayesian Belief Nets (BBNs) when field data do not exist or are sparse.

Paired comparison is related to several psychological scaling models, such as the Thurstone model, the Bradley-Terry model, and the NEL (negative exponential lifetime) model.

Editors

P.H. Lin (p.h.lin@tudelft.nl)
Daniela Hanea (d.m.hanea@tudelft.nl)



EXPLORATORY MODELLING (EM)

Brief discussion

Exploratory Modelling (EM) is a research methodology to analyze complex and uncertain
systems (Bankes, 1993; Agusdinata, 2008). It originated at RAND in the early 1990s. EM
can be contrasted with a consolidative modelling (CM) approach, in which all the existing
knowledge about a system is consolidated in a best estimate model that is subsequently used
as a surrogate for the real world system. The CM approach, which is the traditional approach
to modelling, is valid only when there is sufficient knowledge at the appropriate level and of
adequate quality available. However in conditions in which there is deep uncertainty
(situations in which analysts or parties involved in a decision do not know or cannot agree on
an appropriate representation of the system of interest), which is normally the case when
dealing with future systems, the CM approach proves problematic, since it is impossible to
build a validatable model. Using a best estimate model in those cases can lead to
recommending seriously wrong policies. However, in such situations there still is a wealth of
knowledge and information available that can be used to support a variety of structurally
different models, each of which might have different parameters with a range of parameter
values. EM aims at offering support for exploring this set of models across the range of
plausible parameter values, and for drawing valid inferences from this exploration. In a sense,
it is a greatly expanded version of sensitivity analysis.

In the context of EM, a single run of a given model structure and a given parameterization of
that structure constitutes a computational experiment that reveals how the real world would
behave if the various hypotheses presented by the structure and the parameterization were
correct. By exploring a large number of these hypotheses, one can get insights into how the
system will behave under a large variety of assumptions, provided that the rationales of the
argument from premises to conclusions are clear and correct. To support the exploration of
these hypotheses, data mining techniques for analysis and visualization are employed.

Davis (2003) and Agusdinata (2008) have broadened the EM approach to include other
aspects of policy analysis needed to support the decisionmaking process. This broader
approach is called Exploratory Modelling and Analysis (EMA). Figure below shows an
overview of the EMA process. Different policies and large numbers of scenarios (which
describe changes external to the system, such as economic changes, demographic changes,
etc.) are simulated using a fast system model, resulting in a database containing the results of
the simulations. Using a variety of visualization tools and analysis techniques, the results of
the EM can be analyzed and displayed.

Because EM aims to cover the whole space of possibilities, it is usually necessary to make
huge numbers of computer runs (thousands to hundreds of thousands). With traditional best
estimate models this would take too much time. With fast and simple models (low-resolution
models), one can cover the entire uncertainty space, and then drill down into more detail
where initial results suggest interesting system behaviour (e.g., the boundaries between policy
success and failure).


Figure: Overview of the exploratory modelling and analysis process
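A minimal sketch of this workflow follows, with an invented low-resolution 'fast' model, invented uncertainty ranges and invented policy levers. This is not an existing EM tool, and a real study would use structured experimental designs and data mining rather than the naive success-fraction analysis shown:

import random

def fast_model(scenario, capacity):
    """Hypothetical low-resolution model returning one outcome indicator.
    scenario = (annual demand growth, price elasticity), both deeply uncertain."""
    growth, elasticity = scenario
    demand = 100.0 * (1.0 + growth) ** 10 * (1.0 - 0.1 * elasticity)
    return capacity - demand        # > 0: enough capacity, the policy 'succeeds'

random.seed(1)
policies = (150.0, 200.0, 250.0)    # candidate capacity policies (invented)
database = []                       # one record per computational experiment
for capacity in policies:
    for _ in range(10000):          # ensemble of scenarios per policy
        scenario = (random.uniform(0.0, 0.08),   # demand growth per year
                    random.uniform(0.0, 2.0))    # price elasticity
        database.append((scenario, capacity, fast_model(scenario, capacity)))

# Naive analysis step: in what fraction of the explored space does each policy hold up?
for capacity in policies:
    results = [r for (_, c, r) in database if c == capacity]
    print(capacity, sum(r > 0 for r in results) / len(results))

Drilling down would then mean refining the sampling, or the model resolution, near the boundary between policy success and failure.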

Applicability

If performed well, the results of an EMA study can contribute tremendously to making
decisions under conditions of uncertainty. It is most useful for decision problems in which
analysts do not know, or the parties to a decision cannot agree on (1) the system model, (2)
the probability distributions to represent uncertainty about key variables and parameters in the
model, and/or (3) how to value the desirability of alternative outcomes. Exploratory
modelling has proven to be useful for the discovery of low regret adaptive policies (e.g., see
Agusdinata, et al., 2006).

Pitfalls / debate

Performing EMA is a craft, not a science. In particular, how to select the finite sample of
hypotheses to run from the large or infinite set of possibilities is the central problem. Also
problematic is how to analyze, visualize, and examine the results of such a large number of
runs, and how to build an appropriate fast and simple model that can be used for the relevant
explorations.

Key references

Agusdinata, D.B. (2008). Exploratory modeling and analysis: A promising method to deal
with deep uncertainty (PhD Thesis). Delft University of Technology, Delft.
Bankes, S. (1993). Exploratory modeling for policy analysis. Operations Research, 41(3),
435-449.
Bankes, S. (2001). Exploratory modeling. In S.I. Gass, & C.M. Harris (Eds.), Encyclopedia of
operations research and management science. Dordrecht: Kluwer Academic Publishers.
Brooks, A., Bankes, S., & Bennett, B. (1999). An application of exploratory analysis: The
weapon mix problem. Military Operations Research, 4(1), 67-80.
Davis, P.K. (2003). Uncertainty-sensitive planning. In S.E. Johnson, M.C. Libicki, & G.F.
Treverton (Eds.), New challenges new tools for defense decisionmaking, MR-1576-RC. Santa
Monica: The RAND Corporation.
More references can be found on the following Website:
http://www.evolvinglogic.com/el_news.html.

Key articles from TPM researchers

Agusdinata, D.B., Walker, W.E., & Marchau, V.A.W.J. (2006). Exploratory modeling to
support a robust policy for implementing Intelligent Speed Adaptation. In H. J. van Zuylen
(Ed.), Proceedings of the 9th International TRAIL Congress (pp. 1-23). The Netherlands
TRAIL Research School, Delft.
Agusdinata, D.B., Van der Pas, J. W. G. M., Walker, W.E., & Marchau, V.A.W.J. (2009).
Multi-criteria analysis for evaluating the impacts of intelligent speed adaptation. Journal of Advanced Transportation (special issue on multiple criteria evaluation and optimization of transportation systems), 43(4), 413-454.
Van der Pas, J.W.G.M., Agusdinata, D.B., Walker, W.E., & Marchau, V.A.J.W. (2007).
Exploratory modeling to support multi-criteria analysis to cope with the uncertainties in
implementing ISA. 14th World Congress on ITS, Beijing, China, 9-13 October 2007.
Van der Pas, J.W.G.M., Agusdinata, D.B., Walker, W.E., & Marchau, V.A.J.W. (2008).
Developing robust Intelligent Speed Adaptation policies within a multi-stakeholder context:
An application of exploratory modelling. International Conference on Infrastructure Systems,
Rotterdam, 10-12 November 2008.

Description of a typical TPM problem that may be addressed using the theory / method

In his PhD thesis, Agusdinata (2008) applies EMA to three cases:
1. The uncertain effects of the implementation of a traffic safety technology (Intelligent
Speed Adaptation).
2. An investment decision with respect to an electrical power plant.
3. A decision on household heating policies.

Related theories / methods

Uncertainty, Adaptive policymaking

Editors

Warren Walker (W.E.Walker@tudelft.nl)
Jan Kwakkel (J.H.Kwakkel@tudelft.nl)
Vincent Marchau (V.A.W.J.Marchau@tudelft.nl)
Jan-Willem van der Pas (J.W.G.M.vanderPas@tudelft.nl)






EXPLORATORY MODELLING ANALYSIS (EMA)

Brief discussion

Exploratory Modelling was developed at RAND in the early 1990s. The traditional approach to modelling is to develop a best estimate model that can be validated by comparison with real world outcomes and can then be used as a surrogate for the real world system. In cases in which the system model can be fairly well specified, this approach has been successful. However, in conditions where there is deep uncertainty (situations in which analysts or parties involved in a decision do not know or cannot agree on an appropriate representation of the system of interest), the traditional modelling approach proves problematic because it is impossible to build a validatable model; using a best estimate model in those cases can lead to serious policy failure. Therefore, rather than attempting to predict system behaviour, the Exploratory Modelling and Analysis (EMA) approach has been developed, which aims to analyze and reason about the system's behaviour (Bankes, 1993). EMA uses these non-validatable models as hypothesis generators, to understand the behaviour of a system. One set of input and system variables can be established as a hypothesis about the system. One can then ask what the system behaviour would be if this hypothesis were correct. By constructing a large number of these sets, one may get insights into how the system will behave under a large variety of assumptions, provided that the rationales of the argument from premises to conclusions are clear and correct.

In EMA, relatively fast and simple computer models of the policy domain are applied. Because EMA aims to cover the whole space of possibilities, it is usually necessary to make huge numbers of computer runs (often hundreds to thousands of runs). With traditional best estimate models this would take too much time. With fast and simple models (low-resolution models), one can cover the entire uncertainty space, and then drill down into more detail where initial results suggest interesting system behaviour (e.g. the boundaries between policy success and failure). To perform these runs, software called CARs is available (http://www.evolvinglogic.com/).

The Figure below shows an overview of an EMA process. Different policies and a large number of scenarios (that describe external changes, like economic changes, demographic changes, etc.) are simulated in an impact assessment model, resulting in a database with the results of the simulation. These results can then be valued using different value systems that represent the current and future goals and objectives of different stakeholders. Using different visualization tools and analysis techniques, the results of the analysis can be displayed.


Figure: Overview of the exploratory modelling process

Applicability

Performing an EMA is difficult and more of an art than a science; it is hard work, and especially searching and analyzing the results of the EMA is a challenge. However, if performed well, the results of an EMA study can contribute tremendously to making decisions under conditions of uncertainty. The field of application is that of decision problems and decision analysis that are difficult because analysts or parties involved in a decision do not know or cannot agree on an appropriate representation of the system of interest.

Pitfalls / debate

Pitfalls are:
- Interpretation of the huge database with modelling results
- Building an exploratory model and including the appropriate uncertainties.

Key references

Bankes, S. (1993). Exploratory modelling for policy analysis. Operations Research, 41(3),
435-449.
Brooks, A., Bankes, S., & Bennett, B. (1999). An application of exploratory analysis: The
weapon mix problem. Military Operations Research, 4(1), 67-80.
Agusdinata, D.B. (2008). Exploratory modelling and analysis: A promising method to deal
with deep uncertainty (PhD thesis). Delft University of Technology, Delft.
More interesting references can be found on the website of Evolving Logic.

Key articles from TPM-researchers

Agusdinata, B., Marchau, A.W.J., & Walker, W.E. (2006). Exploratory modelling to support
a robust policy for implementing Intelligent Speed Adaptation. Paper presented at the TRAIL
in MOTION (9th International TRAIL Congress 2006), ISBN: 90-5584-081-5, Rotterdam.
Agusdinata, B., Van der Pas, J.W.G.M., Walker, W.E., & Marchau, A.W.J. (2009). An
innovative multi-criteria analysis approach for evaluating the impacts of Intelligent Speed
Adaptation. Journal of Advanced Transportation (Special issue on multiple criteria
evaluation and optimization of transportation systems), 43(4), 413-454.
Van der Pas, J.W.G.M., Agusdinata, B., Marchau, A.W.J., & Walker, W.E. (2008).
Developing robust Intelligent Speed Adaptation policies within a multi-stakeholder context:
An application of exploratory modelling. Paper presented at the NGInfra International
Conference, Building Networks for a Brighter Future.
Van der Pas, J.W.G.M., Agusdinata, B., Walker, W.E., & Marchau, V.A.W.J. (2007).
Exploratory modelling to support multi-criteria analysis to cope with the uncertainties in
implementing ISA. Paper presented at the 14th World Congress on Intelligent Transport
Systems "ITS For a Better Life".

Description of a typical TPM-problem that may be addressed using the theory / method

In his PhD thesis, Agusdinata (2008) gives three examples of typical TPM problems that can be addressed using Exploratory Modelling and Analysis. These examples are worked out in three cases, concerning:
1. The problem of road traffic safety due to speeding. The uncertain effect of the
implementation of a traffic safety technology is assessed using EMA.
2. An investment decision on an electrical power plant.
3. A public policy decision on household heating policies.

Related theories / methods

/

Editor

Jan-Willem (J.W.G.M.) van der Pas (J.w.g.m.vanderpas@tudelft.nl)















FAULT TREE ANALYSIS (FTA)

Brief discussion

Fault trees (FTs) are graphical models that represent the causal chains of events that may lead to an accident, and the logical relations between these events. The FTA starts by specifying the undesired event, the top of the tree (see Figure below). The system is then analyzed in order to find all the conditions in which the top event occurs. The undesired event can be a system failure, a component failure, a loss of function, a human error, etc. The logic in a FT is binary: a fault either does or does not occur. The causes of the top event are therefore connected through logic gates, usually AND gates (meaning that all the causes have to be present for the effect event to occur) and OR gates (meaning that the effect event occurs if at least one of the causes occurs). A FT can be quantified by establishing the probability of occurrence of the basic events (BEs). Using simple basic rules and symbols, the probability of occurrence of the top event can then be computed. For example, if the top event TE is the output of an OR gate over an intermediate event E1 and a basic event BE3, and E1 is itself the output of an AND gate over basic events BE1 and BE2, the probability of the top event is:

P(TE) = P(E1) + P(BE3) - P(E1)·P(BE3) = P(BE1)·P(BE2) + P(BE3) - P(BE1)·P(BE2)·P(BE3)


Figure: A generic example of a Fault Tree
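The bottom-up quantification can be sketched as a short recursive evaluation; the tree below encodes the worked example above, with invented basic-event probabilities:

from math import prod

def top_event_probability(node, basic):
    """Evaluate a fault tree bottom-up, assuming independent basic events
    (see Pitfalls below). node is either the name of a basic event or a
    tuple ('AND'|'OR', [children]); basic maps names to probabilities."""
    if isinstance(node, str):
        return basic[node]
    gate, children = node
    probs = [top_event_probability(c, basic) for c in children]
    if gate == 'AND':
        return prod(probs)                       # all causes must occur
    return 1.0 - prod(1.0 - p for p in probs)    # OR: at least one occurs

# The worked example above: TE = OR(E1, BE3) with E1 = AND(BE1, BE2)
tree = ('OR', [('AND', ['BE1', 'BE2']), 'BE3'])
basic = {'BE1': 0.1, 'BE2': 0.2, 'BE3': 0.05}
print(top_event_probability(tree, basic))        # 0.02 + 0.05 - 0.02*0.05 = 0.069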

Applicability

The FTA is used to detect the failures, or combinations of failures, that lead to the failure of a system or a system component. This can be done by determining the minimal cut sets: sets of failures that jointly lead to the top event, but such that if any one of the failures is excluded, the system does not fail anymore. The big advantage of FTA compared with other accident analysis methods (for example HAZOP) is that the FT can be quantified. Knowing the failure probabilities of the basic components of the system and using logical and probability laws, the failure probability of the top event can be computed.

Pitfalls / debate

1. The main assumption in FTA is that there is no event that influences the occurrence of multiple other events (i.e., the events are independent). A cooling system with two pumps may fail if both pumps fail (it can work if at least one of them works). If the pumps depend on an electrical system, failure of electricity causes the failure of both pumps simultaneously (a common cause). This common cause can be included in a FT, but not all common causes can. A poor safety culture can degrade operator alertness, training and maintenance; such a common cause influences the failure probabilities of multiple components.

2. There is no time involvement in a FT. The failure of the causal events produces the consequence instantaneously. In a real system, however, it might take some time until the consequence occurs.

3. The events are binary, meaning that a component either fails or does not. No other states of the components are possible.

Key references

Vesely, W.E., Goldberg, F.F., Roberts, N.H., & Haasl, D.F. (1981). Fault tree handbook. NUREG-0492.
Aven, T. (1992). Reliability and risk analysis. London: Elsevier Applied Science.

Key articles from TPM-researchers

Ale, B.J.M., Bellamy, L.J., van der Boom, R., Cooke, R.M., Duyvism, M., Kurowicka, D.,
Lin, P.H., Morales, O., Roelen, A., & Spouge, J. (2009). Causal model for air transport
safety. Final Report.
Ale, B.J.M., Bellamy, L.J., Cooke, R.M., Goossens, H.J., Hale, A.R., Roelen, A.L.C., &
Smith, E. (2006). Towards a causal model for air transport safety: An ongoing research
project. Safety Science, 44(8), 657-673.

Description of a typical TPM-problem that may be addressed using the theory / method

FTA was used in combination with other methods for air transport safety. It can also be used in occupational safety, for the analysis of specific types of accidents, for example falling from a ladder.

Related theories / methods

FTA is part of quantitative risk analysis (QRA) and together with Event Tree Analysis (ETA)
is used in the Bow-tie analysis, representing the left hand side of the Bow-tie model. It is also
a method used for analyzing accident statistics. FTs are a particular type of Bayesian Belief
Nets (BBNs).

Editor

Daniela Hanea (d.m.hanea@tudelft.nl)






FINITE-DIMENSIONAL VARIATIONAL INEQUALITY

Brief discussion

The computation of economic and game theoretic equilibria has been of interest in the academic and professional communities since the mid-1960s. Two approaches, namely optimization and fixed point methods, have mainly been used since. However, both approaches have their own advantages and disadvantages when applied to solve equilibrium models; separately, they lack either the generality or the computational efficiency which is necessary for solving large-scale equilibrium problems. In recent years, the finite-dimensional variational inequality has emerged as a very promising candidate for filling the gap left by the optimization and fixed point approaches.

Consider the standard problem below of finding the minimal value of a differentiable function f over a closed interval I = [a,b]. Let x* be a point in I where the minimum occurs. Three cases can occur:

1. if a < x* < b, then f'(x*) = 0;
2. if x* = a, then f'(x*) ≥ 0;
3. if x* = b, then f'(x*) ≤ 0.

These necessary conditions can be summarized as the problem of finding x* ∈ I such that

f'(x*)·(x - x*) ≥ 0, for all x ∈ I.

The absolute minimum must be searched for among the solutions (if more than one) of the preceding inequality.

The above is just a simple example. Finite-dimensional variational inequalities of this kind are often utilized to solve transport equilibrium problems. A more generalized form is presented below.

The finite-dimensional variational inequality problem, VI(F,K), is to determine a vector X* ∈ K such that

(F(X*), X - X*) ≥ 0, for all X ∈ K,

where F is a given continuous function from K to R^N, K is a given closed convex set, and (., .) denotes the inner product in R^N.

Variational inequality problems contain optimization problems as a special case, namely when the Jacobian matrix of F is symmetric and positive semi-definite.

The algorithms that can be used in this theory may include the following (a sketch of the projection method is given after the list):
- Projection algorithms
- Relaxation methods
- Dual cutting plane methods
- Simplicial decomposition
- Gap descent Newton methods
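As a concrete illustration of the first family in the list above, the following is a minimal sketch of the basic projection method, X_{k+1} = Proj_K(X_k - gamma·F(X_k)), for a VI over a box-shaped K. The map F and the bounds are invented for illustration; convergence of this simple scheme requires a suitable monotonicity condition on F and a small enough step size gamma.

def project_box(x, lo, hi):
    # Euclidean projection onto the box K = [lo_1, hi_1] x ... x [lo_N, hi_N]
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def solve_vi(F, lo, hi, x0, gamma=0.1, tol=1e-10, max_iter=100000):
    # Basic projection method: x <- Proj_K(x - gamma * F(x))
    x = list(x0)
    for _ in range(max_iter):
        x_new = project_box([xi - gamma * fi for xi, fi in zip(x, F(x))], lo, hi)
        if max(abs(a - b) for a, b in zip(x, x_new)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative monotone map: F is the gradient of a strictly convex function,
# so solving VI(F, K) here recovers the constrained minimizer, (0.3, 0.0).
F = lambda x: [2.0 * (x[0] - 0.3), 2.0 * (x[1] + 0.2)]
print(solve_vi(F, lo=[0.0, 0.0], hi=[1.0, 1.0], x0=[0.5, 0.5]))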

Applicability

Variational inequality theory is a powerful tool for the study of many equilibrium problems, since it encompasses, as special cases, important problem classes which are widely utilized in economics and in engineering, such as systems of nonlinear equations, optimization problems, and complementarity problems. It is especially useful for the study of network problems, in particular multi-tiered and multi-criteria network problems.

Key references

Nagurney, A., & Dong, J. (2002). Supernetworks, Appendix B. Cheltenham, UK: Edward
Elgar.
Harker, P.T., & Pang, J.S. (1990). Finite-dimensional variational inequality and nonlinear
complementarity problems: A survey of theory, algorithms and applications. Mathematical
Programming, 48(1-3), 161-220.
Florian, M., & Hearn, D. (1995). Network equilibrium models and algorithms. In M.O. Ball,
T.L. Magnanti, C.L. Monma & G.L. Nemhauser (Eds.), Network routing. Handbook in
Operation Research and Management Science, 8.

Description of a typical TPM-problem that may be addressed using the theory / method

Synchronization of different mobility networks may give rise to problems such as finding the equilibrium across these networks. As different mobility networks are distinct from each other, multiple criteria may be needed in the analysis. Moreover, as each network has its own route characteristics, an overall multi-tiered structure may also arise in these problems. Thus, given these properties, variational inequality theory may be well suited to address such problems.

Editor

Chao Chen (c.chen@tudelft.nl)



















GAMING SIMULATION AS A RESEARCH METHOD

Brief discussion

When gaming is seen as one type of simulation, complementary to, for instance, computer simulation, the strengths of this method can be discerned. Gaming simulation as a research method has two basic types, as depicted in the figure below.



From left to right, gaming simulation models a real-world system in six researcher-controllable variables: rules, roles, objectives, constraints, load (variable settings, or state) and situation (how it will be played). Using participants, this yields both qualitative and quantitative data from within a session that can be analyzed for what it tells about the real-world system. This type of research is known as analytical science, and the approach used corresponds to the tradition in, for instance, economics, physics and sociology of testing hypotheses and generating theories.

From top to bottom, the data generated within a session is less important. What matters in this approach (known as design science, and the dominant mode at TU Delft) is the effect of participating in a session on the participants, in terms of learning or behavioural change, and in the large on the organization that participants come from.

Applicability

Gaming simulation as a research method offers the unique capability of having real human beings in a simulated environment, available for observation while doing a task. Participants can be assigned roles, and a good gaming simulation is capable of getting them involved in a serious way. This way, the method is well suited for multi-actor research, in all domains and environments where human behaviour is an essential part of the functioning of a system. This system can be socio-technical (like infrastructures and facilities), socio-economic (supply chains and networks, finance systems) or institutional (policy making, cross-cultural collaboration).

Pitfalls / debate

Especially when using gaming simulation for hypothesis testing and theory generation, the validity of the outcomes will be criticized, and not without reason, as the notion of a game has a counter-intuitive feel when considering rigour in hypothesis testing. The validity of a gaming simulation can be assessed on four factors: psychological reality, predictive validity, process validity and structural validity. Research into this aspect of the method is developing as of 2009.

Key references

Duke, R.D., & Geurts, J.L.A. (2004). Policy games for strategic management. Amsterdam:
Dutch University Press.
Klabbers, J.H.G. (2006). Guest editorial. Artifact assessment vs. theory testing. Simulation &
Gaming, 37(2), 148-154.
Klabbers, J.H.G. (2008). The magic circle: Principles of gaming & simulation (2nd edition). Rotterdam/Taipei: Sense Publishers.
Peters, V., Vissers, G., & Heijne, G. (1998). The validity of games. Simulation & Gaming,
29(1), 20-30.

Key articles from TPM-researchers

Mayer, I. (2008). Gaming for policy analysis: Learning about complex multi-actor systems. In
L. de Caluwé, G.J. Hofstede, V. Peters (Eds.), Why do games work? In Search of the Active
Substance (pp. 31-40). Kluwer. ISBN 9789013056280.
Kuit, M., Mayer, I., & Jong, M. (2005). The INFRASTRATEGO game: An evaluation of
strategic behavior and regulatory models in a liberalizing electricity market. Simulation &
Gaming, 36(1), 58-74, ISSN: 1046-8781.
Meijer, S.A., Hofstede, G.J., Beers, G., & Omta, S.W.F. (2008). The organisation of
transactions: Research with the Trust and Tracing Game. Journal on Chain and Network
Science, 8(1), 1-20.
Meijer, S.A. (2009). The organisation of transactions: Studying supply networks using
gaming simulation (PhD-thesis). Wageningen University: Wageningen Academic Publishers.
ISBN: 978-90-8686-102-6.

Description of a typical TPM-problem that may be addressed using the theory / method

Gaming simulation for hypothesis testing has been used in projects with ProRail, to test whether the introduction of market and price mechanisms in the capacity allocation of cargo trains will improve network usage. In supply chains and networks, multiple hypotheses have been tested about the influence of social variables, like culture and relations, on economic behaviour.

Related theories / methods

Multi-actor analysis, Complex adaptive systems, New institutional economics, Supply chains
and networks

Editor

Sebastiaan Meijer (sebastiaan.meijer@tudelft.nl)

































GENEALOGICAL METHOD

Definition

The method entails a specific take on historiography and is meant to deliver real histories (describing what actually happened) that are believed to be effective histories as well (i.e. histories that generate doubt and discomfort in order to stimulate a wider process of reflection, creating new opportunities for the future). A genealogy is meant to explain how people have come to be in some sort of impasse and why they cannot recognize or adequately diagnose the nature of this situation. It opens up possibilities to break through this impasse, precisely by describing the genesis of a given situation and showing that this particular genesis is not connected to absolute historical necessity.

There is no blueprint procedure available for the creation of real and effective histories. Still, some of the methodological guidelines are:
- Focus on events/moments of resistance and (micro)practices, and describe them in their singular appearance
- Keep on searching for more causes, interpretations, discontinuities and disconfirming evidence
- Focus on details
- Don't select an a priori perspective on agency or structure
- Focus on 'how' questions
- Include a polyvalence of voices

The genealogical methodology does not so much prescribe which methods should actually be
applied to gather data. Indeed, the guidelines are broad principles that suggest to the
researcher how to look at history. The choice of methods for data gathering should depend on
the specific data needs that pop up during the case study. That is, data gathering methods do
not drive the case, but are selected on their usefulness for developing a genealogy for a
specific object of study.

Applicability

Foucault developed his methodology for circumstances in which changes in self-evident
ways of talking and acting were deemed necessary but difficult to achieve. For example, it
has been applied to lay bare the history of scientific disciplines. Besides, it is a useful method
for dealing with deadlocked and stagnating decision-making or policy-making situations.

Pitfalls / debate

The fact that there is no blueprint for doing genealogical research makes it difficult to
understand the method and apply it in practice. The method may be unorthodox, but this
certainly does not imply that any arbitrary construction will do. The genealogical method has
its internal rules of performance despite the lack of a blueprint procedure. Procedure is very
much a matter of knowing what would be inappropriate given the epistemological and
ontological assumptions made by Foucault. When conducted in the proper way, the results of
a genealogy can be confirmed, revised or rejected according to the most rigorous standards of
social science, in relation to other interpretations.

Second, the refusal to assume too much a priori results in a broad focus that can easily be
criticized for being a catch-all approach that is imprecise and therefore not useful as a
research focus. However, when properly done, more and more focus is added during the
application of the genealogical principles.

Key references

Foucault, M. (1971/1994). Nietzsche, genealogy, history. In J.D. Faubion (Ed.) (1994),
Aesthetics, method, and epistemology (pp. 369-391). New York: The New Press.
Foucault, M. (1991). Questions of method. In Burchell et al. (Eds.), The Foucault effect:
Studies in governmentality. University of Chicago Press.
Flyvbjerg, B. (2001). Making social science matter: Why social inquiry fails and how it can
succeed again. Cambridge: Cambridge University Press.
Linssen, J.A.A. (2005). Het andere van het heden denken. Filosofie als actualiteitsanalyse bij
Michel Foucault (Dissertatie). Nijmegen.

Key articles from TPM-researchers
Flyvbjerg, B. (1998). Rationality and power. Democracy in practice. Chicago: The University
of Chicago Press.
Other:
Foucault, M. (1975). Surveiller et punir. Naissance de la prison. Gallimard. Dutch translation
(1989): Discipline, toezicht en straf. De geboorte van de gevangenis. Vertalerscollectief,
Historische Uitgeverij, Groningen.
Hajer, M. (1995). The politics of environmental discourse: Ecological modernization and the
policy process. Oxford: Oxford University Press.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Deadlocked or stagnating policy debates like almost all decision-making processes about
large infrastructure projects (Betuwelijn, HST, Schiphol, Port of Rotterdam).

Related theories / methods

Developing a Foucaultian genealogy can be seen as a specific take on case studies. Especially
the literature on case studies that focuses on the intrinsic value of a single case is relevant.

Editor

Menno Huys (m.g.huys@tudelft.nl)







GRAMMATICAL METHOD OF COMMUNICATION

Brief discussion

In the grammatical approach to communication, the traditional distinction between subject
and object (mind and matter, theory and practice) is supplemented by a historical perspective.
Theories and concepts appear to have a history which is embedded in the predicament of real
human beings, caught between the heritage of the past and the challenges, innovations and
"imperatives" of the future. Theories and concepts therefore do not lead a life of their own
in an ivory tower of internal logic, but can be analyzed and approached as historical answers,
which respond both to the challenges of the future that people have to face and to the
historically constituted discourses they are already part of. This implies a creative process of
change and innovation. Essential to the grammatical method is the fact that the forms of
grammar, like the imperative, conjunctive, participle and indicative, mark distinctive phases
in the process of communicative interaction.



The grammatical tenses of the language we normally use show the process of change we are
subjected to. Innovation always starts with a new imperative, which sets the agenda in that a
concrete problem enters the stage of our consciousness for the first time. In the "conjunctive"
state of mind, different people respond to this challenge in different ways and an internal
dialogue develops. If the different views on the problem reach agreement, a group of people is
constituted that adopts the same codes or terms for action, and finally, on this support base,
large-scale changes in the object world are brought about. The language used in each stage of
the communication process is different in character, as is shown in the table below.

Table: Character of the language used in each phase of the communication process
- Indicative: clear concepts; definitions; instrumental language; pointing to the world
outside; according to labour division
- Participle: stories; half a word is enough; language of belongingness; group narrative;
history and tradition; repetitive; norms and values (traditional)
- Conjunctive: words; propositions which are proposals; associative and fuzzy language;
discussion; dialogue; finding agreement; trial and error; coping with differences
- Imperative: names and vocatives, slogans; overwhelming and uprooting, out of the
comfort zone; new situation; unstructured problems; surplus of meaning of existing values



Applicability

There are several advantages. Most communication theories only allow for clear-cut
indicative conceptual language (the usual paradigm being that thinking is encoded in
language, which the other person decodes again into thinking, as if thinking were entirely
different from speaking and as if language were only of instrumental significance). As a
consequence, unstructured problems (in the grammatical method: "imperatives") and fuzzy
communication (the unavoidable associative language of agreement-seeking dialogues, i.e.
"conjunctive" language) are screened out. This creates not only misunderstandings, but
sometimes outright irritation and enmity among different actors, both of them using so-called
clear-cut concepts, but apparently of their own making. Despite such clear-cut language they
keep living in a universe of their own and don't get in touch with each other.

The difficulty in applying the "grammatical method" lies in the fact that it makes explicit
often subconscious emotional layers, and also that it challenges the self-assured normative
horizon of the problem definitions of different actors. The "grammatical method", sometimes
also defined by the principle of "dialogue", takes more time, but in turn appears to be more
relevant where the diversity between the different actors increases, as is for instance the case
in intercultural communication.

Pitfalls / debate

There are several risks, even in situations where in principle the method is applicable. First,
there is the risk of an everlasting dialogue between different discourses, with different views,
disconnected from the problem at the table. This is the danger of relativism. Second, there is
the danger of deepening the conflict by making the disagreements (and underlying
preferences and priorities) more explicit. Instead of becoming inclined towards each other
(the inclination brought about by conjunctive language, i.e. by "proposing", using
propositions which are at the same time proposals), the stakeholders may end up with heated
heads and cold hearts. Another pitfall is the danger that the method or approach is taken as a
means of improving internal communication without considering how the energy freed up by
such improved communication is spent. In this case the method results in people working
together with enormous openness and energy, which is spent on the wrong things.

Key references

Rosenstock-Huessy, E. (1970). Speech and reality. Norwich: Argo Books.
Rosenstock-Huessy, E. (1981). Origin of speech. Norwich: Argo Books.

Key articles from TPM-researchers

Kroesen, J.O. (1995). Tegenwoordigheid van geest in het tijdperk van de techniek: Een
inleiding in het werk van Eugen Rosenstock-Huessy. Zoetermeer: Meinema.
Kroesen, J.O. (2008). Leven in organisaties: Ethiek, communicatie, inspiratie. Vught:
Skandalon. (English translation in preparation).
Kroesen, J.O. (2004). Imperatives, communication, sustainable development. In
Mitteilungsblätter der Eugen Rosenstock-Huessy Gesellschaft (pp. 101-109). Körle: Argo
Books.

Description of a typical TPM-problem that may be addressed using the theory / method

With the help of the "grammatical method", a student described the failure to create a
consortium of actors for the renewal of Hoog Catharijne in Utrecht. While the plans in
themselves responded to an urgent need (imperative), because Hoog Catharijne is too
crowded, and although the renewal appeared to be a feasible option also in financial terms
(i.e. realizable in the world outside), the different parties involved lacked trust, caused by bad
experiences with each other in the past. The person chairing the sessions, however, only
allowed for "indicative" discussions and did not put enough effort into bringing the different
parties together and building up trust by dialogical, agreement-seeking discussions in which
the parties open up towards each other. This clear-cut, result-oriented approach caused the
most important problem, the lack of trust, to remain unaddressed. As a result, the actors
involved ended up in big discussions about small things, with the final outcome that the
primary actors silently withdrew by sending representatives lower in the hierarchy of their
organizations to the meetings.

Related theories / methods

Habermas's work is related to this approach, although it is less specific and refined than the
work of Rosenstock-Huessy, on which he also drew. A number of others are experimenting
with a dialogical approach as well; Rosenberg, for instance, with his method of "nonviolent
communication".

Editor

Otto Kroesen (j.o.kroesen@tudelft.nl)















HAZARD AND OPERABILITY STUDY (HAZOP)

Brief discussion

HAZOP is aimed at the identification of deviations (failures) that can lead to accidents, by
means of a procedurally structured brainstorm. It is a system safety analysis of a new design
for technical installations. A HAZOP includes an analysis of how system deviations could
arise and an analysis of the potential risk of those deviations. The study focuses on a
systematic and process-oriented examination of a complex system. The HAZOP approach
makes use of a systematic procedure and a combination of expertise to try to overcome
incompleteness. To carry out a HAZOP, a description and visualization of the system is
required. The procedure is based upon the use of a set of guide words and a set of applicable
process parameters (see the table below) for the system or measure under study, which
together define scenarios that may result in a hazard or an operational problem. For each
relevant combination of a guide word and a parameter (a cell in the matrix), the group
discusses in what ways the deviation could arise, the consequences if it arises, and the
probability of it arising. Finally, the group thinks of countermeasures for the identified
problems.
Table: Example of a HAZOP matrix showing process parameters and guide words; each cell
in the matrix represents a candidate deviation
- Guide words (columns): No(ne), More, Part of, Reverse, Other than
- Process parameters (rows): Temperature, Pressure, Flow deviation, Concentration,
Maintenance
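
The matrix can be read as a checklist of candidate deviations. Below is a minimal sketch in
Python of enumerating them; the lists are taken from the example table above, but the loop
itself is only an illustration, not part of any HAZOP standard.

```python
# Enumerate candidate deviations as guide-word / parameter combinations;
# for each cell the team asks: how could it arise, what are the
# consequences, how probable is it, and which countermeasures apply?
from itertools import product

guide_words = ["no(ne)", "more", "part of", "reverse", "other than"]
parameters = ["temperature", "pressure", "flow deviation",
              "concentration", "maintenance"]

for parameter, guide_word in product(parameters, guide_words):
    print(f"candidate deviation: {guide_word} {parameter}")
```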

Applicability

A HAZOP study is a useful method for system safety analysis. While it is mostly used for
identifying possible hazards and analysing the associated risks, it can also be applied to study
operability problems in processes. HAZOP originates from the chemical process industry,
where such studies are obligatory for the design of new plants or for modifications
(maintenance) of current plants. HAZOP has also been applied successfully in other
industries and fields, among others road maintenance, tunnels and road traffic in general.

Pitfalls / debate

A disadvantage of the method is the time-consuming process of the structured brainstorm.
The success of the analysis depends on following the structured procedure, aiming to find all
imaginable deviations, including minor problems. A team leader should guard against
overruling or prematurely setting aside deviations, since the method primarily focuses on
identification and only secondly on prioritizing the identified problems.
Success moreover depends on the preparations, which include having a clear and workable
description of the system and its processes and composing a team covering different fields of
expertise.

Key references

Kletz, T.A. (1999). Hazop and hazan: Identifying and assessing process industry hazards (4th
edition). Rugby: Institution of Chemical Engineers.
Lawley, H.G. (1974). Operability studies and hazard analysis. Chemical Engineering
Progress, 70, 45-56.
Elliot, D.M., & Owen, J.M. (1968). Critical examination in process design. The Chemical
Engineer, 233, 377-383.
Swann, C.D., & Preston, M.L. (1995). Twenty-five years of HAZOPs. Journal of Loss
Prevention in the Process Industries, 8, 349-353.

Key articles from TPM-researchers

Swuste, P., & Heijer, T. (1999). Project: Onderzoek (on)veiligheid wegwerker - rapportage
van het onderzoek. Amsterdam: Arbouw.
Jagtman, H.M. (2004). Road safety by design: A decision support tool for identifying ex-ante
evaluation issues of road safety measures. Delft: Eburon.
Jagtman, H.M., Hale, A.R., & Heijer, T. (2005). A support tool for identifying evaluation
issues of road safety measures. Reliability Engineering & System Safety, 90, 206-216.
Jagtman, H.M., Hale, A.R., & Heijer, T. (2006). Ex ante assessment of safety issues of new
technologies in transport. Transportation Research Part A, 40, 459-474.

Description of a typical TPM-problem that may be addressed using the theory / method

The method was applied to assess the safety effects of measures in road traffic. Applications
showed its use for comparing speed measures in an urban area and for defining hypotheses for
testing intelligent speed adaptation. Early TPM studies involved the safety of workers on
motorways and in tunnels. Other critical systems may profit from application as well.

Related theories / methods

HAZOP is related to Failure Mode and Effects Analysis (FMEA), which starts from the
components of a system to address system failures. In design, HAZOP is commonly applied
in a Preliminary Safety Analysis (PSA) and can be followed by Fault Tree Analysis and
Event Tree Analysis at a later stage (DSA, Detailed Safety Analysis).

Editor

Ellen Jagtman (h.m.jagtman@tudelft.nl)











HUMAN INFORMATION PROCESSING

Brief discussion

The human information processing model provides a useful framework for analyzing the
different psychological processes that are used when interacting with systems. It can also be
used when carrying out a task analysis (Wickens & Hollands, 1999). It shows that information
processing can be represented as a set of phases whose function is to do something with the
information, to make it usable for the human.

Figure: Human information processing model (Wickens & Hollands, 1999)

As safety always involves human behaviour, this model is very important when designing
systems that affect human safety, such as warning systems, control panels, etc. The human
information processing model shows how information processing works within a human, and
this knowledge can be used to design systems in such a way that they are easy to understand
and safe for humans to operate.
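
To illustrate the idea of phases that each do something with the information, here is a minimal
sketch in Python; the stage names follow the general staged view of the model, but the toy
functions and the warning scenario are invented for the example.

```python
# Staged information processing as a pipeline: each phase transforms the
# information to make it usable for the next phase.
def sense(stimulus):          # sensory processing
    return f"signal({stimulus})"

def perceive(signal):         # perception: give the raw signal a meaning
    return "alarm" if "siren" in signal else "noise"

def decide(percept):          # decision and response selection
    return "evacuate" if percept == "alarm" else "ignore"

def respond(action):          # response execution
    print(f"executing: {action}")

respond(decide(perceive(sense("siren"))))
```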

Applicability

This model can be used in the Safety Science field to determine what the (cognitive)
ergonomic boundary conditions of a product or system have to be in order to make sure that
people understand how to behave safely with the system.

Pitfalls / debate

This model is a psychological model; it looks at the information processing of the individual,
so it is good at showing what the boundary conditions should be for individuals working with
a system. When group behaviour is relevant, i.e. when people interact with each other as well
as with the machine, additional models may be needed, as more factors may be of influence
than the human information processing model shows.

Key references

Kalsher, M.J., & Williams, K.J. (2006). Behavioral compliance: Theory, methodology, and
results. In M. S. Wogalter (Ed.), Handbook of warnings (pp.313-331). Mahwah, NJ:
Lawrence Erlbaum Associates.
Lindsay, P.H., & Norman, D.A. (1977). Human information processing. New York:
Academic Press.
Wickens, C.D., & Hollands, J.G. (1999). Engineering psychology and human performance.
Upper Saddle River, New Jersey: Prentice Hall.
Wogalter, M.S., DeJoy, D.M., & Laughery, K.R. (1999). Warnings and risk communication.
London: Taylor & Francis Ltd.

Key articles from TPM-researchers

Sillem, S., & Wiersma, J.W.F. (2008). Constructing a contextual human information
processing model for warning citizens. Proceedings of PSAM9, Hong Kong.

Description of a typical TPM-problem that may be addressed using the theory / method

Determining whether a new warning system can effectively improve people's self-reliance in
an emergency. The human information processing model can be used to determine what
boundary conditions there are for a warning system to be effective (Sillem & Wiersma,
2008).

Related theories / methods

Communication model (Wogalter et al., 1999), Interactive Social Cognitive Model (Kalsher
& Williams, 2006)

Editor

S. Sillem (s.sillem@tudelft.nl)




















IDEF0 INTEGRATION DEFINITION FOR FUNCTION MODELLING


Brief discussion

IDEF0 (Integration Definition for Function Modelling) is a technique for modelling
(business) processes. It is used to show how the goals of an organization (or part of an
organization) are achieved through the activities (processes) taking place within that
organization. It is used both to describe existing business processes and to document the
processes after some proposed change. IDEF0 used to be known as SADT (Structured
Analysis and Design Technique). According to the standard, IDEF0 is generic, rigorous and
precise, concise, conceptual, and flexible (FIPS PUB 183).

IDEF0 models are hierarchical. The main goal of the organization is always modelled by a
single activity (the A-0 schema); that activity is decomposed into further activities at the next
level (the A0 schema), and so on, as necessary.

Each activity in an IDEF0 model has four types of arrows leading to or from the activity box:
an (optional) input arrow (the stuff that is being processed by the activity), an output arrow
(the goods, information, or service resulting from the activity), the controls (the information
that guides or constrains the process), and finally the mechanisms (the people, machinery,
and other resources used by the activity). Like the activities themselves, all arrows are
labelled with a descriptive text.
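
As a data-structure sketch of these conventions in Python (the activity names and arrow
labels are invented for the example; IDEF0 itself is a graphical standard):

```python
# An IDEF0 activity with its ICOM arrows (Input, Control, Output,
# Mechanism) and its decomposition into child activities.
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    inputs: list = field(default_factory=list)      # stuff being processed
    controls: list = field(default_factory=list)    # guiding information
    outputs: list = field(default_factory=list)     # resulting goods/info
    mechanisms: list = field(default_factory=list)  # people, machinery, ...
    children: list = field(default_factory=list)    # next-level decomposition

a0 = Activity("Fulfil customer orders",
              inputs=["order"], controls=["pricing policy"],
              outputs=["shipped goods"], mechanisms=["warehouse staff"])
a0.children.append(Activity("Pick goods", inputs=["order"],
                            outputs=["picked goods"]))
print(a0.name, "->", [child.name for child in a0.children])
```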

While activities are connected to each other, it is important to understand that they take place
in parallel: output of one activity is used as input/control/mechanism by other activities as it
becomes available (they are considered streams of goods or information).

IDEF0 is very useful to get an overall idea of what is happening in an organization, and what
is needed to accomplish the goals of that organization.

Applicability

IDEF0 applies to business process modelling at a very high level, where processes and
resources need to be identified. It does not model time constraints or decision making in these
processes. If detailed information about the timing of activities must be taken into account,
techniques like UML interaction diagrams might be more suitable; if details about decision
processes need to be described, techniques based upon flow diagramming might be more
applicable. Apart from business process modelling per se, IDEF0 is also recommended for the
analysis, development, re-engineering, integration and acquisition of information systems
(FIPSPUB, 183).

Pitfalls / debate

Some of the most common pitfalls observed are inconsistent decomposition, an inconsistent
modelling perspective, and modelling activities at too low a level. An IDEF0 model needs to
be decomposed in a balanced way, that is, in a way which (at each level) tries to create
activities whose complexity levels are similar to each other. Activities that are not relevant
for the purpose at hand should not be decomposed to a deeper level than necessary. The
modeller should describe the activities from a well-chosen perspective, not changing the
perspective while describing different activities. Hierarchical consistency should always be
checked too. When too low-level activities are modelled using IDEF0, the technique lacks
the means to express time relationships and decisions, so the resulting models will not be
very useful at that level.

Key references

References are pervasive. A good source of information is www.idef.com, which has
references to the main reports and related information. The standard is described in Federal
Information Processing Standards Publication 183 (FIPS PUB 183).

Key articles from TPM-researchers

/

Description of a typical TPM-problem that may be addressed using the theory / method

IDEF0 has been used at the first stages of many TPM projects that involved the analysis and
optimization of existing business processes, the description of such processes for certification
reasons, and the design of new business processes.

Related theories / methods

IDEF0 might be used in tandem with methods to describe information, like UML class
diagrams. In that case, the information produced or the information about products or services
produced might be described using UML class diagrams.

Editor

H.J. Honig (h.j.honig@tudelft.nl)




















LINEAR PROGRAMMING


Brief discussion
Linear Programming (LP) is a technique for optimizing a linear objective function, subject to
linear equality and/or linear inequality constraints. The LP model describes a formalisation of
a linear function to be maximised or minimised in such a way that a list of constraints
represented as linear (in)equalities is satisfied.

The general formalisation can be expressed as follows:

max c^T x subject to Ax <= b and x >= 0

or, for a minimisation problem,

min c^T x subject to Ax >= b and x >= 0

where x represents the vector of variables (to be determined), while c and b are vectors of
(known) coefficients and A is a (known) matrix of coefficients. The expression to be
maximised or minimised is called the objective function (c^T x in this case). The inequalities
Ax <= b and x >= 0 are the constraints over which the objective function is to be optimized.

The problem is usually solved with the Simplex method (Dantzig, 1947). Due to the
canonical form of the constraints, each variable can be changed individually; the variable x_i
for which the objective function improves most is selected (the so-called pivot column).
Pivot operations are carried out until no further improvement of the objective function can be
found. Each linear programming model (called the primal model) also has a dual model
formulation. The dual formulation provides information about which constraints are binding
in the problem and which constraints still have slack for improvement in the primal model.

Applicability

Linear programming can be applied to various fields of study. Most extensively it is used in
business and economic situations, but can also be utilised for some engineering problems.
Some industries that use linear programming models include transportation, energy,
telecommunications, and manufacturing. It has proved useful in modelling diverse types of
problems in planning, routing, scheduling, assignment, and design.

Pitfalls / debate

In practice, problem instances often become extremely large; LP models can then make use
of decomposition techniques (Benders, 1962), Lagrange relaxation (i.e., moving difficult
constraints into the objective), or interior point methods (Karmarkar, 1984) to obtain a
solution in polynomial time.

Key references

Benders, J.F. (1962). Partitioning procedures for solving mixed-variables programming
problems. Numerische Mathematik, 4, 238-252.

Karmarkar, N. (1984). A new polynomial time algorithm for linear programming.
Combinatorica, 4(4), 373-395.
Luenberger, D.G. (1973). Introduction to linear and nonlinear programming. Reading,
Mass.: Addison-Wesley.
Aikens, C.H. (1985). Facility location models for distribution planning. European Journal of
Operational Research, 22, 263-279.

Key articles from TPM-researchers

Van Duin, J.H.R., & van Wee, G.P. (2007). Globalization and intermodal transportation:
modeling terminal locations using a three-spatial scales framework. In R. Cooper, K.
Donaghy, & G. Hewings (Eds.), Globalization and regional economic modeling (pp. 133-
152). Heidelberg: Springer.
Van Duin, J.H.R. (2006). Modeling intermodal networks. In E. Benavent, & V. Campos
(Eds.), Third international workshop on freight transportation and logistics (pp. 349-352).
Altea: University of Valencia, Spain.

Description of a typical TPM-problem that may be addressed using the theory / method

Linear programming is often applied to the classical transportation problem, i.e. the
distribution of freight flows. In such a model the region-specific data are excess demand and
supply quantities, or imports and exports, whereas the quality of the transportation services is
given by the transportation costs per unit of commodity. The objective in the optimization
approach is to find the flows t_ij between origins i and destinations j which minimize the
overall transport costs at the system level while satisfying the excess supplies S_i and excess
demands D_j of all regions. This results in the following linear programming problem:

min over t_ij of sum_{i,j} c_ij t_ij

subject to
sum_j t_ij <= S_i for all i
sum_i t_ij >= D_j for all j
t_ij >= 0 for all i, j

where
t_ij = flow between i and j
c_ij = transport costs between i and j
S_i = excess regional supply in i
D_j = excess regional demand in j
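
As an illustration, below is a minimal sketch of this transportation problem in Python using
scipy.optimize.linprog; the supply, demand and cost figures are invented for the example and
do not come from any TPM project.

```python
# Minimal transportation-LP sketch; linprog minimizes c @ x s.t. A_ub @ x <= b_ub.
import numpy as np
from scipy.optimize import linprog

S = np.array([30.0, 20.0])        # excess supply S_i of origins i
D = np.array([25.0, 15.0, 10.0])  # excess demand D_j of destinations j
c = np.array([[4.0, 6.0, 9.0],    # unit transport costs c_ij
              [5.0, 3.0, 7.0]])
m, n = c.shape

# Flows t_ij are stacked row by row into a single decision vector.
A_supply = np.kron(np.eye(m), np.ones(n))    # sum_j t_ij <= S_i
A_demand = -np.kron(np.ones(m), np.eye(n))   # sum_i t_ij >= D_j in <= form
res = linprog(c.ravel(),
              A_ub=np.vstack([A_supply, A_demand]),
              b_ub=np.concatenate([S, -D]),
              bounds=(0, None))               # t_ij >= 0

print(res.x.reshape(m, n))  # optimal flows t_ij
print(res.fun)              # minimal total transport cost
```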

Location and allocation problems are also familiar problems which can be solved with LP
modelling paradigms. Linear flow modelling is likewise applicable to production-order
assignment and route assignment.

Related theories / methods

For location problems, centre-of-gravity or load-distance models can be applied as well.
When other, more qualitative criteria are involved in location problems, a factor rating
method can be applied.

Editor

Ron van Duin (j.h.r.vanduin@tudelft.nl)





































MODEL PREDICTIVE CONTROL

Brief discussion

Model Predictive Control, or MPC, is an advanced method of process control that has been
in use in the process industries since the 1980s. It has been proven to provide benefits greater
than those achieved by improving basic control systems such as PID or cascade controllers.
The greatest benefits are realized in applications with dead-time dominance, constraints,
interactions and the need for some optimization. The advantage of model predictive control
lies in its knowledge of the effects of past actions of manipulated and disturbance variables
on the future profile of controlled and constrained variables.



Model Predictive Control has a specific type of feedback that makes it distinctive. As
opposed to traditional feedback, where a controller uses as its input the difference between
the set point and the actual measurement of the output, the MPC controller applies as its
input the difference between the future trajectory of the set point and the predicted trajectory
of the controlled variable (CV). The CV prediction model predicts the process output y a
number of time units ahead. A future trajectory of the set point (which can be a constant), for
the same number of time units ahead as the predicted process output, is calculated by the SP
prediction model. A control algorithm then computes the control action based on the error
vector, i.e. the difference between the future trajectories of the set point and the predicted
process output. It is important to stress that the CV prediction model is adjusted for the
mismatch between the model output (predicted y) and the actually measured value of the
process output y. The resulting optimal control input sequence is applied to the system in a
receding horizon fashion: at each controller sample step, the optimisation problem is solved
to find the sequence of N actions that is expected to optimise system performance over the
prediction horizon. Only the action for the first step is implemented, after which the system
transits to a new state and the controller determines new actions, given the new state of the
system.
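
Below is a minimal sketch of this receding horizon loop in Python, using a scalar linear plant,
a quadratic tracking cost and scipy.optimize.minimize as a stand-in solver; all model
parameters are invented for the example, and real MPC implementations rely on dedicated
optimisation tools.

```python
# Receding-horizon sketch: at each step, optimise a sequence of N inputs
# over the prediction horizon, implement only the first one, then repeat.
import numpy as np
from scipy.optimize import minimize

a, b, N = 0.9, 0.5, 10       # illustrative plant x' = a*x + b*u, horizon N
setpoint = 1.0

def horizon_cost(u_seq, x0):
    x, cost = x0, 0.0
    for u in u_seq:                            # CV prediction model
        x = a * x + b * u
        cost += (x - setpoint) ** 2 + 0.01 * u ** 2
    return cost

x = 0.0                                        # measured initial state
for step in range(5):
    res = minimize(horizon_cost, np.zeros(N), args=(x,),
                   bounds=[(-1.0, 1.0)] * N)   # explicit input constraints
    u0 = res.x[0]                              # implement first action only
    x = a * x + b * u0                         # system transits to new state
    print(f"step {step}: u = {u0:+.3f}, x = {x:.3f}")
```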

The advantages of MPC are the fact that the framework handles operational input and state
constraints explicitly in a systematic way, and that a control agent employing MPC can
operate without intervention for long periods of time, due to the prediction horizon that makes
the agent look ahead and anticipate undesirable future situations. Furthermore, the moving
horizon approach in MPC can in fact be considered a feedback control strategy, which makes
it more robust against disturbances and model errors.

Applicability

MPC has become an important approach for finding control policies for complex, dynamic
systems. It has found wide application in the process industry, and recently has also started to
be used in the domain of infrastructure operation, e.g., for the control of road traffic networks
(Hegyi et al., 2005), power networks (Hines, 2007; Negenborn, 2007), combined gas and
electricity networks (Arnold et al., 2009), railway networks (van den Boom & De Schutter,
2007) and water networks (Negenborn et al., 2009).

Pitfalls / debate

There are some issues that have to be addressed before a control agent using an MPC strategy
can be implemented successfully: the control goals have to be specified; the prediction model
has to be constructed; the measurement of the system state has to be available; a solution
approach has to be available that can solve the MPC optimization problem; the solution
approach has to be tractable.

A class of systems for which there are still many open questions regarding MPC are hybrid
systems, i.e. systems including both continuous and discrete dynamics. This class of systems
currently receives significant attention in MPC research.

Key references

Maciejowski, J.M. (2002). Predictive control with constraints. England: Prentice Hall.
Morari, M., & Lee, J.H. (1999). Model predictive control: Past, present and future. Computers
& Chemical Engineering, 23, 667-682.
Hegyi, A., De Schutter, B., & Hellendoorn, J. (2005). Optimal coordination of variable speed
limits to suppress shock waves. IEEE Transactions on Intelligent Transportation Systems,
6(1), 102-112.
Hines, P. (2007). A decentralized approach to reducing the social costs of cascading failures
(PhD dissertation). Carnegie Mellon University.
Negenborn, R.R. (2007). Multi-agent model predictive control with applications to power
networks (PhD thesis). Delft University of Technology, Delft, The Netherlands.

Key articles from TPM-researchers

Negenborn, R.R., Houwing, M., De Schutter, B., & Hellendoorn, J. (2008). Adaptive
prediction model accuracy in the control of residential energy resources. 2008 IEEE Multi-
conference on Systems and Control (CCA2008). IEEE: San Antonio, Texas, USA, Paper 251.
Houwing, M., Ilic, M., Negenborn, R.R., & De Schutter, B. (2009). Model predictive control
of fuel cell micro cogeneration systems. IEEE International Conference on Networking,
Sensing and Control. IEEE: Okayama City, Japan.
Negenborn, R.R., Sahin, A., Lukszo, Z., De Schutter, B., & Morari, M. (2009). A non-
iterative cascaded control approach for control of irrigation canals. IEEE SMC09 Conference,
San Antonio, October 2009.
Lukszo, Z., Weijnen, M.P.C., Negenborn, R.R., & De Schutter, B. (2009). Tackling
challenges in infrastructure operation and control: Cross-sectoral learning for process and
infrastructure engineers. International Journal of Critical Infrastructures, 5(4), 308-322.

Description of a typical TPM-problem that may be addressed using the theory / method

An example of a typical TPM problem is demand response with micro cogeneration systems,
in which household controllers react to real-time electricity pricing. With MPC applied to
micro cogeneration (micro-CHP), variable energy cost savings of about 50 euro per year can
be obtained. These savings come on top of the energy cost reductions already achieved by
standard micro-CHP application as compared to the use of conventional boilers, and thus
further increase the room for investment in micro-CHP. Furthermore, there is an economic
incentive for aggregators to set up virtual power plants focusing on aggregate-level demand
response.

Related theories / methods

Optimal control

Editor

Z.Lukszo (Z.Lukszo@tudelft.nl)



























MODELLING AND SIMULATION

Brief discussion

Modelling and Simulation is considered both a method and a tool.

Simulation techniques have benefited many traditional areas by helping to mitigate design
flaws, teaching about system behaviour and providing training, and they have become
standard practice for developing complex systems. Following the analogies of traditional
domains, the application of simulation in the context of socio-technical environments, such
as enterprise, organizational and business processes, has attracted huge interest among
researchers.

Due to its application scope, flexibility, and importance, Modelling and Simulation is
becoming a ubiquitous practice in modern science, technology, and management, to the
extent that some consider it a third pillar of science, between theory building and
experimentation. In many large companies, the practice of modelling and simulation is part
of standard business process management. The discipline involves various activities such as
system abstraction, model description, validation, simulation, and experimentation.

In this specific context, modelling refers to the process of obtaining a simplified or abstracted
representation of a part of reality, in a mathematical form amenable to digital processing.
Simulation refers to the computerized execution of the latter model, either for explanation or
prediction purposes. Typically, simulation models are designed with time as a backdrop,
although other system variables could play this role, as stochastic variables do in Monte Carlo
simulation.
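
A minimal sketch in Python of a discrete, stochastic, time-stepped simulation (the queue
abstraction and its probabilities are invented for the example):

```python
# A time-stepped, stochastic queue model: the abstraction (arrival and
# service probabilities) is executed over simulated time.
import random

random.seed(42)
queue, served = 0, 0
for t in range(100):                          # time as the backdrop
    if random.random() < 0.6:                 # stochastic arrival
        queue += 1
    if queue > 0 and random.random() < 0.5:   # stochastic service
        queue -= 1
        served += 1
print(f"after 100 steps: {queue} waiting, {served} served")
```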

Although various taxonomies of modelling and simulation paradigms coexist, a few major
distinctions are continuous vs. discrete, deterministic vs. stochastic, and multi-agent-based
vs. population-based models.

Modelling and Simulation has been practiced intensely in virtually all domains of knowledge
for decades; yet only recently has an all-pervasive theory of modelling and simulation begun
to emerge, and the field has come to be perceived as a scientific domain in its own right. The
theory of modelling and simulation specifies the important concepts manipulated in a
simulation endeavour, such as models, simulators and experimental frames, and provides the
important relations between them.

Applicability

Modelling and simulation is generally considered an alternative to analytical solutions. It
mainly applies to problems whose mathematical representations do not easily lend
themselves to exact solutions; this means that mostly complex systems are tackled with
modelling and simulation. Increasingly, simulation is applied to human and social systems
where informal knowledge and theories are operationalized. In such areas, the focus is less
on point prediction than on finding general trajectories.

Pitfalls / debate

There is no unique simulation model for a given system. Depending on the level of fidelity
required, the modeller has to choose the right abstraction in order to preserve the morphisms
that allow making inferences about the source system through experimentation with the
model.

Given the various possible formalisms, one essential point is to choose the most appropriate
and expressive formalism for a specific problem.

Finally, input data can also be a challenge, as models are often based on rather limited data
sets and therefore rest on assumptions.

Key references

Shannon, R. (1975). Systems simulation: The art and science. Englewood Cliffs, NJ:
Prentice Hall.
Zeigler, B., Praehofer, H., & Kim, T. (2000). Theory of modeling and simulation (2nd
edition). San Diego, CA: Academic Press.

Key articles from TPM-researchers

Seck, M.D., & Verbraeck, A. (2009). DEVS in DSOL: Adding DEVS semantics to a generic
event scheduling simulation environment. Proceedings of the Summer Computer Simulation
Conference (SCSC 2009), Istanbul, Turkey, July 2009.
Barjis, J. (2007). Automatic business process analysis and simulation based on DEMO.
Journal of Enterprise Information Systems, 1(4), 365-381.

Description of a typical TPM-problem that may be addressed using the theory / method

- Complex system analysis and design
- Study of uncertainties in complex socio-technical systems
- Communication and shared understanding among stakeholders of complex projects

Related theories / methods

Systems Engineering, Systems analysis and design, Business process, DEVS, Multi-Agent
systems, Differential equations, numerical integration.

Editors

Mamadou D. Seck (m.d.seck@tudelft.nl)
Joseph Barjis (J.Barjis@tudelft.nl)










MONTE CARLO METHOD

Brief discussion

Monte Carlo methods are a class of computational algorithms that rely on repeated random
sampling to compute their results. Monte Carlo simulation performs risk analysis by building
models of possible results, substituting a range of values (a probability distribution) for any
factor that has inherent uncertainty. It then calculates results over and over, each time using a
different set of random values drawn from the probability distributions.



Probability distributions are a realistic way of describing uncertainty in variables. Common
probability distributions include normal, lognormal, uniform, triangular, PERT and discrete
distributions.

Monte Carlo simulation produces distributions of possible outcome values.

(Tools for) Monte Carlo simulation provide a number of advantages over deterministic
(single-point estimate) analysis:
- The probabilistic results show not only what could happen, but how likely each outcome
is.
- Graphical results are easily created, which is important for communicating findings to
other stakeholders.
- It easily enables correlation of inputs, sensitivity analysis and scenario analysis.
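
To make the mechanics concrete, here is a minimal sketch in Python using numpy; the cost
model and all distribution parameters are invented for the example.

```python
# Minimal Monte Carlo sketch: propagate input uncertainty through a
# simple model, cost = quantity * unit_price. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(seed=1)
N = 100_000                                     # number of random samples

quantity = rng.triangular(80, 100, 130, N)      # triangular(min, mode, max)
unit_price = rng.lognormal(np.log(10), 0.2, N)  # lognormal uncertainty

cost = quantity * unit_price                    # evaluate model per sample

# The result is a distribution of outcomes, not a single point estimate.
print(f"mean cost: {cost.mean():.1f}")
print(f"5%-95% interval: {np.percentile(cost, [5, 95]).round(1)}")
```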

Applicability

Monte Carlo simulation methods are especially useful for modelling phenomena with
significant uncertainty in inputs and in studying systems with a large number of coupled
degrees of freedom. Specific areas of application include physical sciences, design and
visuals, finance and business, telecommunications and games.

A selection of software tools that facilitate the implementation of the Monte Carlo technique
includes Pertmaster, for time and cost control in project management, and Excel add-ins with
a wide field of application, like Crystal Ball, Risk Solver, @RISK, DFSS Master, Risk
Analyzer and MC Add-In for Excel. With only basic programming skills the Monte Carlo
approach can also be implemented in Matlab, C++, Pascal, etc.

Monte Carlo simulation is a commonly used technique, but it is computationally demanding.

Pitfalls / debate

Some Monte Carlo simulation tools offer maximum-likelihood algorithms. Be careful: they
do not always find the true global maximum of the likelihood function. Scrutinize outputs
carefully and vary the number of iterations. Do not overparameterize the model; start with a
simple model if possible.

The Monte Carlo techniques apply to static models. Be very careful when using the method
for dynamic models.

Key references
Rubinstein, R.Y., & Kroese, D.P. (2007). Simulation and the Monte Carlo method (2nd
edition). New York: John Wiley & Sons. ISBN 9780470177938.
Further references can be found on the internet: www.palisade.com; http://www.primavera.nl.

Key articles from TPM-researchers

/

Description of a typical TPM-problem that may be addressed using the theory / method

In the natural sciences, the outputs of most theories have an almost 100% correlation with
physical phenomena and are universally applicable; such theories are often called laws.
Theories for typical TPM problems often yield less universal and less precise results.
Therefore a comprehensive description of the scope, the boundary conditions and the
uncertainty of the inputs is of crucial importance. The Monte Carlo technique enables the
researcher to test such theories.

A powerful application of the Monte Carlo technique is the analysis of decision trees, or of
more complex decision structures with several linked trees. Such an analysis may result in
risk profiles, (assessments of) policy suggestions, influence diagrams and sensitivities. In
such analyses, Monte Carlo methods are superior to human intuition.
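
As a sketch of the decision-tree use in Python (all probabilities and payoffs are invented for
the example): the uncertain branch is sampled many times, and the resulting expected value
can be compared against a certain alternative.

```python
# Sketch: evaluate a two-option decision by sampling an uncertain payoff
# tree; the sample mean approximates the expected value of the risky option.
import random

random.seed(7)

def project_payoff():
    # Chance node: success p = 0.6, partial success p = 0.3, else failure.
    r = random.random()
    if r < 0.6:
        return 100.0
    if r < 0.9:
        return 20.0
    return -50.0

N = 100_000
risky = sum(project_payoff() for _ in range(N)) / N   # sampled expectation
safe = 30.0                                           # certain alternative
print(f"risky option: {risky:.1f} expected vs safe option: {safe:.1f}")
```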

Basically, the Monte Carlo technique forces and enables the TPM researcher to justify the
uncertainties in the results of his theories.

Related theories / methods

Several alternative methods, like the bootstrap method and the forward discrete probability
propagation method, have been developed to circumvent the time-consuming calculations of
the Monte Carlo method.

Editor

R.M.Stikkelman (r.m.stikkelman@tudelft.nl)











































MORT MANAGEMENT OVERSIGHT & RISK TREE

Brief discussion

The Management Oversight and Risk Tree (MORT) method is an analytical procedure for
inquiring into the causes and contributing factors of accidents and incidents. The MORT
method reflects the key ideas of a 34-year programme run by the US Government to ensure
high levels of safety and quality assurance in its energy industry. The MORT programme
started with a project documented in SAN 821-2 (Johnson, 1973).

The MORT method is a logical expression of the functions needed by an organisation to
manage its risks effectively. These functions have been described generically; the emphasis is
on "what" rather than "how", and this allows MORT to be applied to different industries.
MORT reflects a philosophy which holds that the most effective way of managing safety is to
make it an integral part of business management and operational control.

The acronym MORT is used to refer to four things:
1. a safety-assurance programme which ran between 1968 and 2002 in the USA;
2. the body of written material which documented the programme;
3. a logic tree diagram: the Management Oversight and Risk TREE, see main branches
below;
4. a method for helping investigators probe into the systemic causes of accidents and
incidents.



The MORT User's Manual describes the last item, the MORT method, and is designed for
practical use. Each topic on the MORT chart has a corresponding question in Part 2 of the
User's Manual.

The questions in MORT are asked in a particular sequence, one that is designed to help the
user clarify the facts surrounding an incident. The analyst, focussed on the context of the
accident, identifies which topics are relevant and uses the questions in the manual as a
resource to frame his/her own inquiries.
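
As a sketch in Python of walking such a tree of topics and questions (the topics and questions
below are invented placeholders, not the actual MORT chart):

```python
# Walk a logic tree of investigation topics; leaves carry the questions
# the analyst uses to frame his/her own inquiries.
tree = {
    "Management oversight": {
        "Policy": "Was a risk management policy defined?",
        "Implementation": "Was the policy implemented and resourced?",
    },
    "Specific control factors": {
        "Barriers": "Did barriers on the harmful energy flow fail?",
    },
}

def walk(branch, depth=0):
    for topic, node in branch.items():
        if isinstance(node, dict):
            print("  " * depth + topic)
            walk(node, depth + 1)
        else:
            print("  " * depth + f"{topic}: {node}")

walk(tree)
```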

Applicability

Like most forms of analysis applied in investigations, MORT helps the analyst structure what
they know and identify what they need to find out; mostly the latter. The accent in MORT
analysis is on inquiry and reflection by the analyst. MORT is an in-depth root cause analysis
method that covers the whole life cycle of an operation, from conceptualisation to disposal
and renewal.

Pitfalls / debate

Good investigations are built on a secure picture of what happened. MORT analysis needs
this as a basis. Analysis using an appropriate sequencing method such as Events &
Conditional Factors Analysis (ECFA+) can be effective and provides a detailed picture of the
events comprising the accident. Using Energy Trace & Barrier Analysis, or Barrier Analysis
as it is usually called, is the way to connect MORT analysis to the events of the accident.
Therefore, as soon as the factual picture allows it, carry out an Energy Trace and Barrier
Analysis. It is an essential preparation for MORT analysis.

Key references

Frei, R., Kingston, J., Koornneef, F., & Schallier, P. (2002). MORT user's manual (NRI
revision of the 1992 edition by N.W. Knox and R.W. Eicher). NRI-1. NRI Foundation.
Frei, R., Kingston, J., Koornneef, F., & Schallier, P. (2002). MORT chart - Intended for use
with NRI-1 (2002) (NRI revision based on MORT 1992). NRI-2. NRI Foundation.
Haddon Jr., W. (1973). Energy damage and the ten counter-measure strategies. Journal of
Trauma, 1973 (reprinted in Injury Prevention, 1995(1), 40-44).
Johnson, W.G. (1973). MORT - The Management Oversight and Risk Tree. ERDA SAN
821-2, Idaho Falls. Available from www.nri.eu.com.

Key articles from TPM-researchers

Koornneef, F. (2000). Organised learning from small-scale incidents (PhD thesis). Delft
University Press.

Description of a typical TPM-problem that may be addressed using the theory / method

MORT represents a tree of knowledge from a wide range of disciplines which, combined,
provides a basis for designing complex systems whose critical functioning must be assured
before start-up. It was serendipity that this tree is also a content-rich root cause analysis
investigation method, successfully applied in the nuclear, chemical and other industrial
sectors.

Related theories / methods

Energy Trace & Barrier Analysis, Haddon's Barrier Strategies, Life Cycle Analysis, Risk
Analysis, Safety Assurance Programme, Change Analysis, Operational Readiness, SMS

Editor

Floor Koornneef (f.koornneef@tudelft.nl)











































MULTI-AGENT SYSTEMS TECHNOLOGY

Brief discussion

The multi-agent paradigm allows for the specification of computer systems as collections of
autonomous and distributed components. Wooldridge and Jennings (1995) defined the weak
notion of agency as follows:

Agents are:
(1) autonomous,
(2) pro-active,
(3) reactive,
(4) capable of interacting with their environment, and
(5) capable of interacting with other agents (i.e. they have social ability).

Multi-agent systems are systems of agents designed to interact with each other:
communication and coordination patterns are defined for this purpose.

In some applications, mobility forms an important characteristic of multi-agent systems, as it
makes it possible for agents to move between locations and to process data locally (where
the data are stored). The multi-agent systems paradigm provides a means to design complex
distributed information processing systems, and to design virtual organizations of agents.
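
As a sketch of the weak notion of agency in Python (the message format, environment and
goals are invented for the example; real agent platforms provide far richer middleware):

```python
# Agents are autonomous (own state and goal), reactive (respond to
# messages), pro-active (pursue a goal) and social (exchange messages).
class Agent:
    def __init__(self, name, target):
        self.name, self.target, self.inbox = name, target, []

    def send(self, other, message):                 # social ability
        other.inbox.append((self.name, message))

    def step(self, env):
        for sender, msg in self.inbox:              # reactive behaviour
            print(f"{self.name} received '{msg}' from {sender}")
        self.inbox.clear()
        if env["load"] > self.target:               # pro-active behaviour
            env["load"] -= 1                        # act on the environment

a1, a2 = Agent("a1", target=8), Agent("a2", target=10)
env = {"load": 12}
for _ in range(3):                                  # distributed, stepwise run
    a1.send(a2, f"load is {env['load']}")
    for agent in (a1, a2):
        agent.step(env)
print(env)
```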

Applicability

In recent years, applications of multi-agent systems have been implemented in industrial
environments, in particular in environments where distributed planning/scheduling, or the
management of distributed interactions and/or resources, play an important role. Typical
examples include supply chain management, planning systems, distributed information
management and auction based energy markets.

Pitfalls / debate

Systems vary from small-scale BDI-based theoretical frameworks, via multi-purpose flexible
middleware, to large-scale simulation frameworks, with numerous options within each
category. Modularity and a clear specification of functionality often require additional
overhead.

Key references

Wooldridge, M., & Jennings, N.R. (1995). Intelligent agents: Theory and practice. Knowledge
Engineering Review, 10(2), 115-152.
Brazier, F.M.T., Dunin-Keplicz, B.M., Jennings, N.R., & Treur, J. (1997). Desire: Modelling
multi-agent systems in a compositional formal framework. International Journal of
Cooperative Information Systems, 6(1), 67.
Ferber, J. (1999). Multi-agent systems: An introduction to distributed artificial intelligence.
Addison-Wesley.
Luck, M., McBurney, P., Shehory, O., & Willmott, S. (2005). Agent technology: Computing
as interaction (a roadmap for agent based computing). AgentLink.

Key articles from TPM-researchers

Overeinder, B.J., & Brazier, F.M.T. (2006). Scalable middleware environment for agent-
based internet applications. Applied Parallel Computing, 3732, 675-679.
van't Noordende, G., Overeinder, B.J., Timmer, R.J., Brazier, F.M.T., & Tanenbaum, A. S.
(2009). Constructing secure mobile agent systems using the agent operating system.
International Journal of Intelligent Information and Database Systems, 3(4), 363-381.
Warnier, M., Timmer, R.J., Oey, M.A., Brazier, F.M.T., & Oskamp, A. (2008). An agent
based system for distributed information management: A case study. Proceedings of the 3rd
Int'l Multiconference on Computer Science and Information Technology (IMCSIT'08),
55-61.

Description of a typical TPM-problem that may be addressed using the theory / method

Distributed resource management is a challenge, especially in large-scale systems such as
power grid infrastructures in which multiple parties are involved, each with their own goals
and interests. There is no single organization with total knowledge and control. The multi-
agent systems paradigm provides a means to model individual agents and virtual
organizations of agents, communication and control structures. Multi-agent systems
technology provides a means to simulate the behaviour of such systems.

Related theories / methods

Multi-agent systems research and development is mainstream Artificial Intelligence.

Editors

Martijn Warnier (M.E.Warnier@tudelft.nl)
Frances Brazier (F.M.T.Brazier@tudelft.nl)




















OREGON SOFTWARE DEVELOPMENT PROCESS (OSDP)

Brief discussion

The complexities of modern business technology and policy are straining experts who aspire
to design multi-actor systems to enhance existing organizations. Collaborative design is one
approach for trying to manage complexity in design activities. Still, collaboration in itself is
not necessarily an easy mode of working, and the design of supporting IT systems bears
special challenges. The design of IT systems which support collaboration differs from the
traditional design of single-user applications: the designer has to take the social context and
the intended system use into account when shaping the application. The Oregon Software
Development Process (OSDP) addresses the above issues and fosters end-user involvement
by means of socio-technical patterns. End-users interact with developers in different iteration
types; OSDP proposes three of them:
1. In conceptual iterations, users envision the scenarios of system usage.
2. In development iterations, users and developers collaborate to build the required
groupware infrastructure.
3. In tailoring iterations, users appropriate the groupware system to changed needs.



In conceptual iterations, end-users outline future scenarios of use. These scenarios are
implemented during development iterations, in which end-users closely interact with the
developers by proposing the application of patterns from the pattern language and by
identifying conflicting forces in the current prototype. As soon as the tool for computer-
mediated interaction can be used by the end-users, the tailoring iterations start: end-users
reflect on the offered tool support and either tailor the tool themselves or escalate the
problem to the developers, who then start a new development iteration. In all iteration types,
patterns communicate design knowledge to the developers and the users. The participants in
the OSDP learn how conflicting forces should be resolved instead of just using pre-fabricated
building blocks. This empowers them to create customized solutions that better suit their
requirements.

Applicability

The Oregon Software Development Process (OSDP) can be applied to a range of design
problems in domains for which a pattern language exists that can serve as a communication
language between end-users, designers and developers. The theoretical foundations of the
OSDP are Alexander's Oregon Experiment, Heidegger's notion of break situations, Rittel
and Webber's understanding of wicked problems, and Schön's theory of reflection in action.

Pitfalls / debate

The major prerequisite of the OSDP is the availability of a pattern language in the given
problem domain; however, there is already a widespread list of available pattern languages
for different domains. Another pitfall is related to the availability of users and developers in
the process. OSDP relies on the fact that users and developers can meet in person and are
aware of each other; in fact, many of the tasks rely on intensive communication between
users and developers.

Key references

Heidegger, M. (1927). Sein und Zeit. Niemeyer.
Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy
Sciences, 4, 155-169.
Alexander, C., Silverstein, M., Angel, S., Ishikawa, S., & Abrams, D. (1980). The Oregon
experiment. Oxford University Press.
Schön, D.A. (1983). The reflective practitioner: How professionals think in action. Basic
Books.

Key articles from TPM-researchers

Lukosch, S., & Schümmer, T. (2006). Groupware development support with technology
patterns. International Journal of Human Computer Studies - Special Issue on 'Theoretical
and Empirical Advances in Groupware Research', 64, 599-610.
Schümmer, T., Lukosch, S., & Slagter, R. (2006). Using patterns to empower end-users - The
Oregon software development process for groupware. International Journal of Cooperative
Information Systems - Special Issue on '11th International Workshop on Groupware
(CRIWG'05)', 15, 259-288.
Schümmer, T., & Lukosch, S. (2007). Patterns for computer-mediated interaction.
John Wiley & Sons, Ltd.

Description of a typical TPM-problem that may be addressed using the theory / method

The OSDP has been applied in various projects that dealt with the development of
collaboration-enabling IT systems. Such systems include GDSS, tools for collaborative
modelling, tools for eliciting shared knowledge, as well as collaborative serious games.

Related theories / methods

Muller, M. J., & Kuhn, S. (1993). Participatory design. Communications of the ACM, ACM
Press, 36, 24-28.

Beck, K. (2000). eXtreme programming explained. Addison Wesley.
Floyd, C. (1993). STEPS - a methodical approach to PD. Communications of the ACM, ACM
Press, 36, 83.
Wulf, V., Krings, M., Stiemerling, O., Iacucci, G., Fuchs-Fronhofen, P., Hinrichs, J.,
Maidhof, M., Nett, B., & Peters, R. (1999). Improving inter-organizational processes with
integrated organization and technology development. Journal of Universal Computer Science,
5, 339-365.
Fischer, G., Grudin, J., McCall, R., Ostwald, J., Redmiles, D., Reeves, B., & Shipman, F.
(2001). Seeding, evolutionary growth and reseeding: The incremental development of
collaborative design environments. Coordination Theory and Collaboration Technology,
447-472. Lawrence Erlbaum Associates.

Editor

Stephan Lukosch (S.G.Lukosch@tudelft.nl)


































POLICY ANALYSIS (HEXAGON MODEL OF POLICY ANALYSIS)

Definition

Policy analysis is a broad field of applied policy research and advice. Policy analysis is aimed
at supporting decision making, but this may take many shapes or forms. Some policy analysts
conduct quantitative or qualitative research, while others reconstruct and analyze political
discourse, or set up citizen fora. Some policy analysts are independent researchers; some are
process-facilitators, while others act as political advisers.

The hexagon model of policy analysis attempts to (re)structure the discipline into a single
conceptual model. The model identifies six activities and translates these into six underlying
policy analytic styles. Each activity implies different values, and calls for different criteria
when it comes to evaluation. The figure below shows the hexagon model of policy analysis.

Figure: the hexagon model of policy analysis. The six corners are the activities: research
and analyze; design and recommend; advise strategically; mediate; democratize; clarify
values and arguments. The six edges are the styles, each with its guiding question: rational
style (what is good knowledge?); client advice style (what is good for the client?); process
style (what is good for the process?); participatory style (what is good for democratic
society?); interactive style (what is good for mutual understanding?); argumentative style
(what is good for the debate?). Each activity has its own success criteria: scientific quality
(validity, reliability, etc.); policy relevance (usability, action orientation, etc.); political
effectiveness (workability, feasibility, pro-activeness, personal goal achievement);
acceptance and learning (commitment, sharing of perspectives, learning); democratic
legitimacy (openness, transparency, representation, etc.); quality of debate and arguments
(consistency, richness and openness of arguments, etc.). The axes of the hexagon contrast
object-oriented with subject-oriented values and criteria, and idealistic/generic with
pragmatic/particular values and criteria. Associated analyst roles range from independent
scientist / objective researcher, independent expert / impartial advisor, and involved client
advisor / client counselor, to facilitator / mediator / process manager, democratic (issue)
advocate, and narrator / logician / ethicist.


The corners of the hexagon indicate the six policy analysis activities which may be carried out
by a policy analyst (in clockwise direction):
- research and analyse a policy-relevant issue,
- design and recommend alternative courses of action,
- provide a specific decision maker with a strategy for achieving specific goals given a
certain political constellation,
- mediate between different parties,
- allow for a democratic process by enabling all relevant parties to provide their input,
- clarify the values and arguments of stakeholders involved in the policy process.

The criteria for the success of a policy analysis activity are shown inside the corners of the
hexagon.

254
The combinations of adjacent activities (i.e. the links between two activities) correspond to
different policy analysis styles, which may be found in the policy analysis literature:

- The rational style of policy analysis (ranging from 'research and analyse' to 'design and
recommend') is discussed in many general textbooks on methods of policy analysis (Patton
& Sawicki, 1986; MacRae & Whittington, 1997; House & Shull, 1991; Quade, 1975;
Miser & Quade, 1988).

- The client advice style is based on the assumption that policymaking occurs in a complex
and rather chaotic arena with numerous players with different interests and strategies (De
Bruijn & ten Heuvelhof, 2000; De Bruijn et al., 2002). Besides knowledge and insights
gained through research, policy analysis is largely a question of politico-strategic insight
(Wildavsky, 1987).

- The process style of policy analysis is based on the assumption that the substantive aspects
of a policy problem are coordinate with, or perhaps even subordinate to, the procedural
aspects. The analyst or process manager creates a loose coupling between the procedural
and substantive aspects of a problem (De Bruijn et al., 2002). Procedural aspects are
understood to be the organization of decision-making, or the way in which parties jointly
arrive at solutions to a problem.

- A participatory style of policy analysis assumes that citizens can have a voice and be
interested enough to deliberate on substantive and politically difficult questions (Dryzek,
1990; Fishkin, 1991; Durning, 1993; DeLeon, 1994; Mayer, 1997; Fischer 2000). The
policy analyst can take on a facilitating role in such a debate by promoting equality and
openness in the design and by giving ordinary citizens and laymen a role alongside others
(Mayer, 1997).

- In an interactive style of policy analysis, target groups and stakeholders are usually invited
to structure problems or devise solutions in structured working meetings at which policy
analysis techniques may be used (Mason & Mitroff, 1981). This brings about a multiple
interaction whereby the views and insights of the analyst, the client and also the
participants are enriched (Edelenbos, 1999).

- The argumentative style (Fischer & Forester, 1993; Fischer, 1995; Van Eeten, 2001)
assumes that the analyst can make the structure and progress of the discourse transparent
by means of interpretive and qualitative methods and techniques, and can also bring about
improvements by identifying gaps in the debate or by searching for arguments and
standpoints that can bridge the gap between opponents.

Applicability

Policy analysis is a widely applicable field that aims to inform and improve policy making.
The hexagon model of policy analysis serves three purposes: furthering the understanding of
policy analysis as a discipline, contributing to the design of new policy analysis methods and
projects, and guiding the evaluation of such methods and projects.

Pitfalls / debate

The variety and multi-faceted nature of policy analysis make it clear that there is no single,
let alone one best, way of conducting policy analyses. The discipline consists of many
different schools, approaches, roles and methods. The activities and styles in the hexagon
model are portrayed in an archetypical way. This sub-division makes it possible to relate the
various policy analysis styles found in the literature to each other and to analyze the
characteristics of, and differences between, the styles. The model is, however, a
simplification, and it is not meant to be a rigid prescriptive model.

Key references

Bobrow, D., & Dryzek, J. (1987). Policy analysis by design. Pittsburgh, PA: University of
Pittsburgh Press.
Brewer, G., & DeLeon, P. (1983). The foundations of policy analysis. Homewood, Il.: Dorsey.
DeLeon, P. (1994). Democracy and the policy sciences: Aspirations and operations. Policy
Studies Journal, 22, 200-212.
Dryzek, J. (1990). Discursive democracy: Politics, policy and political science. Cambridge:
Cambridge University Press.
Dunn, W. (1981/1994). Public policy analysis: An introduction (1st and 2nd edition).
Englewood Cliffs: Prentice-Hall.
Durning, D. (1993). Participatory policy analysis in a social service agency: A case study.
Journal of Policy Analysis and Management, 12(2), 297-322.
Fischer, F., & Forester, J. (Eds.) (1993). The argumentative turn in policy analysis and planning.
Duke University Press.
Fischer, F. (1995). Evaluating public policy. Chicago: Nelson-Hall.
Fishkin, J. (1991). Democracy and deliberation: New directions for democratic reform. New
Haven CT: Yale University Press.
Hawkesworth, M. (1988). Theoretical issues in policy analysis. Albany: State University of
New York Press.
Hogwood, B., & Gunn, L. (1984). Policy analysis for the real world. Oxford: Oxford University
Press.
Hoppe, R. (1998). Policy analysis, science, and politics: From 'speaking truth to power' to
'making sense together'. Science and Public Policy, 26(3), 201-210.
House, P., & Shull, R. (1991). The practice of policy analysis: Forty years of art and
technology. Washington: Compass.
MacRae, D., & Whittington, D. (1997). Expert advice for policy choice: Analysis and
discourse. Washington: Georgetown University Press.
Mason, R., & Mitroff, I. (1981). Challenging strategic planning assumptions: Theory, cases and
techniques. New York: John Wiley and Sons.
Miser, H., & Quade, E. (1988). Handbook of systems analysis: Craft issues and procedural
choices. Chichester: Wiley.
Patton, C., & Sawicki, D. (1986). Basic methods of policy analysis and planning. Englewood
Cliffs: Prentice-Hall.
Quade, E. (1975). Analysis for public decisions. New York: Elsevier.
Weimer, D., & Vining, A. (1992). Policy analysis: Concepts and practice. Englewood Cliffs,
NJ: Prentice Hall.
White, L. (1994). Policy analysis as discourse. Journal of Policy Analysis and Management,
13(3), 506-525.
Wildavsky, A. (1987). Speaking truth to power: The art and craft of policy analysis. New
Brunswick: Transaction Books.

Key articles from TPM-researchers

De Bruijn, J., & Ten Heuvelhof, E. (2000). Networks and decision-making. Utrecht: Lemma.
De Bruijn, J., Ten Heuvelhof, E., & In 't Veld, R. (2002). Process management: Why project
management fails in complex decision-making processes. Dordrecht: Kluwer.
Edelenbos, J. (1999). Design and management of participatory public policy making. Public
Management, 1(4), 569-578.
Van Eeten, M. (2001). Recasting intractable policy issues: The wider implications of the
Netherlands civil aviation controversy. Journal of Policy Analysis and Management, 20(3), 391-
414.
Mayer, I. (1997). Debating technologies: A methodological contribution to the design and
evaluation of participatory policy analysis. Tilburg: Tilburg University Press.
Mayer, I.S., Van Daalen, C.E., & Bots, P.W.G. (2004). Perspectives on policy analyses: A
framework for understanding and design. International Journal of Technology, Policy and
Management, 4(2), 169-191.

Description of a typical TPM-problem in the context of which the concept is particularly
relevant

Using a broad definition of policy analysis, many of the activities which are carried out at
TPM can be considered to be policy analysis activities. Typical TPM issues which can be
addressed using policy analysis include decisions regarding (future) infrastructure options,
e.g. for energy, transport and/or water. The hexagon model can be useful in making conscious
choices about the setup of a policy analysis study.

Related theories / methods

Most of the methods in the catalogue are applicable as part of a policy analysis activity. The
way in which the methods are used depends on the activity which is chosen. System
dynamics, for example, may be used as part of a 'design and recommend' activity by
simulating the consequences of various alternatives, but it may also be used in a 'democratize'
activity by means of a group model building session. The most relevant TPM methods for
policy analysis are: Actor (network) analysis, Analytic Hierarchy Process, Cognitive
mapping, Cost Benefit Analysis, Discrete Event Simulation, Modelling and simulation
(Discrete event simulation, Agent-Based Simulation, System Dynamics, Exploratory
Modelling, Participative Modelling), Multi-Criteria Decision Analysis, Scenario analysis,
Scenario development, Bouncecasting, Serious gaming, Stated preference experiments,
Transaction modelling approach to actor networks. These are all methods which can be used
within a policy analysis study.

The catalogue also mentions Systems Engineering as a method. This is a method at the same
level as policy analysis, and various other methods are used as part of a systems engineering
project. Systems engineering and policy analysis are both system based approaches to
problem solving. In systems engineering, the system is a means to an end: systems engineers
design products and/or services. In policy analysis, by contrast, the system is not a means to
an end; the objective is usually to influence the state of an existing system by intervening in
it. Policy analysis is also less focused on technologically oriented systems than systems
engineering generally is: the systems which are analysed are either social systems or
socio-technical systems.

The concepts which are related to policy analysis are: Actor, Attitude, Belief system, Strategic
behaviour, Uncertainty, Validation (in modelling and simulation). In policy analysis, the
actors with their belief system and behaviour are important. Since ex-ante policy analysis is
about future policies it is essential to take into account uncertainties which may influence
decisions which are made about a future situation. Models are often used to investigate
consequences of alternative courses of actions, and confidence in these models is a
prerequisite to the acceptance of model results.

Editors

Els van Daalen (c.vandaalen@tudelft.nl)
Pieter Bots (p.w.g.bots@tudelft.nl)
Igor Mayer (i.s.mayer@tudelft.nl)
Wil Thissen (w.a.h.thissen@tudelft.nl)


























POLICY ANALYSIS FRAMEWORK

Definition

Policy analysis is performed in government, at all levels; in independent policy research
institutions, both for-profit and not-for-profit; and in various consulting firms. It is not a way
of solving a specific problem, but is a general approach to problem solving. It is not a specific
methodology, but it makes use of a variety of methodologies in the context of a generic
framework. Most important, it is a process, each step of which is critical to the success of a
study, and each step of which must be linked to the policymakers, to other stakeholders, and
to the policymaking process.

The approach is built around an integral system description of a policy field (see Figure
below). At the heart of the system description is a system model (not necessarily a computer
model) that represents the policy domain. The system model clarifies the system by (1)
defining its boundaries, and (2) defining its internal structure: the elements, and the links,
flows, and relationships among them.


Figure: The policy analysis framework and steps

Two sets of external forces act on the system: external forces outside the control of the actors
in the policy domain, and policy changes. Both sets of forces are developments outside the
system that can affect the structure of the system (and, hence, the outcomes of interest to
policymakers and other stakeholders). These developments involve a great deal of
uncertainty. The external forces themselves are highly uncertain. They include, among
others, the economic environment, technology developments, and the preferences and
behaviour of people. Typically, scenarios are the analytical tools that are used to represent
and deal with these uncertainties.

Policies are the set of forces within the control of the actors in the policy domain that affect
the structure and performance of the system. Loosely speaking, a policy is a set of actions
taken by a problem owner to control the system, to help solve problems within it or caused by
it, or to help obtain benefits from it. In speaking about national policies, the problems and
benefits generally relate to broad national goals, for example tradeoffs among national
environmental, social, and economic goals. A goal is a generalized, non-quantitative policy
objective (e.g., 'reduce air pollution' or 'ensure traffic safety'). Policy actions are intended to
help meet the goals.

For each policy goal, criteria are used to measure the degree to which policy actions can help
to reach the goal. These criteria are directly related to the outcomes produced by the system.
Those system outcomes that are related to the policy goals and objectives are called outcomes
of interest. Unfortunately, although a policy action may be designed with a single goal in
mind, it will seldom have an effect on only one outcome of interest. Policy choices, therefore,
depend not only on measuring the outcomes of interest relative to the policy goals and
objectives, but also on identifying the preferences of the various stakeholders and the
tradeoffs among the outcomes of interest, given these various sets of preferences. The
exploration of the effects of alternative policies on the full range of the outcomes of interest
under a variety of scenarios, and the examination of tradeoffs among the policies, requires a
structured analytical process that supports the policymaking process.
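
As a minimal illustration of this structured exploration, consider the following Python sketch
(all policy names, scenarios, and outcome numbers are hypothetical stand-ins, and the
evaluate() function is a placeholder for a real system model):

policies = ["base case", "road pricing", "expand capacity"]
scenarios = ["low growth", "high growth"]

def evaluate(policy, scenario):
    # Stand-in for a system model run: returns the outcomes of interest
    # for one policy under one scenario (all numbers hypothetical).
    outcomes = {
        ("base case", "low growth"):        {"congestion hours": 100, "emissions": 50, "public cost": 0},
        ("base case", "high growth"):       {"congestion hours": 180, "emissions": 90, "public cost": 0},
        ("road pricing", "low growth"):     {"congestion hours": 70,  "emissions": 40, "public cost": 10},
        ("road pricing", "high growth"):    {"congestion hours": 110, "emissions": 60, "public cost": 10},
        ("expand capacity", "low growth"):  {"congestion hours": 80,  "emissions": 55, "public cost": 40},
        ("expand capacity", "high growth"): {"congestion hours": 120, "emissions": 95, "public cost": 40},
    }
    return outcomes[(policy, scenario)]

# Explore every policy under every scenario, keeping the outcomes of interest
# separate so that trade-offs and robustness across scenarios remain visible.
for policy in policies:
    for scenario in scenarios:
        print(policy, "|", scenario, "->", evaluate(policy, scenario))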

Applicability

The framework can be applied to analyzing policy problems in any of the TPM domains (e.g.,
energy, water, ICT, transport). Recently, it has been applied to developing anti-terrorist
strategies as part of the Minor in Safety, Security, and Justice.

Pitfalls / debate

According to Mayer et al. (2004), there are many functions and styles of policy analysis. The
framework described above relates primarily to what they call the 'rational' and 'client
advisory' styles of policy analysis, which link the 'research and analyze', 'design and
recommend', and 'provide strategic advice' purposes. These styles, which are 'object
oriented', are often referred to as traditional policy analysis.

Key references

Mayer, I.S., Van Daalen, C.E., & Bots, P.W.G. (2004). Perspectives on policy analyses: A
framework for understanding and design. International Journal of Technology, Policy and
Management, 4(2), 169-191.
Miser, H.J., & Quade, E.S. (Eds.). (1985). Handbook of systems analysis: Overview of uses,
procedures, applications, and practice. New York: Elsevier Science Publishing Co., Inc.
Walker, W. E. (2000). Policy analysis: A systematic approach to supporting policymaking in
the public sector. Journal of Multicriteria Decision Analysis, 9(1-3), 11-27.

Related theories / methods

Practically all of the theories and methods in this catalogue can be used for addressing the
policy problems being analyzed within the policy analysis framework.

Note that this framework is used as the basis for EPA-1121 (Introduction to Policy Analysis).

Editor

Warren Walker (W.E.Walker@tudelft.nl)














































POLICY SCORECARD

Brief discussion

The final stage of a policy analysis study is about selecting one or more policies/strategies
from a large number of potential strategies. Many approaches have been developed for this
purpose. Walker (2000) identifies the theoretical and practical problems associated with these
approaches in a multi-stakeholder context. Many policy/strategy choice approaches are
aggregate approaches, such as cost-benefit analysis, multi-criteria analysis, or multi-objective
optimization. In an aggregate approach, each impact is weighted by its relative importance
and combined into some single, commensurate unit such as money, worth, or utility.
Decision-makers then use this aggregate measure to compare the policies/strategies.

However, there are drawbacks to using an aggregate approach. First, the aggregation process
loses considerable information, in particular the information on the individual impacts.
Second, any single measure of worth depends strongly on the weights given to the different
effects when they were combined and on the assumptions used to get them into commensurate
units. Unfortunately, these crucial weights and assumptions are often implicit or highly
speculative. They may impose a value scheme on the decision-makers that bears little relation
to their concerns. (For example, traditional cost-benefit analysis implicitly assumes that a
euro's worth of one kind of benefit has the same value as a euro's worth of another; yet in
many public decisions, monetarily equivalent but otherwise dissimilar benefits would be
valued differently by different stakeholders.)

Third, the aggregate techniques are intended to help an individual decision-maker choose a
single preferred strategy: the one that best reflects his/her values (importance weights).
Serious practical and theoretical problems with aggregate approaches arise when there are
multiple stakeholders and/or multiple decision-makers. The practical problems include the
need to answer the following questions: whose values get used (the issue of interpersonal
comparison of values), and what relative weight does the group give to the preferences of
different individuals (the issue of equity)? The theoretical problem associated with these
questions is that it has been proved that there is no rational procedure for combining
individual rankings into a group ranking that does not explicitly include interpersonal
comparison of preferences (Arrow, 1950). To make this comparison and to address the issue
of equity, full consideration of the complete set of impacts is essential.

Use of a policy scorecard avoids such problems. Each row of a scorecard represents an impact
and each column represents a policy. An entire column shows all of the impacts of a single
policy; an entire row shows each policy's value for a single impact. Numbers or words appear
in each cell of the scorecard to convey whatever is known about the size and direction of the
impact in absolute terms, i.e. without comparison between cells. An example is shown in the
figure below.


Figure: Sample scorecard (from the Policy Analysis of the Oosterschelde (POLANO) project)

In comparing the strategies, each stakeholder and decision-maker can assign whatever weight
he/she deems appropriate to each impact. Explicit consideration of weighting thus becomes
central to the strategy choice process itself, as it should be. Prior analysis can consider the full
range of possible outcomes, using the most natural description for each outcome. Therefore,
some effects can be described in monetary terms and others in physical units; some can be
assessed with quantitative estimates (e.g. noise and pollutant emissions) and others with
qualitative comparisons (e.g. 'the stakeholder acceptability for this strategy is high').
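
To make this concrete, here is a minimal Python sketch (impacts, policies, and numbers are
all hypothetical) of a scorecard that stays disaggregated until an individual stakeholder
supplies his or her own weights:

# Hypothetical scorecard: rows are impacts, columns are policies.
scorecard = {
    "investment cost (M euro)": {"policy A": 120, "policy B": 200},
    "noise (dB(A))":            {"policy A": 60,  "policy B": 55},
    "jobs created":             {"policy A": 300, "policy B": 500},
}

def weighted_score(policy, weights):
    # Aggregation happens only here, per stakeholder, with explicit weights;
    # the scorecard itself never commits to any weighting.
    return sum(weights[impact] * scorecard[impact][policy] for impact in scorecard)

# One stakeholder's (hypothetical) weights: cost and noise count negatively.
my_weights = {"investment cost (M euro)": -1.0, "noise (dB(A))": -5.0, "jobs created": 0.5}
for policy in ("policy A", "policy B"):
    print(policy, weighted_score(policy, my_weights))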

Key references

Arrow, K.J. (1950). A difficulty in the concept of social welfare. Journal of Political
Economy, 58(4), 328-346.
Miser, H.J., & Quade, E.S. (Eds.). (1985). Handbook of systems analysis: Overview of uses,
procedures, applications, and practice (pp. 93-95 and 230-234). New York: Elsevier Science
Publishing Co., Inc.
Walker, W.E. (2000). Policy analysis: A systematic approach to supporting policymaking in
the public sector. Journal of Multicriteria Decision Analysis, 9(1-3), 11-27.

Key articles from TPM-researchers

Wijnen, R.A.A., Chin, R.T.H., Walker, W.E., & Kwakkel, J. H. (2008). Decision support for
mainport strategic planning. In P. Zarate, J.P. Belaud, G. Camilleri, & F. Ravat (Eds.),
Proceedings of the International Conference on Collaborative Decision Making (CDM'08,
Toulouse, France, June 30 - July 4, 2008), IOSPress publishers, ISSN:0922-6389.

Description of a typical TPM-problem that may be addressed using the theory / method

Any TPM problem that involves evaluation of, and choice among, a number of different
policies/strategies could use policy scorecards, especially problems involving multiple
decision-makers. Discussion among multiple decision-makers and stakeholders about the
preferred policy/strategy becomes transparent and understandable (aggregation of the multiple
effects of a policy into a single measure necessarily involves logic that is often understood
only by the policy analyst). The focus of the discussion is then on the decision itself, not on
the weights to be used.

Editor

Warren Walker (W.E.Walker@tudelft.nl)






































Q-METHODOLOGY

Brief discussion

Q-methodology is a method to reveal people's viewpoints on a particular topic. The basic
idea of Q-methodology is that one correlates persons instead of traits. In a typical Q-study,
subjects are asked to rank-order a selection of statements which adequately represents the
whole of subjective communicability on the topic at hand. When two rank-orderings are
shown to correlate, the persons producing them are said to share a similar viewpoint on the
topic. By factor-analyzing the NxN correlation matrix of the N rank-orderings, shared
viewpoints among people can be extracted. An interest in the structure and content of these
commonly held viewpoints lies at the core of any Q-methodological study.
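
A minimal Python sketch of this person-correlation step (the Q-sorts below are hypothetical,
and a plain eigendecomposition stands in for the centroid or principal-axis factor extraction,
plus rotation, used in real Q-studies):

import numpy as np

# Hypothetical Q-sorts: each row is one person's rank-ordering of the same
# ten statements (e.g. scores from -2 to +2 in a forced quasi-normal grid).
qsorts = np.array([
    [ 2,  1,  0, -1, -2,  1,  0, -1,  0,  0],   # person 1
    [ 2,  0,  1, -2, -1,  1,  0,  0, -1,  0],   # person 2 (similar view)
    [-2, -1,  0,  1,  2, -1,  0,  1,  0,  0],   # person 3 (opposing view)
])

# Q-methodology correlates persons, not traits: a 3x3 matrix here.
person_corr = np.corrcoef(qsorts)
print(person_corr.round(2))

# Factor-analysing this correlation matrix extracts the shared viewpoints.
eigenvalues, loadings = np.linalg.eigh(person_corr)
print("loadings on the dominant shared viewpoint:", loadings[:, -1].round(2))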

Q-methodology was originally introduced in 1935 by the British psychologist William
Stephenson as the science of subjectivity. As opposed to the science of objectivity, in which
measurement is independent of the subject (e.g., measuring a person's height), measurement
within the science of subjectivity is anchored in self-reference. This means that variables are
not defined a priori by the researcher as self-evidential entities (in the science of objectivity,
a variable like centimetres can be defined to measure height), but that their meaning depends
on other positions held by the subject. For example, a person's agreement with the statement
'due to congestion my travel time to work is very long' can receive a negative or positive
meaning depending on his position on a second statement, 'I find it relaxing to spend time in
a traffic jam'. Hence, to study a person's subjectivity means studying the viewpoints held by
subjects as wholes. These viewpoints must be regarded and subsequently interpreted as
coherent constellations of interrelated positions related to the topic at hand. The aim of
Q-methodology is to study the structure and form of these constellations.

In the field of transport, Q-methodology has previously been applied to study people's
viewpoints on transport and social inclusion, to investigate people's strategies in medium-
distance travel decision making, to study people's motives behind mode choice, and to
investigate people's acceptance of and anticipated behavioural response to the introduction of
charging tolls. In sum, Q-methodology can be applied to any topic in which subjective
perceptions, beliefs and attitudes play a role.

Applicability

The Q-method applies to the study of subjectivity. Given a typical sample size of 40-80
people, the influence of objective variables on the revealed viewpoints cannot be assessed. A
second and related issue is that the Q-method only applies to the revelation of the subjective
viewpoints; it cannot provide information on the distribution of these viewpoints in a
population. Other classification methods, like cluster analysis or latent class analysis, can
handle large (representative) datasets. However, these methods can only handle individual
measurements and not whole rank-orderings. As a result, the Q-method is better able to
distinguish the classes and identify the appropriate number of classes. Q-methodology and
cluster/latent class analysis therefore also complement each other. For example, a Q-analysis
can be used to reveal the subjective viewpoints, which can subsequently be used as input for a
cluster analysis to assess (and explain) the variation of the subjective viewpoints in a
population.

Key references

Brown, S.R. (1980). Political subjectivity: Applications of Q methodology in political science.
New Haven, CT: Yale University Press. Available at: Kent State University Library.
Watts, S., & Stenner, P.H.D. (2005). Doing Q methodology: Theory, method and
interpretation. Qualitative Research in Psychology, 2, 67-91.
McKeown, B., & Thomas, D. (1988). Q methodology. Beverly Hills, California: Sage
Publications.

Key articles from TPM-researchers

Van Eeten, M.J.G. (1999). Dialogues of the deaf: Defining new agendas for environmental
deadlocks. Delft: Eburon.
Van Eeten, M.J.G. (2001). Recasting intractable policy issues: The wider implications of The
Netherlands civil aviation controversy. Journal of Policy Analysis and Management, 20(3),
391-414.
Kroesen, M., & Bröer, C. (2009). Policy discourse, people's internal frames and declared
aircraft noise annoyance: An application of Q methodology. Journal of the Acoustical Society
of America, 126(1), 195-207.

Description of a typical TPM-problem that may be addressed using the theory / method

In the following we provide a brief description of how Q-methodology can be applied to a
typical TLO problem. Suppose the research aim is to gain insight into people's viewpoints on
road pricing policies. The whole of subjective communicability about this topic includes a
wide range of issues (e.g., related to equity, acceptance, privacy, costs/benefits) and may
eventually cover over 200 varying statements of opinion. Via Q-methodology, this
'concourse' can be brought back to interpretable proportions, i.e. coherent viewpoints in
which all these positions are interrelated into meaningful wholes. These viewpoints can
subsequently be used to answer policy-relevant questions like 'what are the minimum
conditions a pricing instrument has to satisfy?' or 'how will people's acceptance be affected
by certain pricing instruments?', etc.

Related theories / methods

The Q-method is basically a classification method. As such, it competes with other
classification methods like cluster analysis and latent class analysis. Yet, Q-methodology is
also partly complementary to these methods (see Applicability). Second, a Q-methodological
study can be (and has been) effectively combined with a Grounded Theory approach to
identify the whole of subjective communicability in relation to the subject matter at hand (i.e.,
the first step in the Q-method).

Editor

Maarten Kroesen (m.kroesen@tudelft.nl)



QUANTITATIVE RISK ANALYSIS (QRA)

Brief discussion

Risk analysis aims at answering three important questions:
1. What can go wrong?
2. How likely is it to happen?
3. Given that it occurs, what are the consequences?

While qualitative methods are mostly used to answer the first question, the answers to the
other two questions are found by performing a QRA.

QRA or probabilistic risk analysis (PRA) is a methodology to evaluate risks associated with
complex engineering systems or processes. In a QRA, risk is characterized by the severity of
an accident and the likelihood of occurrence of that severity. Mathematically, Kaplan and
Garrick (1981) define risk as a set of triplets: the scenarios s_i (what can go wrong), the
probability p_i associated with each scenario (how likely is it to happen), and the consequence
x_i (what are the consequences). The total risk associated with the given system or process is
then the sum over all scenarios of the products p_i * x_i, i.e. R = sum_i (p_i * x_i).
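
As a minimal numerical illustration of this definition (the scenario set, probabilities, and
consequences below are entirely hypothetical), the triplet formulation reduces to a few lines
of Python:

# Hypothetical scenario triplets (s_i, p_i, x_i): name, probability per
# year, and consequence (here: expected fatalities given the scenario).
scenarios = [
    ("small leak, ignited", 1e-3,  0.1),
    ("large leak, ignited", 1e-5,  5.0),
    ("vessel rupture",      1e-7, 50.0),
]

# Kaplan & Garrick (1981): total risk R = sum over all scenarios of p_i * x_i.
total_risk = sum(p * x for _, p, x in scenarios)
print(f"total risk: {total_risk:.2e} expected fatalities per year")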

To perform a QRA for a given process or system, a series of techniques is involved. The figure
below presents a symbolic representation of the steps of a risk analysis, together with the
techniques associated with each step. A QRA includes the quantification of the probability of
the identified unwanted events, for which techniques such as fault tree analysis or reliability
analysis are used, and the quantification of the effects of those events, for which effect
analysis or damage analysis are used.


Figure: Fishbone model

Applicability

QRA can be applied to any type of complex technological system or activity, at any phase
of the life cycle, from concept definition and pre-design, through the operational and
improvement phases, to removal from operation. QRA should be used in any cost-effective
decision-making process which involves risks to people and the environment.

Pitfalls / debate

The accuracy of the QRA results depends on the availability and reliability of data used as
input. For a new design or process, data from similar sources can be adapted and the results
are more useful for comparative risk analysis rather than absolute risk evaluations.

Key references

Kaplan, S., & Garrick, B. J. (1981). On the quantitative definition of risk. Risk Analysis, 1(1),
11-27.
Bedford, T., & Cooke, R.M. (2001). Probabilistic risk analysis: Foundations and methods.
Cambridge University Press.

Key articles from TPM-researchers

Ale, B.J.M., Bellamy, L.J., van der Boom, R., Cooke, R.M., Duyvism, M., Kurowicka, D.,
Lin, P.H., Morales, O., Roelen, A., & Spouge, J. (2009). Causal model for air transport safety.
Final Report.
Hanea, D.M. (2009, November). Human risk of fires: Building a decision support tool using
Bayesian networks (PhD thesis). Safety Science Group, TBM.

Description of a typical TPM-problem that may be addressed using the theory / method

QRA can be applied to any problem involving risk-based decision making, in all kinds of
fields, such as transportation, infrastructure, and occupational safety.

Related theories / methods

Some of the techniques used when performing QRA are: Fault Tree Analysis (FTA), Event
Tree Analysis (ETA), Bayesian Belief Nets (BBNs), Expert Judgment, accident statistics.

Editor

Daniela Hanea (d.m.hanea@tudelft.nl)













REAL OPTIONS ANALYSIS AND DESIGN (FLEXIBLE DESIGN)


Brief discussion

Real options analysis and design (ROA) is a way to design flexible artefacts that are able to
deal with future uncertainties already in the current design. ROA treats uncertainties in the
future as opportunities instead of threats. ROA is able to calculate the value of flexibility:
given the uncertainties, what is the value of designing a flexible artefact that can respond to
future developments? The value of a project calculated using ROA is typically higher than
the value obtained with conventional net present value (NPV) techniques, since flexibility
under uncertainty has value. The more volatile the surroundings are, the more value is to be
gained from a flexible design.

The method is based on financial options analysis.
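
A minimal one-period, risk-neutral sketch in Python of why flexibility has value (all costs,
payoffs, and probabilities are hypothetical, and discounting is omitted for brevity):

# Two equally likely demand states; the bare bones of a binomial valuation.
p_high = 0.5                              # probability of high demand
payoff = {"high": 200.0, "low": 40.0}     # gross project value per state
cost_rigid = 100.0                        # build full capacity now
cost_small = 60.0                         # build a small, expandable design now
cost_expand = 50.0                        # exercise the expansion option later

# Conventional NPV of the rigid design: commit to everything up front.
npv_rigid = p_high * payoff["high"] + (1 - p_high) * payoff["low"] - cost_rigid

# Flexible design: expand only if demand turns out high, otherwise keep
# the small configuration (the option is exercised only when it pays off).
value_high = payoff["high"] - cost_expand     # expand: 150
value_low = payoff["low"]                     # do not expand: 40
npv_flexible = p_high * value_high + (1 - p_high) * value_low - cost_small

print(f"rigid NPV:            {npv_rigid:.1f}")       # 20.0
print(f"flexible NPV:         {npv_flexible:.1f}")    # 35.0
print(f"value of flexibility: {npv_flexible - npv_rigid:.1f}")

The flexible design is worth more because the expansion option is exercised only in the state
of the world where it pays off; it is this asymmetry that a conventional NPV calculation over
a rigid design cannot capture.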

Applicability

ROA can be used for the valuation of any design alternative under uncertainty. It can be
applied to the design of any artefact, given enough data on the uncertainties. One would need
proper screening models of the artefact's alternative designs.

Pitfalls

There are not always options available to a designer, particularly in the case of the design of
large-scale tangible artefacts.

The sheer amount of number crunching may give a false idea of certainty.

Key references

Amram, M., & Kulatilaka, N. (1999). Real options: Managing strategic investment in an
uncertain world. Boston: Harvard Business School Press. ISBN 0-87584-845-1.
Copeland, T. E., & Antikarov, V. (2001). Real options: A practitioner's guide. New York:
Texere. ISBN 1-587-99028-8.
Dixit, A., & Pindyck, R. (1994). Investment under uncertainty. Princeton: Princeton
University Press. ISBN 0-691-03410-9.
Moore, W. T. (2001). Real options and option-embedded securities. New York: John Wiley &
Sons. ISBN 0-471-21659-3.
Müller, J. (2000). Real option valuation in service industries. Wiesbaden: Deutscher
Universitäts-Verlag. ISBN 3-824-47138-8.
Smit, T.J., & Trigeorgis, L. (2004). Strategic investment: Real options and games. Princeton:
Princeton University Press. ISBN 0-691-01039-0.
Trigeorgis, L. (1996). Real options: Managerial flexibility and strategy in resource
allocation. Cambridge: The MIT Press. ISBN 0-262-20102-X.
DeNeufville, S. (2009, forthcoming). Flexible design. MIT Press.


Key articles from TPM-researchers

/

Description of a typical TPM-problem that may be addressed using the theory / method

Design of a syngas infrastructure, the OV chip card, the Zuid-As, etc.

Related theories / methods

NPV (and other economic valuation methods), Monte Carlo simulation

Editor

Paulien Herder (p.m.herder@tudelft.nl)



































SAFETY AUDITING: THE DELFT METHOD

Brief discussion

Auditing is a technique originating from the financial world, concerned with examining,
verifying, and/or correcting financial accounts, e.g. of organisations. Applied to organisations
and safety, the term pertains to the assessment of the quality of an organisation's safety
management system, i.e. assessing which sub-systems the organisation has in place to control
safety, and to what extent these systems are working and effective in practice.

There are various approaches to safety auditing that differ in scope, depth and theoretical
underpinning. At the Safety Science Group in Delft, a method has been developed based on
the group's own safety management model. In this model three cycles are applied: the
lifecycle of barriers, the Deming cycle as a method of implementing and improving
processes, and the Nertney wheel to identify key safety management areas.

Applicability

A safety audit only makes sense in companies running production processes that bring along
significant hazards, and their associated risks, and that already have various systems in place
to control these hazards and risks. However, the choice of where to look for weak spots in the
safety management system remains a challenge. Consequently, although major safety audits
show significant overlap, a comparison would also reveal differences in focus and emphasis.
Some of these differences are related to the industry the audit is developed for, e.g. the
(petro-)chemical industry, heavy industry or nuclear installations. The Delft audit method has
been developed for the chemical industry but claims a much wider applicability. However,
currently it is still very much an expert tool, to be used by experienced auditors.

Pitfalls / debate

Currently, the Delft safety audit lacks clear norms against which to compare a company's
safety management system. In this regard, the method still relies very much on the experience
of the user, i.e. the auditor.

Key references

Bellamy, L. J., Papazoglou, I. A., Hale, A. R., Aneziris, O. N., Ale, B. J. M., Morris, M. I., et
al. (1999). I-Risk: Development of an integrated technical and management risk control and
monitoring methodology for managing and quantifying on-site and off-site risks. Contract
ENVA-CT96-0243. Report to European Union. Den Haag: Ministry of Social Affairs and
Employment.
Det Norske Veritas. (undated). ISRS7 PSM [Electronic version]. Retrieved April 30, 2009 from
http://www.dnv.com/binaries/isrs7%20PSM%20Brochure%20rev%207_tcm4-273420.pdf.
Hale, A. R., & Guldenmund, F. W. (2004). Aramis audit manual (version 1.2). Delft: Safety
Science Group, Delft University of Technology.
Hourtolou, D., & Salvi, O. (2003). ARAMIS Project: Development of an integrated Accidental
Risk Assessment Methodology for IndustrieS in the framework of Seveso II Directive. Paper
presented at the ESREL 2003, Maastricht, June 16-18.
International Loss Control Institute. (1984). International safety rating system. (Fifth revised
edition). Georgia: Institute Publishing.
Oh, J. I. H., Brouwer, W. G. J., Bellamy, L. J., Hale, A. R., Ale, B. J. M., & Papazoglou, I. A.
(1998). The I-Risk project: Development of an integrated technical and management risk
control and monitoring methodology for managing and quantifying on-site and off-site risks.
In A. Mosleh & R. A. Bari (Eds.), Probabilistic safety assessment and management (pp.
2485-2491). London: Springer.

Key articles from TPM-researchers

Guldenmund, F. W., Hale, A. R., & Bellamy, L. J. (1999). The development and application
of a tailored audit approach to major chemical hazard sites. Paper presented at the SEVESO
2000. Risk Management in the European Union of 2000: The Challenge of Implementing
Council Directive 96/82/EC "SEVESO II", Athens, November 10-12.
Guldenmund, F. W., Hale, A. R., Goossens, L. H. J., Betten, J. M., & Duijm, N. J. (2006).
The development of an audit technique to assess the quality of safety barrier management.
Journal of Hazardous Materials, 130(3), 234-241.
Hale, A. R., Guldenmund, F. W., Bellamy, L. J., & Wilson, C. (1999). IRMA: Integrated Risk
Management Audit for major hazard sites. In G. I. Schueller & P. Kafka (Eds.), Safety &
reliability (pp. 1315-1320). Rotterdam: Balkema.

Description of a typical TPM-problem that may be addressed using the theory / method

- Periodical assessment of major hazard sites by inspectorates (the Delft audit method has
been used for this purpose with practical results).

- Mapping of a safety management system onto the generic system of the Delft method to
see whether it is complete or whether elements are lacking (see PhD project P.H. Lin).

Related theories / methods

Safety management, Safety culture, Risk management/ control, Qualitative research,
Organisational assessment.

Editor

Frank Guldenmund (f.w.guldenmund@tudelft.nl).












SCENARIO APPROACH (FOR DEALING WITH UNCERTAINTY ABOUT THE
FUTURE)

Brief discussion

Scenarios
The use of the term scenario as an analytical tool dates from the early 1960s, when
researchers at the RAND Corporation defined states of the world within which alternative
weapons systems or military strategies would have to perform. Since then, their use has
grown rapidly, and the meanings and uses of scenarios have become increasingly varied. We
use Quade's (1989) definition: 'A description of the conditions under which the system or
policy to be designed, tested, or evaluated is assumed to perform.'

Scenarios are stories of possible futures, based upon logical, consistent sets of assumptions,
and fleshed out in sufficient detail to provide a useful context for engaging planners and
stakeholders. A scenario for a policy analysis study includes assumptions about developments
within the system being studied and developments outside the system that affect the system,
but excludes the policy options to be examined. Because the only sure thing about a future
scenario is that it will not be exactly what happens, several scenarios, each based on a different
set of developments, are constructed to span a range of plausible and relevant futures. No
probabilities are attached to the futures represented by each of the scenarios. They have an
exploratory function, not a predictive function. Scenarios do not tell us what will happen in
the future; rather they tell us what can (plausibly) happen. They are used by policy analysts
and policymakers to prepare for the future: to identify possible future problems, and to
identify robust (static) policies for dealing with the problems.

Scenario development
Scenario development is the methodology used to develop scenarios. There are numerous
ways to develop scenarios; here we present the traditional quantitative approach, based upon
the scenario-axes technique (Van 't Klooster & Van Asselt, 2006). This is a method first
described by Schwartz (1996), applied by RAND Europe (1997), and currently much in use.

The steps in scenario development, shown in the figure below, are the following:
1. Develop a system diagram and define outcomes of interest: the system diagram provides a
framework for exploring the issues related to the policy problem. It identifies the system
boundary and policy domain, and it includes elements that affect or influence the
outcomes of interest.
2. Identify forces driving structural changes (FDSCs): i.e. those forces outside the system
that act on the system and may lead to changes in its functioning and, as such, to changes
in the outcomes of interest. These forces are outside the control of the system's
policymakers.
3. Identify system changes, and logical connections between the forces, the system changes,
and the outcomes of interest.
4. Identify forces and system changes that are uncertain.
5. Identify forces and system changes that are relevant: the relevance of a force is based on
whether the resulting change in one or more of the outcomes of interest is high or low
(qualitative estimates). If there are several outcomes of interest, some forces may lead to
both high and low impacts. If a force has a high impact with respect to any of the
outcomes of interest, it should be categorized as relevant.
6. Develop scenarios: Fairly certain relevant FDSCs are included in all scenarios. Forces that
are uncertain and relevant are candidates for developing scenarios (output from previous
two steps). Forces for each scenario are selected so that they form a consistent story
(logical connections identified in Step 3 are important). Each scenario is described with
assumptions about forces and resulting changes.
7. Quantify the scenarios: A quantitative estimation gives a detailed picture of a scenario and
provides insight into its associated problems. Quantification also enables the scenario
variables to become inputs into system models used to estimate the impacts of policy
changes.

Scenarios that are developed should meet the criteria of plausibility, credibility, and
consistency.
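
As a small illustration of Step 6, the Python sketch below (driving forces and levels are
hypothetical) crosses two uncertain, relevant forces into four scenario skeletons, while a
fairly certain force is carried into every scenario:

from itertools import product

# Hypothetical forces for a mobility study.
fairly_certain = ["population ageing"]        # included in all scenarios
uncertain_relevant = {                        # the two scenario axes
    "economic growth": ["low", "high"],
    "fuel price":      ["low", "high"],
}

# Crossing the uncertain, relevant forces yields four scenario skeletons;
# each still needs a consistency check, a narrative, and quantification.
for i, combo in enumerate(product(*uncertain_relevant.values()), start=1):
    assumptions = dict(zip(uncertain_relevant, combo))
    print(f"scenario {i}: {assumptions}, plus certain forces: {fairly_certain}")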


Figure: The scenario development process (from RAND Europe, 1997)

Key references

Van Beek, F.A. (2007). Denken in scenario's: Onzekerheid beheersen [Thinking in scenarios:
Managing uncertainty]. Den Haag: Kennisinstituut voor Mobiliteitsbeleid.
Van der Heijden, K. (1996). Scenarios: The art of strategic conversation. Chichester: Wiley.
Kahn, H., & Wiener, A. (1967). The year 2000: A framework for speculation on the next
thirty-three years. New York: Macmillan.
Van 't Klooster, S.A., & Van Asselt, M.B.A. (2006). Practicing the scenario-axes technique.
Futures, 38(1), 15-30.
Quade, E.S. (1989). Analysis for public decisions (3rd edition). Amsterdam: Elsevier Science
Publishers B.V.
RAND Europe (1997). Scenarios for examining civil aviation infrastructure options in the
Netherlands. DRU-1513-VW/VROM/EZ, RAND, Santa Monica.
Schwartz, P. (1996). The art of the long view. New York: Currency Doubleday.

Description of a typical TPM-problem that may be addressed using the theory / method

Scenarios can be used in practically any policy analysis study involving uncertainty about the
future.

Please note that the course EPA-2132, which deals with handling uncertainties in policy
analysis, discusses a wide range of methods for long-term policymaking. Among other things,
considerable attention is given to developing and using scenarios.

Editors

Warren Walker (W.E.Walker@tudelft.nl)
Vincent Marchau (V.A.W.J.Marchau@tudelft.nl)
Jan-Willem van der Pas (J.W.G.M.vanderPas@tudelft.nl)

































SERIOUS GAMING (SIMULATION AND GAMING; GAMING FOR PROFESSIONAL
PRACTICE AND POLICY EVALUATION)


Brief discussion

Serious games induce learning experiences at the individual, organisational or systemic level.
These experiences include gaining insight into complex matters, rehearsing the future,
expanding knowledge, and developing one's frame of reference. The gaming group's research
relies heavily on simulation science, facilitation expertise, and immersive technology. Serious
gaming provides solutions for professional training, policy development, and (higher)
education by building physical and digital serious games.

Applicability

There are many applications for serious gaming, ranging from raising awareness to decision
support. Its main strengths are the ability to let participants switch actor positions, to
visualize complex matters, and to fuel stakeholder participation through competition.
Successful implementations of serious gaming are seen in supply chain management, public
policy development, building and construction planning, military training and risk management.

Pitfalls / debate

Serious gaming relies on valid system models. Constructing these models can be very
complex, especially when the models need to be as close to reality as possible. However, this
pitfall is shared with other domains related to modelling and simulation. Serious game design
can also be very expensive, certainly when a 3D environment is needed in the game.
Although it might not be seen as a pitfall, social acceptance of playing a game to learn
something is still a big hurdle to overcome.

Key references

Abt, C. (1970). Serious games. New York: Viking.
Duke, R.D., & Geurts, J. L.A. (2004). Policy games for strategic management: Pathways into
the Unknown. Amsterdam: Dutch University Press.
Klabbers, J. (2008). The magic circle: Principles of gaming & simulation (2nd edition).
Rotterdam: Sense Publishers.

Key articles from TPM-researchers

Mayer, I.S. (forthcoming). The gaming of policy and the politics of gaming: A review. To
appear in: Simulation & Gaming: An Interdisciplinary Journal of Theory, Practice and
Research.
ISAGA (2009). Introducing a selection method of (C)OTS game engines for serious games.
ISAGA (2009). Technology and skills for 3D virtual reality serious gaming.
Kortmann, L.J., & Harteveld, C. (2009). Agile game development: Lessons learned from
software engineering. In: Proceedings of the 40th ISAGA conference, Singapore, ISBN 978-
981-08-3769-3.


Description of a typical TPM-problem that may be addressed using the theory / method

Recently a game, Construct it!, was created for harbour planning; it facilitates the process of
planning the layout and timetable of a harbour in the Netherlands. Because of the complex
nature of harbour planning, a tool had to be developed that would teach people the most
important facets of harbour planning, which is a multidisciplinary problem that involves many
stakeholders.

Related theories / methods

Serious gaming is sometimes called gaming simulation because of its heritage in simulation,
which also has another offspring, namely interactive simulation. Some of the underlying
theories come from simulation, but others come from learning, decision science or
collaboration engineering.

Editors

Rens Kortmann (l.j.kortmann@tudelft.nl)
Ronald Poelman (r.poelman@tudelft.nl)




























STATED PREFERENCE EXPERIMENTS (ALSO KNOWN AS CONJOINT ANALYSIS,
CONJOINT EXPERIMENTS, OR STATED CHOICE EXPERIMENTS/METHODS)


Brief discussion

Stated preference experiments are a family of data-collection methods intended to examine
the trade-offs made in decision-making behaviour. The quintessence of this method is that
hypothetical profiles are constructed that describe decision alternatives (e.g. products or
services) in terms of relevant characteristics called attributes. Based on statistical designs,
certain levels of the attributes are combined into profiles of decision alternatives that can be
shown to respondents. Typically, a sample of citizens or consumers is requested to evaluate
each profile on a rating scale or to choose between two or more presented profiles. By
applying regression or logit models, the observed responses are then decomposed into the
part-worth (or marginal) utility contributions of the attribute levels to the overall utility
derived from the alternatives. The resulting utility function may be applied, for example, to
predict changes in behaviour due to alternative planning or policy measures.
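
A minimal Python sketch of this decomposition for a rating-based experiment (the attributes,
effects coding, and ratings below are hypothetical; real studies use fractional factorial designs
and many respondents):

import numpy as np

# Hypothetical 2x2 full factorial: travel time (30/40 min) x cost (2/4 euro),
# effects-coded (-1/+1) so that part-worths sum to zero per attribute.
profiles = [(30, 2), (30, 4), (40, 2), (40, 4)]
X = np.array([[1, -1 if t == 30 else 1, -1 if c == 2 else 1]
              for t, c in profiles])          # intercept, time code, cost code
ratings = np.array([8.0, 6.0, 5.0, 2.0])      # one respondent's ratings

# Decompose the ratings into part-worth utilities by ordinary least squares.
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)
intercept, pw_time, pw_cost = beta
print(f"utility of 40 vs 30 min: {2 * pw_time:+.2f}")   # negative: longer is worse
print(f"utility of 4 vs 2 euro:  {2 * pw_cost:+.2f}")   # negative: dearer is worse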

Applicability

As stated preference experiments make use of hypothetical profiles, the method is convenient
for examining preferences for products or services that do not yet exist. In particular, it
should be used if separate and independent survey questions on preferences for individual
attributes would not result in valid responses. For instance, preference for price cannot be
validly measured if the functionality or quality of a product or service is not specified. Thus
the method is especially useful if one is interested in examining trade-offs among the
attributes of decision alternatives.

Pitfalls / debate

One should be aware that this method measures stated responses, i.e. how respondents intend
to behave if the alternatives shown to them were to become available in reality. There is
therefore always the question whether people will actually exhibit that behaviour in reality.
One reason for a possible gap is that full information is provided in the experiments, which
may not be available in reality: decision makers may not be aware of the existence of
alternatives once they become available, or may have different perceptions of the values of
their attributes. Another reason may be that stating that one will change behaviour is easier in
an experiment than actually doing so in reality. Estimated utility models may therefore be
biased and should preferably be validated against, or combined with, revealed behaviour.

As indicated above, one can observe ratings and choices with respect to the constructed
profiles. Rating tasks in principle allow one to estimate a model for each individual decision
maker, which is convenient in small populations of decision makers. However, one should be
aware that these individual models are rather unreliable and that they cannot predict choice
behaviour without making non-testable ad hoc assumptions on how rating evaluations are
related to choices. In general, choice tasks are considered to result in more valid utility
models.

Key references


Louviere, J.J. (1988). Analyzing decision making: Metric conjoint analysis. Sage University
Paper, Series on Quantitative Applications in the Social Sciences, No.07-067. Beverly Hills:
Sage Publications.
Louviere, J.J., Hensher, D.A. & Swait, J.D. (2000). Stated choice methods: Analysis and
applications. Cambridge University Press.

Key articles from TPM-researchers

Bos, I.D.M., Van der Heijden, R., Molin, E.J.E. & Timmermans, H.J.P. (2004). The choice of
park & ride facilities: An analysis using a context-dependent hierarchical choice experiment.
Environment and Planning A, 36(9), 1673-1686.

Description of a typical TPM-problem that may be addressed using the theory / method

In the paper by Bos et al. (2004), the method is successfully applied to model the trade-offs
car drivers make with respect to Park and Ride facilities. The model is applied to predict the
likely effects of alternatively designed Park and Ride facilities and accompanying policy
measures on the modal split. In general, stated preference experiments are widely applied in
transport, in particular to model mode choice behaviour and willingness to pay for travel time
savings. However, the method may also be useful in other TPM domains to predict
behavioural responses to newly introduced or changed products or services: for example, to
examine the switching behaviour of consumers with respect to energy providers, or to predict
citizens' preferences for new (e-governance) ICT services.

Related theories / methods

- Observed choices are analysed by applying discrete choice models, which are also
discussed in this catalogue.
- Stated preference experiments are often contrasted with revealed preference methods, which
observe revealed responses, i.e. choices already made in real markets.

Editor

Eric Molin (e.j.e.molin@tudelft.nl)












SADT - STRUCTURED ANALYSIS AND DESIGN TECHNIQUE (ALSO KNOWN AS
IDEF0, INTEGRATION DEFINITION FOR FUNCTION MODELLING)


Brief discussion

The Structured Analysis and Design Technique (SADT) is a functional modelling technique
for processes that are embedded in systems. It gives structure to system analysis by linking
system activities through information flows, and it enables hierarchical structures through
system-in-system analysis (like a Russian doll). In SADT, every activity transforms an input
variable into an output variable; this is the process being modelled. In order for the process to
run, mechanisms (e.g. tools or machines) are required, as well as controls that govern the
process (e.g. a rulebook). These are also explicitly modelled. The method can give clear
insight into system processes as it can display them at different levels of detail. Activities in
the process can be described at a high level without exact knowledge of how they are
performed in detail; this can be elaborated at a further level of detail. The method makes sure
that the views on different levels of abstraction are consistent. The emphasis therefore is on
what function should be performed, not on how this is done or which technologies are being used.
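
The box-and-arrow structure translates directly into a small data structure; the Python sketch
below (using the railway example discussed under Pitfalls below, with hypothetical labels)
shows one way to represent activities with their ICOM arrows and their level-(n+1)
decomposition:

from dataclasses import dataclass, field

@dataclass
class Activity:
    # One SADT box: a function transforming inputs into outputs, governed
    # by controls and carried out by mechanisms (the ICOM arrows).
    name: str
    inputs: list
    controls: list
    outputs: list
    mechanisms: list
    children: list = field(default_factory=list)   # level n+1 decomposition

# Level n: the most abstract function of the (hypothetical) rail example.
transport = Activity(
    name="change position of passengers",
    inputs=["passengers at A"], outputs=["passengers at B"],
    controls=["timetable", "rulebook"], mechanisms=["trains", "staff"],
)
# Level n+1: a sub-process; a consistency check could verify that the
# children's combined ICOM arrows match those of the parent box.
transport.children.append(Activity(
    name="operate trains",
    inputs=["passengers at A"], outputs=["passengers at B"],
    controls=["timetable"], mechanisms=["trains"],
))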



Applicability

Given its functional approach and its layered structure, the method can be applied to a wide
variety of systems. Its focus is on system functions and information flows: all arrows between
functions must be explicitly named and denote an information flow. This prevents the method
from having bi-directional arrows between activities that merely indicate that there is
interaction; SADT forces the researcher to think about what exactly is exchanged and how
this is used further in the receiving process.

Pitfalls / debate

A difficulty is that SADT can show activities, but it cannot easily model sub-processes that
concern decision making (if-then choices) and how such choices are made.


Another difficulty is that the functions in their purest form, albeit useful, are sometimes
difficult to grasp. For example, the key process in railway operations is to transport
passengers from A to B. At the highest level of abstraction, the activity that changes inputs
into outputs is therefore a transformation of the passenger's position. It is easily misperceived
as 'transport passengers' (a process which does not have clearly defined inputs and outputs),
or even as 'operate trains', which is not the highest possible level of abstraction, but a sub-
process that can be depicted at a more detailed level.

At the same time, this weakness induces a strong point: the method forces the analyst to think
at abstraction levels that most other methods do not.

Key references

Marca, D.A., & McGowan, C.L. (1988). SADT: Structured Analysis and Design Technique.
New York: McGraw-Hill.
NIST. (1993, December 21). Announcing the standard for Integration Definition for Function
Modeling (IDEF0), draft: Federal Information Processing Standards Publication 183.

Key articles from TPM-researchers

Van den Top, J., & Jagtman, H. M. (2007). Modelling risk control measures in railways.
Paper presented at the ESREL 2007 - Risk, Reliability and Societal Safety, Stavanger,
Norway.

Description of a typical TPM-problem that may be addressed using the theory / method

The method is useful for the analysis of activities in safety systems. The method can be used
for any other system.

Related theories / methods

The method relates to control theory (e.g. feedback / feed forward) and to systems analysis.

Editor

Jaap van den Top (j.vandentop@tudelft.nl)














STRUCTURAL EQUATION MODELLING

Brief discussion

Structural Equation Modelling (SEM) is a powerful multivariate analysis technique that
allows modelling complex causal (linear) relations between variables. In a way, SEM can be
regarded as a generalization of regression analysis and offers solutions for two of its
limitations. First, while regression analysis is able to control for correlations among
explanatory variables, it is not able to model causal relations among those variables. The main
feature of SEM is that it allows simultaneous estimation of a series of related equations, in
which a variable can be an independent variable in one equation while being a dependent
variable in another (analogous to path analysis). This makes it possible to disentangle causal
relations between variables into direct and indirect effects, as well as to test for spurious
effects. Second, regression analysis can only handle variables that are assumed to be measured
without measurement error. As this assumption practically never holds, certainly not in social
research, this results in biased estimates. SEM allows one to include latent variables that are
measured by a series of indicator variables (analogous to factor analysis), thereby correcting
for measurement error and yielding unbiased estimates.
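
In the standard notation of the LISREL tradition (cf. Bollen, 1989), the full model combines a
structural part, relating the latent variables to each other, with measurement parts relating the
latent variables to their observed indicators:

\eta = B \eta + \Gamma \xi + \zeta, \qquad y = \Lambda_y \eta + \varepsilon, \qquad x = \Lambda_x \xi + \delta,

where \eta and \xi are the latent endogenous and exogenous variables, B and \Gamma contain the
structural path coefficients, \Lambda_y and \Lambda_x the factor loadings, and \varepsilon and \delta are precisely
the measurement errors that ordinary regression analysis ignores.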

Applicability

SEM is especially suitable when the hypothesized causal model consists of direct, indirect and
spurious causal paths between variables and/or variables are (subjective) complex constructs,
like beliefs, attitudes, and behavioural intentions.

Pitfalls / debate

SEM is basically a confirmatory modelling technique. This means that the specification of the
model must be theory-driven. The reason is that a good model fit only means that the model is
not rejected: one can always specify alternative models that fit the data equally well.
Furthermore, it should be realized that when SEM applications are based on cross-sectional
data, causal relationships between variables can never be proven empirically. One needs
theory to interpret estimated relations in causal terms. Thus, estimating a complex causal
model is always an interplay between model results and theory.

Key references

Bollen, K.A. (1989). Structural equations with latent variables. New York: John Wiley and
Sons.
McDonald, R., & Ho, M-H.R. (2002). Principles and practice in reporting structural equation
analyses. Psychological Methods, 7(1), 64-82.
Weston, R., & Gore, P.A. (2006). A brief guide to structural equation modelling. The
Counseling Psychologist, 34(5), 719-751.

Key articles from TPM-researchers

Kroesen, M., Molin, E.J.E., & Van Wee, B. (2008). Testing a theory of aircraft noise
annoyance: A structural equation analysis. Journal of the Acoustical Society of America,
123(6), 4250-4260.
Molin, E.J.E., & Brookhuis, K.A. (2007). Modeling acceptability of the intelligent speed
adapter. Transportation Research Part F, 10(2), 99-108.

Description of a typical TPM-problem that may be addressed using the method

The example is taken from the PhD research of Maarten Kroesen. The part discussed here
intends to estimate the effects of aircraft noise exposure (in dB(A)) on residential satisfaction.
This causal relation is assumed to be indirect, mediated by noise annoyance. In addition,
other personal determinants of noise annoyance, such as age and noise sensitivity, are included
in the model. Age and aircraft noise exposure are assumed to be measured without measurement
error and are placed in rectangles, while the other (subjective) variables are each measured by
three indicator variables and placed in ovals (see the figure below). The sign, magnitude and
statistical significance of the hypothesized paths can be estimated by software packages such as
LISREL and AMOS, which require a correlation or covariance matrix between all measured
variables as input.



Related theories / methods

Regression analysis, path analysis, and factor analysis can be considered special cases of
SEM. Furthermore, SEM can be applied to estimate models based on panel data, time-series
data, multitrait-multimethod experiments, etc.

Editors

Eric Molin (E.J.E.Molin@tudelft.nl)
Maarten Kroesen (M.Kroesen@tudelft.nl)










[Figure: the hypothesized path model. Age, aircraft noise exposure and noise sensitivity affect
noise annoyance, which in turn mediates the effect of exposure on residential satisfaction;
age and exposure are drawn in rectangles, the latent variables in ovals.]
SUSTAINABLE DEVELOPMENT FOR (FUTURE) ENGINEERS (IN AN
INTERDISCIPLINARY CONTEXT)

Brief discussion

For years, the most used and widely accepted definition of sustainable development has been
that of Brundtland: sustainable development is development that meets the needs of the
present without compromising the ability of future generations to meet their own needs, in
which the needs of the world's poor have first priority. This definition is politically very
valuable, as no one can disagree with it.

However, when (future) engineers are asked what sustainable development is on the basis of
this definition, mostly silence remains. For engineers, a direct relation with their design and
its criteria is needed. To accomplish this, students gain affinity with the problem field through
a broad range of lectures covering topics such as: the (enhanced) greenhouse effect, resource
depletion, developing countries, water scarcity and abundance, energy (re)sources and
technologies, sustainable entrepreneurship, sustainable design, innovation processes, etc.
Furthermore, some tools are provided to approach sustainability challenges and to analyze the
sustainability of designs.

Next to that, the size of the challenge is illustrated by the I-PAT formula, which states
that the environmental Impact of humankind on the Earth equals the product of its
Population, the Affluence this population lives in and the environmental impact of the
Technology it uses. This formula illustrates that the challenge is to start living about a factor
of 20 more efficiently within the next fifty years, where 20 is not an absolute number but an
indication to elaborate upon. It means, for example, that in the future we will have to drive
300 km on 1 litre of fuel instead of 15 km, implying that we need a radically different design
of our transport system (less exhaust, dead weight, asphalt, etc.). Different approaches are
used to facilitate the long-term thinking process, such as backcasting, transition management,
system innovation and the functions of innovation systems approach.
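
The reasoning behind the factor 20 can be made explicit with a common back-of-the-envelope
derivation (the exact numbers are, as noted above, only an indication). Writing the formula as

I = P \times A \times T,

and assuming that the world population roughly doubles (P \times 2), that affluence in a more
equitable world grows about fivefold (A \times 5), and that the total environmental impact should
at least halve (I \times 1/2), the technology factor must satisfy

T_{new} / T_{old} = \frac{1/2}{2 \times 5} = \frac{1}{20},

i.e. the environmental impact per unit of consumption must improve by about a factor of 20,
consistent with the 15 km to 300 km per litre example.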

Once the variety and complexity of sustainable development are clear, the link to engineering
processes is made through interactive sessions such as discussions and workshops in an
interdisciplinary setting (led by experts). This setting guarantees the sharing of contrasting
viewpoints, as every discipline has its own focus and viewpoint, which clarifies the
complexity and variety of sustainability.

Four things will become clear. First of all, every project needs its own definition of
sustainability, based on a shared point of view. The process of reaching this shared point of
view might be even more important than the design process that follows, as a non-shared
solution will never make it to practice, and a good problem definition is half the answer to the
problem. Secondly, the interdisciplinary context of the process is very valuable, as each
participant contributes input that other participants would not think of, leading to a more
complete solution and to inspiration; next to that, everyone can do what he or she is best at.
Thirdly, all important stakeholders should be heard, to make sure the solution arrived at will
be valuable in practice. Fourthly, 'sustainable' does not exist, yet. Considering the complexity
and the size of the challenge, the world will not be sustainable tomorrow. It is a long-term,
intensive process requiring major changes on a technological, societal and cultural level.

Developing countries, especially in Africa, face the challenge of breaking through poverty
traps and climbing the development ladder. In general, this requires endogenous efforts, but
these can be supported and even initiated through cooperation with foreign partners.
Partnerships with NGOs, including universities, have an important role to play and form
frameworks for development initiatives, including socio-technical activities in which
engineers are required.

By and large, sustainable development incorporates a variety of socio-technical choices,
interests and values and, consequently, it should be governed through 'dilemma
management', aimed at reconciling contradictions and finding common ground.

Applicability

The method discussed is intensive and time-consuming. The main challenge is that there is no
ultimate solution: every context and every project requires a specific solution. This is also the
main problem in practice. People, in all parts of society, are eager for sustainable products,
applying everything that is labelled 'sustainable'. But 'sustainable' in itself does not exist, and
a more than incremental improvement of the situation, though often necessary and worth
striving after, takes time and effort, because much is still to be invented and implemented.

Providing a broad overview of the problem field and of tools that can be used already creates
a more critical attitude towards current products and designs. Next to that, it becomes easier to
pick up information from the field, as the link to valuable information and its translation to
design practices are easier to make.

However, the best result is acquiring the skills of interdisciplinary cooperation and the
understanding needed to make a design (process) really work, and to arrive at a shared
(working) definition of sustainable development.

Pitfalls / debate

As there is not one solution, it is impossible to get to a sustainable solution without proper
interdisciplinary interaction and a broad overview of the problem field concerning sustainable
development. This means that a coach, manager or facilitator of the process is needed who is
not willing to enforce his or her own opinion.

Key references

Brundtland, G.H. (Ed.). (1987). Our common future: The World Commission on Environment
and Development. Oxford: Oxford University Press.
Mulder, K.F. (Ed.). (2006). Sustainable development for engineers: A handbook and resource
guide. Sheffield: Greenleaf Publishing Ltd.
Sachs, J. (2008). Common wealth: Economics for a crowded planet. New York: Penguin.

Key articles from TPM-researchers

De Werk, G., & Mulder, K.F. (2004). Interdisciplinary education as a way of shaping real
sustainable engineers? EESD Conference (Engineering Education for Sustainable
Development), Delft
De Werk, G., & Kamp, L. M. (2008). Evaluation of the sustainable development graduation
track at Delft University of Technology. European Journal of Engineering Education, 33(2),
221-229.
Mulder, K.F. (Ed.). (2006). Sustainable development for engineers: A handbook and resource
guide. Sheffield: Greenleaf Publishing Ltd.
De Eyto, A., & De Werk, G. (2008). Connecting silos: Ways to facilitate multidisciplinary
learning for sustainable development. EESD Conference (Engineering Education for
Sustainable Development).

Description of a typical TPM-problem that may be addressed using the theory / method

How to see to it that socio-technical problem-solving serves sustainable development?

Related theories / methods

Backcasting, Functions of Innovation Systems, System Innovation and Transition
Management, Microtraining, Value Driven History / Value Sensitive Development.

Editor

Gertjan de Werk (g.dewerk@tudelft.nl)




























SYSTEMS INTEGRATION
(also referred to as multifunctional systems)


Brief discussion

Sustainability issues, both ecological and social, become ever more pressing, objectively and
subjectively. Consequently, socio-technical systems are challenged to develop further in order
to meet new necessities and requirements. A frequent response to these problems is to
optimize socio-technical systems by integrating them. There are various examples of systems
integration. What is now considered state-of-the-art in domestic waste management, for
instance, is to burn waste in a combined heat and power plant. The waste management system
thus becomes integrated with the electricity and district heating systems. Another example is
the research being done to couple electric vehicles to the electricity grid. This 'vehicle-to-grid'
concept can be seen as an integration of the transport system with the electricity system.

Systems integration may be facilitated by factors such as the existing infrastructure and
compatibility with incumbent technologies or search heuristics. Moreover, when systems
integration takes place on a large scale, it has consequences at the technical, organizational,
and institutional level. First, as technologies become more inter-connected, the actors
associated with these technologies become more inter-dependent on each other. Second,
systems integration may also affect the power balance between different stakeholders in the
systems. Finally, it may also require new modes of governance.

Applicability

Many systems in society have been integrated in the past or are specially designed as such.
Optimizing systems by integration involves multi-criteria optimization in which the interests
of different actors and stakeholders have to be taken into account. Studying systems
integration, its implications and consequences is necessary to assess the long term
consequences of this type of response to sustainability problems and ensure informed
decision-making.

Pitfalls / debate

When systems are integrated, they become more complex, more inter-dependent and they
involve more competence areas and more actors and stakeholders. As such they may also be
more difficult to change and a situation of lock-in may result.

Key references

Raven, R., & Verbong, G. (2008). Boundary crossing innovations: Case studies from the
energy domain. Technology in Society, 31(1), 85-93.

Key articles from TPM-researchers

Vernay, A.B.H., Mulder, K.F., & Hemmes, K. (2009). A reflection on the consequences of
multifunctionality on long-term sustainability with district heating as a case study. In H.
Riisgaard (Ed.), Joint actions on climate change (pp. 1-11). Aalborg: Aalborg University.
Vernay, A.B.H., Hemmes, K., & Mulder, K.F. (2009). Impacts of multifunctionality on
innovation processes: The case of district heating in the Netherlands. In K.N.D. van Egmond
& W.J.V. Vermeulen (Eds.), Taking up the global challenge. Utrecht: Utrecht University.

Description of a typical TPM-problem that may be addressed using the theory / method

Evaluating the risk of lock-in of a given response of socio-technical system builders to
sustainability problems, in order to help decision makers take informed decisions. Other
relevant problems relate to optimization issues from a variety of interests and perspectives.

Related theories / methods

System innovation, Multisource multiproduct (energy) systems, Quantitative analysis.

Editors

Kas Hemmes (k.hemmes@tudelft.nl)
Wim Ravesteijn (w.ravesteijn@tudelft.nl)
Anne Lorne Vernay (a.b.h.vernay@tudelft.nl)





























TECHNOLOGY ASSESSMENT

Brief discussion

Technology Assessment aims at surveying the social consequences of new technologies,
assessing these consequences and taking action where necessary, including stopping further
technology development. In the 1960s and 1970s, TA was aimed at the systematic
identification, analysis and evaluation of the potential second-order consequences of
technology (whether beneficial or detrimental), as opposed to its planned, direct first-order
consequences, in terms of its impacts on social, cultural, political, economic and environmental
systems and processes. The idea was that it would be an instrument providing neutral, factual
input into decision-making processes. Technology Assessment was seen as an early warning
system to detect, control, and direct technological changes and developments so as to
maximize the public good while minimizing the public risk. As such, TA fell short of
expectations, and this led to less ambitious concepts like Constructive TA and Participatory TA
or Backcasting, which do much more justice to the possibility of diverging technology
development trajectories and the desirability of involving a range of actors. The number of
users of TA has multiplied; it was especially adopted in large enterprises, in combination with
the application of scenario analysis. A simple approach to TA embraces the following steps:
1) Defining problem and research questions, e.g. for whom, problem versus technology
oriented; 2) Exploring and foresighting technological developments; 3) Technology impact
assessment; 4) Normative judgment; 5) Generating improvement options.

Applicability

The method is applicable for a variety of actors, including government agencies and
businesses. Its assumption is that technology is makeable and its development steerable by
actors. As such it deviates from Technology Forecasting, which is much more based on the
idea that technology develops autonomously and that we can learn about it from an outside
view. In view of what we know of Technology Dynamics, TA has an optimistic undertone,
which is not always justified in the light of the systems development aspect of modern
technology (see e.g. socio-technical system building).

Pitfalls / debate

The main problem is the Collingridge dilemma: newly emerging technologies are steerable,
but knowledge with regard to their possible effects is scarce; knowledge of these consequences
increases with the further development of the technology, which, however, goes together with
decreasing possibilities for control. Other problems concern the uncertainties and complexities
involved.

Key references

Mulder, K. (Ed.) (2006). Sustainable development for engineers: A handbook and resource
guide. Sheffield: Greenleaf Publishing Ltd.

Key articles from TPM-researchers

Van den Ende, J.C.M., Mulder, K.F., Knot, J.M.C., Moors, E.H.M., & Vergragt, P.J. (1998).
Traditional and modern technology assessment: Toward a toolkit. Technological Forecasting
and Social Change, 58, 5-21.
Mulder, K. (Ed.) (2006). Sustainable development for engineers: A handbook and resource
guide. Sheffield: Greenleaf Publishing Ltd.
Mulder, K. (2001). TA und Wirtschaft: Die Niederlande. In N. Malanowski, C. P. Kruck, &
A. Zweck (Eds.), Technology assessment und wirtschaft (pp.131-146). Frankfurt/New York:
Campus Verlag.

Description of a typical TPM-problem that may be addressed using the theory / method

How could technology development be controlled in such a way that relevant stakeholders
and actors benefit from it instead of suffering damage from it?

Related theories / methods

Backcasting

Editor

Wim Ravesteijn (w.ravesteijn@tudelft.nl)


























TECHNOLOGY TRANSFER & SUPPORT
(also: technology diffusion and adaptation)


Brief discussion

'Technology transfer' is the generally used concept, but because transfer of technology
always involves adaptation to new settings and incorporation in local or endogenous
development processes, the extension 'support' is justified and necessary. Technology
transfer and support is a term from the development jargon, referring to technical assistance.
Though the development debate is dominated by economists, there is an increasing awareness
among development experts, including economists, of the significance of technology. This
awareness goes back to the words of President Truman in his inaugural address in 1949:

"We must embark on a bold new program for making the benefits of our scientific advances
and industrial progress available for the improvement and growth of underdeveloped areas.
The old imperialism -- exploitation for foreign profit -- has no place in our plans. What we
envisage is a program of development based on the concepts of democratic fair dealing."

This statement marks the beginning of postwar development aid. The context was the
Cold War, and the dominant approach was Rostow's phase theory, prescribing a large capital
input. This approach still resounds in the development debate, e.g. in Jeffrey Sachs' writings,
though he makes a plea for a combined top-down and bottom-up approach, using both
financial means and a variety of other remedies: new technologies, institutional changes, etc.

From a technological perspective, the Appropriate Technology (AT) movement was
groundbreaking, with Schumacher preaching 'intermediate technology' and, in general, a
'technology with a human face'. North-South Technology Assessment was a subsequent step,
avoiding the ideological bias of AT. The present framework for economic development in low-
income countries is sustainable development, as defined by the Brundtland commission in
1987, giving top priority to the needs of the poor. Sustainable development requires a
worldwide exchange of science and technology aimed at supporting endogenous
development. Technology is viewed as crucial, in relation to both ecological and social
sustainability. The approach is broad, relating technology to entrepreneurship and considering
the institutional environment; the innovation systems approach is exemplary. In that
framework, technology can be transferred to another setting in order to support local
development processes, considering a wide range of enabling conditions.

Applicability

The method is applicable to a wide range of development problems. It is often assumed,
however, that the new environments possess the necessary absorptive capacity, which, in its
turn, often requires education and Western values. In reality, localities in poor countries
frequently lack even the beginnings of the innovation systems required for adopting new
technologies.

A drawback might be the thought that technologies come from the outside, frustrating local
innovative capabilities. In fact, the distinction between foreign and national technologies
becomes vague and blurry when the true character of technological development processes is
recognized, with processes of transfer, adaptation and local development going hand in hand.


Pitfalls / debate

(Technological) development cooperation takes place in an international and intercultural
setting. Technologies are not value-free and often presuppose a host of enabling conditions,
ranging from infrastructures to attitudes. One approach is to see to it that technology transfer
and support is accompanied by policy transfer, value transfer and capacity building.
Another approach is to take local culture and possibilities explicitly as points of departure,
incorporating foreign technologies in an indigenous framework of technology and
organization.

Key references

Freeman, C. (1995). The 'National System of Innovation' in historical perspective.
Cambridge Journal of Economics, 19, 5-24.
Freeman, C. (2002). Continental, national and sub-national innovation systems:
Complementarity and economic growth. Research Policy, 31(2), 191-211.
Freeman, C., & Perez, C. (1988). Structural crises of adjustment, business cycles and
investment behaviour. In G. Dosi et al. (Eds.), Technical change and economic theory.
London and New York: Pinter Publishers.
Hekkert, M.P., Suurs, R.A.A., Negro, S.O., Kuhlmann, S., & Smits, R.E.H.M. (2007).
Functions of innovation systems: A new approach for analysing technological change.
Technological Forecasting and Social Change, 74(4), 413-432.
UN Millennium Project (2005). Innovation: Applying knowledge in development. London:
Earthscan.
Landes, D. (1998). The wealth and poverty of nations: Why some are so rich and some so
poor. New York: W.W. Norton & Company.
Ravesteijn, W., & Kop, J. (2008). For profit and prosperity: The contribution made by Dutch
engineers to public works in Indonesia, 1800-2000. Zaltbommel/Leiden: Aprilis/KITLV
Press.
UNCTAD (2007). The least developed countries report 2007. New York and Geneva: United
Nations. Retrieved from www.unctad.org/en/docs/ldc2007_en.pdf.
Sachs, J. (2005). The end of poverty: Economic possibilities for our time. New York: Penguin.

Key articles from TPM-researchers

Mulder, K. (2006). Sustainable development for engineers: A handbook and resource guide.
Sheffield: Greenleaf Publishing.
Ravesteijn, W. (2007). Between globalisation and localisation: The case of Dutch civil
engineering in Indonesia, 1800-1950. Comparative Technology Transfer and Society, 5(1),
32-65.
Ravesteijn, W. (1989). Technology and culture: Impulse towards a cultural-anthropological
approach to appropriate technology. In W. Riedijk, J. Boes, & W. Ravesteijn (Eds.),
Appropriate technology in industrialized countries (pp.3-34). Delft: Delft University Press.
Ravesteijn, W., & Kop, J. (2008). For profit and prosperity: The contribution made by Dutch
engineers to public works in Indonesia, 1800-2000. Zaltbommel/Leiden: Aprilis/KITLV
Press.

Description of a typical TPM-problem that may be addressed using the theory / method

How could socio-technical development in low-income countries be brought about and what
is the role of technology in these processes?
The innovation systems approach is being tested and further developed through research,
resulting in reflection on and adaptation of the necessary functions.

Related theories / methods

System innovation, Innovation systems, Transition Management, Community-based
Development, Sustainable development

Editor

Wim Ravesteijn (w.ravesteijn@tudelft.nl)



THE MICROTRAINING METHOD

Brief discussion

The Microtraining method is an approach to support informal learning processes in
organizations and companies. Learning in this sense means an active process of knowledge
creation taking place within social interactions, but outside of formal learning environments
or training facilities. This process can be facilitated by well-designed and structured systems
and by supporting ways of communication and collaboration like the Microtraining method
does. A Microtraining arrangement comprises a time span of 15-20 minutes for each learning
occasion, which can activate and maintain learning processes for a longer period if bundled
up to series (see the Microtraining workflow, Overschie, 2007). A Microtraining session can
be held face-to-face, online or embedded in an e-learning-scenario.



Applicability

The Microtraining method can be applied in learning situations within organizations and
companies, especially where basic knowledge needs to be refreshed or improved and where
information for immediate use in daily practice is needed. In comparison to formal ways of
learning, Microtraining is an approach to structure informal learning activities. While formal
learning is a successful approach for those who have less knowledge and skills (Tynjälä,
2008), it can be counter-productive for more experienced employees (Jonassen et al., 1993).
For those, informal blended learning approaches like Microtraining seem to work very well
(Jonassen et al., 1993; Jonassen, 1997).

Pitfalls / debate

Considering organizational requirements, facilitation of the new learning concept by the
management and guidance during the initial phase of implementing the Microtraining method
are crucial for its success. Because of this, the Microtraining method is not meant to be an
approach for designing learning materials alone, but also to analyze learning processes in
organizations and to support the change management of learning strategies as a whole. If, for
example, employees do not have sufficient access to a computer or to the web services
supporting the online learning activities that are part of the Microtraining approach, or if they
are not allowed to spend working time as learning time, the effort of designing the best
materials and collaboration surroundings will fail.

[Figure: the Microtraining workflow. Each Microtraining session is structured in the same
way, within 15-20 minutes (6, 4, 3 and 3 min): an active start, a demo/exercise,
feedback/discussion, and 'What next? How to retain?'. Each series of sessions is also
structured in the same way: an introduction, sessions on sub-topics 1, 2, 3, etc., and a
rounding off.]

Key references

Cross, J. (2007). Informal learning: Rediscovering the natural pathways that inspire
innovation and performance. San Francisco: Pfeiffer.
Jonassen, D., Mayes, T., & McAleese, R. (1993). A manifesto for a constructivist approach to
uses of technology in higher education. In T.M. Duffy, J. Lowyck, & D.H. Jonassen (Eds.),
Designing environments for constructive learning (pp. 231-247). Heidelberg: Springer-
Verlag.
Jonassen, D. H. (1997). Instructional design models for well-structured and ill-structured
problem-solving learning outcomes. Educational Technology Research and Development,
45(1), 65-94.
Siemens, G. (2005). Connectivism: A learning theory for the digital age. International
Journal of Instructional Technology and Distance Learning, 2(1), 3-10.
Tynjälä, P. (2008). Perspectives into learning at the workplace. Educational Research Review,
3, 130-154.
Vygotsky, L.S. (1978). Mind in society: The development of higher psychological processes.
Cambridge, MA: Harvard University Press.

Key articles from TPM-researchers

De Vries, P., Lukosch, H., & Pijper, G. (2009). Far away yet close: The learning strategy of a
transport company. In: Conference Proceedings ICWL, International Conference on Web
based Learning, Aachen.
Lukosch, H., & De Vries, P. (2009). Mechanisms to support informal learning at the
workplace. In: Conference Proceedings of ICELW 2009, International Conference on E-
Learning in the Workplace, NY.
Lukosch, H., Overschie, M., & De Vries, P. (2009). Microtraining as an effective way
towards sustainability. In: Conference Proceedings of Edulearn09, Barcelona.
Overschie, M. (2007). Microteaching manual: Effective transfer of knowledge for sustainable
technological innovation. Retrieved August 2007 from http://www.microteaching.org.

Description of a typical TPM-problem that may be addressed using the theory / method

The Microtraining method has been applied at a company in the transport sector. Employees
there need a huge amount of information because of the rapid technological changes in the
field. On the other hand, they are not experienced learners and spend most of their working
time on the road. The company came up with a new training model based on the
Microtraining method: a new information platform was built containing short units of
knowledge. Everyone in the company is allowed to use and add knowledge, and the new
method is frequently used to learn about important technological changes.

Related theories / methods

The Microtraining method is based on several current learning theories and concepts. One of
the most important is the theory of social constructivism (Vygotsky, 1978), which sees
learning as an active process, taking into account the learner's individual background. The
second crucial idea comes from the concept of connectivism, which states that learning is not
an internal, lonely process, but that new knowledge is acquired best while working and
learning in a lively community (Siemens, 2005).

Editors

Mariette Overschie (m.g.f.overschie@tudelft.nl)
Heide Lukosch (h.k.lukosch@tudelft.nl)
Pieter de Vries (pieter.devries@tudelft.nl)







































TRANSITION MANAGEMENT

Brief discussion

Transition management is a term coined by Jan Rotmans to denote the control and steering
of processes of system change and transformation, which are considered necessary from, for
example, the perspective of sustainable development. Such transitions are considered to be
complex processes of technology development and societal change, incorporating new socio-
technical systems and new technological regimes. The idea is that large-scale, long-term
socio-technical processes of change and transformation are largely autonomous and only
partly steerable through the variety of actors involved, including the government.
Consequently, transition management is seen as an evolutionary form of steering, consisting
of adjusting processes of sustainable development. An interesting spin-off of transition
management has to do with interventions on behalf of development in third world countries.
There and elsewhere, transition management is about creating conditions under which societal
innovation can take place.

Applicability

Transition management is applicable in cases of complex processes of change and
transformation of technology and society, especially when it comes to sustainable
development.

Pitfalls / debate

The most important pitfall is that it is difficult to know how to steer, which methods to use
and how to assess whether steering has been successful or not. A main assumption is that
transition processes develop in the form of an S-curve, which basically reflects the
demographic transition model (the transition from high to low natality and mortality in
Western history).
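
The S-curve assumption is commonly formalized (a standard modelling choice, not prescribed
by transition management itself) as a logistic function for the diffusion level x(t) of the new
system:

x(t) = \frac{K}{1 + e^{-r(t - t_0)}},

with K the saturation level, r the growth rate and t_0 the inflection point at which half of the
transition has been completed; the slow predevelopment, acceleration and stabilization phases
of a transition correspond to the three segments of this curve.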

Key references

Rotmans, J. (2003). Transitiemanagement: Sleutel voor een duurzame samenleving. Assen:
Van Gorcum.
Sachs, J. (2006). The end of poverty: Economic possibilities for our time. New York: Penguin.
Sachs, J. (2009). Common wealth: Economics for a crowded planet. New York: Penguin.

Key articles from TPM-researchers

Ravesteijn, W., & Kamp, L.M. (2009). Dynamic transition management: Lessons from
Denmark and Indonesia. Paper for the 1st European Conference on Sustainability Transitions,
4-5 June, Amsterdam.

Description of a typical TPM-problem that may be addressed using the theory / method

How to steer complex processes of technological and social change?

Related theories / methods

Innovation systems, Value driven history / value sensitive development, Sustainable
development, System innovation, Socio-technical system building.

Editor

Wim Ravesteijn (w.ravesteijn@tudelft.nl)











































UML (UNIFIED MODELLING LANGUAGE) - CLASS DIAGRAMS

Brief discussion

UML is a collection of modelling techniques to describe information, information structure,
operations on that information, as well as system behaviour and architecture (see
www.uml.org). Apart from their use for designing information system architectures and their
implementations, UML class diagrams are very suitable for conceptual modelling of the
information and/or knowledge that is relevant to a particular business process. In the latter
case, the operations that may be specified as part of the UML classes may be omitted. UML
class diagrams may form the basis for simulation models as well.

UML class diagrams are non-hierarchical; they describe the concepts that are relevant for a
certain application (be it a description, a design, a simulation model) and the relevant
properties (attributes, relations with instances of other classes) possessed by the instances of
those concepts. Attributes are simple properties of an object (instance of a class), like size,
colour, creation-date. Possible relations between objects are either the generic association
relation, or one of the aggregation and composition relations. The association relation needs
at least one label indicating the type of relation (which must be non-volatile); examples are:
author-of, has, contains. Cardinalities are specified where necessary, indicating how many
objects of the given type are associated with each object at the other end of the relation.
Cardinalities may be 1, 0 or 1 (optional), 1..n, or 0..n, n being a positive integer or a wildcard
character, indicated by an asterisk. Apart from the relations between instances, there is also
the generalization/specialization relation between classes (concepts), which allows the
modeller to create conceptual hierarchies. Classes inherit the attributes and relations from
their parent classes, that is, from the classes higher up in the hierarchy.

UML class diagrams are very useful to obtain a precise specification of the information or
knowledge used by a business process. They may describe input and/or output of such
processes, but also the information and knowledge needed to control or support the processes.
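
For readers less familiar with the notation, the central concepts (classes, attributes,
generalization, associations with cardinalities) can be loosely mapped onto code. The sketch
below is only an illustration with hypothetical names; a class diagram itself is a notation, not
an implementation.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Person:                 # a class with simple attributes
    name: str
    birth_date: str

@dataclass
class Author(Person):         # generalization/specialization: Author inherits
    pass                      # the attributes of its parent class Person

@dataclass
class Book:
    title: str
    creation_date: str        # attribute: a simple property of the object
    # association 'author-of', cardinality 1..n seen from the book:
    authors: List[Author] = field(default_factory=list)
    # optional association, cardinality 0 or 1:
    publisher: Optional[str] = None

Note that the direction in which a reference is stored is an implementation choice; in the
diagram, the association is simply a labelled line with a cardinality at each end.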

Applicability

UML class diagrams (without operations) may be applied whenever relevant knowledge is to
be modelled, like when creating conceptual models for the purpose of describing an
organization or business process. Such models can be used both for analysis and for design
purposes, both for simulation models and for actual implementations. In the latter case,
classes do not just model domain knowledge but also the artefacts that make up the designed
system. In that case, operations will be specified as well, showing in detail what actions can
be performed by the instances of each class.

Pitfalls / debate

The most common pitfalls observed with UML modelling are the inclusion of irrelevant
information and/or the omission of relevant information (both classes and attributes or
relations). Another common mistake is the inclusion of volatile relations, which should not be
modelled. The presence of these errors has a huge impact on the usefulness of the resulting
model.

Key references

A good source of up-to-date information is www.uml.org; it has references to reports,
whitepapers and related information. Though no longer in print, The Unified Modeling
Language User Guide by Grady Booch et al. (Addison-Wesley) remains a useful reference.

Key articles from TPM-researchers

/

Description of a typical TPM-problem that may be addressed using the theory / method

UML has been used in the first stages of many TPM projects that involved the analysis and
design of simulation models and information systems, as well as the description of such
systems for certification purposes.

Related theories / methods

UML might be used in tandem with methods to describe business processes, like IDEF0
diagrams.

Editor

H.J. Honig (h.j.honig@tudelft.nl)



























VALIDATION IN MODELLING AND SIMULATION

Brief discussion

Validation is a process through which a simulation model is evaluated regarding its ability to
answer correctly to the questions that initially motivated its design. It is about ascertaining
that the right model was built, as opposed to verification which deals with building the model
right. Although not always sharp, this distinction clarifies that, beyond merely identifying
possible syntactic and semantic flaws, validation aims at evaluating a model at the pragmatic
level, asking to what extent the model serves its purpose and what knowledge it brings about
the system under study.

A model is the specification of a source system's structure and behaviour in a modelling
formalism; a simulator is a computational agent executing model instructions; and an
experimental frame specifies the conditions under which the source system is observed and
experimented with, or in other words, the operational formulation of the objectives that
motivate a modelling and simulation project. Validity in this context refers to the relation
between a model and a source system within an experimental frame. The concept of validity
can be considered to have three levels, from the most basic to the most advanced, namely:

- Replicative validity: the model successfully recreates input/output trajectory at the data
level.
- Predictive validity: the model can predict unobserved input/output trajectory at the
functional level.
- Structural validity: the model has the same structure as the system and mimics the state
transition behaviour as observed through the experimental frame.

Various techniques can be used in order to validate a simulation model, among which the
following:

- Face Validation: Discussion of the model inputs, execution, and results with experts and
colleagues, in order to determine clarity, perceived relevance, and the relations between
input and output. By inviting other simulation and subject matter experts to observe the
model execution, it is possible to obtain valuable feedback with respect to the validity of
the model, according to subjective assessment of how reasonable such execution is.
- Black-box testing: This technique is used to assess the accuracy of the input-output
transformation of the simulation model. It can be applied to all the inputs of the simulation
that are variable between runs and subject to assessment in terms of the results. Because it is
too costly to test all input variables over a range of values, only those variables that are
selected for experimentation have to be tested, along with the output values that will be used
for analysis. Black-box testing differs from the actual experiments in that the objective is to
test the transformation accuracy rather than to analyze the results. The input and output
variables that are subject to this technique are determined in the experimental design (a
minimal sketch of such a test follows this list).
- Comparison with other models. The behaviour/structure of the simulation models are
compared to the behaviour /structure of other (valid) simulation models.
- White-box Operational Validation: Determining that the constituent parts of the
computer model represent real world elements with sufficient accuracy for the purpose at
hand. It is a detailed check of model components.
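
As announced above, a minimal sketch of a black-box comparison, assuming a hypothetical
run_model() replication function and a small set of real-world observations of the same output
variable; a two-sample t-test is used here, but other statistics (e.g. confidence intervals on the
difference of means) are equally common.

import numpy as np
from scipy import stats

def run_model(demand_rate: float, seed: int) -> float:
    """Hypothetical stand-in for one replication of the simulation,
    returning e.g. the average waiting time for the given input."""
    rng = np.random.default_rng(seed)
    return float(rng.normal(loc=10.0 / demand_rate, scale=0.5))

# Output of n independent replications for one tested input setting...
simulated = np.array([run_model(demand_rate=2.0, seed=s) for s in range(30)])
# ...compared against observations of the real system under the same conditions.
observed = np.array([4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1, 5.0, 4.9])

t_stat, p_value = stats.ttest_ind(simulated, observed, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value gives no reason to reject the hypothesis that the model
# reproduces the observed mean output: corroboration, not proof of validity.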

Applicability

Validation is applicable and even mandatory in all modelling and simulation projects.

Pitfalls / debate

- A given model cannot be said to be valid in an absolute sense. A model can only be valid
within a given experimental frame, for a certain purpose.
- Although validity is desirable in simulation models, it is impossible to logically prove that
a given model is valid. There is a trade-off between the cost of the validation effort and the
level of confidence in the model. A Popperian approach should be adopted, by designing a
falsifiable model and conducting enough tests to reach a reasonable level of corroboration.

Key references

Zeigler, B., Praehofer, H., & Kim, T. (2000). Theory of modeling and simulation (2nd
edition). San Diego, CA, USA: Academic Press.
Popper, K. (1963). Conjectures and Refutations: The growth of scientific knowledge. London:
Routledge.
Balci, O. (2004). Quality assessment, verification, and validation of modeling and simulation
applications. In Proceedings of the 2004 Winter Simulation Conference (Washington, DC,
Dec. 5-8). IEEE, Piscataway, NJ, 122-129.

Key articles from TPM-researchers

Schellekens, A., Paas, F., Verbraeck, A., & Van Merriënboer, J. (2008). Expert validation of a
flexible educational model. Innovations in Education and Teaching International.

Description of a typical TPM-problem that may be addressed using the theory / method

Applies to all modelling and simulation projects

Related theories / methods

Modelling and Simulation, Philosophy of Science

Editor

Mamadou Seck (M.D.Seck@tudelft.nl)





VALUE FREQUENCY MODEL

Brief discussion

The value frequency model aims to measure the value(s) that people (un)consciously attach
to a (new) routine, a new way of doing things. When a new routine is introduced and the
Magnitude (M) of the value gained, the Frequency (F) with which it is experienced, and the
estimated likelihood that M and F will regularly occur are all positive, it is likely that the
Intention to Behave (IB) similarly in a next round will also be positive.

New ways of doing things are usually brought about by two things:
1. An explicit change in rules that leads to new routines, for instance a revised way of
learning how to inspect levees.
2. The introduction of new technology that enforces new routines (new rules, if you will),
like C-2000 as a new communication system or Cedric as a crisis-management system for
emergency service organizations.

Currently, the concepts in the model are being operationalized. Semantic anchoring is used to
construct and validate the different constructs. Calibration will be done in field exercises in
the area of crisis management and in distributed collaboration projects from September to
December 2009.

Applicability

Given the fact that the Value Frequency model is based on positivist assumptions and seeks to
predict not to explain, it is well suited to test and apply in experiments and field situations
where it is clear that a new technology or new behavioural routine is introduced as an
alternative to an existing one.

A standard questionnaire is the most efficient instrument to utilize.

Compared to other models that focus on the transition of work practices, the focus of the
model is not on acceptance but on the expectation of individual users that adoption of a new
way of doing things might deliver more value than the current situation. As such, it indicates
more precisely whether actors want and intend to use a new tool or process, whereas with
acceptance you only know that people will probably accept a new piece of equipment or new
routine, not whether they like it enough to want to use it frequently.

Pitfalls / debate

The Value Frequency Model has been incrementally developed over the past 7 to 9 years as a
response to the Technology Acceptance Model (TAM) developed by Davis.

The TAM model basically measures the intention to accept a new technology, where
technology is defined as (systems of) material artefacts that, because of their form, provide
bandwidth for a set of behavioural routines. It does not try to predict sustained use of a
technology. Acceptance does not help to inform actors which technologies will be used
frequently over time. A first attempt to provide an answer to the question which new
technologies will be used frequently over time delivered a technology transition model.
When this model was used descriptively, it gave researchers the insight that
the introduction of technology implies a change in behavioural routines.

A major pitfall is the fact that it is, as yet, an empirically unvalidated theory; not much value
can be attached to the outcomes of the model in the first rounds of measurement. Secondly, it
is unlikely that prediction will hold absolutely at the organizational level. It is possible to
predict that a number of individuals will want to use a new technology and/or apply a new
behavioural routine frequently when it is provided to them; it cannot be predicted that the
implementation of a new ICT system will be successful.

Key references

Briggs, R.O., Adkins, M., Mittleman, D.D., Kruse, J., Miller, S., & Nunamaker, J.F., Jr.
(2003). A technology transition model derived from qualitative field investigation of GSS use
aboard the U.S.S. CORONADO. Journal of Management Information Systems, 15(3), 151-
195.
Briggs, R.O., De Vreede, G.J., & Reinig, B.A. (2003). A theory and measurement of meeting
satisfaction. In Proceedings of the 36th Hawaii International Conference on System Sciences,
Hawaii. IEEE Computer Society Press.
Briggs, R., Adkins, M., Kruse, J., Nunamaker, J., Mittleman, D., & Miller, S. (1999). Lessons
learned using a technology transition model with the US Navy. In R.H. Sprague (Ed.),
Proceedings of the 32nd Annual Hawaii International Conference on System Sciences,
Hawaii. IEEE Computer Society.
Morton, A., Ackermann, F., & Belton, V. (2003). Technology-driven and model-driven
approaches to group decision support: Focus, research philosophy, and key concepts.
European Journal of Information Systems, 12, 110-126.

Key articles from TPM-researchers

Briggs, R.O., Murphy, J., Carlisle, T., & Davis, A. (2009). Predicting change: A study of the
value frequency model for a change of practice. In R.H. Sprague (Ed.), Proceedings of the
42nd Annual Hawaii International Conference on System Sciences, Hawaii. IEEE Computer
Society.
Kolfschoten, G., Briggs, R.O., De Vreede, G., Jacobs, P., & Appelman, J. (2006). A conceptual
foundation of the thinkLet concept for Collaboration Engineering. International Journal of
Human Computer Studies, 64, 611-621.

Description of a typical TPM-problem that may be addressed using the theory / method

The VFM model can act as an evaluation instrument that indicates whether a change of
practice for the integration of new technologies or techniques will see sustained use, and it
can show what changes can be made to a technology and process that improve the likelihood
that the process or technology will make the transition. This means it can be used in any
setting where new procedures or ways of working are tested. As such, it can also be a useful
instrument for comparison across fields such as Serious Gaming, Joint Simulation,
E-learning, Distributed Design, etc.

Currently, tests and games are being designed to test the instrument in three experiments in
the area of crisis management.

Ultimately, when the concepts are empirically validated, it should be possible to use the
instrument to indicate whether new policy measures, properly broken down into new
behavioural routines, will see sustained use.

Related theories / methods

Davis, F.D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of
information technology. MIS quarterly, 13(3), 319-340.
Ajzen I. (1991). The theory of planned behavior. Organizational Behaviour and Human
Decision Processes, 50, 179-211.

The theoretical framework has mainly been built up within the field of information sciences;
the terminology might therefore be a bit counter-intuitive in the beginning.

Editor

Jaco Appelman (J.H.Appelman@tudelft.nl)































VALUE RELATED DEVELOPMENT ANALYSIS

Brief discussion

In this method, the development and innovation of technology is related explicitly to the
development and innovation of societal values. It is already a widely accepted insight that
technologies are value-sensitive, i.e. they promote and articulate particular values, which are,
as it were, inscribed in them. In this approach it is emphasized that the reverse is also true: the
development of technology is an expression of the values a particular society may come to
adopt or even create. Such values usually emerge in times of crisis, as a response to historical
needs and imperatives (here this method appears to be related to another method submitted in
this overview: the grammatical method of communication).

Applicability

This approach can be used to investigate how a crisis can be the creative origin of a
revolutionary new approach to social problems, and how such a revolutionary new way of
dealing with them subsequently evolves into a differentiated, widely ramified and
interconnected set of institutions, values and technologies.

The advantage of the method is that it makes it possible to understand technologies as
connected to an inner core of values and also to a process of innovation and articulation of
such values within society. Technologies appear to be not randomly invented, but inventions
express, support and evolve the values adopted as solutions for the original crisis.

The method responds to problems created by a blind spot in development theory. Policy
transfer and technology transfer to developing countries often fail because the capacities and
capabilities to deal with them are not in place. Among these values and capacities are, to
mention but a few: a spirit of innovation versus uncertainty avoidance, an open dialogical
attitude versus social relationships based on hierarchy and status, and a disciplined and
sequential attitude to time (planning) versus the priority of tradition or of the present. System
innovation on the technological level therefore cannot succeed without cultural transitions.

A drawback is constituted by the fact that this approach apparently tampers with the value of
indigenous development. It in fact criticizes the naïveté of this slogan, which is often used in
projects of aid agencies, while at the same time they demand that the project is designed
according to criteria like women's emancipation, participation, and good governance, which
silently introduce the cultural transition that should be on the agenda explicitly, in order to
find the right equilibrium between modernization and respect for traditions.

Pitfalls / debate

The method touches on a sensitive area in the development debate. If the dark pages of
Western history are not recognized as so many instances of the West subduing the innovative
moral inventions of its own history, the method can be misused to dress up a beautified
Western history as a shining example of Western superiority. Another pitfall is that people
can be seduced into bringing about such value transfers to developing countries by copying
the West. This has never worked, or will only work in a detrimental way. It should be local
actors pulling the values in, who develop their own values by integrating modern values in a
path-dependent way. The Western societies among themselves already represent different
versions of the Western set of values.

Key references

Rosenstock-Huessy, E. (1993). Out of revolution: Autobiography of western man. New York:
Argo Books. (original 1938).
Landes, D. S. (1999). The wealth and poverty of nations: Why some are so rich and others so
poor. New York: WW Norton.

Key articles from TPM-researchers

De Jong, W.M., & Kroesen, J.O. (2007). Understanding how to import good governance
practices in Bangladeshi villages. Knowledge, Technology and Policy, 19(4), 9-25.
Kroesen, J.O. (2008). Machine bureaucracy and planetary humanity: A dialogical
understanding of personhood for the development of identities in the electronic age.
Humanities and Technology Review, 27, Fall, 2008.
Ravesteijn, W., & Kroesen, J.O. (2003). De toekomst als opdracht: Utopie, revolutie en
techniek in Europa. In Europa, balans en richting. Tielt: Lannoo Campus.
Ravesteijn, W., De Graaff, E., & Kroesen, J.O. (2006). Engineering the future: The social
necessity of communicative engineers. European Journal of Engineering Education, 31(1),
63-71.

Description of a typical TPM-problem that may be addressed using the theory / method

An article is in the pipeline about the 'Plakkies' factory, the Ubuntu Company in South
Africa, which is designed according to a particular management style in which modern values
and traditional values are explicitly integrated. The employees are required to be on the
job in a disciplined way from day to day, with severe sanctions in place, but at the same time
the management is lenient in case of the many personal problems (family responsibilities,
sickness etc.) of the employees. The preference of the employees to work together according
to tribal or family lines is recognized, but at the same time there is job rotation not only to
create a flexible workforce, but also to teach the employees to work together across such
lines. Deadlines are strict and at the same time a community spirit is supported and created.
There are clear hierarchical lines, but bonds of trust are also created, and the management is
required to labour together with the employees, stepping down from its privileged position
and status.

Related theories / methods

The capabilities approach of Sen and Nussbaum is very near to this one, with the difference
that capabilities primarily refer to freedoms granted and procured to individuals by the state
authorities, like education, free movement and emancipation. This method focuses more on
capacities, i.e. internal qualities of persons and social groups, which enable them to enact
these freedoms.

The 'bottom of the pyramid' approach is also near to this one, with the difference that it is
primarily interested in creating and diffusing businesses of and among the poor, whereas this
approach again calls attention to the internal capacities of the actors to run such businesses.
Of course, these methods do not exclude each other, but support each other.

See also: the grammatical method of communication.

Editors

Martin de Jong (w.m.dejong@tudelft.nl)
Otto Kroesen (j.o.kroesen@tudelft.nl)
Wim Ravesteijn (w.ravesteijn@tudelft.nl)
