
EU-US SEMINAR: NEW TECHNOLOGY FORESIGHT, FORECASTING & ASSESSMENT METHODS-Seville 13-14 May 2004

SESSION 1: METHODOLOGICAL SELECTION



Paper 7: Integration, Comparisons, and Frontiers of Futures Research Methods

Theodore J. Gordon and Jerome C. Glenn


I INTEGRATION OF FORECASTING METHODS

This paper represents the closing chapter of Futures Research Methodology Version 2.0
(CD-ROM), published by the American Council for the United Nations University within
the framework of the Millennium Project. Following the extensive discussion of
twenty-five forecasting methods or categories of methods, the last chapter suggests
which forecasting methods could be used under what circumstances and in what
combinations. The question is discussed at the praxeological as well as the
methodological level.

Taxonomy of Methods

Quantitative or qualitative methods may be used to produce normative and exploratory
forecasts. Thus, all of the methods that have been considered in this series can be
classed as either quantitative or qualitative and as applicable to normative or
exploratory forecasting (or both). Some people have argued that any technique can be
applied to normative as well as exploratory forecasting; it's simply a matter of how the
technique is applied. The matrix presented in Figure 1 serves as a simple taxonomy of
the methods of futures research and indicates the primary usage in the field.

Figure 1: A Simple Taxonomy of Futures Research Methods

Quantitative Qualitative Normative Exploratory
Agent Modelling X X
Bibliometrics X X
Causal Layered Analysis X X
Cross-Impact Analysis X X
Decision Modelling X X
Delphi Techniques X X X
Econometrics and Statistical Modelling X X
Environmental Scanning X X
Field Anomaly Relaxation X X
Futures Wheel X X X
Genius Forecasting, Vision, and Intuition X X X
Interactive Scenarios X X X
Multiple Perspective X X X
Participatory Methods X X
Relevance Trees and Morphological Analysis X X
Road Mapping X X X
Scenarios X X X X
Simulation-Gaming X X
State of the Future Index X X X X
Structural Analysis X X X
Systems Modelling X X
Technological Sequence Analysis X X
Text Mining X X X
Trend Impact Analysis X X


SOME WARNINGS

Before beginning a description of how these methods can be integrated, a few warnings
about forecasting and forecasts may be useful:

Accuracy and precision are two separate concepts. Quantitative forecasts can be very
precise yet quite inaccurate, particularly in this age of computers. Forecasts can also be
accurate but imprecise, such as: the high likelihood of an earthquake in California.

Extrapolation is bound to be wrong. Simply taking historical trends and extending them
into the future is easy, but the projection assumes that nothing new will come along to
deflect the trends, that the only forces shaping the future are those that already exist in
history. Ultimately, even for planets in their orbits, this assumption must be wrong.

Forecasts will be incomplete. Forecasts based on discoveries not yet made are
exceedingly difficult to include. For example, who could have forecasted nuclear-
generated electricity before fission was known? Descriptive forecasts about ESP,
antigravity, or a cure for aging are "out on a limb", because no fundamental
understanding exists about the phenomena that underlie the forecasts. (See other
sections of this report for further discussion on the extent of the unknowable.) As
Herman Kahn once said, "The most surprising future is one which contains no
surprises." This axiom is certainly pertinent to any domain in which change can be rapid
and without apparent precedent - for example, politically in the Middle East and the
Persian Gulf, with respect to terrorist attacks, or, in health, the advent of SARS.

Planning must be dynamic. Because of inaccuracies and incompleteness, any plans
based on forecasts are subject to error. Therefore, as new information is gained,
forecasts should be revised and plans based on those forecasts reviewed. This
recognition of the dynamics of planning implies the need for constant scanning of future
possibilities, developments, and new ideas.

Futures depend on chance. The consequences of developments may initially seem
unimportant and unconnected but later, through tenuous inter-linkages, become
dominant in their effects.

Forecasting is not value free. Beliefs, right or wrong, colour one's view of the future as
discussed in the chapters on the Multiple Perspective and on Causal Layered Analysis.
These beliefs may or may not be codified; they affect the questions asked about the
future as well as the answers given. A reviewer of this paper pointed out that one approach to
forecasting is to capture legitimate alternative views through methods that show ranges
of possibilities, such as scenarios. He said, "Bias and misguided idealism are serious
problems in any forecasting enterprise." Many methods require judgments about
probabilities of future events, but most people are bad judges of probability.

Accurate forecasts of some complex and nonlinear systems may be impossible.
Examples: weather two weeks in advance, the stock market tomorrow, turbulent fluid
flow in the next minute, etc.

Forecasts can be self-fulfilling or defeating. By forecasting the possible existence of a
new stack gas cleaning technology, that technology may become more likely. The
mechanism is clear enough: reading about the possibility, others work to bring it about.
A forecast of famine may make the famine less likely if it triggers action. Thus,
forecasting itself can have political consequences. Furthermore, if a self-defeating
forecast triggers action to avoid the forecasted problem, then the forecast may have
been highly inaccurate but nevertheless extremely useful.

Methods That Fit Together

Forecasts may use one and only one of the methods described in this series, but use of
these methods in combination often provides efficiency and makes the forecasts more
robust. For example:

Environmental Scanning using Delphi, Text Mining, and group Participatory
Techniques can identify trends;

Futures Wheels can show potential consequences of these trends and future
events, and improve the understanding of the trends and potential events;

with this better understanding of the trends and/or events, they can be used in
Cross-Impact Analysis to raise the important questions to be addressed in
Scenario Construction;

Scenario assumptions can be tested by Causal Layered Analysis, Multiple
Perspectives, Gaming-Simulations, and Roadmapping;

Trend Impact Analysis (TIA) can be used to provide estimates of the probability
of possible future events, and these estimates can be obtained through Delphi
methods;

Cross impact tables can be included in a Systems Dynamics Model so that the
model would reflect the effects of interacting external events;

Scenarios can contain quantitative Time Series estimates of variables important
to the future world they depict; and

SOFI used Delphi to identify and weight variables and TIA to find the range of
variation of those variables over the ten-year time series that comprises the index.

Many combinations are possible. Imagine a large matrix with all of the methods in
Figure 1 listed down one column and repeated across the top row. One could explore a
new combination by asking in each cell of this matrix: how can the method in the first
column create new and improved uses of the method listed in the top row? A third
dimension of the matrix could list new conditions or technologies, such as
globalization, nanotechnology, virtual reality, ubiquitous computing, etc. Hence, one
cell would pose the question: how could Futures Wheels be improved by Delphi in a
tele-virtual reality nano-technology environment?

In this section, we explore some of the most potent of these combinations.

Cross-Impact Analysis requires a large number of judgments about conditional
probabilities. These judgments can be provided by experts through the use of Delphi
methods, focus groups, interviews, or as Godet describes (1993) in the Toolbox. In
addition, genius forecasting or participatory processes might be used if the matrix is
small. Finally, the analyst might benefit from having a reference scenario to help guide
the conditional probability judgments.
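The conditional-probability bookkeeping behind a cross-impact run can be sketched in a few lines. The Python sketch below is illustrative only: the two events, their a-priori probabilities, and the conditional values are invented for the example, and real cross-impact studies use larger matrices and more elaborate occurrence models.

```python
import random

def cross_impact_run(initial_probs, impact, trials=10000, seed=42):
    """Monte Carlo pass over a (toy) cross-impact matrix.

    initial_probs: dict of event -> a-priori probability.
    impact[a][b]: adjusted probability of b given that a has occurred.
    Events are tested in random order; when one occurs, the probabilities
    of events not yet decided are conditioned on it.
    """
    rng = random.Random(seed)
    counts = {e: 0 for e in initial_probs}
    for _ in range(trials):
        probs = dict(initial_probs)
        occurred = []
        for event in rng.sample(list(probs), len(probs)):
            if rng.random() < probs[event]:
                occurred.append(event)
                # condition the remaining events on this occurrence
                for other, p in impact.get(event, {}).items():
                    if other not in occurred:
                        probs[other] = p
        for e in occurred:
            counts[e] += 1
    return {e: counts[e] / trials for e in counts}

# Illustrative numbers only: event B becomes more likely once A occurs.
initial = {"A": 0.30, "B": 0.50}
impact = {"A": {"B": 0.80}, "B": {"A": 0.30}}
print(cross_impact_run(initial, impact))
```

Run many times, the occurrence frequencies give "final" probabilities that fold in the interactions, which is the output a cross-impact exercise feeds into scenario construction.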

Decision Analysis is the analytic study of the validity of contemplated decisions and
their intended and unintended consequences. This method usually involves estimation
of costs and benefits, consideration of risk and uncertainty, and articulation of a
decision principle, such as minimizing downside potential. To the degree that expert
judgment is used, Delphi methods may be employed. Estimation of risk and uncertainty
may be based on Monte Carlo or other quantitative methods of analysis, or on judgment.53


Regression analysis, future wheels, and econometric models can help establish
relationships useful in estimating the consequences of decisions. One or more scenarios
may be used to define the assumptions on which the analysis is based.

Decision Analysis Trees, Roadmaps, and Futures Wheels fall within the general
classification of decision analysis. This method involves the construction of branching
diagrams that illustrate downstream decision points and other consequences that flow
from a currently contemplated decision. Inputs used to construct such diagrams can
flow from a single expert assessing alternatives, a group at a meeting, a series of
interviews, or a more conventional Delphi.

Decision models and structural analysis are multi-attribute models that simulate the
decision processes of policymakers, other actors, or consumers in choosing among
alternatives that require judgment. If the decision model were designed to simulate a
market, the required data could be obtained using conventional market research
methods. If the model were designed to simulate a policy choice, interviews with the
policymakers themselves or Delphi can be used.

The Delphi method is a primary technique for gathering judgments from experts. A
Delphi exercise can be enhanced by other methods in several ways:

Experts can be shown a number of time series in a questionnaire, including
forecasts prepared by curve-fitting procedures, and asked to assess, in
quantitative terms, how future events might impact on the curves;

Forecasts presented in these curves can be derived by many different techniques,
including regression analysis and simulation modelling;


53 "Monte Carlo" is the name of a technique that involves random sampling. It is often used in operations research in
the analysis of problems that cannot easily be modelled in closed form. In a Monte Carlo simulation, values of
independent variables are chosen randomly and the equations in which these variables appear are run to achieve a
single result for the dependent variable. The process is repeated many times, perhaps thousands, each with a different
set of independent variables and therefore a different resulting dependent variable. The set of results is then
considered as representative of the range of potential outcomes. This technique can be used in conjunction with
essentially any modelling approach to convert a deterministic, single-value solution into a probabilistic solution.
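The footnote's idea can be sketched minimally in Python, with invented distributions: a deterministic two-input model is run many times with randomly drawn inputs, turning its single-value answer into a range of outcomes.

```python
import random
import statistics

def model(price, units):
    """A deterministic model: revenue as a function of two inputs."""
    return price * units

def monte_carlo(runs=10000, seed=1):
    rng = random.Random(seed)
    results = []
    for _ in range(runs):
        # draw each independent variable from an assumed distribution
        price = rng.gauss(10.0, 1.0)   # mean 10, standard deviation 1
        units = rng.gauss(1000, 100)   # mean 1000, standard deviation 100
        results.append(model(price, units))
    return results

outcomes = monte_carlo()
print(statistics.mean(outcomes), statistics.stdev(outcomes))
```

The set of results approximates the distribution of the dependent variable; its spread is the uncertainty that a single deterministic run would have hidden.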

Relevance trees and morphological analysis can assist in defining the questions
to be asked; genius forecasting can be used to form the initial questionnaire.

Econometric models are deterministic and based on statistically established historical
relationships. Such models are used not only to produce quantitative forecasts but also
to estimate the sensitivity of outcomes to any changes in the variables included in the
models. Expert judgment collection methods can be used to obtain estimates of the
independent variables used in sensitivity analysis. Scenarios can provide the backdrop
for econometric analyses and help ensure the internal self-consistency of external
assumptions. If a cross-impact matrix of future events were introduced into an
econometric analysis, then, through the use of Monte Carlo methods, the solution could
become probabilistic rather than deterministic. To accomplish this, the simultaneous
equations could be solved a large number of times, each with a new random selection of
independent variables, and the results displayed as a range of possibilities. This process
produces a range of results for the dependent variables (in the case of technology
sequence analysis, for example, a range of dates at which the intermediate technologies
or the final system will be available). Further, the outcomes could be tested to determine
the sensitivity of the outcome to the probabilities of events and their interactions.
Similarly, TIA can be used to create forecasts of external variables used in econometric
models.

Genius Forecasting benefits from data. Presenting the results of a simulation model or a
TIA to an individual who is trying to imagine a desirable future or assess the impacts of
a particular series of developments will, hopefully, inform the judgments.

Future Wheels can give just enough structure to focus the mind without preventing free
thinking and leaps of insight in genius forecasting, brainstorming, and focus groups.

Morphological Analysis and Relevance Trees have been improved through the use of
expert input. For example, a researcher can form a tentative morphology and perfect the
morphology by asking experts in interviews to change the diagram. Often, an individual
can form the top levels of a relevance tree but require expert assistance to complete the
lower and more detailed levels of the diagram. When such assistance is required,
Delphis or interviews are helpful.

Participatory methods can use scenarios to great advantage. Imagine showing to a
group of people a scenario that depicts the consequences of current policies and then
asking if the picture that emerges is desirable. An example of the use of both methods
can be found in the Millennium Projects Science and Technology Management study
in which scenariosgenerated in part by Delphi rounds--were presented to a global
Delphi panel. The scenarios contained blanks, which the participants were invited to
complete. Following the scenarios were policy questions such as: If you believed this
scenario was likely, what actions would you take now? (see:
<http://www.acunu.org/millennium/st-scenarios-rd2.html>)

In a regression analysis, the first step is to "specify" the equation; that is to identify the
independent variables to be tested in the regression. This step, of course, can be the
subject of environmental scanning, Delphi, genius forecasting, a Futures Wheel, or a
series of interviews that explore possible chains of causality.
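As a sketch of the specification step, assuming synthetic data and NumPy: two candidate independent variables are proposed (population and, hypothetically, income), and the least-squares fit indicates which of them actually carries explanatory weight.

```python
import numpy as np

# Synthetic history: demand driven by population plus noise (illustrative).
rng = np.random.default_rng(0)
population = np.linspace(1.0, 2.0, 30)           # millions
income = rng.uniform(20, 40, 30)                 # a candidate variable
demand = 5.0 * population + 2.0 + rng.normal(0, 0.1, 30)

# "Specify" the equation: choose which independent variables to test.
X = np.column_stack([population, income, np.ones_like(population)])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(coef)  # the coefficient on income comes out near zero
```

In practice the candidate list would come from scanning, Delphi, or interviews as the text describes; the regression then confirms or rejects each candidate's place in the specification.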

Scenarios can be completely qualitative or largely quantitative. Scenarios are usually
presented in sets, differing in terms of their initial boundary conditions. Key measures
of the success of a scenario are plausibility, internal self-consistency, ability to make the
future more real, and utility in planning. When multiple scenarios are involved,
consistency must exist among the scenarios. There are a number of techniques that help
assure plausibility and self-consistency.

The use of TIA in conjunction with a scenarios study is particularly powerful. Recall
that TIA requires identification of a series of events that can deflect historical trends.
Many of these events will affect more than one time series and more than one scenario.
Internal self-consistency of a scenario is promoted with the use of TIA since, whenever
an event appears in a given scenario, it has the same probability. Cross-impact analyses,
while more complex in many ways, can serve the same purpose.

The narrative statements often included in a scenario can be given quantitative power if
they are derived systematically. Simulation modelling serves this purpose. For example,
the Club of Rome's world model established a completely consistent (instructive, but
flawed) scenario that could then be tested for the effects of changes in initial
assumptions. Similarly, the Millennium Project used a multi-equation model prepared
by International Futures to give quantitative backbone to an otherwise purely qualitative
scenario (see: <http://www.acunu.org/millennium/scenarios/explor-s.html>). More
information about the model can be found at:
<http://www.du.edu/~bhughes/ifs.html>.

Of course, environmental scanning and expert judgment, collected through Delphi or
other such means, are the usual methods of obtaining inputs for a scenario. These inputs
might include, for example, the "scenario space" to be employed, principal drivers, the
time series to be included, the lists of events that can impact on baseline forecasts, and
the policies to be tested in the scenarios.

The Millennium Project has also experimented with a computer program for obtaining
and accounting for changes in previously prepared scenarios. In this approach a cross-
impact matrix is created "behind the scenes" to indicate the interactions among
statements in the scenario. Then, when the user changes an entry, the cross-impact
matrix is brought into play to ask the user how related statements in the scenario might
be affected by the change they suggest.

Systems Dynamics models are not completely dependent on statistical relationships, but
rather are based, at least in part, on perceptions about the relationships that exist among
variables in the model. Therefore, the techniques mentioned earlier for collecting expert
judgments all apply. Systems Dynamics models are usually deterministic. They can be
made probabilistic by linking the elements of the model to prospective events through
cross-impact and trend-impact methods. These methods permit the models to show a
range of outcomes and provide the ability to accomplish sensitivity testing to identify
which of the expected events are important to the outcome.

Technology Sequence Analysis begins with establishing a network of sequential and
interlocking technological or policy developments. Since such networks involve many
facets of expertise, interviews with experts have proven productive. In these interviews,
experts are asked not only to perfect the network, but also to provide judgments about
the time or costs involved in progressing from one step to another. In addition,
relevance trees can help structure the exercise.

Trend Impact Analysis adds to a time-series forecast perceptions about future events
that can deflect the trends. The specific judgments required are: specifying the list of events,
probabilities of the events vs. time, and impacts of the events, should they occur, on the
time series variable under study. All of the techniques mentioned earlier for collecting
expert judgment apply here. In addition, while most TIAs have been based on time
series methods to establish a "baseline" forecast, the method can use regression analysis
or simulation modelling to make this baseline projection.
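One way the TIA mechanics might look in code, with invented numbers: a baseline extrapolation is perturbed by a probabilistic deflecting event, and percentiles across many runs give a range of expectations rather than a single value.

```python
import random

def trend_impact(baseline, events, runs=5000, seed=7):
    """A hedged sketch of Trend Impact Analysis.

    baseline: list of yearly values from a surprise-free extrapolation.
    events: list of (probability_by_year, impact_fraction) pairs; if an
    event "occurs" in a run, the series is shifted by impact_fraction
    from that year onward.
    Returns per-year (low, median, high) percentiles across runs.
    """
    rng = random.Random(seed)
    n = len(baseline)
    samples = [[] for _ in range(n)]
    for _ in range(runs):
        adjusted = list(baseline)
        for probs, impact in events:
            for year, p in enumerate(probs):
                if rng.random() < p:          # event occurs this year
                    for t in range(year, n):
                        adjusted[t] *= (1 + impact)
                    break                     # each event occurs at most once
        for t in range(n):
            samples[t].append(adjusted[t])
    out = []
    for t in range(n):
        s = sorted(samples[t])
        out.append((s[int(0.1 * runs)], s[runs // 2], s[int(0.9 * runs)]))
    return out

# Baseline growth with one deflecting event (numbers are illustrative).
baseline = [100 * 1.03 ** t for t in range(10)]
event = ([0.05] * 10, -0.20)  # 5%/yr chance of a 20% downward deflection
for low, med, high in trend_impact(baseline, [event]):
    print(round(low, 1), round(med, 1), round(high, 1))
```

The event list, probabilities, and impacts are exactly the judgments the text says must be collected; in a real study they would come from Delphi or interviews rather than being hard-coded.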

Another way of organizing and comparing the methods is by areas of use, as shown in
the following table:


When You Want to:                    Use:

Collect judgments                    Genius forecasting
                                     Delphi
                                     Futures Wheel
                                     Group meetings
                                     Interviews

Forecast time series and             Econometrics
other quantitative measures          Trend Impact Analysis
                                     Regression analysis
                                     Structural Analysis

Understand the linkages              System Dynamics
between events, trends,              Agent Modelling
and actions                          Trend Impact Analysis
                                     Cross-Impact Analysis
                                     Decision Trees
                                     Futures Wheel
                                     Simulation Modelling
                                     Multiple Perspective
                                     Causal Layered Analysis
                                     Field Anomaly Relaxation

Determine a course of action         Decision Analysis
in the presence of uncertainty       Road Mapping
                                     Technology Sequence Analysis
                                     Genius forecasting

Portray alternative plausible        Scenarios
futures                              Futures Wheel
                                     Simulation-Gaming
                                     Agent Modelling

Understand whether the               State of the Future Index
future is improving

Track changes and assumptions        Environmental scanning
                                     Text Mining

Determine system stability           Non-linear techniques


II. FRONTIERS OF FUTURES RESEARCH

The Millennium Project, with its Nodes around the world conducting cumulative
assessments of change, can be thought of as an experimental method for global futures
research. It is an example of the globalization of futures research. The Internet has made
participatory approaches among geographically dispersed people practical. Since the
future is increasingly complex, knowledge intensive, and globalized, variations on
the Millennium Project's approach to futures research methods may occur. Wireless
Internet, knowledge visualization software, and improved computer translation will
allow more international foresight activities to build collective intelligence through
participatory feedback systems far more complex than the Project's current methods.

Forty years ago, computers were not much of a factor in futures research. Delphi was
done with pencil and paper in 1963, and sent through the mail. If current trends continue,
forty years from now nearly all futures methods will be conducted in software, through
networks, with diverse and changing sets of people, continually cross-referencing data
and monitoring decisions. Within twenty-five years, dramatic increases in collective
human-machine intelligence were judged to be plausible by the majority of an
international science and technology panel. Hence, the image of a few bright people,
using a few interesting methods to forecast the future, may be replaced by the image of
many people interacting with many combinations of methods to shape the future by
blurring the distinctions between research and decision-making.

Scope of the Unknowable

No matter the size of the model or the computer that runs it, developments exist that are
unknown but discoverable, if we work hard enough, as well as events that are
unknowable and, at this time at least, undiscoverable no matter how hard we work.
Some of these undiscoverable events may turn out to be the most important aspects of
the future. People are asked in a Delphi, in interviews, or in participatory meetings
"what do you think may happen?" or "what do you want to happen?" Their answers are
limited sharply by what people, even experts, believe is feasible, by what is taken to be
"good science," and by what has already been demonstrated or postulated. Anti-gravity
machines, routine and ordinary extra sensory perception, matter/antimatter propulsion in
space, a Walden utopia for all, a life without aging beyond 40, a life without disease and
worry, and faster-than-light teleportation are rarely suggested. Why? Because before a
fundamental breakthrough demonstrates new possibilities, they are hard to imagine.
Consider the problem of forecasting the ubiquitous transistor radio before the transistor,
or even radios in 1700. Imagine forecasting nuclear power generating plants before
fission was demonstrated or forecasting the Concorde before Langley. What would a
futurist of 2100 use to describe this myopia from his or her vantage point?

By definition, the geography of the unknowable must loom as infinite. We could
certainly speculate about such discontinuities (science fiction specializes in this domain)
but, taking Kuhn's perspective, an idea before its time is apt to result in derision and
dismissal. (2) What serious forecast would include any mention of levitation, for
example? Yet room temperature superconductivity is OK because research is underway
and, while a big step ahead of the present, is still plausible. Plausibility is the key. When
does an idea about the future move from wild speculation to plausible and worthy of
consideration? The answer is not apparent but probably has as much to do with social
factors as science.

How can the domain of the unknowable be reduced? At least one experiment is worth
reporting. A managing editor, in a seminar of science students at Wesleyan University,
asked for suggestions about the results of invalidating one currently held "good science"
idea. The students were asked to state the invalidated concept, find other ideas
invalidated by the discovery, and state the practical developments that would follow.
The experiment was a failure; imaginations were puny.

But this is a frontier. Just how does a concept move from disrepute to respectability?
How do visions of reality change? Or, more importantly, how do breakthroughs really
happen, and can they be anticipated, if not individually, at least categorically? The
future holds more than we can imagine.

Decision Making in Uncertainty

Much work remains in the field of decision making. We have stressed that forecasting
methods should not produce single-value images of the future and that uncertainty
should be made explicit. Yet the tools for dealing with uncertainty, for ensuring
adequate return for risk-taking, are far from perfect and, outside of market beta theory,
rarely used. To illustrate how quantification of uncertainty may help decision-making,
consider the following illustration: An executive of an electric utility company needs a
forecast of the future demand for electricity in his company's region. In the old days,
this forecast would have been produced with a deterministic regression model that
related demand to the number of people being served; all that would have been required
to forecast electricity demand would have been a demographic forecast of the
population of the area. This technique would have produced a single-value forecast. The
newer approaches would ask about future events that could change the historical
relationships and deflect the trends. The list of such events would include high-
efficiency appliances, electric automobiles, new plants moving into or out of the area,
and factors leading to changes in the industrial base, such as crime, education, etc. TIA
or another probabilistic method could then produce a forecast of electricity demand with
a range of expectations, as shown in Figure 2.

Now, the executive looks at the chart and says, "that's a fine forecast, but my problem is
that I want to find out whether or not to build the nuclear plant, and the distance
between curve A and curve B is exactly the value of one nuclear plant." Here's how one
might reason through the dilemma:

Suppose you believed curve A and built the nuclear plant, but the load really developed along
curve B. Remember that picture.

Now suppose that you believed curve B and didn't build the nuclear plant, but the load developed
along curve A. Remember that picture.

Which is better?







The executive might reason that the second situation would be better, since capital
would be conserved and the company would be in the position of supplying a needed
commodity rather than having a disputed and idle facility, a monument to "bad
planning."

In general, sources of uncertainty include new and unprecedented events, noise, chance,
systemic changes, and experimental and observational errors. These sources of
uncertainty will never be eliminated.

Decision analysis is a respected component of policy and operations research. Its
methods include:


utility matrices: List the factors important to a decision (low cost, equity, improved standard of
living, etc.). Provide a weight for each factor. List the alternative decisions and score them with
respect to each factor. Overall scores are determined by taking weighted sums. All other things
being equal, the decision with the highest score is the most logical path to follow.
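The utility-matrix arithmetic can be sketched directly; the factors, weights, and scores below are invented for illustration.

```python
def utility_scores(factors, alternatives):
    """Weighted-sum utility matrix.

    factors: dict of factor -> weight.
    alternatives: dict of name -> dict of factor -> score.
    Returns the weighted-sum score of each alternative.
    """
    return {
        name: sum(factors[f] * scores[f] for f in factors)
        for name, scores in alternatives.items()
    }

# Illustrative weights and scores only.
factors = {"low cost": 0.5, "equity": 0.3, "standard of living": 0.2}
alternatives = {
    "Policy A": {"low cost": 8, "equity": 5, "standard of living": 6},
    "Policy B": {"low cost": 4, "equity": 9, "standard of living": 7},
}
scores = utility_scores(factors, alternatives)
print(max(scores, key=scores.get), scores)
```

All other things being equal, the alternative with the highest weighted sum is selected; sensitivity to the (judgmental) weights is easy to test by re-running with different weight sets.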

cost/benefit analysis: Does the prospective payoff justify the costs expected to reach those payoffs?

minimax: What's the worst that things can be? Select the alternative action that maximizes the
minimum payoff.

maximax: What's the best that things can be? Select the alternative that maximizes the maximum
payoff.

minimum regret: Select the alternative that has the potential to lead to a least regret future.
Example: suppose a person had the opportunity to continue in a sound job with a stable company
and promising prospects or join a fledgling company, play a bigger role, but with higher risk. The
person elects to remain with the stable company. Let 40 years go by. Picture the person saying: "If
only I had moved when I had the chance ...." Now reverse the situation: the person moves and the
new company fails. Let 40 years go by. Picture the person saying, "If only I had remained with my
stable old company...." A minimum regret decision is the one anticipated to cause the least remorse.

payoff matrix: Calculate the expected value of competing decisions by multiplying the expected
return by the probability of success of each decision. Then select the one with the highest expected
value.
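The selection rules above can be sketched against one payoff matrix; the alternatives and payoffs below are invented (they loosely echo the stay-or-move career example).

```python
def minimax(payoffs):
    """Select the alternative with the best worst-case payoff."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

def maximax(payoffs):
    """Select the alternative with the best best-case payoff."""
    return max(payoffs, key=lambda a: max(payoffs[a].values()))

def minimum_regret(payoffs):
    """Select the alternative whose maximum regret is smallest."""
    states = next(iter(payoffs.values())).keys()
    best = {s: max(payoffs[a][s] for a in payoffs) for s in states}
    return min(payoffs,
               key=lambda a: max(best[s] - payoffs[a][s] for s in states))

def expected_value(payoffs, probs):
    """Payoff-matrix rule: pick the highest probability-weighted return."""
    return max(payoffs,
               key=lambda a: sum(probs[s] * payoffs[a][s] for s in probs))

# Illustrative payoffs for each alternative under two possible futures.
payoffs = {
    "stay": {"boom": 5, "bust": 4},
    "move": {"boom": 10, "bust": -2},
}
probs = {"boom": 0.6, "bust": 0.4}
print(minimax(payoffs), maximax(payoffs),
      minimum_regret(payoffs), expected_value(payoffs, probs))
```

Note that the four rules can disagree on the same numbers: the cautious rules favour staying, while the expected-value rule favours moving, which is exactly why the choice of decision principle is itself a judgment.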


Scenarios have proven helpful in decision making in uncertainty. The strategies that
work best in all of the scenario worlds included in a set are "good bets."

Also useful are:

Portfolio theory (make sure that risk and reward are commensurate, then assemble a portfolio that
in the aggregate reflects your risk profile)

Decision trees track the consequences of serial decisions leading to a goal.

But the field is primitive. Managers often do not know what risks are associated with
particular strategies. The quantitative techniques available to us are not yet capable of
quantifying risk in ways other than probability. A lot of work is needed here.

Chaos, Non-Linear Systems, and Planning

While most physical and social systems are nonlinear, mathematical models and
simulations of those systems usually use linear assumptions. The linear approximation
is made because linear equations are simpler to handle mathematically and over vast
regions of operation the linear models provide a good match with reality. Linear
systems can be stable (that is, when perturbed, the system settles to some stable value),
can oscillate (that is, when perturbed, the system settles into a periodic cycle), or can be
unstable (that is, when perturbed, the system movements become very large and
continually increase or decrease). When the systems are nonlinear, however, a fourth
state of behaviour can be triggered: chaos. In this state, the system appears to be
operating in random fashion, generating what appears to be noise. In this state, the
system behaviour is still deterministic but essentially unpredictable.

The central premise of planning is that forecasting is possible. The policy sciences teach
us to identify optimum policies by testing a set of prospective policies on models that
simulate the real world and choosing the policy that brings the model outcome closest to
the desired outcome. But if the model - and the real system - are in a chaotic state, the
results of a policy may be exquisitely dependent on a number of factors other than the
policy itself. In fact, quite different results might be obtained on successive runs of a
model (or in two "plays" of reality) with the same policy, if the initial conditions used in
the simulation (or in the "second run" of reality) are only very slightly different.
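
That sensitivity is easy to demonstrate. In the sketch below the logistic map at r = 4.0 serves as a stand-in chaotic system (the starting values are invented): two runs whose initial states differ only in the tenth decimal place track each other briefly and then disagree completely.

```python
# Two "runs of reality" differing by one part in ten billion at the start.
# The gap between them grows roughly exponentially until the trajectories
# bear no resemblance to one another.

def diverge(x0, y0, steps=60, r=4.0):
    """Iterate two copies of the logistic map; return the gap at each step."""
    x, y = x0, y0
    gaps = []
    for _ in range(steps):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gaps.append(abs(x - y))
    return gaps

gaps = diverge(0.3, 0.3 + 1e-10)
print(f"gap after 10 steps: {gaps[9]:.2e}")      # still microscopic
print(f"largest gap over 60 steps: {max(gaps):.2f}")  # order one: total disagreement
```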

While most work in the field of chaos has been in the physical sciences, social systems
can also be nonlinear and driven to chaotic behaviour. (7) The Federal Reserve attempts
to control interest rates in the national economy by fixing the rate at which banks can
borrow from the government. Yet, if the economy were really nonlinear and in a chaotic
state, it would be difficult to tell a priori what the outcome of the "adjust-the-rate"
policy would be.

If a system that we attempt to control is nonlinear (that is, input and output are not
related in a one-to-one fashion) and, through excessive feedback or "gain," is exhibiting
chaotic behaviour (that is, its behaviour resembles random motion or noise), then
prediction of the future of the system (interest rates, day-to-day swings in the market
averages) is essentially impossible in most circumstances and, therefore, predicting the
outcome of contemplated policies is equally impossible. In addition, historical
precedent fails for systems that are operating in the chaotic mode. Since chaotic systems
are very sensitive to initial conditions, history is no guide since conditions in the past
were almost certainly different than the present.

Do these arguments lead to the conclusion that modelling and policy research are dead?
We think not, but a whole new set of approaches to planning and systems management
needs to be invented. When chaos is possible, it is no longer adequate to say "choose a
policy that brings the expected future close to the desired future." In chaos, the expected
future is a chimera, and disorder can mask valid normative visions.

What might be some of these new strategies for management of chaotic systems? Here
are some thoughts:

First, analysts should recognize that random-appearing data and bizarre behaviour may
not be what they seem.

Second, nonlinear models can be built to simulate real-life systems that operate in a
stable mode most of the time. Such models (see reference 7, for example) can be used to
find the conditions that drive the systems they simulate into oscillatory or chaotic states.
Then, using the model, policies can be found that move the system back toward stability.
One of the authors (Gordon) found that slowing down the feedback tends to stabilize
systems, so the old advice to "sleep on it" may have some validity after all.
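
A toy version of that stabilizing effect can be shown with the logistic map rather than the social-system models of reference 7. The blending scheme below is our own illustration: at r = 4.0 the raw map is fully chaotic, but if each update applies only a fraction of the feedback (mixing the new value with the old state), the same system settles to its fixed point.

```python
# "Slow down the feedback": apply only a fraction `a` of the logistic-map
# update per step. a = 1.0 is the raw, chaotic map; smaller a blends the
# update with the previous state. Parameters are invented for the sketch.

def iterate(a, x0=0.3, steps=300, r=4.0):
    x = x0
    for _ in range(steps):
        x = (1 - a) * x + a * r * x * (1 - x)
    return x

print(f"full-speed feedback (a=1.0): x = {iterate(1.0):.4f}")   # wanders chaotically
print(f"slowed feedback     (a=0.5): x = {iterate(0.5):.4f}")   # settles at 0.75
```

The fixed point is the same in both cases; slowing the feedback changes its stability, not its location, which is why the damped system finds it.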

Third, the nature of modelling changes. In the old days (yesterday), validity was tested
by building models with data through some date in the past and then using the model to
"forecast" the interval to the present. If there was a match, the model could be believed
and used in forecasting. Now we see that if the system were in a chaotic state, a model
could be almost exactly correct and yet fail to reproduce history; replication of history
would be an impossibly stringent test. Nevertheless, such models are useful because they
can point the way toward stability, establish reasonable ranges of expected operation,
show periodic tendencies, and, if an attractor can be identified, even allow the system to
be "nudged" at the right instant to achieve damping in the chaotic regime.

Fourth, using such models, the analyst can identify the future limits of operation of a
system and set plans to accommodate those limits, saying, in effect, "I don't know
precisely where the system is going, but I do know its limits. I'll set plans that are
effective at the limits."
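
The limits argument can be sketched numerically, again using a chaotic logistic map as the stand-in system (the parameters are invented): the individual steps are unpredictable, but the operating envelope can simply be read off a long run.

```python
# "I don't know precisely where the system is going, but I do know its
# limits": step-to-step values of a chaotic logistic map look random, yet
# the envelope of operation is fixed and recoverable from simulation.

def envelope(r=3.9, x0=0.2, steps=5000, transient=100):
    """Return the (min, max) operating range after transients are discarded."""
    x = x0
    values = []
    for i in range(steps):
        x = r * x * (1 - x)
        if i >= transient:          # ignore the settling-in phase
            values.append(x)
    return min(values), max(values)

lo, hi = envelope()
print(f"plan for operation within [{lo:.3f}, {hi:.3f}]")
```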

Fifth, planners might use the attributes of a chaotic system (rapid response to very small
impetus) to their benefit. In chaos, things happen quickly.

The problem of planning and management of systems operating in the chaotic regime is
a frontier of great importance to our field. It challenges old concepts and, as with any
paradigm shift, opens new opportunities of unprecedented magnitude.

Judgment Heuristics

People often make irrational decisions. They do so for psychological reasons that are
not completely clear. Judgment heuristics is a field that documents some of these
irrationalities (8). One or two examples will suffice to make the point:

Memorable events seem more likely than less memorable events. For example:
which is more likely, suicide or murder? Most people say murder, apparently
because it commands a higher visibility in the press and is, therefore, more
memorable. But, in fact, the opposite is the case.

We tend to ignore base rates in our decisions. In Tversky's example, Sam is a meek,
retiring, helpful, tidy, soft-spoken person. Which occupation is he more likely to
have, salesman or librarian? Most people say librarian, but there are about 100
times more salesmen than librarians, so given only the sparse information in this
example, salesman would have been the better bet.
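
The example can be worked through with Bayes' rule in odds form. The likelihood figure below is invented for the sketch: even if the "meek and tidy" description is ten times as likely for a librarian as for a salesman, the 100-to-1 base rate dominates.

```python
# Tversky's librarian/salesman example via odds-form Bayes' rule.
# Assumed (invented) likelihood ratio: the description is 10x as likely
# for a librarian; assumed base rate: salesmen outnumber librarians 100:1.

def posterior_probability(prior_odds, likelihood_ratio):
    """Posterior probability of 'librarian' from odds-form Bayes' rule."""
    odds = prior_odds * likelihood_ratio
    return odds / (1 + odds)

p = posterior_probability(prior_odds=1 / 100, likelihood_ratio=10)
print(f"P(librarian | description) = {p:.1%}")   # about 9%: salesman remains the better bet
```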

Since futures research has as its primary raison d'être informing policymaking, a better
understanding of the mechanics of decision-making would be useful. This assumption
moves us into the realm of psychology, but so be it. The future, after all, resides in only
one place: in the mind.

The Assumption of Reductionism

There is an implicit assumption in some methods of futures research that reducing a
problem to its elements improves forecasts of the system's behaviour. In agent
modelling, for example, the usual approach of writing equations that describe an
economic market is replaced with assumptions about the behaviour of the individuals
that make up the market. The usual approach deals with the aggregate description of
prices and trade flows; the agent model deals with individual buyers' and sellers'
decisions. We somehow have the feeling that by breaking the problem into its elements
we gain accuracy. The notion is appealing but unproven. Do we know the decision rules
of the buyers and sellers with any more precision than those of the market as a whole?
Perhaps less. We validate such disaggregated models by comparing their output with the
real world and adjusting the rules of behaviour of the agents until there is a match. The
same implicit assumption is made in many other applications.
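
A minimal sketch of the contrast: instead of an aggregate supply-demand equation, a market price below emerges from many simple agents, each with an invented decision rule (buy below a private valuation, sell above it). Every number and rule is illustrative, not a model from the literature.

```python
# A toy agent market. Agents hold private valuations around a "true"
# value of 100; each wants to buy below its valuation and sell above it.
# The price that balances the two sides emerges from individual
# decisions rather than from an aggregate equation.

import random

random.seed(42)
valuations = [random.gauss(100, 10) for _ in range(500)]

def clearing_price(valuations, lo=50.0, hi=150.0, rounds=40):
    """Bisect for the price at which would-be buyers and sellers balance."""
    for _ in range(rounds):
        price = (lo + hi) / 2
        buyers = sum(1 for v in valuations if v > price)
        sellers = len(valuations) - buyers
        if buyers > sellers:
            lo = price          # excess demand pushes the price up
        else:
            hi = price          # excess supply pushes it down
    return (lo + hi) / 2

print(f"emergent price: {clearing_price(valuations):.2f}")
```

The emergent price lands at the median valuation, which is exactly the kind of aggregate regularity the disaggregated approach hopes to recover from individual rules.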

There is a frontier here: since many forecasting problems can be investigated at various
levels of aggregation, which levels are appropriate? As large-scale databases become
available in the future, it will be possible to perform cluster analyses and
multidimensional scaling to identify groups that are similar in behaviour or have similar
attributes. This marriage of epidemiology, statistics, and futures research will be
important and powerful.
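
As a sketch of that cluster-analysis step, the pure-Python k-means below recovers two behavioural groups from fabricated two-feature data; the features, group centres, and initialization are all invented for the example.

```python
# Grouping individuals by similarity of behaviour with a tiny k-means.
# Fabricated data: two groups of 50 individuals, two features each
# (say, purchases per month and hours online), centred at (2, 1) and (8, 6).

import random

random.seed(1)
data = ([(random.gauss(2, 0.5), random.gauss(1, 0.5)) for _ in range(50)] +
        [(random.gauss(8, 0.5), random.gauss(6, 0.5)) for _ in range(50)])

def kmeans(points, iterations=20):
    # Two clusters, seeded deterministically with the first and last points.
    centres = [points[0], points[-1]]
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:
            # Assign each point to the nearest centre (squared distance).
            d = [(p[0] - cx) ** 2 + (p[1] - cy) ** 2 for cx, cy in centres]
            clusters[d.index(min(d))].append(p)
        # Move each centre to the mean of its cluster (keep it if empty).
        centres = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
                   if c else centres[i]
                   for i, c in enumerate(clusters)]
    return sorted(centres)

print(kmeans(data))   # one centre near (2, 1), the other near (8, 6)
```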



III REFERENCES

(1) Godet, Michel, (1993); From Anticipation to Action: A Handbook of Strategic Prospective,
UNESCO Publishing.

(2) Kuhn, Thomas, (1970); The Structure of Scientific Revolutions, University of Chicago Press.

(3) Yorke, James, and Li, Tien-Yien, (1975); "Period Three Implies Chaos," American
Mathematical Monthly, 82.

(4) Mandelbrot, Benoit, (1977); Fractals: Form, Chance, and Dimension, W. H. Freeman and
Co., San Francisco.

(5) Gleick, James, (1987); Chaos: Making a New Science, Viking, New York.

(6) Gordon, Theodore, and Greenspan, David, (1988);"Chaos and Fractals: New Tools For
Technological and Social Forecasting," Technological Forecasting and Social Change, 34, 1-25.

(7) Gordon, Theodore, (1992); "Chaos in Social Systems," Technological Forecasting and
Social Change, 42, 1-15.

(8) Kahneman, Daniel, Slovic, Paul, and Tversky, Amos, eds., (1982); Judgment Under
Uncertainty: Heuristics and Biases, Cambridge University Press.

Presentation 7
Session 1: Methodological Selection
AC/UNU Millennium Project: Integration and comparisons of futures research methods

Epistemological Frame
Blurred boundaries between explorative/qualitative and quantitative/normative forecasting.
Increased efficiency and robustness of forecasting through the integration of diverse forecasting methods.

Methodological Challenge
Development of heuristic models enabling efficient systematisation of foresight methods:
Organisation and Comparison
Integration and Combination
Model 1
Question: How can methods on the x-axis create new and improved uses of methods on the y-axis?
[Matrix figure: Forecasting Methods (x) against Forecasting Methods (y)]
Model 2
Eight areas of use:
Track Changes
Reach an Understanding
Portray Plausible Futures
Determine Course of Action
Understand Linkages
Collect Judgements
Forecast Time Series
Determine System Stability

Example: Scenarios Study (SS)
Increasing SS efficiency: plausibility, internal self-consistency, ability to make the future real, utility in planning, mutual consistency.
[Figure: SS linked to Trend Impact Analysis, Simulation Modeling, Multi-Equation Models, Environmental Scanning, Delphi, and Cross-Impact Analysis]
Enhancing the Models
Question: How can Scenarios (x) be improved by Cross-Impact Analysis (y) in a tele-virtual nano-technology environment (z)?
[Three-dimensional matrix figure: Futures Methods (x), Futures Methods (y), Technologies (z)]