
PART 1: Technology Assessment:

Technology, Society, Sustainability

Editors: dr.ir. K.F. Mulder, ir. J.N. Quist

Section Technology Assessment


Faculty of Technology, Policy and Management
Delft University of Technology

Table of Contents

PART 1: Reader Technology Assessment: Technology, Society, Sustainability

Chapter 1: What is Technology Assessment?
Chapter 2: Technological prediction and classical TA methods
Chapter 3: The formation of new technologies
Chapter 4: Economic Approaches to the formation of new technology
Chapter 5: From "Impact Assessment" to "Managing Technology in Society"
Chapter 6: Why do we need sustainability?

PART 2: Texts on Philosophy of Science and Technology

Text 1: "Scientific Explanations". In: Samir Okasha (2002), Philosophy of Science: A Very Short Introduction, Oxford: Oxford University Press, Chapter 3 (pp. 40-57).
Text 2: "Technical Artefacts" and "Technological Knowledge". In: Pieter Vermaas, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes (2011), A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems, Ft Collins/Princeton/Bonita Springs/Seattle: Morgan & Claypool, Chapters 1 and 4 (pp. 5-20 and 55-66).

TABLE OF CONTENTS

1. What is Technology Assessment?
   1.1 The traditional image of technology
   1.2 Technology forecasting
   1.3 Societal effects of technology
   1.4 Technology Assessment: definitions
   1.5 TA organizations
   1.6 TA, New Style
   1.7 TA, a veritable menagerie

2. Technological prediction and classical TA methods
   2.1 Predictability
   2.2 Methods
       2.2.1 Monitoring
       2.2.2 The Delphi method
       2.2.3 Cross Impact Assessment
       2.2.4 Social-technical maps
       2.2.5 Checklist
       2.2.6 Scenarios
   2.3 Problem: the control dilemma
       2.3.1 Introduction
       2.3.2 New criteria for the design phase
       2.3.3 Elements hindering flexible design

3. The formation of new technologies
   3.1 Introduction
   3.2 Autonomous technology development and technological determinism
   3.3 The social construction of technology
   3.4 Technological development as system development
       3.4.1 The electrical system
       3.4.2 Technological systems
       3.4.3 Phases in system development
       3.4.4 Example

4. Economic approaches to the formation of new technology
   4.1 The neoclassical economic framework
   4.2 Long waves
   4.3 'Push-pull' debate
   4.4 The evolutionary theory
       4.4.1 Regime and trajectory
       4.4.2 Selection environment
       4.4.3 Quasi-evolutionary theory
       4.4.4 In conclusion
   4.5 VHS, MS-DOS, QWERTY, twists of fate
       4.5.1 Increasing returns
       4.5.2 End-game strategy
       4.5.3 Licensing politics
       4.5.4 Ruining the market
       4.5.5 Tremendous interests
       4.5.6 Breaking standards
   4.6 The formation of trajectories: positive feedback

5. From 'Impact Assessment' to 'Managing Technology in Society'
   5.1 Constructive Technology Assessment (CTA)
   5.2 CTA from an evolutionary perspective
   5.3 Network approaches to CTA
   5.4 Learning processes, articulation of demand and strategic niche management
   5.5 Methods
       5.5.1 Manipulation of social networks
       5.5.2 Social experiments
       5.5.3 Social simulation
       5.5.4 Participative TA
       5.5.5 Consumer CTA
       5.5.6 Consensus conference/public debate
   5.6 Backcasting
   5.7 Discussion

6. Why do we need sustainability?
   6.1 Introduction
   6.2 Easter Island
   6.3 Unsustainable societies collapse
   6.4 But how about us?
   6.5 Sustainable development?
   6.6 What does sustainability mean, more concretely?
   6.7 The role of technology: Factor X
   6.8 Questions, discussion and exercises

1. What is Technology Assessment?


Why do things so often go wrong between "technology" and "society"? Are technologists insensitive to social demands, or are protesting citizens insensitive to rational arguments? Or is something more complicated going on? The need for Technology Assessment will be illustrated using a few examples. Technology is developed because it provides an advantage for the people involved. However, for society as a whole the advantages don't always outweigh the drawbacks. Technology Assessment was created to help provide a social judgment of the pros and cons. Definitions of TA from the 1965-1975 period will be treated, with short references to the most important classical TA methods, effects analysis and cost/benefit analysis. Subsequently, the evolution of the concept of TA towards modern Constructive Technology Assessment will be described.


1.1 The traditional image of technology

Many social conflicts stem from the problem of dividing resources: problems in which it must be determined who will benefit from natural resources and from the yields of economic production. The answers to the most important problems of division therefore form the core of many political ideologies. Technology and technological innovation don't play any part therein, or so it is thought. Technology is most often even seen as the tool that increases the bounty to be divided. A common view of technological development:
Although some individuals may be disadvantaged by the introduction of a new
technology, when the advantages and disadvantages of this new technology
are carefully reviewed it will be evident that society as a whole will always
benefit from it.
In other words, this view implies: the development of new technology isn't a 'zero-sum game', socially speaking, but something society benefits from - a 'positive-sum game'. Using this line of reasoning, the content of new technologies was undebatable: you simply couldn't oppose them; technological innovation was a part of the "advancement" of society. Similar lines of reasoning were generally accepted up into the sixties.

    'Zero-sum game': an activity where the sum of the output factors equals the sum of the input factors.
    'Positive-sum game': an activity where the sum of the output factors is greater than the sum of the input factors.

1.2 Technology forecasting

So technology was viewed in the past as a blessing for society, 'Manna from Heaven'. This made research into the social effects of new technology unnecessary. What was interesting to the government and the business sector, however, was to know in which technological areas breakthroughs were imminent which would open up new and interesting commercial or military perspectives. Already before the Second World War, 'technological forecasting' was being done on a fairly large scale. The purpose of this was to chart technological advances on the short or medium term. Technological forecasting was only geared towards exploring the possibilities that were presented by technology itself. The consequences of those possibilities remained undebated, as did the social consequences of any technological innovation. The view of technology which stood behind this was hence a deterministic one: technology is a major driving force changing the world; its creation itself is independent from social developments.
The major think tanks RAND and Hudson, which explored many technologies, almost always interpreted the technological developments foreseen by them as having no problematic effects whatsoever. Yet there clearly were some exceptions. In 1944 Theodore von Karman formulated the report Towards New Horizons, in which he advocated the development of the jet aircraft as a clear technological mission. This mission wasn't only inspired by the expected capabilities of the jet engine, but also by the political necessity to give some new impulses to the US Air Force when the Second World War was drawing to a close.

1.3 Societal effects of technology

In the sixties a surge of criticism against 'technology' occurred. In particular the book Silent Spring by Rachel L. Carson and L. Darling had a big influence on this. Other influencing factors on an international level were the (nuclear) arms race, in which science and technology played a major role, the threat of unemployment and of decreasing privacy posed by computers, and the advent of nuclear power generation with its risk of unprecedented accidents and its unresolved waste problem.
Technological forecasting was met with increasing resistance from the sixties onwards. Forecasting was seen as the ultimate sign of the 'technocratizing' society: planned cities, landscapes, roads etc. Resistance was bred against this from a humanistic and social-democratically oriented perspective. (The extreme left wing did not resist this increased planning tendency, as Marxism had always opted for more rigid state planning.)
The rigidity of a technologically deterministic view in relation to social factors was further emphasized by the apparent fact that technological development carried disadvantages of its own. The 'positive-sum game' wasn't always valid, and that introduced the matter of who should pay the price and who should reap the rewards of technological development. If this were to be known beforehand, it could be used in policymaking. In other words, the social pros and cons of a technological development should be assessed before deciding whether to implement it or not.

    Cost-benefit analysis of technology: an analysis of all effects, both positive and negative (and quantified as well as possible), which will be brought about by the introduction of a new technology in society.

The making of such a cost/benefit analysis turned out to be troublesome in most cases. Also, in cases where the assessment was possible at all, it turned out to be wrong more often than not. This was because the technology under scrutiny often was controversial, hence making it difficult even to reach consensus on the expected effects. Moreover, the social effects of technology were generally unpredictable. If those unpredicted effects turned out to be unwanted too, a major problem would ensue: the pest control compound DDT turned out to accumulate in the food chain, only to end up in the fat tissue of mammals and humans; cars enabled more mobility but, because of their huge numbers, ended up constricting it and damaging the (urban) environment; lead additives in petrol turned out to be harmful to humans and the environment; birth control methods not only provided birth control but also changed sexual morals, which in turn led to increased transfer rates of sexually transmitted diseases; the non-toxic chlorofluorocarbon (CFC) refrigerants turned out to deplete the ozone layer; etc.
Those effects of technology were often hard to predict because they often entailed
higher-order effects. First
^

order effects consist of


First-order effects are a direct result of the
effects, which are a direct
intended use of the technology. Second- and
consequence
of
the
higher-order effects coincide with changes in
intended
use of the
human behavior s a result of the introduction of
technology. Second- and
the new technology.
higher-order
effects
pertain to the changes In human behavior as a consequence of the Introduction of
the new technology. First-order effects are generally easy to predict. Second- and
higher-order effects are much harder to foresee, but can have effects many times
more Important than the primary effects.
An example:
    When the databases of different governmental organizations are coupled, a better degree of control is maintained on the collection of taxes and the execution of regulations. This is a direct or first-order effect. But the balance of power between government and civilians also changes, because civil servants now have more information at their disposal. This can tempt the legislator into refining the laws and complicating them. This can make the civilian feel powerless against the all-seeing, all-knowing governmental organizations and become alienated from politics. These are indirect or higher-order consequences.
Another example:
    The use of information technology in health care has the desired effect of being able to perform better diagnostics on patients, who can hence be treated in a better way (direct effect). More diagnostic tools could increase the reliability and accuracy of a diagnosis by a doctor. But this can make the patient feel solely dependent on technology, the relationship between doctor and patient can change because of this, and the patient can get the feeling of being treated like a 'case'. It is possible that this also influences our thinking on the nature of humans: 'humans are machines, like all the others'.
Unexpected negative consequences of technology often are second- or third-order effects, effects that are a consequence of the change in behavior of the people involved with it. These changes often aren't foreseen because they are the result of vastly different events coinciding in time. Also, the sheer number and diversity of the stakeholders involved makes second-order effects hard to predict.

1.4 Technology Assessment: definitions

The increasing public attention to the negative consequences of technology created in the USA a need for a new kind of assessment, which would warn of unwanted effects of new technology and which would provide possible solutions for controlling the course of development: Technology Assessment.

A.M.J. Kreykamp, H. van Praag, B. van Steenbergen (eds.), 1972, Toekomstonderzoek, theorie en praktijk (1.2.2.), November, Deventer: Kluwer.
Rachel L. Carson, L. Darling, 1962, Silent Spring, Riverside Press, Cambridge, Mass.
Waskow, cited in A.M.J. Kreykamp, H. van Praag, B. van Steenbergen (eds.), 1974, Toekomstonderzoek, theorie en praktijk (supplement 10, 1.5.2.), November, Deventer: Kluwer.


    Technology Assessment is an attempt to establish an early warning system to detect, control, and direct technological changes and developments so as to maximize the public good while minimizing the public risks.
In this definition TA was mainly considered an instrument benefiting the public. However, in alternate definitions the political tight spots which were hidden herein were explicitly left for the policymakers to worry about:
    Technology Assessment is the systematic identification, analysis and evaluation of the potential secondary consequences (whether beneficial or detrimental) of technology in terms of its impacts on social, cultural, political, economic and environmental systems and processes. Technology Assessment is intended to provide a neutral, factual input into the decision-making process.
    ... a class of policy studies, which systematically examine the effects on society that may occur when a technology is introduced, extended or modified. It emphasizes those consequences that are unintended, indirect or delayed.
These definitions bear a strong resemblance to a definition of TA used in the business world:
    The evaluation of the effects of [the application of] a new technology using multidisciplinary research... where the company process [i.e. people and resources], products and services, market and customer base, social aspects, costs and yields are studied in relation to each other.
Emilio Daddario, who submitted the bill for instating the Office of Technology Assessment in the US Congress, defined TA mostly as a policy-making instrument:
    Technology Assessment is a form of policy research, which provides a balanced appraisal to the policymaker. Ideally, it is a system to ask the right questions and obtain correct and timely answers. It identifies policy issues, assesses the impact of alternative courses of action and presents findings. It is a method of analysis that systematically appraises the nature, significance, status, and merit of a technological program.
Although these definitions differ greatly in setting the boundary between TA and political decision-making, they also have a common basis regarding their delimitation from technology forecasting.

Marvin J. Cetron, Lawrence W. Connor, 1972, "A method for planning and assessing technology against relevant national goals in developing countries", in: Marvin J. Cetron, Bodo Bartocha, The Methodology of Technology Assessment, New York.
Vary T. Coates, cited by Ruud Smits, 1984, "De hernieuwde belangstelling voor Technology Assessment", Wetenschap & Samenleving, no. 1, pp. 16-25.
Joseph F. Coates, cited in: Alan L. Porter, Frederick A. Rossini, Stanley R. Carpenter, A.T. Roper, 1980, A Guidebook for Technology Assessment and Impact Analysis, New York/Oxford: North Holland.
M. van Mens, 1989, "Toepassing van Technology Assessment bij de NMB", in: A. Simonse, W. Kerkhoff, A. Rip (eds.), Technology assessment in ondernemingen, Deventer: Kluwer, pp. 54-78.
E.Q. Daddario, in Subcommittee on Science, Research and Development of the Committee on Science and Astronautics, US House of Representatives, 90th Congress, 1st Session, Ser. 1, (Washington DC, US Government Printing Office, Revised August 1968), p. 10.

TA differs from technology forecasting in two ways:
o TA isn't primarily concerned with the development of the technology itself, but with the development of technology in relation to society, to wit the way in which social choices have been made (and will be made) in the development process and the appreciation of the (im)possibilities of this new technology by stakeholder groups.
o It is more or less geared towards pointing out possibilities for influencing technological development, both in the positive and the negative sense, and not towards predicting the future with respect to technology as accurately as possible. Whether TA should impose a norm itself (minimizing public risks) or whether it should fit into the norms imposed by decision makers is still debated.

1.5 TA organizations

Often, governments were developers of technology themselves. Any strategic information with respect to the consequences of technological developments therefore mainly played a part in territorial conflicts between the legislature (parliaments) and the administration (government and its institutions). Information on the societal consequences of technological developments was very important to parliaments because it enabled them to adjust their course on legislation early on. For the government this was often an undesirable course of action, because information which normally had no place in parliamentary decision-making would now get involved in political debates. This is why it was fairly obvious that the parliament holding the strongest position in the technologically most advanced country was the first to found a TA organization: the Office of Technology Assessment, which became a department of the U.S. Congress. In legislation and debate the following requirements for TA became apparent:
1- TA was to be a new type of research, differing from existing research programs within Congress.
2- TA would have to deliver information relevant and important to the decision-making process.
3- TA had to be 'anticipating' and carry out an early warning function for Congress.
4- TA had to identify policy options and show consequences.
5- These consequences had to be evaluated.
6- Social, economic and political consequences of policy options had to be charted.
7- TA had to be objective and unprejudiced.
8- TA had to be managed by individual people and involve the general public in making assessments.
9- TA had to inform the general public on the consequences of government policy.
10- TA wasn't to make policy, but deliver the necessary information for it.
11- TA had to contribute to regaining public trust in Congress decisions.
12- TA had to help in predicting public reactions to Congress decisions, to prevent those from having to be revoked under public pressure afterwards.
13- TA had to contribute to the reasonability and rationality of Congressional debates.
Linked to the founding of the OTA is the name of Congressman Emilio Q. Daddario, who not only managed to get the bill passed, but also became the first OTA director in 1972.

Initially, his efforts weren't particularly successful, because he continually tended to avoid politically controversial subjects so as not to upset members of Congress, on which the OTA depended for its survival. He took one side in the dilemma that every political adviser faces: if you take a stand in a controversy, you make enemies; if you avoid all controversies, you become superfluous. Later on, the OTA (under new management) built up quite a good reputation.

Lewis Gray, 1982, "On 'Complete' OTA Reports", Technological Forecasting and Social Change 22, 3 and 4, pp. 299-319.
Technology Assessment in most European countries (and the EU) only got going in the eighties. There had been initiatives in the seventies, but these failed because in the European parliamentary tradition (where the distinction between the legislature and the executive power is hardly present) TA was often considered a reinforcement of the parliamentary opposition (i.e. the governing party could get all the information it needed from its friends in the administration; a TA institute would therefore help the opposition). This made opposition parties in favor of TA, while ruling parties generally opposed it. TA thus often got a chance to catch on after a change in government. In the Netherlands in 1986 the Rathenau Institute was founded, named after Prof. Rathenau, who led the first major TA research program on the societal consequences of the introduction of microelectronics around 1980.
In Europe the following TA institutes, more or less bound to political decision-making, currently exist: the Institute for Prospective Technological Studies (IPTS, EU institute, Seville), Scientific and Technical Options Assessment (STOA, European Parliament, Luxembourg), Office Parlementaire d'Evaluation des Choix Scientifiques et Technologiques (OPECST, France), ITA (Austria), Vlaams Instituut voor Wetenschappelijk en Technologisch Aspectenonderzoek (Flanders parliament), Büro für Technologiefolgen-Abschätzung beim Deutschen Bundestag (TAB, Germany), Parliamentary Office of Science and Technology (POST, Great Britain), Teknologirådet (Denmark) and several others. For an assessment of Technology Assessment institutes, see Smits.
TA also attained a standing in the business sector. Several causes contributed to this. Forecasting often turned out to be insufficient. Slowly the realization dawned that not all technological progress translated into products that were commercially viable. Often, new technologies were very complex and their subsequent success on the market depended on a lot of factors. For instance, the Concorde turned out to be technically feasible, but the advent of the 'wide-body' aircraft allowed ticket prices to drop, and climbing oil prices spelled disaster for the gas-guzzling Concorde. Furthermore, landing rights were hard to obtain for the craft because of its high noise level. This made the Concorde into a commercial mishap. All of these developments could in fact have been predicted. This was one of the clearest examples that showed the necessity for expanding technological

D. Dickson, 1984, The new politics of science, Pantheon, New York.
For ways of execution in OTA see: Fred B. Wood, 1982, "The Status of Technology Assessment: A view from the congressional Office of Technology Assessment", Technological Forecasting and Social Change 22, 3 & 4, pp. 211-222.
Ruud Smits, Jos Leyten, 1991, Technology Assessment, waakhond of speurhond? Naar een integraal technologiebeleid, Zeist: Kerckebosch.
Ruud Smits, Arie Rip, 1988, "De opkomst van TA in Nederland", Wetenschap & Samenleving 40, no. 5, pp. 7-16.
Maarten Evenblij, 1989, "Technology Assessment", Intermediair 25, nr. 7, February 17th, pp. 33-37.
http://www.jrc.es/home/index.html
http://www.europarl.eu.int/stoa/default_en.htm
http://www.assemblee-nat.fr/documents/index-oecst-gb.asp
http://www.oeaw.ac.at/ita/welcome.htm
http://www.viwta.be/
http://www.bundestag.de/blickpkt/1996/tab2.html
http://www.parliament.uk/parliamentary_offices/post.cfm
http://www.tekno.dk/subpage.php3?page=statisk/uk_about_us.htm&language=uk&toppic=aboutus
http://www.oeaw.ac.at/ita/www.htm
R.E.H.M. Smits, 1990, State of the art of Technology Assessment in Europe, A report to the 2nd European Congress on Technology Assessment, Milan, 14-16 November 1990.

forecasts in such a way that social and economic developments could also play a part.
With TA in businesses it was mainly a matter of charting the societal consequences of company products and of possibilities for developing new products that would meet new demands or could create them. Also the prediction of governmental action, shortage situations in supplies, and demographic developments was important. Besides that, it was often also important to note internal consequences for the company as a result of introducing new technologies: what effects does a certain technology have on the work environment and on the balance of power between different groups within the company?
This form of TA was practiced in almost complete silence. With the 'normal' version of TA the main goal was usually to clarify technological developments to politicians and the public involved, so that they would be able to determine their stance on the subject. With TA in businesses the emphasis was not on involving more people in decision-making through the spreading of information, but on gaining a competitive advantage by being ahead in obtaining new information.

1.6 TA, New Style

About halfway through the eighties the claims which were implicit in the definitions
of TA most widely used turned out to be unattainable: serious doubts arose as to
whether it was even possible in principle to set up a reasonably reliable early
warning system for technological developments. Moreover the performing of a
completely objective TA study (by charting all consequences of a technology)
proved to be impossible. The predicting of societal consequences of technological
developments (including 3rd and 4th order effects) especially proved impossible
because of the sometimes rapidly changing social environment. In this way the oil
crisis invalidated a lot of TA analyses. As a reaction to this the goals of TA studies
were often dramatically reduced in scope. However, this was a political choice and
thus TA had to dismiss the pretence of neutrality (the prediction of all societal
consequences). Objectivity had been a difficult matter from the start, because of
the strongly political environment in which TA was practiced.
Goals more limited in scope for TA arose. One of them was the stimulation of
public debate on technological developments: not all consequences of a new
development could be foreseen, but that wasn't a prerequisite for being able to
publicly debate the consequences that were known. In the American situation a
strong public debate was seen as a means to come to well-thought-out and stable
decisions. Public controversies were admittedly a nuisance to decision makers, but
of high importance to the democratic quality of the decision-making process.
Controversies stimulated a kind of 'informal' TA (because for instance in the
controversy claims involving risks had to be supported more and more solidly) and
hence improved the clarity of the subject among the public. Cambrosio and Limoges were even of the opinion that controversies were a deciding factor in the application of TA studies and in their limitations. They therefore considered public controversies the central element of each TA study.

Harry Jones, Brian C. Twiss, 1978, Forecasting technology for planning decisions, The Macmillan Press Ltd., London/Basingstoke.
A. Simonse, W. Kerkhoff, A. Rip (eds.), 1989, Technology assessment in ondernemingen, Deventer: Kluwer.
J.G. Wissema, 1977, Technology Assessment, aspectenonderzoek in het spanningsveld van technologie en samenleving, Deventer: Kluwer.
See for example: Allan Mazur, 1985, The dynamics of technical controversy, Communications Press, and Arie Rip, 1986, "Controversies as informal technology assessment", in: Knowledge: Creation, Diffusion, Utilization 8, p. 350.
A. Cambrosio, C. Limoges, 1991, "Controversies as governing processes in technology assessment", Technology Analysis and Strategic Management 3, pp. 377-396.


TA not only reduced its pretences. It was often concluded that TA studies generally yielded answers that were already obvious and therefore had limited usefulness in political decision-making. A newer, more active form of TA was needed that would place less emphasis on objectivity and more on its practical use in political decision-making.
Smits and Leyten discerned between traditional TA and a new form of TA. The differences were pictured as follows:

TRADITIONAL | NEW
Science plays the dominant role | Equal interest between researchers and users
High-strung expectations with respect to the abilities of TA | Modest expectations with respect to research results
Output TA = report | Output = report + discussion
Definition of problem is of lesser importance | Problem definition plays a major role
One single TA organization | Many different TA organizations
Instrumental use of information in a rational decision-making process | Conceptual use of information in a political decision-making process
TA results are automatically weighed in with decision-making | High level of attention to the incorporation of TA in the decision-making process
Technology is considered autonomous | Technology is considered man-made and hence steerable

Smits and Leyten arrived upon a new definition of 'new-style TA' that matched the old definition by Daddario in many ways but had less pretence. TA is considered instrumental in Smits and Leyten's view, but is also integrated in the political decision-making process on a high level:
    TA is a process consisting of analyses of technological developments and their consequences and the discussion following from these analyses.
The goal of TA is to provide information that helps people involved in technological developments to determine their strategic policy, and helps to define further subjects for TA research.
The main gripe with this definition is that TA is made completely subject to government policy. In this way, the independent role TA would have had (as an early warning system for undesired developments, stated in earlier definitions) is denied.
It's pretty clear that a definition of TA isn't that easy to put into one sentence. A number of elements have to be included:
o TA focuses on the interaction between technological and societal developments (and so not in a direct fashion but indirectly on the technology-environment connection).
o TA consists of analysis and design. It is therefore rather an engineering discipline than a scientific activity.
o TA needs an ethical perspective, which indicates the relative importance of the norms and values that play a role in relation to technological and societal developments.
o TA only imposes demands of consistency and clear formulation on the specific research question and on the ethical basis that is used within the research for the judgment of consequences. There is no single unchanging ethical perspective that should form the basis of every TA subject.
o The ethical perspective of a TA study doesn't necessarily coincide with the perspective held by the government or with the ethical perspective held by the individual researchers.
o TA attempts to show intervention possibilities based on its analyses.
o TA has (contrary to the history of technology) a perspective geared towards the future.
o TA studies specify the methods used, which makes their results reproducible and verifiable.
o TA researchers have to indicate how they have established the boundaries of their research subject (if not all consequences of a technology are considered, why have only specific ones been studied? How are higher-order effects taken into account which may result from non-considered first-order effects?).

1.7 TA, a veritable menagerie

The execution of a TA study can have various different functions. Smits and Leyten discern the following functions of TA:
1. Reinforcement of the positions of stakeholders in the decision-making process
2. Providing support for the actual policy
3. Initiation and development of a future policy
4. Providing an early warning (especially for negative consequences)
5. Broadening of decision-making concerning stakeholders (involving more people)
6. Development of desirable technological adjustments
7. Promoting the acceptance of technology by the public
8. Promoting the societal responsibility of scientists

R. Smits, J. Leyten, 1988, "Key issues in the institutionalization of TA", Futures, February.
R. Smits, J. Leyten, 1991, Technology Assessment, waakhond of speurhond? Naar een integraal technologiebeleid, Zeist: Kerckebosch.
Ruud Smits, Jos Leyten, 1991, Technology Assessment, waakhond of speurhond? Naar een integraal technologiebeleid, Zeist: Kerckebosch, p. 268.


Depending on the goal, the stage of technological development and the setting in which a TA study is performed, a number of different forms of TA can be distinguished.

    Awareness TA (ATA): forecasting of possible societal consequences of a certain technology to make government or the public aware of opportunities and threats.
    Strategic TA (STA): meant to inform a specific stakeholder on his strategic options with respect to technological possibilities.
    Constructive TA (CTA): geared towards influencing the process of technological change; in this approach it is tried to make technological development conform to social needs and wishes.
    Backcasting: a form of TA where the desired future social situation is defined; the backcasting study then plots a route towards arriving at this situation.

ATA, Awareness TA, also called early warning TA: forecasting of possible societal consequences of a certain technology, to make government or the public aware of opportunities and threats. Forecasting methods, such as Delphi, play an important role herein.
STA, Strategic TA, is meant to inform a specific stakeholder about his strategic options with respect to technological possibilities. It doesn't limit its analysis to stakeholder analysis but analyzes an entire sector wherever possible. It contains:
o Analysis of possibilities/threats
o Goals to which the possibilities can contribute
o Boundary conditions which apply to this case
o Limits within the system
This form of TA is often conducted internally within government and businesses.
CTA, Constructive TA, is geared towards influencing the process of technological change. In this type of approach it is tried to make technological development conform to social needs and wishes. To this end it is sometimes attempted to build up social relations between different parties or to stimulate debate in the media or between more specifically involved parties. CTA thus mainly influences the social aspect of technological development, and doesn't directly define which technologies are socially desirable. Ways to do this are, among others:
o Contribute to the formation of a social infrastructure pertaining to the technology involved;
o Perform experiments and test projects to promote learning effects and debate;
o Gather information on relevant experiments to contribute to learning effects.
Backcasting is a TA study with a very specific goal: a situation is defined which is considered desirable to reach at some specific time in the future. Starting from this situation, a way to reach it is analyzed: which social relations need to be constructed, which technological developments are desired/required, and which ones aren't. The degree to which the defined future situation differs from the present situation determines the number of possible ways to attain this change.
Not every technology can be valued absolutely (non-subjectively): on some issues no agreement can be reached whatsoever. Moreover, there isn't always a dialogue present in interaction processes; social demand and financial strength also play a role.

2. Technological prediction and classical TA methods

In this chapter several methods will be treated with which one can attempt to chart future technological developments and their consequences. The question of how much can be predicted in a certain situation will also be discussed. Finally, the 'control dilemma' will be treated: in the phase when influencing technological development is still possible, we aren't sufficiently (or not at all) aware of the social consequences; when we finally do know these consequences, the technology has become too entrenched to be adjusted.

2.1 Predictability

How can we make sure that predictions (or, in the more general sense, statements predictive in nature) are more dependable than making a bet in the casino? Predictions have been made for millennia. Sometimes they were religious in nature. Their pretence to truth usually wasn't irrational but extra-rational, which means their claim to truth exceeded the boundaries of rationality (oracles, revelations, visions etc.). These predictions invariably had the character of 'fate', in the sense that they would certainly come true and couldn't be influenced by any actors. Although belief in the 'malleability' of society has drastically declined in the past twenty years, no valid reason exists to reduce it to zero. However, the kinds of predictions involving fate are unacceptable to us, for the following reasons:
o Predictions are to have a rational foundation, meaning a plausible insight must be given into the causal relations, including possible feedback loops, on which the prediction is based.
o Predictions are to have a conditional character, meaning they should indicate the conditions under which the consequences (possibly) will occur and hence also possibilities to influence the course of future events.
o Predictions must be useable. Useable in this context means: answering a question with a relevant answer, which is credible and communicable and is given at the right time.
These demands on predictions can also be imposed upon TA studies. Another demand for TA studies is balance. This pertains to the ethical standards relevant to the subject of research - these are to be clearly described. Choices are to be avoided where possible and, where they occur nonetheless, they should be explicitly stated.

2.2 Methods

There is no such thing as a perfect TA method, because the relations on which the prediction is to be based often have an empirical or inductive character (meaning extrapolations from observed relations in the past). It is hence principally uncertain whether these relations will remain valid in the future. It can even be the case that a clearly measured relation between two variables turns out to be merely a coincidence of environmental factors. Therefore a study should preferably adhere to a 'theoretical framework', which indicates under what boundary conditions the empirical relations used are valid. There is no method that leads with utmost certainty to crystal-clear predictions; only methods that lead towards verifiable conclusions.
In TA research and forecasting, mistakes often occur. Common mistakes are:
o Research hypotheses aren't stated explicitly and aren't made credible. This can for instance lead to circular reasoning: a conclusion has been implicitly stated in the research hypothesis.
o Observations are too readily considered undebatable. Every observation depends on the conceptual framework of the observer and can always be debated in principle. Researchers should acquaint themselves with this, especially when it concerns the observation of social phenomena. This holds quite often for the judgments of experts, especially where they are judging technologies that threaten 'their own field'.
In the past a number of methods and resources were used for TA studies. None of these methods is adequate to base an entire study upon; combinations were practically always used and required the addition of literature studies and interviews. These are so commonplace that they won't be discussed extensively in this section. However, some tips are important:
o Never blindly act based upon the opinion of one single expert. Experts have interests of their own, which gives them a specific vision. Also, they sometimes give advice on subjects that they aren't experts in at all.
o The memory of people interviewed is often rather limited. Sometimes they can proclaim fallacies with the utmost certainty. Therefore, always keep a critical eye on statements made in interviews.
o Ask for specialist advice on the literature study. The number of databases is so large that it would be impossible to find all the relevant data within a reasonable time span.

2.2.1 Monitoring

    Monitoring: the continuous observation of all public information on a technological area, consisting of channels like publications, trade fairs, lectures, annual business reports, patents etc.

In monitoring, an eye is continuously kept open for all signals which may play a role in the developments of interest. This includes for instance magazines, subject literature, media coverage, advertisements, annuals, databases etc. Specifically 'gatekeepers', meaning people who receive information from lots of different angles, are of importance. Monitoring a technology in this way is very intensive. A special way of monitoring is the screening of patent files, which can be done with specially developed tools: technological trends, such as a shifting of the focal points of research, can appear from patent files when they are analyzed. These methods form a useful tool for detecting trends, but can hardly say anything about their meaning.
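As an illustration of the patent screening just mentioned, the sketch below simply counts patent filings per classification code per year; a rising count for a class hints at a shifting focal point. The input format, the class codes and the numbers are invented for the example, and a real study would query a patent database rather than an in-memory list.

# Illustrative counting of patent filings per classification code per year; the
# records below are invented, and a real monitoring study would draw them from a
# patent database export instead of an in-memory list.
from collections import Counter

# (year, patent classification code) pairs, e.g. taken from a patent register
filings = [
    (1995, "H01M"), (1995, "F02B"), (1996, "H01M"), (1996, "H01M"),
    (1997, "H01M"), (1997, "B60L"), (1998, "B60L"), (1998, "B60L"),
]

def filings_per_class_per_year(records):
    """Count how often each classification code appears in each year."""
    counts = Counter(records)                      # keys are (year, class) pairs
    years = sorted({year for year, _ in records})
    classes = sorted({cls for _, cls in records})
    return {cls: [counts[(year, cls)] for year in years] for cls in classes}, years

trend, years = filings_per_class_per_year(filings)
for cls, series in trend.items():
    print(cls, dict(zip(years, series)))           # a rising series hints at a shifting focus

As the text notes, such counts only detect trends; interpreting what a shift means still requires the qualitative side of monitoring.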

2.2.2 The Delphi method

The Delphi method is a method with which the expectations of experts on the development of a certain technology can be charted. With a Delphi, a number of experts are asked to answer a series of questions on the development of a specific technology in writing. The response should contain a judgment involving a timescale and an estimate of the probability of future developments. A project leader gathers all responses and poses a new, more specific series of questions to the people interviewed. In this new round the matters on which consensus hasn't been reached are the most important. The participants get to review the arguments and estimations of the other participants anonymously. In an iterative process it is attempted to reach a consensus among the experts. In this way a more solid foundation is gained for a vision of the future. These rounds force the experts into refining and supporting their argumentation. No face-to-face discussions take place; the matter is treated in writing to prevent verbal trickery from being used and to keep status from playing a role. In practice, usually no further convergence of opinions occurred after about 4 rounds. Sometimes a forum discussion was organized between participants because of waning response. The Delphi method is particularly suited to TA research in which the opinion of experts plays a major role, such as awareness TA, where not much is known about the technology yet.

    In a Delphi a number of experts are interviewed in writing on the development of a specific technology. The experts subsequently are presented with the arguments and estimates of the other participants (anonymously). Afterwards they are again interviewed, especially regarding the items they showed disagreement on. Usually a consensus is reached after about 3 rounds.

Methodologically, a Delphi comes with its own pitfalls. How can one prevent a bias occurring because of a lack of response from a particular group, for instance industrial experts? How can it be prevented that experts try to contact each other outside of the study, in the worst case even to agree on the answers they will both give in the interviews, in order to consolidate their own interests (for instance more funding or prestige for their work, or the sweeping under the rug of problems which could lead to unwanted government interference)? Moreover, a Delphi takes a lot of time, both for the analysts and for the people interviewed.
In fact, Delphi in its original form is more of a forecasting method than a TA method, because only the experts are being consulted and the social aspect of the technology is hardly considered. It seems questionable to involve Delphi participants from different interest groups. In a conflict of interests, rational arguments don't play the leading role. Is consensus still attainable in that case? Are the participants still using the same standards?
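As for the aggregation step of the method itself, a minimal sketch of how a project leader might process one written round is given below: per question the median estimate and the spread of the expert answers are computed, and questions with too much disagreement are carried into the next round. The consensus threshold and the use of median and interquartile range are illustrative assumptions; the method described above does not prescribe a particular aggregation rule.

# Minimal aggregation of one written Delphi round: median and interquartile range
# per question, plus a list of questions that still show too much disagreement.
# The 'consensus_iqr' threshold is an assumption made for this illustration.
from statistics import median, quantiles

def summarize_round(responses, consensus_iqr=5.0):
    """responses maps a question to the list of expert estimates (here: years)."""
    summary, next_round = {}, []
    for question, estimates in responses.items():
        q1, _, q3 = quantiles(estimates, n=4)      # quartiles of the expert estimates
        iqr = q3 - q1
        summary[question] = {"median": median(estimates), "iqr": iqr}
        if iqr > consensus_iqr:                    # still too much disagreement
            next_round.append(question)
    return summary, next_round

# Hypothetical first round: estimated year in which a technology reaches the market
round1 = {"fuel cell cars in series production": [2015, 2020, 2030, 2022, 2040, 2025]}
summary, open_questions = summarize_round(round1)
print(summary, open_questions)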

2.2.3 Cross Impact Assessment

A variation on Delphi is the Cross Impact Assessment method, which was first used by the American aluminum company Kaiser, mainly in decisions on the introduction of new products. For the company, 60 developments were identified which might happen at some stage. In constructing these, various experts were involved to chart risks in terms of technology, competitor and client behavior, etc. They weren't only asked to estimate the chance of such a possibility happening, but also what the chance of its occurrence would be if some other event had happened before. The result was a matrix of chances. This matrix could be calculated using a Monte Carlo procedure, which made it easier to assess the risks following from a decision in a certain complex situation.
Using the Cross Impact Method, risk-avoidance measures could also be devised. This method appears to be suitable mainly for strategic TA, because it isn't so much about exploring a new area as about reducing the complexity of a problem. Cross Impact Assessment can in principle also be done using other elements besides the opinions of experts.
Example of a limited events matrix regarding the introduction of the fax machine on the market. Each cell P(i|j) gives the probability of the row event i if the column event j has happened.

Event (initial probability*)         | Increasing    | Negative     | Replacement  | Market
                                     | taxes/costs   | legislation  | technology   | saturation
Increasing costs/taxes on            | 1,00 P(1|1)   | 0,40 P(1|2)  | 0,70 P(1|3)  | 0,51 P(1|4)
transmission (0,70)*                 |               |              |              |
Negative legislation (0,40)*         | 0,20 P(2|1)   | 1,00 P(2|2)  | 0,38 P(2|3)  | 0,31 P(2|4)
Development of replacement           | 0,90 P(3|1)   | 0,72 P(3|2)  | 1,00 P(3|3)  | 0,33 P(3|4)
technology (0,60)*                   |               |              |              |
Market saturation (0,45)*            | 0,33 P(4|1)   | 0,35 P(4|2)  | 0,05 P(4|3)  | 1,00 P(4|4)

* Initial marginal (ceteris paribus) probability.
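To make the Monte Carlo procedure mentioned above concrete, the sketch below repeatedly 'plays out' the four events of the matrix: the events are decided in a random order, and as soon as an event occurs, the probabilities of the events not yet decided are replaced by the corresponding conditional values from the matrix. The simple overwrite rule and the number of runs are assumptions made for the illustration; published cross-impact procedures use more refined adjustment formulas.

# Monte Carlo sketch for the fax-machine cross-impact matrix above. One run decides
# the events in a random order; when an event occurs, the probabilities of the
# events not yet decided are overwritten with the conditional values P(i|j).
# The overwrite rule and the run count are simplifications for illustration.
import random

EVENTS = ["costs/taxes up", "negative legislation", "replacement technology", "market saturation"]
P0 = [0.70, 0.40, 0.60, 0.45]      # initial marginal (ceteris paribus) probabilities
P_COND = [                         # P_COND[i][j] = P(event i | event j has occurred)
    [1.00, 0.40, 0.70, 0.51],
    [0.20, 1.00, 0.38, 0.31],
    [0.90, 0.72, 1.00, 0.33],
    [0.33, 0.35, 0.05, 1.00],
]

def run_once(rng):
    """Play out one scenario and return which events occurred."""
    prob = list(P0)
    occurred = [False] * len(EVENTS)
    order = list(range(len(EVENTS)))
    rng.shuffle(order)
    for pos, i in enumerate(order):
        if rng.random() < prob[i]:
            occurred[i] = True
            for j in order[pos + 1:]:              # adjust events not yet decided
                prob[j] = P_COND[j][i]
    return occurred

def simulate(n_runs=10000, seed=1):
    rng = random.Random(seed)
    counts = [0] * len(EVENTS)
    for _ in range(n_runs):
        for i, happened in enumerate(run_once(rng)):
            counts[i] += happened
    return {name: counts[i] / n_runs for i, name in enumerate(EVENTS)}

for name, p in simulate().items():
    print(f"{name}: {p:.2f}")

Across many runs, the frequency with which each event occurs gives an adjusted probability that already reflects the cross impacts, and joint frequencies of event combinations can be used to compare scenarios.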

2.2.4 Social-technical maps

To be able not only to predict technology but also to give information on social effects, more tools are needed. The views of the groups and organizations which are relevant form the basis of social-technical maps for specific technological developments. Below, a list is given with which such a map can be formed.

    The social-technical map shows:
    - the state of development of a technology,
    - the dynamics in the development of this technology,
    - the different stakeholders involved in this technology,
    - the views and interests the stakeholders have regarding this technology.

0. Bounding of the technical system (will it be a map of the car, the engine or, for instance, the electrical engine?) and of the time frame.
1. Construction of a crude tree showing the hierarchy of technical alternatives and the mechanisms which determine the selection between them, plotted against time, meaning which alternatives are being worked on and what choices are made in the process. Attention is also paid to alternatives that didn't make the grade, but can be developed on separate tracks in other developments.

2. Characterizing the alternatives according to contents (cognitive) and origins (social). Which stakeholders are trying to get which items onto the agenda? In the characterization of contents, attention is also paid to expectations, links between alternative technologies (for instance coupling to base technologies) and any missing knowledge. In the characterization of origins, the focus is also on the relationships between the stakeholders generating alternatives and on the identity of the relevant stakeholders.
3. Does trajectory formation occur (subsequent technologies all based on the same basic design/knowledge base, like C-MOS technology)? Trajectory formation is often linked to expectations regarding technological progress, which results in disregard for alternative technologies. Trajectory forming occurs together with entrenchment (the 'nesting' of technologies in our society). This means that various different actors (legislators, research organizations, or even consumers) get involved in the project and adapt themselves towards this development.
4. What are the (environmental) effects of the different alternatives? Who acknowledged these effects, and when, and how are they taken into account?
5. Are there critical episodes visible in the technological development? How can those fractures be characterized in both a cognitive and a social way? Through which process did the fractures occur? What roles do the different stakeholders play in this? Was it a matter of anticipation by technologists, or of external pressure? How was the social environment affected by the technology? In what way were the technologists put under pressure by their social environment? Were there specific actors linking the technology and its social environment?
6. In the periods where no fractures were present (developing episodes), were there attempts to bring about fractures? By whom, and why did they fail? What difference is there with the critical episodes?
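While the monitoring and interviews for such a map are being carried out, its ingredients can be kept in a simple structured form. The sketch below is only one possible way of recording the elements listed above; the field names and example values are hypothetical and not a format prescribed by this text.

# One possible record structure for a social-technical map; all field names and
# example values are hypothetical, the steps above do not prescribe a format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stakeholder:
    name: str
    views: str              # how the stakeholder views the technology
    interests: str          # what the stakeholder stands to gain or lose

@dataclass
class Alternative:
    name: str
    period: str             # when this alternative was being worked on
    chosen: bool            # did it make the grade, or was it dropped?
    expectations: str = ""  # cognitive characterization (step 2)

@dataclass
class SocialTechnicalMap:
    system_boundary: str    # step 0: e.g. the electrical engine rather than the whole car
    time_frame: str
    state_of_development: str
    dynamics: str
    stakeholders: List[Stakeholder] = field(default_factory=list)
    alternatives: List[Alternative] = field(default_factory=list)
    critical_episodes: List[str] = field(default_factory=list)   # step 5

example = SocialTechnicalMap(
    system_boundary="electric passenger car drivetrain",
    time_frame="1990-2010",
    state_of_development="prototype fleets, early niche markets",
    dynamics="falling battery costs, tightening emission regulation",
    stakeholders=[Stakeholder("city government", "favourable", "air quality")],
    alternatives=[Alternative("lead-acid battery pack", "1990-1998", chosen=False)],
)
print(example.system_boundary, len(example.stakeholders), len(example.alternatives))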

2.2.5 Checklist

The making of a social-technical map often takes a rather large amount of time.
The technology has to be monitored intensely and a large number of interviews are
needed to chart the positions of the different stakeholders. Often a quicker check is
wished for, to see if there should be cause for concern and to come up with
subjects for further research. The checklist presented here consists of three parts
(unequal in weight):
o research and development work
o product
o production process
'Product' in this case means that which is the intended final result (and usually is
sold), 'production process' means the steps taken to produce this product and
'technology' signifies the product, production process and knowledge incorporated
into these.
A: How acceptable is the research and development work that accompanies
the formation of the new product and its production process?
1) Do counteracting social forces exist against the methods used in the research
and/or development work or against the collection and storage of certain
data?
2) Is the research and development work scientifically interesting or does the
development of this technology provide a special contribution to (a) technical
and/or scientific discipline(s)?


3) Can it be foreseen that investing in the development of a technology at this


moment could prevent a better alternative from being developed in the
future?
A1) These points can be considered in this case:
o Safety of the research for researchers and people living in the neighborhood, regarding for instance the release of poisonous substances, manipulated organisms, radiation, etc.;
o Abuse of test animals or test subjects (people);
o Possible forms of abuse of research data for socially controversial ends such as weapons of mass destruction, gaining access to private information, race-based discrimination, etc.;
o Mistrust towards the researchers who are carrying out the research (a so-called Faustus, Jekyll, Strangelove or Buikenhuisen complex).
A2) Regardless of its eventual results, a research or development project can sometimes provide a powerful stimulus to other technologies and disciplines. Many examples are known. For instance, the construction of the closeable Oosterschelde dam in the Netherlands yielded new knowledge and technology which could be used for other goals. This effect was also seen with the SDI project. When they stem from military technology, these effects are usually called 'spin-off effects'.
A3) This problem occurs quite often: investing in an improved waste-incineration process that prevents the formation of dioxins makes developing a more environmentally friendly alternative to PVC later on less attractive. Investing in a more efficient petrol engine means that the development of the electric car will suffer as a result.
B: How acceptable is the new product in itself?
Ethically:
1) Are any social values connected to the product in itself, or the product that it
replaces?
2) Is the product considered unacceptable in the ethical system of specific religious
or cultural factions?
B1) People don't judge products based solely on their own interests; they also rate
the product in terms of their social opinions. Products can expect to receive a
certain amount of positive appreciation when they can be coupled to social developments or changes which are considered to be positive. In this way many
people in the neighborhood of Medemblik in the Netherlands held a positive
attitude towards the construction of a new 1 MW windmill because they expected a
positive environmental contribution from it.
A similar coupling is suggested for the appreciation of 'atomic electricity'. Here, two
conflicting social values were in play: economic growth and safety/health. Analyses
show that many of the differences surrounding the use of nuclear energy
corresponded to views held with respect to these social values.
A negative appreciation could for instance exist for products which can be related
to negatively valued social phenomena such as:
o Unemployment (this was an issue when the computer became hugely
popular);
o Forms of opulence and waste (probably the reason why Philips' electric toothbrush failed in the Netherlands in 1973, and for their subsequent decision not to introduce an electric corkscrew);
o Animal cruelty (fur coats);


o Usurping of traditional culture (replacement of windmills by engine-driven pumps in the nineteen-twenties and thirties).
B2) This particularly pertains to protests made by religious factions, for instance against certain kinds of foodstuffs, preparation methods, contraceptives, medical treatments and animal products. Such protests don't necessarily have an influence on acceptance by the majority of the population (consider for instance the Roman Catholic protests against the birth control pill).
Social acceptance
3) Will the costs of the new product be likely to attract criticism?
4) What effects on the environment will the product have? Which changes in
behavior will the product cause, and what environmental effects result from
that?
5) What are the risks the use of the product entails, both for the user and others?
6) Does the use of the product clash with habitual behavioral patterns of large
groups of people?
7) Are there financial or psychological barriers which hinder acceptance of the
product?
B3) The cost is partly a technical/economic boundary condition. For some
products it can be economically acceptable to have a higher selling price than the
one they currently have (for a product of equal value) because for instance legal
measures can be taken to limit competition, or because competition is hardly
possible at all. However, not every price that can be set market-wise (especially
concerning products that can be monopolized) is also socially acceptable. Often
social values play a role in this, such as in the debates on the price of university
and school fees.
B4) Readily visible environmental effects stem from the use of energy by the
product (and the change in it compared to that of an existing product). However,
change in behavior is also important: consider an increase/decrease in car use,
etc.
B5) Items here include damage to health, economic damage and psychological damage as a result of the improper functioning of the product. In this case the appreciation of risks differs: voluntarily taken risks are much more acceptable to people than risks imposed on them by others. See for an example the famous study by Nader (1965) about the lack of safety in cars.
B6) Here, the change of commonly accepted behavior is the issue, for instance in
the case of the introduction of bio-bins and glass containers. In both these cases
this went fairly smoothly because people were motivated for a higher value
(concern for the environment). The Unilever product 'Dentabs' failed, however, because it infringed upon the deep-rooted habit of tooth brushing which had been instilled in the Dutch population by way of commercials. Brown milk bottles (which
were less transparent to light) failed because consumers couldn't see whether the
bottles were clean anymore. Also, a new product can have 'hidden' deficiencies
that only show up later, after prolonged use (e.g. many people find it uncomfortable
to read long texts from a microfilm or a screen). Common habits are hard to
change, even if a change would be beneficial.


B7) New products can sometimes offer advantages but nevertheless have difficulty being accepted because the customer's inhibition towards using them is too great. This inhibition can consist of a course required to get acquainted with the use of the product (typing courses and new software spring to mind), the necessity to purchase special equipment before the product can be used, or psychological barriers. This was in a sense the issue with the new aramid-reinforced car tires, which had to compete against the 'Stand on Steel' campaign by Michelin when they were introduced in the seventies.
Secondary social effects
8) Does the product permit new (economic or otherwise) activities? How should
these activities be judged?
9) Does the product threaten existing activities, which hold a certain social or
cultural value?
10) Does the product influence the social structure (private life, local community, cultural region)?
11) Does the product have any other (possible) uses than the one primarily
intended for it?
B8) New products can sometimes lead to a host of new possibilities which promote the acceptance of the product. Copiers not only replaced carbon paper but also greatly increased the amount of copying done - something IBM hadn't anticipated. In this way it is to be expected that new and improved products not only replace existing ones but also introduce increased functionality and with it a broader acceptance.
B9) A new product can lead to a decrease in demand for other products. This can cause the market for those products to become too small. Sometimes these products are considered too valuable to disappear: think of reduced theater visits as a consequence of the use of television, or a reduction in the use of public transport as a consequence of increased car use. The community considered these products (theaters, public transport) so important that they were often subsidized.
B10) Many products exert an influence on the way people live together and communicate. Where once the latest rumors were exchanged at the pump in the village square, there now is the local cable TV network. This means for instance the introduction of a partial news monopoly and the loss of a part of the local social structure. The television also meant an attack on family life. Also consider 'computer widows', scientists who email daily with their colleagues, commuting, etc.
Often these drawbacks of a new technology pose no problem for its acceptance: the decision to accept is often individual (as are the advantages); the drawbacks (the decay of social communities) are usually collective.
B11) Base products especially often have many possible uses. Polyethylene, for instance, was originally developed as an insulator for submarine cables; this application later formed only a fraction of its total use.

C: How acceptable is the production of the new product?


In itself:


1) Are any ethical standards and/or values threatened in production?


2) Are the working conditions in the production process acceptable?
C1) Consider for instance the (mis)use of animals or people, and religiously inspired protests against the use of holy ground or the violation of rest on religious celebration days. Also to be considered are forms of resistance against production processes that mainly stem from negative associations and emotional responses. This plays a part in, for instance, food conservation processes in which radiation plays a role. Next to very concrete arguments on the effect radiation has on food, ethical values on the value of life, creation and future generations also play a role in this case. Ethical standards and values also crop up in the acceptance of food which has been produced by means of genetically modified organisms. Cheese which is produced using chymosin obtained through genetic modification is identical to cheese produced using rennet obtained from calves' stomachs. Yet the 'Gist-Brocades' company is unable to sell this chymosin because (mainly German) consumers don't wish to eat cheese produced in this way.
C2) Here attention has to be paid to both the physical and the psychological work environment: workload, safety, stress, exposure to hazardous substances, continuous shifts, and the possibilities for employees to be involved in the organization.
Local environment:

3) Which are the physical effects of the production facility on the environment?
4) Which are the expected (primary and secondary) effects of the production on
employment? What level of schooling is required for personnel?
5) What other consequences does the product have for the local environment?
6) What are the social implications of the production for the local community?
7) Does a potent breeding ground for local activism exist?
8) Can choosing a suitable location drastically reduce negative effects?
C3) This concerns environmental impacts which don't play a big role in the overall environmental picture but can lead to severe problems locally, such as the emission of polluting substances into the air (mainly the 'stench' factor in this case), surface water or soil, a local garbage disposal problem, noise, use of precious space, 'horizon pollution', interference with electromagnetic signals, depletion of ground water, lack of safety (think of sabotage too), and disruption of animal life.
C4) There are employment effects directly related to production (both new and disappearing employment) as well as indirectly related effects. Examples of indirect effects include: farmers losing their land, the building
contractor building houses for his workers, a chips stand in front of the factory
gates, etc. New production activities can also draw other industries into the area.
The level of education of the personnel needed often is very important regarding
the possibilities of local employment and the migrations caused because of it.
C5) Here secondary effects come into play, like transport and traffic risks as a result of supplies and distribution at the production location, but also possible cooperative use of the infrastructure built for the production facility. This cooperative use can lead to an improvement of the traffic situation or to better public facilities. Also, waste products (like waste heat) are sometimes usable by the local community. Normally these effects can't be very clearly identified at an early stage.
C6) Here one can think of the consequences of migration and industrialization for the local culture. Does the local community have an 'open' culture? The existence of a local industrial or trading tradition to which the new activity can add is important here. Also, the effects of migration on the local housing market can be of importance. Other local consequences can be found in the area of public amenities and local taxes.
C7) Local acceptance of nuclear power plants in the American situation was linked to certain socio-economic and political characteristics of the area. In particular, already existing environmental lobbying activity appeared to be directly linked to the strength of opposition against nuclear power plants, while a higher living standard (high average income, few people on welfare) appeared to be inversely proportional to it.
C8) Some of the negative effects of production will be negligible when the
production takes place within for instance a large industrial area. A major urban
agglomeration also offers different possibilities from a rural area. On the other
hand some rural areas may offer the best environment for new production.
Society:
9) Which (either existing or planned) economic activities are threatened by
production?
10) Is the existing balance of power influenced by new production? Consider the
following relations:
1- between employees (or unions) and employers;
2- between different producers;
3- between producers, clients and suppliers;
4- between government and industry branch;
5- between different governmental institutions.
11) What does the new technology mean for the development of third-world countries? Are relations between global trade blocs influenced by the production?
C9) Items to consider here are unemployment, destruction of capital (both private
and public) in this and other sectors. Reduction of employment opportunities
and/or investments can lead to powerful protests when it concerns groups that are
well organized. When current employment opportunities can be maintained and the
destruction of capital can be avoided, these protests can be soothed.
C10) Technologies can sometimes radically change the balance of social power. This is also related to C4. The position of employees or the union can be undermined when tightly organized professions become obsolete (as was feared in the graphics sector), the government gains power over citizens (and often also over lower-level governments) by the linking of databases, and a company can sometimes reinforce its position with respect to its suppliers or clients. This can lead to resistance within the groups losing their position of power. Infamous is the resistance of British unions against some technological changes (for instance in the mining industry).
C11) New technology can spell the death-knell for the development of regions in
the Third World. A more economical use of resources born of environmental
considerations (or a replacement by other resources such as was the case with
phosphates in detergents) can often lead to a dramatic reduction in resource


exports from the Third World. New technology can also have consequences in
trade politics.

2.2.6 Scenarios

In a scenario a possible future is portrayed for an organization, nation, discipline or technology. The scenario should be credible and tantalizing as a possible development and should therefore be consistent and sufficiently detailed. The goal of a scenario usually isn't to make plans for future events, but to stimulate creative thinking about the future.

Scenarios are often mentioned as one method. Constructing a scenario, however, can hardly be called a method of forecasting, because the future data that scenarios present are produced by other means. Scenarios would be better described as a way of presenting technology forecasting than as a method. In a scenario it is attempted to plot
the choices, or alternative events expected and to translate the consequences of a
choice or event to later choices or events (a choice often involves the elimination of
a later possibility). There are always multiple scenarios possible. The
determination of a limited number of scenarios which are more likely to occur than
others is to be done using a different method, as should the analysis of which
choices would eliminate each other. The most important goal of scenarios however
is not to predict, but to 'wake people up' and make them aware of possible
changes:
During stable times, the mental model of a successful decision maker and unfolding reality match... In times of rapid change and increased complexity, however, the manager's mental model becomes a dangerously mixed bag: rich detail and understanding can coexist with dubious assumptions and illusory projections (Wack, 1985).
Trend scenarios show developments that are in line with our current ideas. They are also called 'surprise-free scenarios' because they do not incorporate any sudden and unexpected events. The scenarios are normally shown as surrounding a most probable scenario (which often represents 'business as usual'). Scenarios often make complex problems clearer to policymakers. Shell often works with three distinct scenarios: 'business as it used to be', 'frustration and conflict', and 'realism and restraint'.
Framework-determining scenarios are meant to show possibilities for reaching an ultimate situation that is not an extrapolation of current developments (for instance the trend-breaking scenario for traffic and transport of 1988).
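Because a scenario links choices and events to later choices and events, a set of scenarios can be pictured as the paths through a branching tree. The sketch below is only a hypothetical illustration of that idea in Python; the branch names and the tree itself are invented for the example and are not taken from the Shell or traffic studies mentioned above.

# Illustrative sketch: scenarios as paths through a tree of choices and events (invented data).
SCENARIO_TREE = {
    "energy demand keeps growing": {
        "invest in large-scale nuclear": {"strict safety regime": {}, "strong public resistance": {}},
        "invest in renewables": {"grid adapted in time": {}, "grid becomes a bottleneck": {}},
    },
    "energy demand stabilizes": {
        "efficiency policies continue": {},
    },
}

def enumerate_scenarios(tree, path=()):
    """List every root-to-leaf path; each path is one internally consistent scenario."""
    if not tree:
        return [path]
    scenarios = []
    for event, subtree in tree.items():
        scenarios.extend(enumerate_scenarios(subtree, path + (event,)))
    return scenarios

for scenario in enumerate_scenarios(SCENARIO_TREE):
    print(" -> ".join(scenario))

As the text stresses, such an enumeration predicts nothing; selecting the more plausible branches and spotting mutually exclusive choices still has to be done by other means.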
Besides these methods a great number of other tools and methods exist to make
more quantitative predictions. They are treated more extensively in the course
'Technology and the future' (WM0908).

Pierre Wack, 1985, "Scenarios: Uncharted Waters Ahead", Harvard Business Review, September-October.
P. Rademaker, 1981, "Toekomstverkenning in het bedrijfsleven". In: Van Doorn/Van Vught, pp. 170-189.
Joseph van Doorn, Frans van Vught, 1978, Forecasting, methoden en technieken van toekomstonderzoek, Van Gorcum, Assen/Amsterdam.
Th.J.H. Schoemaker, H.C. van Evert, M.G. van den Heuvel, 1988, Trendbreukscenario vervoer en verkeer, TU Delft.
P.M. Peeters, 1988, Schoon op weg, naar een trendbreuk in het personenverkeer, Amsterdam, Milieudefensie.


2.3 Problem: the control dilemma

2.3.1 Introduction

In his book 'The Social Control of Technology', David Collingridge treats the problem of knowledge (insufficient, inaccurate knowledge) connected to the development of technologies. According to Collingridge the developers of technologies are caught in a 'control dilemma':
... attempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development; but by the time these consequences are apparent, control has become costly and slow. (p. 19)
Control dilemma: The development of a technology can be steered quite well in its early stages; at that time, however, the knowledge to determine why, where and how to adjust is lacking. When a technology is widespread in society we finally know all its consequences. Control and steering of that technology, however, has become quite difficult by that time.

One of two major ways to solve this problem is by attempting to gain insight, in the design phase, into side effects occurring later on. This is, according to Collingridge, a dead end: historical examples show that the prediction of side effects is doomed to fail. Collingridge hence proposes an alternative way, which gears itself towards the design phase and attempts to integrate the realization that decisions have to be made based on (partial) ignorance.

2.3.2 New criteria for the design phase

Concretely, Collingridge proposes to bear in mind the following aspects in the development of new technologies.
a. Correctability of decisions
Starting from the knowledge that negative effects will keep appearing, decisions in the design phase of technologies have to be constructed in such a way that it is possible to correct them afterwards. A decision made under ignorance should be combined with a forward-looking notion of responsibility and a willingness to revoke the original decision should it turn out to be wrong.
Every decision should be accompanied by a pro-active monitoring process to investigate the possible 'wrongness' of the decision. Also, decisions should be investigated for their monitor response time - the minimum time-span between the time of decision and the time at which its possible error can be recognized.
Collingridge pleads for the development of technologies in which the monitoring in the design phase is cheap and the monitor response time is as short as possible.
b. Control of systems
In case of ignorance, investments should be made in systems with a high degree of controllability. The ideal case is a technology with a high degree of controllability which yields low costs in case of errors, meaning that the costs of dysfunctions stay low and that they can be dealt with in a timely manner. However, there always remains a degree of tension between the degree of controllability (a design decision) and the costs of control (economic factors).

David Collingridge, 1980, The Social Control of Technology, Pinter Publishing.


c. Flexibility
The relative number of options to which a designer has access in a decision can be defined as the 'flexibility' of that decision. The flexibility of a technical design or system is inversely proportional to the control costs and the time needed to perform corrective measures. The greater the degree of ignorance, the more one should strive for decisions with a high degree of flexibility.
One of the possibilities at company or government level to reinforce this flexibility is not to put all eggs in one basket, but to develop and stimulate multiple different designs and/or technologies at the same time. In this way, agricultural development shouldn't only be pursued in the direction of genetically modified crops, but at the same rate in integrated farming and biological agricultural techniques. If transgenic crops turn out to be more expensive than previously thought (for instance because of environmental damage), a quicker transition can then be made to a different technological approach which doesn't have to be researched from scratch.
d. Insensitivity to errors
Choosing decisions that are highly correctable or flexible, or investing in a system that is easily controlled, also means choosing decisions that are relatively insensitive to errors or mistakes. In this last case the costs of a wrongful decision (which can be quickly corrected) are lower and don't differ greatly from the costs made in case of a 'correct' decision.
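Collingridge's criteria can be made somewhat more tangible by scoring design options on monitor response time, control costs and the number of alternatives kept open. The sketch below is only an illustration under invented numbers; the option names, the scoring and the simple inverse-proportionality formula are assumptions for the example, not part of Collingridge's own method.

# Illustrative comparison of design options on Collingridge-style criteria (invented numbers).
options = {
    # option: (monitor response time in years, control cost, alternatives kept open)
    "single large plant":      (10, 9, 1),
    "several small units":     (2, 4, 3),
    "parallel pilot projects": (1, 3, 5),
}

def flexibility(control_cost, correction_time):
    # The text states that flexibility is inversely proportional to control costs and correction time.
    return 1.0 / (control_cost * correction_time)

for name, (response_time, control_cost, kept_open) in options.items():
    score = flexibility(control_cost, response_time) * kept_open
    print(f"{name}: response time {response_time} y, control cost {control_cost}, "
          f"flexibility score {score:.3f}")

Under these assumptions the smaller, quickly correctable options come out as the more flexible ones, which is exactly the trade-off against economies of scale that Collingridge describes under 'Scale' in the next section.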

2.3.3 Elements hindering flexible design

Collingridge fully realizes that implementing the criteria mentioned above in the design phase isn't easy. He discerns five reasons why technological developments are so hard to control:
1. Entrenchment
Entrenchment is the phenomenon that technological products and processes are hard to change, because they are carefully attuned to other technologies and/or organizations, and a lot has been invested in them (financially or by means of education).
The introduction of a new technology also has consequences for existing technologies, which have to adapt (e.g. because the new technology is cheaper). If the newly introduced technology has to be amended, these other technologies also have to be
adapted. This makes control difficult, slow and expensive. Entrenchment has a lot to do with the interconnections and dependencies of technologies in high-tech societies. New technologies complement other, older technologies (for instance in a transportation system) and change the structure of the system. The eventual transition from classic to electric-powered cars would require a huge adaptation of other technologies in the 'car transportation system' (oil refineries, electrical power plants, battery producers, transition of petrol stations to 'charging stations', etc.). But even the transition from leaded to unleaded petrol can't be realized by merely developing engines which run on unleaded petrol. The production machinery for the old engines (using leaded petrol) has to keep being used until it is written off economically, the petrol stations have to be amended, etc.
Entrenchment means that the time required to pass on structural changes is fairly long in all cases. At the same time it has consequences for the debate on the changing of technical systems: it gives the proponents of the status quo an unfair advantage (because changes always cost money). Entrenchment plays the biggest part in technologies which are 'invariable' and 'very valuable'. The value


can for instance be determined by calculating the costs caused by failure of the
system (take for example the failing of the electrical power supply).

2. Competition
Technologies are almost always developed in a competitive context, often in the economic sense. This reduces the flexibility of decisions made about these technologies. Companies can't afford to lose 'the race' and will therefore be inclined to follow the technological developments of other companies.
Competition against the technologies of other companies forces them to make quick decisions. A company may also develop a technology merely to 'be ready' for when another company decides to market the same technology, even when it has no special need for this new technology or when the investment costs are quite (or even too) high. In this way competition forces companies to invest more in the development of new technologies.
3. Positive feedback
4. 'Lead time'
Technologies requiring a long development time ('lead time') are difficult to control, even without there being economic competition involved.
A characteristic of technologies involving long development times is a long-running learning experience, with all the errors it entails. Only after a technology has gone through the entire development phase do any mistakes made become apparent. At that stage it is too costly to still adjust the design or discard it. A clear example of this is the nuclear breeder reactor at Kalkar (Germany), which never operated but cost billions of euros.
An efficient prospective technology policy is only possible when knowledge exists about the societal costs of certain technologies (possible to deduce from the costs in case of system breakdown, for instance) and a clear vision of the future is present. Both of these requirements are problematic. Firstly, the costs only become apparent when the technology is implemented and integrated into the web of other technologies and social patterns. Secondly, a clear image of the future is always hard to obtain: unexpected events (market price fluctuations, e.g. in the oil crisis, or accidents like Chernobyl) can have a great influence on the future course of events.
5. Scale
The large scale of certain technical systems (for instance the 'car transportation system' or the electricity grid) is a major barrier working against any attempt to change them or replace them with another system.
Collingridge observes a tension between smaller production units (which provide options for control and steering of the system) and large-scale production units, which often tend to be more economical. Choosing lower steering costs (smaller production units) often means operating at higher economic costs, which isn't always possible in a competitive environment:
... a production system of small units with short lead-time is more controllable and more flexible than a system of large units of long lead-time. ... Control costs, the costs of buying and operating new units and scrapping old ones, will, however, generally favor large units because of economies of scale. This is to be expected, as flexibility usually has to be paid for in some way.

3. The formation of new technologies

What consequences does a new technology have for society, and what consequences does society have for technology? In this section an overview of approaches is presented with which the relation between technology and society can be conceptualized. Using these approaches it will be shown why technologies are often so resistant to social wishes to change them, and what the interfacing points can be for the social influencing of technological developments.

3.1 Introduction

We can ask ourselves the question of how the process of technological


development works exactly. Knowledge of this development process and the
different relevant factors in it may give us a better insight in the possibilities of
influencing the development of technology in the future. This can help an engineer
better understand his own position. It can also be useful when we want to steer a
certain course in developing technology, enabling us to reach specific goals or to
avoid negative effects.
Within several scientific disciplines there is an interest in innovations and technological development. After the Second World War these interests had a strongly
economic orientation. This stemmed from the problems economists encountered
when attempting to explain the post-war economic growth: it wasn't explicable
using macroeconomics alone. The economic influence of technology was merely
taken into account in the form of a constant. Technological development, however,
turned out to be an essential factor in economic growth.
The relations between fundamental research, development and technical
implementation took a central role in new models devised by economists. Inventors
were only interesting to them if they marketed their inventions themselves. The
focus was more on the businessman rather than the inventor. The division
between invention and innovation also stems from this. An invention only became
an innovation after it was marketed. This fit into a tradition inspired by J.A.
Schumpeter, an economist from the first half of the twentieth century. Inventorcum-businessmen like Thomas Edison and Alexander Bell became very famous
for that reason.
This type of innovation study led to a fairly linear view of technological development, in which the focus was drawn to the great successes. Figure 3.1 is an example of such a view.

Basic Research → Applied Research → Technological Development → Product Development → ...

Figure 3.1 A six-phase model of an innovation process


Similar models came under heavy criticism starting in the seventies: they leave out of sight which choices are continuously being made and which factors play a role in them. This is why researchers tried to form a more expansive framework encompassing innovations, inventors and businesspeople, in which the influence society has on the realization of technological developments is given a lot of attention. An important impulse here comes from the history of technology. This field has a long tradition of describing inventions, instruments or technical systems. Globally, the history of technology has centered itself around both an internal and an external approach. The internal approach often concerns itself with the precise description of the development of a specific machine, for example the steam engine.
Internal approach towards the history of technology: technology possesses dynamics of its own. Influences from the social environment merely cause ripples in the continuous flow of technological development.
External approach: technological change is completely driven by external forces. Every 'autonomous' development is an illusion, as the stability is caused by stable social conditions.

The external approach places the accent on the influence of the social environment on technology. The development of technology is a resulting variable in this approach, which depends on factors in the social environment.
To get a hold on the process of technological development, various theories and models have been developed. Economic and sociological approaches were for a long time characterized by a black box approach: economists and sociologists showed the tendency to regard the technology itself as a given, so that the choices made within the technological development process stayed out of the picture. Historians did try to expose these choices - they entered the black box - but themselves paid little attention to the influence of the environment on technological development. Theories that were developed aimed at explaining technological change as an autonomous process. More recently, visions have been developed which attempt to incorporate social factors into the analysis of technological development. One of these is the so-called SCOT model, which is discussed in section 3.3.
3.2 Autonomous technology development and technological determinism

In this chapter we will deal with the question of what drives technological change. A view that is often implicit in popular media is that technological development is autonomous, i.e. that technological change is not influenced by social (economic, social and legal) powers. 'The progress of technology cannot be halted', or 'If Einstein had not invented the general theory of relativity, someone else would have done it'. The core of this way of reasoning is the assumption that science is only growing, and that technologies therefore only become 'better'. Moreover, improved technologies help to improve other technologies. Hence, technological development can only continue. The state of society is determined by technology.


One of the best-known philosophers who approached technology from a deterministic (or even fatalistic) viewpoint was Jacques Ellul. Ellul's constant theme in all his publications is the imminent 'technological tyranny over mankind'. Ellul creates a sharp divide between classic (Medieval) technology and modern technology. Traditional technology was, according to him:
o Limited in its application (because technology had been made for specific functions in a specific place);
o Only marginally dependent on resources and especially dependent on craftsmanship;
o Local in its character (because local circumstances are used, and local culture has to be taken into account).
The result was that classic technology allowed the possibility of choice, that is to say individuals and local communities could to a large extent determine the shape of the technology that they applied.
Contrasting with traditional technology, Ellul characterized modern technology through:
o Automatism, i.e. there is only one 'best' way to solve a particular problem, which is compelling wherever you are on this planet;
o Self-replication, i.e. new technology strengthens the growth of other technologies. The result is exponential growth;
o Indivisibility: in order to participate in modern society, the technological lifestyle must be accepted completely, with its good and bad sides;
o Cohesion, i.e. technologies from different areas have much in common;
o Universalism, i.e. technology is geographically as well as qualitatively omnipresent.

For Ellul this meant that modern technology is devastating for human freedom. In his view, the future of mankind is extremely gloomy, for there is no way back.
Ellul's arguments can partly be recognized in the so-called Unabomber Manifesto. Unabomber, the pseudonym of the former Californian mathematics professor Kaczynski, committed attacks on research institutions and airlines in the eighties and early nineties. A characteristic line of reasoning in his manifesto is paragraph 127:
"127. A technological advance that appears not to threaten freedom often
turns out to threaten it very seriously later on. For example, consider
motorized transport. A walking man formerly could go where he pleased,
go at his own pace without observing any traffic regulations, and was
independent of technological support-systems. When motor vehicles were
introduced they appeared to increase man's freedom. They took no
freedom away from the walking man, no one had to have an automobile if
he didn't want one, and anyone who did choose to buy an automobile
could travel much faster than the walking man. But the introduction of
motorized transport soon changed society in such a way as to restrict
greatly man's freedom of locomotion.
When automobiles became numerous, it became necessary to regulate their use extensively. In a car, especially in densely populated areas, one cannot just go where one likes at one's own pace; one's movement is governed by the flow of traffic and by various traffic laws. One is tied down by various obligations: license requirements, driver test, renewing registration, insurance, maintenance required for safety, monthly payments on purchase price. Moreover, the use of motorized transport is no longer optional. Since the introduction of motorized transport the arrangement of our cities has changed in such a way that the majority of people no longer live within walking distance of their place of employment, shopping areas and recreational opportunities, so that they HAVE TO depend on the automobile for transportation. Or else they must use public transportation, in which case they have even less control over their own movement than when driving a car. Even the walker's freedom is now greatly restricted. In the city he continually has to


stop and wait for traffic lights that are designed mainly to serve auto traffic. In the country, motor traffic makes it dangerous and unpleasant to walk along the highway. (Note the important point we have illustrated with the case of motorized transport: When a new item of technology is introduced as an option that an individual can accept or not as he chooses, it does not necessarily REMAIN optional. In many cases the new technology changes society in such a way that people eventually find themselves FORCED to use it.)"


Besides these fatalistic views, there are also very optimistic technological determinist views. Especially a number of futurologists propagate bright images of future technologies. Unimaginable speeds of transport, the conquest of space as the 'final frontier', living on the ocean floor or on Mars: it can all be done. Whether society really needs these techniques is of no concern. It is imagined as the inevitable 'progress'.
The technological deterministic worldview is dubious:
o It supposes one-way traffic between science, technology and society. Technology is seen as the product of scientific growth and technological self-replication. Historically, however, this is incorrect: technology often precedes the formulation of underlying scientific principles. This holds for instance for the steam engine, which was already in use for a century before Sadi Carnot formulated the Carnot cycle in 1824. The Carnot cycle explains the transformation of heat into work. The first airplane flew in 1903, but the principles of flying by wings could only be understood through the work of Prandtl around 1920.
o Historical analyses show that technological innovation is not a process that will inevitably lead to an optimal result. Choices can be made by social groups.
For technological determinists every technological change has its inevitable course. Actors (even scientists and technologists who want to change their own work, e.g. based on moral or political convictions) produce in the eyes of determinists only a light 'ripple' that can be neglected. Although technological determinism is not very fashionable these days, it cannot be denied that there is a core of truth in it: in our globalizing society, there is very little scope for national authorities to influence or even steer processes of technological change.

3.3 The Social Construction of Technology

A more recent approach is the SCOT model: the "Social Construction Of Technology". In the SCOT model, technologies are considered social constructions, to which various groups of people have given shape. Central to this view is the notion that people influence the development of a technology by the meaning they attribute to it (this will be explained further on).

Social Construction Of Technology: technologies are social constructions, to which different groups of people have contributed. Groups of people attribute different meanings to technologies, which lead them to different perceptions of problems. They influence the construction of technologies by seeking solutions for their problems.

The SCOT model has been developed on the basis of a number of case studies, among them the transistor, Bakelite, the neon light and the bicycle. We will explain the model using this last example. When we consider the development of the bicycle in the traditional linear way, we see the current safety bike (Lawson's Bicyclette) as the final product of an evolution which started with the Walking Machine and in which the so-called Penny-Farthing represents a transitional stage.

Figure 3.2. A linear development model of the bicycle.

The gripe with such a representation is that it isn't clear that choices are being made. In this way it appears that the Penny-Farthing stays in the picture for too long, even when the safety bike was already introduced - while it was technically inferior to the latter. Moreover, such an approach often leads towards emphasizing the models which came out as the winners.
One of the goals of the SCOT model is to indicate that choices are being made. The line from the Penny-Farthing to the Safety Bike is a theoretical construction which doesn't do justice to reality, in which the two types co-existed and both were under development. Let us take a closer look at this.
The Penny-Farthing had a large front wheel and a little rear wheel. Pedals attached to the front axle propelled the bicycle. Because of the reachability of the handlebars and the turning capabilities of this bicycle, the cyclist had to sit almost straight above the front wheel. The bicycle was fast and efficient, but highly unstable. It was introduced in 1870 and lasted until the end of the century.

Lawson's Bicyclette was characterized by rear-wheel transmission. This safety bike stems from 1879. By the end of the century it had crystallized into the bicycle as we know it today. The typical characteristics were, besides rear-wheel transmission, wheels of the same size and air-inflated tires - which contributed greatly to its safety.
To better understand the development of the bicycle, we begin with an introduction to two important concepts in the SCOT model:
Artifact: a consciously man-made, artificial object.
Relevant social group: people who are involved in a certain technical development and who hold the same view regarding an artifact.
Around each artifact a number of relevant social groups can be distinguished. People involved with an artifact all hold a certain image of it: they assign a certain meaning to it. Especially important is what people find problematic about the artifact. Groups form (or can be constructed) based on a shared assignment of meaning and, embedded therein, a shared perception of a problem. A group can have different problems regarding an artifact. When schematized the situation looks like this:

Figure 3.5. Solutions to

For problems (or clusters of problems) multiple solutions are conceivable. Through their way of defining the problem, possibly including their solution(s), the different social groups exert influence on the development. Using the cluster "technical artifact - relevant social group - problem - solution" we are able to explain the development process of an artifact. An important term here is flexibility of meaning: different social groups attribute different meanings to one technical artifact. This notion is essential to understand, for instance, the development of the bicycle.

Artifact: a consciously manufactured man-made object.
Relevant social group: a group which assigns a specific meaning to an artifact.
Flexibility of meaning: different social groups assign different meanings to the same artifact.

Relevant social groups surrounding the artifact 'bicycle' are the producers and the users of the bicycle. However, we should also involve the 'anti-cyclists' in the story. The bicycle also evoked resistance among people: "....but when to words are added deeds, and stones are thrown, sticks thrust into wheels, or caps hurled into the machinery, the picture has a different aspect" (cited in: Bijker, 1984). These anti-cyclists not only had decency problems with female cyclists, but also with the dangers that came with cycling. In London for instance cyclists used wooden sidewalks, because the roads were otherwise unpaved. This evoked resistance with the local population, further enhanced by existing class differences. The largest user group of the Penny-Farthing turned out to be young men of reasonable wealth, who possessed the courage and dexterity to handle the bikes. Besides them was a group of potential users. The Penny-Farthing riders, young, brave and from the higher circles of society, radiated superiority towards their walking or horse-riding brethren. For them, the Penny-Farthing was a 'macho machine'. For potential users such as women, long-distance cyclists or older gentlemen, the Penny-Farthing was rather considered an unsafe machine. Because the cyclist was seated almost directly over the middle of the front wheel, with his/her legs far from the ground, every stop or bump carried the risk of falling over.

These different attributions of meaning also spawned different directions of development:
1. For the sportsmanlike Penny-Farthing rider, enlargement of the front wheel was the best way to increase speed. This culminated in 1892 in the Rudge Ordinary with a front wheel diameter of about 1.4 meters. The fact that this only made the bicycle more dangerous was considered by this specific user base to be more of an advantage than a drawback.
2. To make the unsafe Penny-Farthing suitable for other users, experiments were done with various different models. The wheels were reversed, such as with the Pony Star, or the front wheel was made smaller and the saddle was put further backwards, as in Lawson's Bicyclette. (When the safety bike finally developed into a faster bike than the Penny-Farthing, the latter's fate was sealed.)
When we regard the technical development of the bicycle using the SCOT model,
it yields figure 3.6. This image shows that the application of the SCOT model
results in a different view on the development of technology, in this case the
bicycle, from a traditional phase model.
In the SCOT model the assignment of meaning, the signaling of problems and
solving them are aspects of the process of technological development. It is entirely
possible that the final product as a result hasn't been foreseen by any of the
parties involved. This certainly holds for the safety bike. The formation of it can't be
clearly dated, but stretches over a number of decades. At the start of the process
that led to this bicycle, none of the involved had a clear vision of it in its final form.
Several different models were present at that time too, among others Lawson's
Bicyclette, the Kangaroo and the Facile.

Figure 3.6. The SCOT model applied to the development of the bicycle.

Paraphrasing, the development of an artifact globally consists of three stages:
1- interaction between technological development and social groups
2- variance in problems and solutions
3- the choice for one solution
Of course, the solution doesn't need to be of a permanent character: the solutions obtained are temporary forms of stabilization. What is used as a solution doesn't have to be an example of brilliant engineering. In the case of the Penny-Farthing, for instance, users complained that the handlebars would be in the way if the rider fell over forward as a result of a sudden stop. The solution that was proposed for this problem was to make the handlebars detachable in such a case, to enable the rider to land on his feet. Although such solutions couldn't be dismissed outright, this one didn't last long.
When explaining the development of an artifact, the term 'technical framework' is also important. 'Technical framework' means all of the solution strategies, theories, skills, user practices, goals, values and ethical standards regarding a certain technology held by a certain social group. This determines the thinking, acting and interaction within the group. To express the involvement of a stakeholder in a technical framework, the term 'inclusion' is used. Inclusion in different technical frameworks at the same time is possible. An electrical engineer has a high inclusion in the technical framework connected to his own discipline, but he may also have followed subjects at the faculty of physics during his course. This makes it possible for the electrical engineer to be involved in another technical framework too.

Technical framework: a set of theories, skills, user practices, goals, values and ethical standards regarding a specific technology.
Inclusion: the degree to which a stakeholder is involved in a certain technical framework.

The technical framework can be used to explain why a certain solution is chosen. In the case of the Penny-Farthing, it's easy to picture that the builders and users only had one goal in mind: increasing the bike's maximum speed. Caught in the technical pattern this bike adhered to, that was only possible by increasing the size of the front wheel.
In the above, the concepts central to the SCOT model have been introduced and illustrated using the example of the bicycle. Using the model we can explain why the Penny-Farthing was such a success for a long time, even when the safety bike had been introduced. The key to this pattern lies in the 'macho machine' function the Penny-Farthing had.
The previous analysis also offers a starting point in case we want to steer the development of a technology. It means that it should first be studied what a technology will mean to the different people involved, and what the relative importance of their opinions is. The process of choices made won't be completely predictable - the steering of technology will therefore have to be a continuous process. Steering (government) agencies have to be aware of the (sometimes hidden) meanings a technology can obtain, and adapt their steering activities accordingly.

3.4 Technological development as system development

In the social-constructivist (3.3) and the evolutionary-economic (4.4) views of


technological development the development of individual artifacts takes a central
role. Thomas Hughes posed another, opposing vision to this in 1983: not so much
the development of individual artifacts matters, as much as the formation and
further development of technological systems. Although he used an obvious
example to illustrate his view, electrical systems, he held a broader view: The basis
of the system-wide approach to technological development is the assumption that
development of all large-scale technology (not only electrical systems) can be
studied as a history of developing systems. The history of a specific system, the
electrical system, thus only serves as an illustration of a more general view. After
those done by Hughes, many other but similar studies were published.

This excerpt is largely based on the introduction to NETWORKS OF POWER, Electrification in Western Society
1880-1930, Thomas P. Hughes, 1983, pp. 1-17.


3.4.1 The electrical system

The construction of electricity grids has been an impressive and major event: not only because of the technical feat it consisted of, or the development of scientific knowledge that was needed to do it, but particularly because of the social, economic and political effects of the distribution of electrical energy.
An enormous network of electrical wiring organizes the way we live. Inventors,
engineers, managers and entrepreneurs have organized our world by developing
this energy network. In the years between 1880 and 1930 the most important
decisions were made, and the technology was developed for this network.
Therefore, by studying this period the ordering, integrating, coordinating and
organizing of this network and the society, which it is part of can be analyzed.
Electrical energy systems require from their inventors, operators and managers a sense of efficient action, the ability to make rational analyses and the ability to deal effectively with 'vague' economic, political and social developments. Leading engineers have acknowledged that their desire to 'clear up' matters often has to be moderated to accommodate the 'disorganized' phenomena that make our society so vital.
How did the small-scale city lighting systems of the 1880s evolve into the regional electricity companies of the 1920s? The problem here is
explaining the change in configuration of electricity-producing systems during the
1880-1930 period. These changes can be pictured in a series of network diagrams.
However, energy systems are cultural artifacts. This is the reason why an
explanation of these changes has to incorporate several different areas of human
activity such as technology, science, economics, politics and organizational
science.
Electrical energy systems embody the physical, intellectual and symbolical
resources of the society, which produces them. Therefore change in energy
systems can't be viewed separately from changes in available resources and the
aspirations of organizations, groups and individuals. Systems for the production of
electrical energy, which arise in other societies and in other eras, often hold a
number of basic elements in common. Variations in these basic elements however
often occur, too. These stem from variations in the availability of resources,
difference in traditions, different political situations and different economical
practices. This is why electrical energy systems are both a cause and an effect of
social changes. Energy systems reflect their environment, and change it too. They
possess internal dynamics of their own too, however.

3.4.2 Technological systems

A technological system consists of linked components, which are centrally controlled to a certain extent. The system boundaries stem from the reach this central control has. The goal of this central control is to optimize the behavior of the system regarding a certain goal.

How can technological systems, which make up an increasing part of our environment, be defined? Ludwig von Bertalanffy, a well-known systems theorist, needed a complete book to define 'system'. Therefore a rather inadequate description of the system concept will have to suffice here. However, several characteristics are common to most systems. A system consists of components or parts, which are related to
each other. These components are connected by a network, or structure, which
may even be more interesting than the components themselves. The linked
components of a system are often centrally controlled. The system boundaries

follow from the range of this central control. Control is present to optimize the
performance of the system as a whole, and to direct the system towards a
common goal. The goal of an electricity production system, for instance, is to
convert the available energy, the input, into the required output, which is fulfilling the demand for electricity. Because the components are linked to each other, the state or activity of one component influences those of another.
The network of connections between these components determines the system
configuration. A system can for example consist of horizontally, star-shaped or
vertically configured components. A horizontally arranged system connects
components, which perform similar functions, although not necessarily of the same
scale. A vertically connected system connects components in a functional chain. In
this way, an electrical system of the horizontal kind connects power plants,
regulated by a central office. A production system of the vertical kind for instance
connects a coalmine to a power plant under central management to take care of
the attuning of coal production to the demand for power. Systems are designed
hierarchically too, whereby smaller, in a sense independent, systems contribute to
the control of the larger, encompassing system.
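To make the system concept described above a little more tangible, the following minimal Python sketch models a system as components linked into a network, with a goal and with a boundary given by what the central control reaches. The component names and the simple 'horizontal' test are illustrative assumptions of this sketch, not part of Hughes' analysis.

# Minimal, illustrative sketch: a technological system as linked components
# under central control, as described in the text above.

class Component:
    def __init__(self, name, function):
        self.name = name            # e.g. 'power plant'
        self.function = function    # e.g. 'generation', 'distribution'
        self.links = []             # connected components: the network/structure

    def link(self, other):
        # connections are what turn loose components into a system
        self.links.append(other)
        other.links.append(self)

class System:
    """A set of components coordinated by central control towards a common goal.
    The system boundary is simply the set of components under that control."""
    def __init__(self, goal):
        self.goal = goal
        self.components = []        # everything inside the boundary

    def add(self, component):
        self.components.append(component)

    def is_horizontal(self):
        # horizontal configuration: all controlled components perform a similar function
        return len({c.function for c in self.components}) == 1

# A 'vertical' system: coal mine -> power plant -> grid, one functional chain
mine = Component('coal mine', 'fuel supply')
plant = Component('power plant', 'generation')
grid = Component('distribution grid', 'distribution')
mine.link(plant)
plant.link(grid)

utility = System(goal='meet the demand for electricity')
for c in (mine, plant, grid):
    utility.add(c)

print(utility.is_horizontal())   # False: the components form a functional chain, not peers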
Systems also interact with each other. Those parts of the world, which don't belong
to the system but do influence it are collectively called the 'environment' of the
system. Parts of the environment can sometimes be made part of the system by
bringing them under the system's control. An open system is subject to
environmental influences; a closed system is not. Hence, the behavior of a closed
system is, in principle, completely predictable - that of an open system isn't.
Some systems are planned completely (from the start), such as the construction of
a polder, while other systems grow more incrementally, possibly merge or split,
etc. The term 'system' here refers to a technical system, like an electrical
transmission system. However, not necessarily all components of the system must
be technical in nature. Maintenance services, training institutes, administration etc.
are as important to the system as its technical components.
Electricity-providing systems consist of electricity production, transformation,
control, transmission networks, distribution networks, and user components. In the
period of 1880-1930 the production of electricity was done using steam engines
and steam- and water turbines. Different generator types were coupled to these
'prime movers'. Transformers slowly grew into the main way of controlling the
characteristics of the electrical current in transmission and distribution. User
components were lamps, engines (both stationary and traction used in trams and
trains), heating and electrochemical equipment. The system served widely different
purposes. Transmission distances in this period increased from a couple of hundred meters to networks covering regions of tens of thousands of square kilometers.
Distribution networks transported the electricity from the transmission network to
industries and homes. Control components regulated the electricity production
system to keep characteristics such as voltage and frequency at the right level,
and made sure the system functioned optimally regarding its goals such as
efficiency, profit generation, reliability, etc. The components most difficult to define
are those at both ends - both on the side of demand and the side of supply. For
instance, are the mechanical 'prime movers' a part of the electricity system?
Waterpower sometimes escapes control by the system. Are the different usage
intensities a part of the system, when you consider the fact that the grid sometimes
has influence on them (peak loads) and sometimes not?
Hughes chose to include the 'prime movers' in the definition of the system because inventors, engineers and researchers treated them as such and because they were mainly controlled by the system. However, the definition of his system boundaries remains somewhat unclear.
The invention of electric engines was to an important extent guided by the characteristics of the electricity system of that time. The functioning of those engines, however, can be controlled by the system only in a very limited way, so they don't form part of that system.

" Ludwig von Bertalanffy, 1968, General System Theory, Foundations, Development, Applications, New York.


3.4.3 Phases in system development

Hughes analyzes electricity systems which were formed in different places (New York, Chicago, London, Berlin, California) and at different times. Nevertheless they are connected in his view, because they all behave according to the same model for evolving systems. In that model different phases of system development can be distinguished, and in each phase different characteristics hold a dominant position. Moreover, the model indicates the skills managers must possess in each of the phases, and the guiding interests.
In the first phase the emphasis is on the invention and development of a system.
The professionals dominating this phase are inventors/entrepreneurs, who differ
from regular inventors by their attempt to organize the entire process from
invention to 'ready for use'. Edison, of course, is the supreme example of such a
person. Engineers, managers and banks are also important during this phase, but
they are of less importance than the inventor/entrepreneur.
In the second phase the most important process is technology transfer from
region to region or from continent to continent. The transfer of Edison's electrical
system from New York to Berlin and London is an example of this. During this
phase various groups are involved in the development, such as the
inventor/entrepreneur, traditional entrepreneurs and banks.
The essential characteristic of the third phase of the model is system growth.
Growth of systems is analyzed by means of the concepts 'reverse salient' and
'critical problem'.
A reverse salient is that part (or those parts) of a technological system which is lagging behind in development and therefore limits the growth of the system as a whole. A critical problem is a redefinition of the reverse salient into a (principally) solvable technical challenge.

Because system components often grow at different rates, parts of the system can be identified which lag behind the others in growth and limit the growth of the system as a whole. The term 'reverse salient' stems from military history. In that context the

term signifies a section of a front line, which is lagging behind in the advance. This
metaphor is well chosen, because a progressing military battlefront also often
shows irregular and unpredictable behavior, just as a developing technological
system does. Often, military people will direct all their efforts towards fixing a
'reverse salient'. The same goes for the development of a technological system.
Inventors, engineers, entrepreneurs and others direct their creative and
constructive forces mainly at the correction of 'reverse salients', in such a way that
the system functions optimally in fulfilling its tasks.
If the reverse salients are identified, they will often be translated into a series of
critical problems. The redefinition of reverse salients as a series of critical
problems is the essence of the creative technological process in the system: An
inventor, engineer or scientist transforms an amorphous challenge (the lagging
behind of (parts of) the system) into a series of problems, which are expected to be
solvable. This is an essential part of the engineering profession; being able to
redefine unstructured problems into a series of solvable, critical problems. The
confidence in the solvability of the reverse salient increases dramatically after it
has been turned into a series of well-posed critical problems. Correct articulation of
problems usually helps a lot in approaching a solution. If engineers are capable of
correcting the reverse salient in this way, it usually leads to growth of the system.
However, sometimes a situation occurs where a critical problem appears to be
unsolvable.


3.4.4 Example

Around 1880 Edison realized the first electrical system in the world, in Manhattan. This system was based on the distribution of direct current. The distribution of electricity only took place over short distances (hundreds of meters), using large and massive copper wiring, at low voltage.
In its later stages of growth, the most important reverse salient of this system was the fact that it was only economical to use in densely built-up, localized areas, because of the high transmission losses. This reverse salient could in principle be translated into different critical problems:
o reduction of transmission losses
o improving the efficiency of smaller power plants
o battery-based distribution in sparsely populated areas
The reduction of transmission losses was the most important problem to which this reverse salient led in practice. Despite an accurate definition of the problem, the inventors and engineers using direct current at the end of the 19th century weren't able to find a solution. In the end other inventors found a solution outside of the DC system, namely alternating current, which made it possible to transform the voltage easily using transformers. This caused two conflicting systems to coexist for some time; by applying all sorts of conversion technologies the systems could be coupled, until the AC system became the dominant one. New systems can hence form where old systems are unable to resolve their reverse salients by their own means.
When a system grows, it gains momentum. The fourth phase of a system is
characterized by substantial momentum. A system with a substantial momentum
possesses mass, velocity and a direction of motion. The mass consists of
machines, equipment, structures and other physical artifacts into which capital has
been invested. The mass also stems from the involvement of people who possess
professional skills specifically suited to the system. Entrepreneurial businesses,
government services, trade unions, educational institutions and other
organizations, which are directly attuned to the system's core also contribute to the
momentum of the system. Taken together these organizations form the culture of
the system.
A system also has a measurable speed of growth. That speed often increases in
this phase. A system also possesses a direction of motion, i.e. goals. The definition
of clear goals is more important for a new system than it is for an older one. In
older systems the momentum gained gives the system inertia in its further development.
In electrical systems the public and private utilities were the institutions controlling
the system. From 1890 onwards to the First World War the most important utilities
in the USA, Germany and England concentrated on the supply of electricity to
densely populated and industrialized urban areas. The decisions made by their
managers in this period were of more importance to the character of the systems
than the decisions made by their engineers and inventors. In this phase the
conflicts between the utilities and other social stakeholders were often of great importance.
Despite the momentum of systems and their inertia there are more or less
coincidental factors pushing them in other directions. The First World War for
instance made the supplying companies direct their activities more towards
fulfilling the energy demand generated by the industry rather than a maximization
of profit. This often required a cooperative stance and formed structures, which
would persist later on. External forces can hence bring about a reorientation of
system goals, even when systems have already gained a lot of momentum.
The last phase of system development is characterized by a qualitative change in the nature of the reverse salients that occur, and by the advent of financiers and consultants as problem solvers. Managers played the leading part in the momentum-gaining phase. In the later phase, which concerned itself with planning and developing regional systems, the most important reverse salients were problems stemming from the need to finance large-scale systems and to clear away legal and political barriers. Financiers and consulting engineers were able to respond adequately to these problems.
This phase was also characterized by a continually increasing competence, residing mostly with consulting engineers and managers, in the effective planning of systems.

4. Economic approaches to the formation of new technology

What consequences does a new technology have for society and what consequences do social developments have for technology? In this chapter an overview will be presented of the way economists think about technology. Attention will mostly be paid to the (quasi-)evolutionary approach. Finally the phenomenon of 'path dependence', whereby society does not always end up with the most optimal technologies, will be analyzed.

4.1 The neoclassical economic framework

Neoclassical economic theory has been around for quite some time now. It formed in the closing years of the 19th century and still holds great influence today. Neoclassical theory plays a major part in government planning, for instance in the models used by the CPB, the Netherlands Bureau for Economic Policy Analysis. We will only treat the elements of this theory that serve our purpose here.
Neoclassical theory assumes entrepreneurs wishing to produce a certain quantity of a product have a choice between different combinations of production factors. The term 'production factors', when used in economics, signifies all that is needed for production: manpower, monetary resources and physical resources. It can be decided to perform production in a capital-intensive way (using lots of machines) or in a labor-intensive way. Neoclassical theory assumes this choice primarily depends on the cost of capital (interest) and manpower (wages).

Figure 4.1 The production function (capital on the vertical axis, manpower on the horizontal axis)

The production function indicates with what combinations of capital and manpower a certain volume of a product can be produced.

This choice is pictured in the production function (see figure 4.1). The function yields

all combinations of manpower (X-axis) and capital (Y-axis) with which a certain
quantity Q of a product can be produced. The entrepreneur is hence free to choose
any point on this curve. Where on the curve he decides to be, follows from the
pricing of manpower and capital. When certain prices for manpower and capital are
given, each of the slanted lines in the figure show the amounts of capital and
manpower which can be bought for a certain, fixed price. For each line more
towards the right (or towards the top) this price is higher. The entrepreneur who
wishes to produce his amount of products Q at the lowest possible cost, chooses
the point on the curve which lies tangent to the cost function - this is the point on
the curve where the total costs of manpower and capital combined are lowest.
When the pricing of manpower and capital changes, the slanted lines change slope
and hence displace the tangent point. This encourages the entrepreneur to switch
to a different point on the curve, i.e. to a different distribution of manpower and
capital. If for example manpower gets more expensive, the lines will be a at a
steeper slope and the entrepreneur moves to a point 'higher' up on the curve, to a
more capital-intensive production method. Of course this doesn't happen instantly,
but at a time, which suits the entrepreneur - for instance, when the machines need
to be replaced anyway.
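The tangency argument can be illustrated numerically. The short Python sketch below assumes a Cobb-Douglas production function (an assumption of this illustration only; the text does not specify a functional form) and searches along one isoquant for the capital/manpower combination with the lowest total cost, showing how a wage increase shifts the optimum towards a more capital-intensive method.

# Illustrative sketch: cost minimization along one isoquant Q = K**a * L**(1-a).
# All numbers are invented for the example.

def cheapest_mix(Q, a, wage, interest, steps=100000):
    best = None
    for i in range(1, steps):
        L = i * 0.001                      # manpower
        K = (Q / L ** (1 - a)) ** (1 / a)  # capital needed to stay on the isoquant
        cost = wage * L + interest * K     # the 'slanted line' through this point
        if best is None or cost < best[0]:
            best = (cost, L, K)            # keep the tangency (lowest-cost) point
    return best

low_wage  = cheapest_mix(Q=10, a=0.5, wage=1.0, interest=1.0)
high_wage = cheapest_mix(Q=10, a=0.5, wage=4.0, interest=1.0)

print(low_wage)   # roughly equal amounts of manpower and capital
print(high_wage)  # less manpower, more capital: a more capital-intensive method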
Another interesting element of neoclassical theory is the set of supply- and
demand functions of a product. The theory assumes that supply and demand
depend on the price of a product. The higher the price, the larger the supply but
the lower the demand (figure 4.2). If the product is to be marketable, the supply and demand curves have to intersect somewhere; otherwise the product becomes a failure. This point of intersection determines the price of the product and the
quantity sold. Because of external influences the supply- and demand functions
can change forms. By applying cost-saving measures the supply curve can be
made to take a lower position (shifting to S'). This makes the point of intersection
with the demand curve shift from a to b, which means a greater quantity is sold at a
lower price. Because of an increase of consumer income, the demand curve can
shift upwards (to D'). This also means a greater quantity of the product will be sold,
but at a higher price this time (point c).

Figure 4.2 Supply and demand functions (price on the vertical axis, quantity Q on the horizontal axis)
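The interplay of supply and demand sketched above can also be shown with a small numerical example. The Python fragment below uses made-up linear curves (an assumption of this illustration only) to compute the original equilibrium a, the equilibrium b after a downward shift of the supply curve, and the equilibrium c after an upward shift of the demand curve.

# Illustrative sketch: equilibrium where linear supply and demand curves intersect.

def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    # demand: p = d_intercept - d_slope * q      supply: p = s_intercept + s_slope * q
    q = (d_intercept - s_intercept) / (d_slope + s_slope)
    p = d_intercept - d_slope * q
    return round(q, 2), round(p, 2)

a = equilibrium(d_intercept=100, d_slope=1.0, s_intercept=10, s_slope=0.5)   # original curves
b = equilibrium(d_intercept=100, d_slope=1.0, s_intercept=4,  s_slope=0.5)   # supply shifted down (S')
c = equilibrium(d_intercept=130, d_slope=1.0, s_intercept=10, s_slope=0.5)   # demand shifted up (D')

print('a:', a)   # (60.0, 40.0)
print('b:', b)   # (64.0, 36.0)  more sold at a lower price
print('c:', c)   # (80.0, 50.0)  more sold at a higher price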


For our purposes neoclassical theory offers some possibilities, but also some
important restrictions. The possibilities lie in the fact that the influence of factor
costs on technological changes can be determined. After all, every choice made
between capital and manpower entails a technological choice, a choice between
production technologies. Using this theory we can hence gain some understanding
about the influence of manpower costs and interest rates on the choice of
production technologies. Sometimes the government can even decide to directly
influence factor costs to influence the outcome of technological choices. For
instance, the government can impose taxes on energy use to stimulate businesses
to use more energy-efficient production methods. Also taxes on emissions can be
used to reach technological changes. These are examples in which the costs of a
production factor are artificially enhanced in order to obtain a different
technological solution. Even changing consumer behavior patterns can be
obtained this way: by enhancing the costs of car use, travelers can be stimulated
to make more use of public transport.
Neoclassical theory also has many limitations when it concerns the role technology
plays. Its biggest problem is the lack of explanation for technological innovations.
Neoclassical theory distinguishes between the aforementioned shifts along the
production function and change in position of (part of) the production function itself.
In the first case there is no mention of innovation - existing technology is being
used in the new situation. In the second case, innovation takes place. (Part of) the
production function shifts towards the origin of the coordinate system. After all, an
innovation (in production technology) implies that an entrepreneur can make the
same product at a lower cost (in manpower and capital taken together), which also
makes the supply function drop and hence benefits the customer in the end.
Where the innovations come from isn't explained at all in this theory. It is assumed
they come from outside the economic system (economically exogenous), for
instance from technological development itself (technological determinism).
However, it is clear this isn't the case: economic factors do certainly have an
influence on the direction in which innovations take place. An increase in the
pricing of electrical energy for instance won't only stimulate entrepreneurs to
search among alternative existing production methods for more efficient ones, but
will also provide a stimulus for research into more energy-efficient technologies.
A second problem is the assumption that a large choice of production methods is
available, with a large range of costs in production factors. In practice, this is rarely
the case. Usually, there are but a few different alternatives for the small part of the
production function around which the prices of capital and manpower fluctuate. If
those prices start to deviate strongly from that region, all sorts of new technologies
will have to be developed. In this way it can be seen why wars, in which shortages of certain production factors appear, are always a stimulus for the development of new technologies. So, even for movement along the production
function technological innovations may be needed.
With this conclusion the difference between movement along the production
function curve and movement of the curve itself disappears. In the case of most
innovations it isn't clear which of those two changes the innovation belongs to.
There are more points of criticism against neoclassical theory, such as:
o Entrepreneurs usually don't research all possible methods of production
before they make an 'optimal' choice, but they limit their choice to a few
distinct options out of which a 'satisfying' solution is chosen,
o The theory doesn't yield insight into the consequences of changes made to
the final product in terms of the production technology used for it.
o The supply- and demand functions are abstractions, which have no directly
observable equivalents in the real world. Besides, they are of a short-term
nature: if the price increases, entrepreneurs will be stimulated to utilize
their production capacity to the fullest and hence the supply will increase.
But after a while they will structurally expand their production capacity

and the price will drop again to the previous level. In this way, the supply
function doesn't sketch the correct course of events in the long run.
The same goes for the demand function.

4.2 Long waves

Kondratieff was a Russian economist who in the 1920-1930 period performed research into long-term waves in the economy. The notion of waves occurring in the economy stems from Marx (the theory of short-term economic cycles). Kondratieff specifically focused his attention on long-term waves of economic growth and regression, with a cycle of approximately 50 years. Kondratieff saw the irregularity
in the replacement of capital goods as the main cause for these cycles. He
concluded that the waves were accompanied by irregularities in the appearance of
technological innovations.
Long waves: a wave pattern in the development of the economy with a period of approximately 50 years, caused by the development of new key technologies.

Schumpeter continued this line of thought in the thirties. He was an economist working at Harvard University in the US. Schumpeter perceived (clusters of) innovations as the main cause of economic waves (called 'Kondratieffs' by him). He perceived the following pattern:
o 1787-1842 First Kondratieff: cotton, iron, and steam
o 1843-1897 Second Kondratieff: railroads
o 1898-1939 Third Kondratieff: electricity, cars.
Later on others added 'steel' to the third Kondratieff, and identified a fourth Kondratieff: chemistry, electronics, aircraft. In his book from 1939, Schumpeter considered entrepreneurs to be the driving force behind innovations, because they turn inventions into marketable products. In his view the innovations occurred very irregularly in time, and therefore caused sudden breakthroughs in the economy. Later, Schumpeter described the occurrence of innovations as more spread out. It is then not entrepreneurs themselves who play the major role, but research laboratories (which is why he came to be more sympathetic towards planned socialism). In research laboratories existing knowledge is improved upon in an 'evolutionary' manner. This thought was incorporated into the current thinking of evolutionary economists.

Joseph A. Schumpeter, 1939, Business Cycles, New York: McGraw-Hill.
C. Freeman, C. Perez, 1988, in: G. Dosi et al. (eds.), Technical Change and Economic Theory, London: Pinter.
Joseph A. Schumpeter, 1942, Capitalism, Socialism and Democracy, RKP.

4.3 'Push-pull' debate

'Technology push': developments within science and technology are the main driving force behind technological innovations.
'Demand pull' or 'market pull': demand (or a change in demand) from the market causes technological innovation.

In this debate among economists the central question is: what is the explanation behind the occurrence of technological innovations? The proponents of the 'technology push' theory view the developments within science and technology as the main driving force behind technological innovations, and hence side with technological determinism. The 'demand-pull' theorists mainly see market demand as the main cause of technological innovations: market demand largely determines the formation and introduction of new technological possibilities. Representatives from both camps have supported their theories with a host of empirical research. These schools of thought have moved closer to each other, however, and a number of economists therefore see the combination of technology push and market pull as the driving force behind technological innovations. A tension between technology push and market pull exists, in which processes of variation and selection lead to technological changes. What is new about this view compared to technological determinism mainly follows from the role which economic factors play in the innovation and diffusion of technology.

4.4 The evolutionary theory

While economists expressed their first and important criticism of the notion of the autonomous character of technological development, some others went a step further by pointing towards the shortcomings of a strictly economic explanation of the development and introduction of technology. Within the discipline of economics it was mostly Nelson and Winter (1977 and 1982) and Dosi (1982 and 1988) who, building on the 'push-pull' theory and Schumpeterian additions to neoclassical economics, drew attention to the role which social-cultural and institutional factors play in the processes of innovation and diffusion of technology. From the science of economics the Schumpeterian models for technological change were further expanded into what are now often known as evolutionary theories. The theory of technological trajectories by Nelson-Winter/Dosi is perhaps the most articulated evolutionary theory.
Evolutionary approach to technological change: the development of technology is a succession of variation and selection processes geared towards solving technologically defined problems. A certain rigidity is present in technology, which often only permits small changes to existing technology. However, such a new variation doesn't always survive the selection by clients, governments or other stakeholders.

The development of technology can, according to Nelson and Winter, be interpreted as an ongoing succession of variation and selection processes, which are directed towards solving technologically defined problems. New technologies or amendments to existing technologies are constantly invented and selected for usage. These variation and selection processes don't just occur 'at random' or for no reason, but show a clear structure. A certain rigidity and inertia is present in the rate of change of technology, which prevents variations from cropping up without
limits. There is a certain regularity and direction to be found in technological
development, which is encapsulated by the concept of 'technological trajectory'.
What ensures the control, the structure in the processes of technological
development?

4.4.1 Regime and trajectory

A technological paradigm or regime is the way in which technology developers perceive their current and future technologies: it encompasses scientific principles and theories, rules derived from practice, rules for searching for solutions to problems, and (successful) examples.

Nelson and Winter introduce the phrase 'technological regime' for this, which encompasses the extent of change and development of a technology within a certain problem area. Dosi uses - analogously to the structure found in the development of scientific knowledge - the term 'paradigm'.
A technological paradigm or technological regime forms the dominant cultural
matrix of technology developers and encompasses a limited number of scientific
principles, insights and heuristics (searching rules) and a limited number of
physical technologies.

Technological trajectory: the path of development of a technology governed by a specific regime.

Central to the technological regime or paradigm are the exemplar and the heuristics. The exemplar is the basic technological design from which all
subsequent adaptations and developments are beginning. Starting from this base
technology, the direction in which the processes of variation within the technological trajectory search is determined by the heuristics. The development of a
technology in a certain problem area takes place over a long time within the
boundaries of such a technological regime and is pre-structured as such.
Now it can also be described more clearly what a technological trajectory is
exactly: the changes in the technology which take place within the framework of
such a technological regime or paradigm, in other words the 'direction of progress'
within a certain technological regime. A technological trajectory can be seen as the
whole of all 'standard' problem-solving activities, which are circumscribed and
designed by a technological regime. A technological regime and the direction of
development of technologies are to a certain extent inert and subject to little
change. Nelson and Winter mention 'natural trajectories', which partly possess
dynamics and an optimization pattern of their own. Even when it becomes likely
that within the current dominant regime and trajectory the problems which are
focused upon can be solved in a less satisfactory way than they could be in a
different trajectory or regime, it doesn't mean that the current trajectory is
abandoned. Major breakthroughs in the treatment of problems using completely
new technologies, in other words changes in the technological regime, occur only
sporadically. A change in technological regime almost always encompasses a
change in technological trajectory, because different heuristics and base
technologies are applied and the dominant cultural matrix, in which the technology
developers reside, shifts showing sudden new solution possibilities.
On one hand a technological regime restricts the development of technology by
imposing a limitation on the possible variation in new technologies to be
developed. On the other hand such a regime enables accelerated development by
concentrating efforts and use of resources in one specific direction of research and
development. A technological regime dictates how solutions to problems should be
found, and what tools are available to do so.

4.4.2 Selection environment

Which factors influence technology development, in the end? Within evolutionary theory the term 'selection environment' is used to indicate the collection of
stakeholders, structures and institutions, which determine the nature of the
selection process. This selection can take place on three distinct levels: selection
of a technological paradigm, selection of a trajectory and selections within a
trajectory. This selection environment consists, according to Nelson-Winter and
Dosi of three dimensions or types of factors:
1. Science and technology: this pertains to the current level of knowledge and technology, the artifacts present, the type and scale of research institutions, the technology present in neighboring fields, etc.;
2. Economy: this regards for instance factor prices, the market structure defining competition among companies (also internationally), the distribution of income, consumer demand, government funding possibilities, the structure of production, etc.;

3. Social-cultural and political base: this encompasses the balance of power,


the distribution of possessions, the legal situation, the cultural matrix of
scientists and technologists, government policy and governmental
measures, etc.
Which actors, factors and institutions from this selection environment are most
important in technological change differs for each development process and can
only become apparent using historical case studies.
Using this model a significant number of case studies have been performed, including in the Netherlands. They consistently show that the Nelson-Winter/Dosi model offers attractive starting points and concepts to describe and analyze the development of technology.

4.4.3 Quasi-evolutionary theory

The relationship between technological development and the selection environment is a complex one. Influence on the development process originating from the selection environment occurs both 'ex post', after technologies have been developed, and 'ex ante', before the developments take place. An example of the latter is the anticipation of the increasing demand for ecological products. Moreover, Van den Belt and Rip observed that not only does the selection environment influence technology development, but the reverse process also occurs. Technological developments pose demands on (and change) the selection environment, for example in those cases where changes in the production process have consequences for the organization of the working environment. Because of these observations the term 'quasi-evolutionary theory' is often used nowadays: the selection environment plays an important role in the selection process, and the selection environment is often strongly influenced by the variation process (both by the result of that process and by the parties involved in it). Van Lente summarized quasi-evolutionary theory in the following seven statements:
1. Scientific research and technological development activities can be seen as a series of searching processes leading to products like artifacts, texts or skills.
2. Searching processes are led by searching rules, which hold the promise of being practically successful; they don't guarantee it, however.
3. The results of R&D are, in the end, the result of a process of selection and variation.
4. The combination of searching processes which yield variations that are selected elsewhere differs from biological evolution, because in technological development selection and variation are mutually dependent. This is why the process is called a 'quasi'-evolutionary process.
5. Expectations are shared and connected to each other to such an extent that they can be defined as a cultural matrix of expectations.
6. A technological paradigm can be said to be present when a certain coherent set of searching rules has become dominant among producers of variations.
7. The selection can take the form of a strategic 'game' between players. The actions players take are determined by anticipating other players' actions and expected future variations.

4.4.4 In conclusion

To what extent does this evolutionary theory really distance itself from technological determinism? The development of technology is seen by Nelson and
Winter, Dosi and many others who followed in their footsteps as a process
determined by more than solely the inescapable logic of science and technology.
Although this approach doesn't preach simple technological determinism, it


concedes that technology does possess certain dynamics of its own. The
development of technology along a technological trajectory within a regime is
relatively autonomous, having a local optimum of its own. Within such a trajectory
marginal changes under influence of the selection environment can take place.
The usage of biological-technical terms such as selection, variation, evolution and
natural trajectories reinforces the idea that technological change possesses a
certain inherent logic. Moreover the selection environment mostly influences
changes within the technological regime, while the influence of the selection
environment on the change of regime itself is still unclear.

4.5 VHS, MS-DOS, QWERTY, twists of fate

Who still remembers cp/m, Philips-Miller or Stereo-8? Cp/m was a popular operating system for PCs in the early eighties, Philips-Miller was a recording device from the thirties and Stereo-8 once was a popular cassette system in the United States. All three were pushed out: cp/m by MS-DOS, Philips-Miller by the tape recorder and Stereo-8 by the compact cassette recorder.
Windows, the CD and the 35 mm photo film are products for which it's hard to
imagine alternatives. It seems as if there have never been any alternatives to them,
as if these products possess a superior quality over any other possibilities. Still,
they often are no more than the 'lucky' winners of a competitive battle.
Take MS-DOS, for instance. If the wife of Gary Kildall (cp/m's inventor) hadn't sent
away the IBM-managers who rang the doorbell somewhere in July 1980, they
probably wouldn't have driven to the Microsoft offices and this article would maybe
have been written on a PC running some cp/m version.
The unpredictable process in which one technology remains as the market standard can only partly be explained by economic theory. In many business branches a few large, dominant market leaders are present, producing the so-called 'A-brands'. Besides those there usually are tens of other, smaller manufacturers. Products are judged against the 'standard' of the market leaders. If they're more luxurious they're called 'premium products'; if they're cheaper they're called 'discount products'.
With technological products the situation is different. Economists call it the 'winner
takes all' principle: with technological standards there is one system dominating
the market. Sometimes there still is room for a second one, but producers of a third
or fourth system have very little chance of ever making a profit. A company
exclusively possessing a standard has a near-monopoly. In this way, Microsoft
holds 90 percent of the market in PC operating systems, Intel holds 90 percent of
the market in microprocessors, and IBM holds 83 percent of the market in
mainframe computers, and practically the entire non-Japanese market for
mainframe operating systems.
Qwerty, the most boring standard ever
The QWERTY standard is the classical example of a standard that is impossible to replace. The American Christopher Latham Sholes invented a typewriter in 1868 which suffered from two problems: the little hammers got stuck together when he tried to type fast, and the machine needed a special trick he could demonstrate. By using the QWERTY configuration Sholes put the most often-used keys widely spaced, so the little hammers wouldn't get in each other's way. This also made it possible to very quickly type the word 'typewriter' on the top row - a handy sales trick.


All Sholes still needed then was someone to produce the machines. He found an ally in arms producer Remington, which was looking for new business after the Civil War had ended.
To help sales of the typewriter, Remington organized typing contests, which competing machines also entered. Remington contracted the winners. It was
only a short while before typists invented touch-typing, using 10 fingers.
Educational institutes quickly adopted the system, and soon everybody wanted the
qwerty system: companies because their secretaries could work quickly using the
system and the educational institutes wanted it because most companies used
qwerty machines. Despite the fact that there was no longer a technical necessity
for the QWERTY arrangement, most other manufacturers had also adopted the
system by the turn of the century.
Since then a few alternatives to QWERTY have been designed. The most well known of these is the keyboard developed by the American ergonomist August Dvorak in the thirties. Using the 'Dvorak Simplified Keyboard' people learned to type in half the time, typed at twice the rate of QWERTY typists and experienced a twenty-fold decrease in hand strain. Some Dvorak machines were made and the layout is optional on some Apple computers, but despite all that Dvorak was no success.
After Dvorak even better keyboards have been developed, but none of them managed to push QWERTY off the throne. QWERTY had installed itself into the brains of millions of people and couldn't be erased from them.
Why hasn't that happened? Surely an ergonomic layout would have prevented lots of stress and maladies like repetitive strain injury (RSI)? And it can't be that
expensive just to plug in a new keyboard into your computer?
A number of explanations can be given for the survival of the QWERTY system.
Firstly, 'qwerty' can't be unlearned. Touch-typing is never unlearnt once learnt, in
the same way swimming is. People who touch-type won't be quick to switch
keyboard layouts.
A second explanation is that the improvements in learning speed, typing speed
and hand strain apparently aren't sufficient reason for young people and typing
institutes to justify switching to a new keyboard layout. Apparently typing is
supposed to be difficult and hard to learn. Experts on innovation state that a new standard should be about 10 times as good as the old one if it is to be adopted.
Furthermore, vested interests play a role. Why would educational institutes be interested in teaching their students to type in a shorter time? Their income would only suffer, and they would also get into trouble with companies that would have to purchase a new machine or new software for their new secretaries.

4.5.1 Increasing returns

The Economist recently characterized these near-monopolies as 'technopolies'. The classical law of diminishing returns doesn't hold in the case of technological standards. According to that law, a product whose sales volume keeps increasing reaches a ceiling beyond which profit doesn't increase any further. The causes for this are increasing costs per unit sold and the launch of alternative products by competing brands. This mechanism keeps prices low and prevents excessive market shares from being reached.
In the case of technological standards, on the other hand, the situation is one of increasing returns. Sales start off only slowly, but when in the end everyone switches to the standard, market shares soar towards 90 percent. After this phase any potential competitor has quite a challenge to rise to.

'" This paragraph is talcen from Intermediair, May 22nd, 33rd annual, no. 21, pp. 47-51, author: Gerben Bakker


Economist W. Brian Arthur of Stanford University describes how a network of new companies forms around every new technological product. In Microsoft's case these are computer manufacturers, software developers and microchip producers; in the case of photographic film, camera manufacturers, film developing services and camera film producers.
Paul David, also an economist at Stanford University, calls the phenomenon 'system scale economies'. Users desire a product of a uniform, generally accepted standard, so they can easily obtain compatible software and exchange information with others. Because of this strong 'networking' tendency, a small advantage in the early development of such a standard will persuade more and more people to buy the product, after which sales increase by themselves. Production costs drop, while those of the competitor rise.
Technological standards always enter the market in pairs, David asserts: a software part and a hardware part. Software anchors itself in the market, and gives
the hardware part leverage to increase sales. When consumers have bought the
software part, they are committed to that standard. Nobody will decide to throw
away his CD collection in a hurry, or to exchange all Windows machines in the
office for Apple computers.
The heavy competition amplifies the snowball effect. Manufacturers lower their
prices, more people buy their new products and hence create an even larger
market for the winner. Netscape and Microsoft were even giving away their
products to win the battle for the Internet browser market, making it expand rapidly
in the nineties.

4.5.2 End-game strategy

There can be only one winner, but how is the game played? The most important
part is the end-game strategy. A business interested in making its technology the
standard on the market, shouldn't try to pursue short-term profits. In this way.
Philips gave away licenses in the sixties to everyone wishing to produce its music
cassettes. Microsoft signed an agreement with IBM in the early eighties that didn't
yield it much profit, but ensured that its operating system MS-DOS became the
standard.
Besides the end-game strategy there are more specific explanations for success or failure. For instance, the term 'quality' of a product must be interpreted loosely. Strictly speaking, the sound quality of a CD is actually worse than that of an LP (nuances are lost by the discretization of the audio data), but other aspects of quality determined the outcome: the longer playing time, the smaller size, the ease of use and the fact that CDs are more robust.
Proper marketing is also essential. When it introduced the CD, Philips cleverly covered up the poorer sound quality of CDs by emphasizing all the advantages they had over classical LPs.
Entrepreneurs in the computer industry recognize an important factor responsible for the success or failure of a new technology: the killer application. Such an application is an application that becomes so popular that it persuades millions of people to buy the product. The spreadsheet VisiCalc, for instance, was largely responsible for the successful introduction of the PC by Apple and IBM.
Killer applications also occur outside the computer industry, though: Edison invented the phonograph using wax rolls in 1877. Edison, who was more of an inventor than an entrepreneur, thought the strength of his product lay in its many uses: as a voice recorder, for recording news, speeches, strange languages and oh, maybe for music, too.
The German entrepreneur Emile Berliner foresaw music becoming the killer application. He marketed the gramophone he had invented and at the same time


started a record label. Within a few years the gramophone was market leader and
Edison started producing them too.
The laserdisc is a more recent example. Philips tried to market such a device for
playing interactive CDs under different names three separate times, to no avail.
Competitor Pioneer introduced the killer application: a laserdisc machine suitable
for Karaoke, incredibly popular in Japan. Japanese sales skyrocketed, as did
Pioneer's market share.

4.5.3 Licensing politics

Besides the end-game strategy, quality, marketing and finding a killer application,
licensing policy also is an important factor determining the success of a new
technology. By giving out licenses a business can quickly increase the market
share of a new standard and rapidly increase the number of companies that have
an interest in it.
Microsoft is well known for its extremely crafty licensing policy. When the company
wanted to compete against cp/m in 1980, it developed MS-DOS at a low price for
IBM's first PCs.
That wasn't where it ended, however, writes the American computer journalist Robert X. Cringely in his book 'Accidental Empires'. When the IBM PC was already
marketed, many American computer manufacturers were still producing their own
brand PCs. Microsoft offered to make a different version of MS-DOS to each
computer manufacturer. Whichever company won the battle for the standard, it would have an MS-DOS operating system.
As soon as the manufacturer had signed the contract, Microsoft told them that not
all IBM applications would work on the system. This startled the manufacturer: a
PC having no compatible software won't sell. Subsequently Microsoft divulged that they happened to have a series of programs - from word processor to spreadsheet - which were easy to adapt, at a price of course. Even before the computers were on sale, Microsoft had made its profit.

4.5.4 Ruining the market

Within established industries, another factor is of interest: the poker game played by the big businesses dominating that industry. These businesses watch every move of the others carefully, paranoid as they are that the competition will run off with some new standard. This can make the market lock up completely, with the formation of alliances between different competitors as the only solution.

The Formula
Michael Hay and Peter Williamson give the following tips to entrepreneurs in a new branch:
o Don't put all your eggs in one basket. Exchange licenses in case a competing standard wins.
o Attract the renowned companies, the opinion leaders, as first customers.
o Make sure you get quick market feedback.
o Invest in production technology, necessary process technology and supporting products at the same time.
o Recognize changes in the structure and composition of the competition as soon as possible.
o Think ahead towards the end game, when the branch has settled. Think of ways to maximize your profit in that stage, and don't try to make all your money in the chaotic and insecure period before then.
A non-regulated introduction of a technology can in such a situation lead to a ruining of the market. The record industry experienced this in the forties, when the American record company Columbia released the LP record (33 rpm) in 1948 and competitor RCA released the single (45 rpm). Both record types sounded clearer and lasted longer than the old 78 rpm record, but didn't play on the same record players.
The consumer refused to make a choice and wasn't prepared to buy two separate record players either. For four years the market in the US was stuck. This 'battle of the speeds' only ended when a record player was introduced which was capable of playing both formats. At this point the record industry flourished: adults mainly bought the more expensive LP records, youths bought the singles.
Growth of the video recorder market was also hindered by the existence of two standards. Only when it became clear that the VHS system would win over the V2000 (VCC) system by Philips did the market resume growing at a high rate. Philips' system was technologically more advanced than the Japanese effort, but production of the V2000 system had to be halted in the end. Less-than-sensible market distribution agreements, the lower standard of reliability, the bulky design of the first generation of recorders and most importantly the lack of available movies became the death knell for the system in the end.

Robert X. Cringely, 1991, Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date, Harper Business.
W. Brian Arthur, 1996, Increasing Returns and the New World of Business, Harvard Business Review, July-August 1996.
Michael Hay, Peter Williamson, 1991, The Strategy Handbook, London: Basil Blackwell.

4.5.5 Tremendous interests

Large businesses realize that it is often more profitable to cooperate than to compete with each other. Every manufacturer knows that if their standard wins, there is major profit to be made. However, to increase the chance of success, technology can also be shared with competitors. Because everyone is aware of the tremendous interests at stake, this cooperation leads to a complicated game of strategy. At the introduction of the digital versatile disc (DVD), which is supposed to replace the videotape, CD and CD-ROM, it started all over again.
The electronics industry divided itself into two camps: an alliance around Philips and Sony, and another one around Toshiba and Matsushita (including Panasonic). The manufacturers initially opted for cooperation, which resulted in tough negotiations dragging on for a long time. For Sony and Philips, together holding the most important technology, this took too long: they declared that they would continue pursuing their own standard after the autumn. For fear of the market-ruining competition that was about to break out, the competitors decided to return to the negotiating table. Finally, they reached an agreement on a common standard. Sony and Philips now rake in the biggest royalties: 2.5 percent of each DVD player sold and 4.5 dollar cents for each DVD disc sold.
Besides the standards which form in the chaotic competitive battle on new markets and in the poker game played by established industries, there is a third category: standards enforced by the government. This generally concerns branches which used to belong to the public sector, such as television, the telephone system or the electricity grid. Businesses commit huge resources to persuading the government to adopt their system as the standard. This is why negotiations on the standard for HDTV (High Definition Television) have dragged on for ages. Telecommunications companies in Japan have been busy lobbying for the new standard of the cellular phone network. In the end, the US standard was adopted.

4.5.6 Breaking standards

To profile themselves against their competitors, some manufacturers choose to pursue an open-systems approach. Hewlett-Packard for instance became successful by making products that could interface with almost any other.
Sun Microsystems is a typical breaker of standards. The company's strategy is to quickly gain market share using open systems and standards. The first computers manufactured by the company (which was founded by students) were built using existing, non-proprietary parts and designs from Stanford University - SUN stands for 'Stanford University Network'. Subsequently the company designed simple software and simple operating systems, the source code of which it made publicly available. Those first programs were incredibly buggy, but everyone used them - including IBM - because they were free. This made it easy for Sun to sell the hardware supporting the programs. This wasn't the end of the story, however. Sun then encouraged other manufacturers to clone the Sun workstations, so a standard could be formed quickly. Sun would make its profit by continually releasing hardware that was just that bit better than that of the competition. The programming language Java, widely used on the Internet, even appears to 'open up' the competitors' systems.
The American computer company Cisco isn't protected by a standard at all. The company makes Internet routers, switches that route packets of data towards their destinations over the Internet, and has a market share of about 85 percent. Users can, in principle, switch to one of the competitors' machines in a matter of hours. The only way Cisco can survive is by simply being better than the competition. If the company misses a new innovation, it has no choice but to merge with the competitor who has implemented it. Cisco has spent billions of dollars on such practices. The laws pertaining to technological standards don't apply here anymore - only the law of the jungle: eat or get eaten.

4.6 The formation of trajectories: positive feedback

Positive feedback: the situation in which an advantage in market share of a technology (versus competing technologies) reinforces itself (leading to market dominance).
Trajectory dependence: the dominance of one technological alternative over competing alternatives depends on the development trajectory of all alternatives (and not just on the price/performance ratio of each separate alternative).

The scientific discipline that concerns itself with the question of which choices companies make under uncertain circumstances is evolutionary economics. According to evolutionary economics, certain 'technological trajectories' or 'paths' form over time in which companies can get stuck. The technologies in which companies or economies get stuck don't always have to be those technologies that are most efficient for their users. Arthur states that the technology which wins the competition to become the standard, i.e. to gain a major market share, doesn't have to be the best option for users in the long term. An example of users being caught up in an inefficient technology is the VHS video system. VHS 'won' over its competitors Betamax and V2000, despite the fact that it was neither the cheapest nor the technically superior system.
According to Arthur, the formation of technological paths, which can be inefficient, is a consequence of the phenomenon of 'increasing returns with increasing market penetration'. This means that the more a technology is adopted, the more it improves and the more attractive the technology becomes for further development. A situation in which a technology has an advantage in adoption and this advantage is self-reinforcing is also called a 'positive feedback' situation.

This section is based on a fragment from the paper 'Milieustrategieën en positieve feedback: kunststofverpakkingsafval als illustratie' by Caroline van Leenders and Paulien de Jong, UvA, 1996.
In neoclassical economics it is assumed that the 'best' technology will conquer the market. In Arthur's model the process is trajectory-dependent, which makes it impossible to predict which technology will conquer the market. A consequence of this is that an inefficient technology can actually win the race.
Arthur mentions six factors causing positive feedback, to be specific:

a. Expectations
The development of a certain technology can be influenced and accelerated by the
expectations people hold as to the success of the technology. Expectations contain
a certain image in which a future situation is sketched, connections are made and
roles are described. Based upon these expectations, new actions are undertaken.

b. Familiarity
When a technology is better known and better understood, it has an increased
chance of being adopted. Arthur also describes this factor as 'increasing returns by
information'.
c. Network characteristics
Positive feedback shows up more strongly with technologies possessing network
characteristics. It is advantageous for a technology to be associated with a network
of users, because this increases availability and the number of product varieties.
Again, a good example here is the VHS video system. To be able to function, this
technology needs a network consisting of video rental stores stocked with VHS
tapes. The more users there are, the better the possibilities are for users to profit
from VHS-recorded products.
d. Technological connectivity
Feedback processes are stimulated by the occurrence of 'technological
connectivity'. Rosenberg already posed in 1979 that innovations depend on the
existence of complementary technologies. Often, a number of other sub-technologies
and products get absorbed into the infrastructure of a growing technology. This gives
it an advantage over technologies which would need a partial demolition of that
infrastructure in order to function. An important study regarding feedback processes
caused by technological connectivity is David's research into the QWERTY keyboard
(whose name refers to the first six keys on the top row of the keyboard).
Summarized: a technology which fits into the system of already existing technologies
has a relatively better chance to develop than a technology which lacks those
connections.
e. Economies of scale
When an increasing volume of products is produced while total production costs rise
less than proportionally with that volume, the cost per unit falls and the price of the
product can be lowered. This means that a technology can become more economical
when it is applied on a larger scale.
f. Learning processes
Positive feedback during the development of a technology can, finally, take the form
of a learning process, because a technology can be improved more quickly when
more is learned during its use. Arthur states that when more is learnt about a
technology, this technology gains an advantage in application. So when a
company learns a lot about using a specific technology but learns little about
another, this last technology has less chance of being adopted in the future.
For the successful introduction of environmental innovations, mainly 'interactive
learning' is of importance. This is a specific form of learning and is also called
'learning by interacting'. This kind of learning occurs when contact exists between
different stakeholders in the development process.
In this study, David asks himself why this specific keyboard won over the competing
alternatives. According to David, one of the important factors here is technological
connectivity, in this case compatibility with the system.
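To make the increasing-returns mechanism concrete, the following sketch (not taken from Arthur or from this reader; all names and numbers are illustrative assumptions) simulates two competing technologies, A and B, whose attractiveness grows with their installed base. Each new adopter chooses the technology with the highest perceived value, made up of an intrinsic quality, a bonus proportional to the number of earlier adopters, and a random taste component.

import random

def simulate_adoption(n_adopters=10000, quality_a=1.00, quality_b=1.05,
                      returns_per_user=0.01, taste_noise=1.0, seed=0):
    """Each adopter picks the technology with the highest perceived value:
    intrinsic quality + a bonus growing with the installed base + random taste."""
    rng = random.Random(seed)
    installed = {"A": 0, "B": 0}
    quality = {"A": quality_a, "B": quality_b}
    for _ in range(n_adopters):
        value = {t: quality[t] + returns_per_user * installed[t] + rng.uniform(0, taste_noise)
                 for t in installed}
        winner = max(value, key=value.get)  # adoption reinforces the current leader
        installed[winner] += 1
    return installed

for run in range(5):
    print(simulate_adoption(seed=run))
# In most runs one technology ends up with (almost) the whole market, and it is not
# always B, the intrinsically better one: trajectory dependence in miniature.

Raising returns_per_user or lowering taste_noise makes the lock-in occur earlier, which is exactly the sensitivity to early, essentially random events that Arthur's model emphasizes.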

5. From 'impact assessment' to 'managing technology in society'

It is both more efficient and more satisfying to all participants when social
conflicts concerning the introduction of new technologies can be prevented. To this
end a number of approaches have been developed in the past decade. Constructive
Technology Assessment tries to tune technological development to social demands.
This means that ways have to be found to involve the different groups of people
relating to a technological development in that development itself.

5.1

Constructive technology assessment (CTA)

The traditional forms of TA are geared towards the early identification of problems,
which are brought about by new technology (Awareness TA). Also, the American
OTA regularly performed policy studies, in which it indicated what measures the
US government could take to ease or direct the introduction of new technologies.
The main gripe with this approach was that the development of technology itself
wasn't under debate. Actually, a rather deterministic vision of the development of
technology was used: the direction was fixed, at most the speed could be influenced.
Scientists working in the area of technological dynamics showed in the eighties that
the development of technology is a highly social process, on which social, economic,
political and cultural factors have a great influence. This meant that technological
development could actually be directed to a certain extent.

The new form of TA, Constructive TA, is focused on influencing the process of
creating technology. The aim is to introduce those needs that are not expressed on
the market into the processes of creating technology.
Starting from this vision, some of these scientists launched the idea of 'Constructive
Technology Assessment' (CTA): a set of activities and studies with the goal of
tuning technological developments to social demands. The decision-making
process on new technology would have to be broadened by involving more
stakeholders, including social groups, at an early stage. The term 'constructive'
refers to the fact that the activities and studies should contribute to the design
or 'construction' of new technology.
Constructive TA (CTA) could be exercised by various stakeholders. First
of all, the government is considered a candidate, because it effectively holds two
points of view. On the one hand, the government tries to promote the development of
new technology in order to stimulate the national economy, employment and living
standards. On the other hand, the government tries to counteract negative social
effects by enforcing safety standards, protecting the work environment and
preserving the natural environment. Those different aspects of government policy
are normally situated in different departments (Economic Affairs versus Social
Affairs and the department of the Interior), which plan their policies individually, in
different directions. For the government, CTA could be a way of joining those two
perspectives, by having representatives of economic and social interests plan
technological innovations together.
For social groups, such as the unions or the environmental lobby, CTA can be a
way to define their social responsibilities. Since the development of the nuclear
bomb more and more scientists and engineers had gotten the idea that they share

a social responsibility. However, it was still unclear how they were to express that
in their daily scientific or technological activities. The discussion with social groups
offered them an opportunity to solve this.
The three types of stakeholders mentioned above were designated by the
scientists who introduced the term CTA as:
o T-stakeholders: technology developers;
o M-stakeholders: social groups;
o O-stakeholders: governmental organizations.
Recently, attention has also been drawn to the part consumers and producers of
technology (companies) play in the integration of social demands into technology
development. They are respectively called C- and P-stakeholders. Companies use
CTA mainly to prevent their new products from failing due to social resistance, or to
avoid having to make expensive changes in a design late in the development process.
Important in CTA is the 'broadening' of the decision-making process, in such a way
that social groups, excluded from the decision-making process up until then, get a
chance of their own to contribute their demands.
In this setting, the term 'innovation processes' primarily refers to the development of
new technological knowledge, not the application of existing technology. Potential
users are, admittedly, also involved in projects developing IT solutions, and in city
planning neighborhood organizations and future inhabitants are sometimes
consulted. These activities aren't generally called CTA, however, because they don't
involve the development of new technological knowledge.
It may be clear now what is meant by CTA, but it is still unclear how it is actually
implemented. Technology dynamics shows us which social factors influence the
development of technology, but offers no hints for possibilities of actually
influencing that process. This mainly happens through trial and error: practical
studies develop their own methods as they go along. Those methods use a fairly
random collection of concepts from technology dynamics. In the following
paragraphs some examples of this will be shown. First some connections are
pointed out with theories of technology dynamics, after which a number of CTA
methods are treated.

5.2

CTA from an evolutionary perspective

In general, from the standpoint of technology dynamics, technological development
is seen as a process of variation and selection. This goes for social constructivism
as well as for evolutionary theory. Some take the variation and selection model as
their starting point for CTA. Seen from this model, three options exist for influencing
the course of technological development:
1. The development of new variations
This is in fact an obvious way to influence technology development. The
government or social groups simply develop their own new technologies for
which they feel a social demand exists. Examples are the Dutch government
promoting the development of electrical cars, and the environmental
organization Greenpeace, which had CFC-free refrigerators developed. The
drawback of this approach is that these new technologies may not be able to
gain a market position. This didn't hold for the Greenpeace refrigerators, but
the future of the electrical car is uncertain at best.
2. Changing of the selection environment
The government among others can change the selection environment of
technology. Examples are the tax collection on the application of a

Part 2: Philosophy of Science and Technology

technology (cars, for example) to motivate users towards other choices, and
legislation (safety rules, etc.). But also rules made by for instance insurance
companies can consciously or subconsciously influence the technological
choices companies and consumers make. In this way insurance companies
can make demands regarding the safe keeping of certain objects, which can
motivate their owner to purchase security systems. People living in the
neighborhood can sometimes influence the choice of process technology in
a company. These changes in the selection environment can also create
new markets for new technology.
A disadvantage of changing the selection environment is that it can still lead to mere
adaptations of existing technology, and not to entirely new technological
options. In this way companies often react to government regulation using
so-called 'end-of-pipe' technology, where the emission of harmful
substances is reduced but no fundamental changes are made to the
production process.
3. Creation of nexuses between variation and selection
This approach builds on quasi-evolutionary theory (an adaptation of evolutionary
theories). Its starting point is the notion that variation and selection aren't
completely separate events (which would have to be considered separate in
'pure' evolutionary theory, as well as in biological evolution theory). In
development processes of new technology (variation) ideas on the market
and potential use (selection) do play a part. On the other hand, technology
developers can influence the selection environment, for example by
including directions of use with a product.
Based on these observations, Johan Schot has introduced the term 'nexus', which
literally means 'connection' (in this case between variation and selection).
He defines it as: institutions or sets of actions that translate demands from
the environment into technical specifications, or translate demands made by
technology into specifications for the environment. In CTA, one can try to
actively create nexuses between selection and variation, with the intent of
making the voice of social groups heard more clearly in the development of
technology. For instance, extra nexuses can be created between the R&D
departments and the environmental departments within companies.
Three types of nexuses are distinguished (Fonk, 1994):
o Channel: passing on of external aspects to technology developers;
o Alignment: alignment between stakeholders;
o Micro cosmos: exposing products to the selection environment (testing,
prototyping).
In the Consumer CTA approach (see below) the notion of nexuses is elaborated
upon.

5.3

Network approaches to CTA

A second important application of technology dynamics in CTA is the approach to


technology development from networks of stakeholders. The idea is that the
network of stakeholders involved in the innovation determines the direction
technological innovations take. If we want to involve social influences in a stronger
way, we have to make sure the social groups are absorbed into the stakeholder
network. Unfortunately the theory doesn't dictate under which circumstances and
in what way this can be done successfully. An elaboration on the network
approach is stated below, where the manipulation of social networks is mentioned.
The 'Consumer CTA' method also uses elements of the network approach.

Besides, some companies hold the opinion that they as a company are socially responsible for the consequences of
their actions. CTA can be a way to make this view concrete.


5.4

Learning processes, articulation of demand and strategic niche management

Constructive Technology Assessment adopts the notion from technology dynamics
that the development of technology is accompanied by learning processes. This
holds for both the developers and the users of a certain technology; both gain
experience with the technology. Technology developers use this experience to
continually refine the technology. They react to practical problems, and keep
discovering ways to improve the technology or production method, or to lower its
price. Users create their own applications for the technology. In this they aren't
passive in the sense of docilely following the intentions of the producers, but they
actively and creatively assign meaning to the technology. In doing so they encounter
problems, which can form an input for the designers. As a consequence of these
learning processes the technology becomes more and more embedded in society.

Learning processes (regarding a technology): those processes in society that let us,
on both an individual scale and a collective scale, profit from the intended effects of
a technology and avoid the unwanted effects in an increasingly efficient manner.

In Constructive Technology Assessment learning processes are actively
stimulated. This can take place through communication between designers,
producers, users and others involved. It also happens by exposing new technology,
which performs better in meeting social demands, to the user environment at an
early stage in order to observe the learning processes occurring in that case. The
concept of flexibility of meaning (taken from social constructivism) plays an
important role there: technology gains its meanings during the learning processes.
An example of the organization of learning processes is visible in experiments with
electrical cars performed in California. The electrical car holds environmental
advantages over cars using a combustion engine, especially in urban areas, but
has its small range as its main disadvantage. By making the car available to a number
of users at an early stage, it turned out that learning processes occurred which
encouraged use of the car. For instance, users changed the routes they took in
such a way that the small range of the car became much less of a problem.
Nevertheless, some processes also occurred which weren't intended effects of the
introduction of the electric car: it got the role of 'second car' in many cases. In this
way, it contributed to the total size of the car population. In Holland, experiments in
the use of electrical cars were also done, geared towards establishing learning
processes within specific user groups (taxi drivers, urban distribution companies).

A general concept TA borrows from technology dynamics is that of articulation of
demand. This concept assumes that a latent demand or need can exist in society
for (conceivable) new technologies, for example environmentally friendly technology,
but that no one happens to be offering a technology which suits those quiet demands.

Articulation of demand presupposes that there is a latent need for a specific new
technology. That need is not expressed on a market, which is impossible for
non-existing technologies. By presenting alternative options, needs become visible
as demands.

Only by presenting a technological answer does it become clear that the demand
existed at all. An example of this is the CFC-free refrigerator which Greenpeace
Germany had developed and produced; only then did the vast demand for such a
product become apparent, and established refrigerator manufacturers followed this
example.
Subsequently, a process can start in which the technology is improved and
adapted, and in which demand grows even further because of the popularity of the
product. The recent experiments with systems using the 'call-a-car' concept, for
example, show that an increasing demand exists for car sharing. Initially the
inventors of the initiative apply very little new technology, but it is conceivable that
specific IT systems will be developed for it in the future.
Finally, we can mention strategic niche management as an approach used in CTA.
This involves creating a 'safe' user environment for a new technology, in which it
doesn't have to compete directly with other technology. This is possible in case the
government or other parties subsidize certain initiatives. In such a niche, learning
processes can take place which lead to adaptations of the technology or its user
environment. Only in a later phase is the technology exposed to the market. This
approach suffers from the same disadvantages as the 'development of new
variations' approach mentioned above: it is very uncertain whether the new
technology will survive among the competition on the market.

Niche management: the creation of a 'safe' environment for a new technology,
where it doesn't have to compete against widely used, established technologies.

5.5

Methods

It is apparent that theory gives us hints on the application of Constructive


Technology Assessment, but direct methods aren't handed to us. However, those
are being developed in practice. This normally doesn't yield fully developed
methods, but rather presents concrete approaches.

5.5.1

Manipulation of social networks

An important method in TA is the making of connections between different (groups
of) stakeholders, with the goal of stimulating innovations in a certain direction.
Two examples will be treated here, which can both serve as classical examples of
CTA studies: the study 'Environmentally friendlier product design through
cooperation' by the Rathenau Institute (called NOTA at the time) from 1992, and
the PRISMA project (also organized by the Rathenau Institute).
The goal of the 'Environmentally friendlier product design through cooperation'
study (Den Hond et al., 1992) was the reduction of the environmental load caused by the
waste of discarded cars. Junkyards were growing, and processing the waste
became harder because of all the plastics present in the car wrecks. TA
researchers made an analysis of the innovation- and production chain of cars.
They distinguished between two groups of stakeholders: those in the design
context and those in the processing context. Important stakeholders in the design
context were:
o Car manufacturers;
o Materials suppliers;
o Parts suppliers;
o Design agencies.
The processing context mainly consisted of:
o Car wreckers;
o Car garages and servicing stations;
o Shredder companies.
Car garages and service stations hold a role as re-users of materials from car
wreckers, while shredder companies tear up the wrecks into smaller pieces. The
researchers established, through interviews with the people involved, that
innovations occurred in both of those contexts (which in the processing context
generally means new disassembling and waste processing techniques), but that
contact between the contexts was minimal. Moreover the direction in which


innovations were developed (the 'dominant searching direction') proved to be


completely different in the two cases. In the design context the main directions
were:
o Reduction of car weight by application of new matenals;
o Increasing safety of car use;
o Modeling;
o Anti-corrosion measures.
In the processing context the main directions were:
o Re-use of valuable parts;
o Recycling of scrap metal;
o Reduction in volume of leftover waste products.
As a result of these differences, stakeholders in the processing context
encountered problems caused in the design context, among others the previously
mentioned problem of non-reusable plastics disposal. The TA researchers advised
setting up a dialogue between stakeholders in the design and processing contexts to
encourage car designers to take dismantling and waste processing into account.
They took a first initiative by organizing a workshop with representatives of both
groups. Experiences gained in the processing context could in that way be
communicated to the design process. Since then, contacts between the two groups
have been much more regular.
The second example of the manipulation of social networks is found in the PRISMA
project performed by the Rathenau Institute in the 1988-1991 period. The project
was set up against the backdrop of the government enforcing strict rules
concerning environmental policy onto companies. Those companies would
react defensively to this policy by making sure they would only just meet the
requirements imposed by the government and nothing more. The regulations didn't
encourage creative thinking about making changes in the production process in
order to reduce emissions to the environment. The goal of the project was therefore
to make a contribution to industrial waste reduction at the source, and also to
promote the recycling of waste at the user end.
The previously mentioned example of the car recycling and processing study was
only a study, but this project involved intervention on a small scale, which it was
hoped would serve as an example for others.
undertaken was by creating networks between company people and
representatives of environmental protection institutions from the government,
social groups and other people who held an interest in the case.
Ten companies willing to participate in the project were selected to serve as an
example. The people running the project put together a guide on waste reduction
and prevention, and subsequently a discussion forum was formed representing all
people involved: branch organizations, governmental agencies (local ones too), union
representatives and politicians. For each company a waste prevention team was
formed containing both people from the company and external people. The teams
generated ideas on the problem and put them into practice. This method turned out
to yield significant reductions in waste generation, which not only proved
beneficial to the environment, but was also financially advantageous in many cases.
Another result was that not only did companies pay more attention to the
waste problem, but the local government's attention was also drawn to it.
5.5.2

Social experiments

Two types of activities are meant with 'social experiments', to be specific:


Deliberately organized experiments involving a new technology to see how
users and/or producers use it in practice, and which new forms of use the
technology experiences. Examples of this are for instance local
experiments with the 'chipknip' and 'chipper' (electronic wallets) several

Project Industriële Successen met Afvalpreventie, translated: Project for Industrial Success in Prevention of
Waste.


years ago, experiments involving alternative ways of garbage collection


and disposal, etc.
Spontaneous experiments. This means that locally operating individuals or
groups go to work on developing an alternative, in the form of a new
technology, to a problem bothering them. This often holds in practice
for projects involving environmental technology. Examples are:
o Local groups building or exploiting windmills;
o an entrepreneur deciding to develop environmentally friendly
detergents, using an environmentally friendly production process
(the Belgian company Ecover, for instance).
Of course, to the people directly involved these aren't experiments but efforts to
express a wish for change. To society as a whole these can be considered
experiments, however.
In these types of activities new technology is mainly developed outside the scope
of traditional sectors (government and large companies). The success rate of these
projects varies greatly. Greenpeace's refrigerator was a success. It even saved a
dying company producing refrigerators in former East Germany and other
manufacturers took it up too. In other cases the activity can retain a local character
and not find much enthusiasm elsewhere.
The question is in which situations these spontaneous social experiments have an
innovative effect that radiates beyond the activity itself. In the examples mentioned,
older technology was often used, which, seen from the environmentally friendly
perspective, suddenly gained new possibilities. The example of the fridge works
well in this respect: a mixture of propane and butane was used as a cooling agent.
This used to be done before, but had since been prohibited because of the fire hazard
it caused. The established refrigerator producers had therefore permanently left that
track and were all looking for CFC replacements in the same direction, namely an
environmentally friendly and non-flammable substance (HFCs were one candidate).
The reason flammable cooling agents became feasible again was that the amount of
cooling agent used in fridges had been drastically reduced, hence also reducing the
accompanying fire hazard. Greenpeace, as an outsider, noticed this before the
producers did themselves. The older technology had, owing to the new circumstances
(and to improvements in other technologies), been given a new lease of life.
It seems that social experiments are particularly useful in the starting period of a
new social issue. This creates opportunities for disused or half-forgotten technology
to be re-used for a new purpose. After a while, however, the vault of 'old' knowledge
suitable for new use will become exhausted. At that point more fundamental or
complex technological innovation is needed, for which new research must be done,
and for this local groups and small entrepreneurs lack the resources.
5.5.3

Social simulation

Social experiments often are quite labor-intensive. A much simpler method to
make an estimate of the interaction between technological and social/organizational
changes is the social simulation. A social simulation (or game) starts from a given
social/organizational situation (rules, technology, organizational structures and/or
spaces). Social simulation can't be applied to just any new technology, however.

Social simulation: simulating the dynamics of social patterns, within the context of
pre-given social structures, by gaming.

Social simulation doesn't entail the simulation of social processes but actual social
interaction within simulated boundary conditions. Parts of these boundary conditions
can also act as input variables. The output variable usually is a decision, or multiple
decisions, which are reached (sometimes implicitly) within the game. This form of
simulation lends itself excellently to gaining insight into a still rather unstructured
problem. This is
because it doesn't require any direct previous insight into the relations that might
possibly occur, nor an inventory of the opinions held by all stakeholders. The effect
of interventions in relationships between stakeholders (the introduction of new
technology, or measures or regulations taken) can be determined without having
charted all relationships (the network) beforehand. On the other hand, the claim that
such a simulation represents what will happen in the actual situation is rather hard
to support. Family bonds (not taken into account) and personal relationships can,
for instance, have a great influence in a real situation.

Frans Bauke van der Meer, 1986, 'Social Simulation: A research methodology and learning strategy for social
impact assessment', in: Henk A. Becker and Alan L. Porter (eds.), Impact Assessment Today, Utrecht: Jan van Arkel.
Peter Boskma, 1986, 'Social Impact Assessment by social simulation', in: Henk A. Becker and Alan L. Porter (eds.),
Methods and Experiences in Impact Assessment, Dordrecht/Boston/Lancaster/Tokyo: D. Reidel Publishing Company,
pp. 135-148.
5.5.4

Participative TA

Participative TA is geared towards involving as many people holding an interest in
an innovation as possible. The starting point for it can be both a social problem and
a new technology. An example of the first situation is the fairly recent series of
initiatives by 'Rijkswaterstaat' (the Dutch water and transport management agency)
to involve the population in the formulation of plans to change the infrastructure
('InfraLab Rijkswaterstaat'). This encompasses the changing of roads already
present in the infrastructure, not the construction of new roads. This concerns the
application of already existing technology, although it's not out of the question that
the development of new technology will flow forth from it.

Participative TA tries to involve as many stakeholders as possible in the entire
process of an innovation, from problem definition to decision making.
The reason for this project is the inaccurate image people hold of Rijkswaterstaat.
According to polls, about 90% of the population considers Rijkswaterstaat to be
'experts' on the subject. At the same time, more than 50% of the people interviewed
think that the organization should cooperate more with others. In the experiment,
Rijkswaterstaat organizes discussions between experts (from the organization itself)
and people holding an interest, for instance concerning a road which is regularly
clogged up and where adaptations are in order. Rijkswaterstaat also tracks regular
road users (including professional drivers) using their license plate numbers and
invites them to participate in an evening session. Furthermore, people living nearby
and organizations like the ANWB, ENFB, the police, local nature area wardens and
the fire department are also invited. In the discussion, a definition of the problem is
first drawn up jointly. This can lead to unexpected results; in one specific case the
motorists didn't see the traffic jams themselves as the problem, but rather the
unpredictable times they were stuck in them. Subsequently solutions were drawn up,
and a choice was made between them. Rijkswaterstaat tests whether the chosen
solution meets with sufficient approval when more than a certain number of parties
or people are present.
The above was an example of participative TA. An example of an application
directed more towards technology is the 'Broad Social Debate on Energy policy'
(Dutch abbreviation: BMD) early in the eighties. This discussion formed following
the conflicts on the application of nuclear energy. The question was whether to
continue the development of nuclear energy generation. The problem of energy
supply itself was also a hot issue.
A second, more recent example of participative TA is a discussion organized in
Germany on making plants resistant to weed killers through genetic modification.
The special thing about this discussion is that it lasted for a long time. In the
meantime, all kinds of research were done into the various forms of impact this
technology had. Those taking part in the discussion were people having an interest
in the situation and experts on the subject. Workshops and conferences were used to
define research questions and to draw conclusions. The costs involved in this
discussion were of course significant: approximately 250,000 euro (while the
Dutch BMD cost 11 million euro). The experiment was largely a failure, however,
because along the way the environmental lobby ceased participation; it didn't agree
with the course the project was taking. Besides, its sentiment was that not enough of
its proposals for research were given an opportunity.
A method, which can be used in participative TA, is the consensus conference.
Here laymen pose questions to experts on the subject, and have to come to a
near-unanimous judgment eventually. This judgment can then play a part in the
research itself or in setting the boundaries for the application of the new
technology.
5.5.5
Consumer CTA
Consumer CTA is a method geared towards making the wishes expressed by
consumers have an effect on technical developments. This method has the character
of a proposal: it has never been evaluated in its entirety in practice. It has been
tested in an illustrational project in the aforementioned Sustainable Technological
Development program (on 'novel protein foods'). The method itself is based on
experiences with CTA-type activities in which consumers were involved. The central
research question of Consumer CTA is considered to be:
To what extent and in what way can consumers, directly or through the
consumer organization, adopt the role of 'change agent' by creating suitable
nexuses around the development and design process of new technologies,
based on consumer requirements?
The term 'change agent' signifies a stakeholder who has set himself the goal of
changing the existing situation and pursues it. Consumer CTA relates to future
products.
In consumer CTA different steps are distinguished which would have to be followed
before commencing the design process for a new product:
1- Selection and analysis of technology/product;
2- Indexing of stakeholders;
3- Indexing of factors: viewpoints of different stakeholders;
4- Development of suitable nexuses.
The concept of 'nexus' was encountered earlier. It entails: connections between
variation and selection. In consumer CTA the point is to give stakeholders from the
selection environment a role in the development process. In this case mainly the
representatives of consumers are important, the so-called C-stakeholders, but
other relevant stakeholders are involved as well.
Subsequently the design process starts, in which the stakeholders involved in each
phase think up so-called 'Future Scenarios for Consumers'. These future scenarios
consist of:
1- Description of the technical idea and applications;
2- Effects on consumers, transaction parties and others;
3- Relevant demands from consumers;
4- Consumer reactions to the sketched/presented technology, consumer
requirements;
5- Scenarios (possible future situations).
This boils down to sketching a complete image of the product, its functions for
consumers, and the role of the different stakeholders in this. Every stakeholder
sketches such a future vision for himself. Subsequently a discussion takes place to
reach a consensus. This has to be repeated in each phase, because new information
relevant to practical use can be generated in the research and design process.

Future Scenarios for Consumers: draw a picture of a technology and its context that
technologists aim at creating. The Future Scenarios for Consumers aim at facilitating
dialogue between consumers and technologists.

Note: the changes the 'change agents' propose should meet consumer requirements
better than existing products do, or better than products developed without the
involvement of the 'change agents'.
An example approaching Consumer CTA to a certain extent has become apparent
in the development of HDTV. HDTV basically is a TV with a higher resolution than
existing sets have. Besides, HDTV is supposed to have a different image size, one
that better fits movie images (widescreen).
The development of HDTV was started by a fairly limited (although not in the
absolute sense) network of stakeholders: industry and the Japanese and
European governments. In 1985 Japan came up with a proposal for an HDTV
standard. Standards are extremely important in this industry: worldwide, or at least
continent-wide, uniformity in technical and functional specifications is needed to
make the system work properly. At the same time standards play an important role
in industrial competition; the company that gets its specifications accepted as the
standard gains an immediate advantage over its competitors.
European industry and governments quickly reacted by putting forward their own
proposal, the EG-MAC standard. The European norm differed from the Japanese
norm in the sense that it specified a transitional segment between the current TV
format and the HDTV format. The transitional configuration consisted of a
widescreen television (according to the so-called D2MAC norm), which didn't have
the resolution of HDTV yet. In the Japanese norm those systems would be
incompatible. A European development program was set up in which the Dutch
company Philips fulfilled a central role.

Technologie/produktontwikkeling
Produkl-

Technische
normen

definitie

ProduktPrototypen

Idee

Introductie

concept

Part 2: Philosophy of Science and Technology

Late in the eighties this program reached a dead point. Not enough support existed
to launch HDTV in the short term. The Dutch department of economic affairs,
Philips and the PTT (the Dutch telecommunications agency) therefore set up a
platform in 1989 (the HDTV platform) to involve more stakeholders actively in the
project.
Besides the parties who set up the initiative, the platform consisted of:
o The NOS (the Dutch broadcasting companies);
o The NOB (the program makers);
o The Nozema (broadcasting facilities);
o The VECAI (cable companies);
o Other departments of the government: Public Works and Infrastructure, and
Welfare and Public Health.
Other organizations were invited as so-called B-participants. They served a more
limited role in the platform. These organizations were:
o the Consumer Society;
o producers and importers of antenna technology;
o the Union of Entrepreneurs in Electrical Engineering.
It appeared various factors were important to the further development of HDTV:
o The demand from technologists and producers for a wider and better TV image;
o Technical problems in transmission;
o Exchangeability of film and video/TV;
o Interests of the European industry;
o Cultural identity of Europe (a foreign TV system would also facilitate the
introduction of foreign productions);
o The transitional trajectory (producers of programs and consumers).
The platform made efforts to give new impulses to the HDTV program, but they
didn't prove effective. Early in the nineties the HDTV plan was put on the back
burner. In hindsight the platform played a positive role in involving important
stakeholders in HDTV which weren't participating before, such as the cable
companies responsible for distributing the signal. Analysts deemed the position of
and the efforts made by the Consumer Society insufficient to adequately express
consumer aspects within the platform. Besides, the platform came too late;
standards had already been established and research had already gotten
underway, hence fixing the direction the project would take.



Table: Consumer aspects of HDTV

Home equipment
o The individual HDTV consumer: costs of purchase; size; image quality; sound
quality; ease of use; safety.
o Others (collective and societal, non-HDTV consumers): use of resources; energy
consumption; environmental effects of discarding and early replacement of
equipment; environmental effects of production and use of HDTV equipment.

Transmission
o The individual HDTV consumer: cable contract.
o Others: capacity of infrastructure; 'visual pollution' by antenna dishes; increase
in cable subscription costs.

Software
o The individual HDTV consumer: programs on offer; information on offer;
integration with video, CD-ROM and photography (multimedia).
o Others: cultural effects; new producers (cable companies, film makers);
information distribution; unequal distribution of information, HDTV consumers
having the advantage; disappearance of PAL broadcasts off cable and off the air.

General
o The individual HDTV consumer: compatibility and convertibility; costs;
availability of equipment; brand distinctions.
o Others: infrastructural costs; increasing dissatisfaction with own status, leading
to obsession with consumption.


Another example in the direction of consumer CTA is the application of
biotechnology in agriculture, especially the development of genetically
modified crops. Directions of research that technology developers are taking in this
area are for instance resistance to disease and faster methods of genetic
improvement. For consumers, the quality of the product, the prevention of residues
and the 'naturalness' of products are often more important. An important issue is
whether the label on products should state whether they have been genetically
modified or not. Producers don't like this idea, because it has a deterring effect on
many consumers. An organization playing a nexus role in this area is the
'Consumer and Biotechnology' foundation which, subsidized by the government,
does research into the consumer aspects of biotechnology.
As stated before, the first real experiments in consumer CTA are only now being
done. Yet some critical notes can already be made about this approach:
o Are future reactions of consumers foreseeable in an adequate way? It is well
known that consumers strongly think in terms of existing products, and
have difficulties perceiving the value of completely new products. A
specific problem is how to establish how much consumers are prepared to
pay for a product.
o Does this approach hold an extra value over existing consumer research?
o Can such an elaborate procedure be performed at acceptable costs?
o Is it possible for producers to present their plans to as many other
stakeholders as possible?
o Can secrecy versus competitors be guaranteed in an acceptable way?
The lead-user approach can be mentioned in this context as a more traditional form
of consumer research, which can be helpful to use alongside consumer CTA.
Progressive users who are nevertheless representative of the intended consumer
group are consulted about a new product. In short, the approach consists of the
following five steps:
1- Define a product and user base;
2- Observe important trends in the user base;
3- Find out who leads these trends and in what way dissatisfaction with existing
products is apparent;
4- Define the characteristics of a new product together with a small group of
trend leaders (according to technical possibilities);
5- Find out in what ways others value this product proposal, and what they would
be prepared to pay for it.

This approach has been used in an industrial context, in the design of a CAD
system intended for designers of electronic circuits. A relevant trend among the
users in this case was an increase in the density of the switches and elements on the
circuit boards. Lead users for the CAD system were designers of densely printed
circuit boards. In a workshop involving several of them, specifications for the CAD
system were established. Subsequently the result was verified using a poll among
a broader user base.
This approach, or the consumer CTA approach, can of course also be used for
physical or non-physical consumer products. A possible example of a non-physical
consumer product would be information services.
5.5.6

Consensus conference/public debate

In societal debates, experts usually are heard the best. Whether it is about nuclear
power plants, privacy or environmental development, those same professionals
keep popping up. But what does the rest of the population think? What is the
layman's view?
In 1993 the first public debate involving a panel of laymen was organized in
Holland. Using funding from the departments of agriculture and education the
Dutch Organization for Research of Technological Aspects (the current Rathenau
Institute) invited eight men and eight women to form an opinion on the genetic
modification of animals. In 1995 a laymen's panel was involved in tracking
research into predictive genetics. That year, two panels were started up: one on
gender choice for non-medical reasons, the other on environmental development.
Consensus conferences were invented in the United States in the seventies to
obtain a weighted judgment on medical matters. A unanimous judgment by a
laymen's panel was an important weapon for the pharmaceutical industry to
counteract damage claims for new drugs.
In Denmark this model was adopted in the eighties and adapted so the focus was
less on the conclusion and more on the (public) debate. Considered particularly
important in this case was the fact that the laymen asked questions from their own
common sense and judgment and that they were acting from their own views.
Publicity surrounding the debate made sure the rest of the Danish citizens picked
up on it. This method has since been used in many countries among which are the
United States, Japan, England and Norway.
(Partly taken from Intermediair, august 2T.)

5.6

Backcasting

Backcasting is used to reach previously specified goals in the technological design
process. Based on those goals, technologies are determined for which there would
be a possible demand in the future. Forecasting or innovation projects are
undertaken to scout out this technology and possibly actually develop it. This is also
called normative forecasting. It has to be remarked that this concept of backcasting
doesn't really fall within the aforementioned concept of CTA, because initially there
is no involvement from any other parties. It is, however, geared towards the social
regulation of technological developments and thus shares its goal with CTA.

Backcasting: a form of TA in which the desired final social situation is defined.
Backcasting provides a means to reach this situation.

An example is the interdepartmental research program Sustainable Technological
Development (Dutch acronym: DTO). The basis for this program is the notion that
in a time span of approximately 50 years the environmental impact of all kinds of
economic activities has to be reduced 20-fold (this assumes the world population
will grow 2- or 3-fold, that the developing countries are to reach the same living
standards as the western countries, and that the total environmental impact may not
increase). The goal of the program is to define technological development
trajectories that enable such a reduction within that term; a rough sketch of the
arithmetic behind the factor 20 follows below. This change in trend pertains to:
o Technology;
o Culture: ethical standards and values, needs;
o Structure: economy, institutions.
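The factor 20 can be made plausible with a back-of-the-envelope calculation in the spirit of the familiar 'impact = population x affluence x technology' identity. The welfare figure below is an illustrative assumption, not a number taken from the DTO program.

# Rough sketch of the 'factor 20' reasoning (illustrative figures, not DTO numbers):
# total impact = population x welfare per person x impact per unit of welfare,
# and total impact is not allowed to grow.
population_growth = 2.5       # world population grows 2- to 3-fold
welfare_growth = 8.0          # assumed rise in average material welfare per person
allowed_impact_growth = 1.0   # total environmental impact may not increase

required_factor = population_growth * welfare_growth / allowed_impact_growth
print(f"impact per unit of welfare must fall by a factor of about {required_factor:.0f}")  # ~20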
In the program, so-called demand-field analyses are performed for specific activity
areas: which developments and needs will express themselves in the long term,
and which technologies can accompany them. Then illustrational processes are
organized in which prototypes of those technologies are put into practice.
An example is a study into the future of the Rotterdam seaport. The central
question was: How can the seaport serve its function in the future without imposing
an increasing strain on the environment? Participating in this study were
governmental departments, the Municipal Port Company of Rotterdam and two
consulting agencies. Together they developed future scenarios for a sustainable
seaport. Central to this was the notion of 'dematerialization':
o The seaport would focus more on creating added value rather than on transfer
and distribution;
o The seaport acts as a transport chain director, not just as one element within
that transport chain;
o Less use of space, an increased role for information and knowledge.
Furthermore, changing traffic streams due to the advent of recycling and changing
production processes in other sectors have to be taken into account. As
illustrational processes were shown, among others:
o A hydrogen-powered vessel (the use of hydrogen as fuel was seen as a suitable
technology for this seaport concept, because it would not result in harmful
emissions to the environment, at least not locally, and it could be produced in an
environmentally friendly way, using for instance solar power);
o Underground tube transport.

Figure 5.2 Backcasting: Looking backward from a future perspective

5.7

Discussion

Some general theoretical concepts used in TA and some methods developed in


practice have been shown in the previous sections. It may be clear that CTA is still
being developed. The available methods are still rather limited, and a clear
framework stating which method to use in which situation is still missing. Many of
the methods used today are based on adaptations in the network of stakeholders
involved in a technological innovation. An interesting research question to pose for
" The use of hydrogen as fuel was seen as a suitable technology to use in this seaport concept, because it would not
result in harmful emissions to the environment (at least not locally), and could be produced in an environmentally
friendly way (using for instance solar power).

68

69

2: Philosophy of Science and Technology

Part 2: Philosophy of Science and Tect

70

6. Why do we need Sustainability?

6.1

Introduction

This chapter will explain what Sustainable Development is, and what the
consequences of unsustainable practices are. It will do so by sketching Easter
Island's history, which is an example of the decline of a community caused by its
unsustainable practices. The chapter will briefly describe some of the history of the
concept of Sustainable Development, and describe a framework for setting targets
for future improvements.

6.2

Easter Island

Easter Island (Rapa Nui) is one of the most remote inhabited places in the world.
This Pacific island lies 3747 kilometres from Chile, to which it belongs, and
2250 kilometres from the nearest inhabited island, Pitcairn. The enormous statues
of Easter Island make the island a well-known attraction for tourists.
In 1722, the Dutch admiral Jacob Roggeveen landed on the island. On his
journey around the world, he was searching for a large mysterious island that was
supposed to be in the Pacific. Easter Island was briefly explored by Roggeveen
and his crew.
The population amounted to about 3000 people. They were living in shacks, and
were dressed in rags. Roggeveen saw the large statues (called Moai) and was
intrigued by them (see Figure 1). The statues were placed in groups on platforms
(Ahu) near the shoreline. They were all facing inland. Some of them were wearing
separate stone hats. The largest standing statue weighs 82,000 kilograms and
measures 9.80 metres. The statues were cut in a quarry in the middle of the island.
At this site several semi-finished statues can still be observed. The distance to the
shoreline was up to 14 kilometres. Several statues which did not make it to the
shoreline can still be found.

Roggeveen stayed only for a couple of days on the island. Explorers that came
later described the island life in more detail and wondered about the statues.
Clearly, the creation of the platforms, cutting the statues and transporting them over
a distance of several kilometres took great effort. There are methods to transport an
average statue with about 20 people; transport will then take 30 to 70 days.
However, these methods are risky in the rough terrain of Easter Island. More
secure methods of transport use rollers or sledges. These methods were faster,
but took more people.
Two things are clear:
o The creation of the statues took much labour. As most statues were dated between
1400 and 1600, Easter Island's society had to be far more prosperous and far
better organized at that time in order to be able to support these efforts.
o The transportation methods that did not use wood were very risky for the statue
and probably not adequate in the rougher terrain of Easter Island.
Another fact points to the use of wood: the island was covered by bushes, not by
trees. Only two Chilean wine palm trees were ever discovered on the island. They
grew in a canyon and could not be reached. Archaeologists have proven that the
island was once covered by various palm trees.
Theories regarding the history of Easter Island nowadays state that the Easter
Islanders came from Polynesia. They arrived at Easter Island between 400 and
800 AD. The richness of the land and sea gave the islanders the means to develop
a rich culture. The population grew to a level of approximately 7000 in the 16th
century. As the Polynesian social system is based on the clan as the dominant unit,
clan life was probably the dominant social system on Easter Island too. The Ahu
were probably religious symbols that also expressed the status of the owning clan.
What happened?
The growing population and the vast activities to create Moai took their toll. The
fertile soil eroded and more land was needed for agriculture. Trees were cut in
great numbers for building Moai, boats and houses. The island probably ran out of
wood and food. Still the islanders must have continued chopping trees until the last
available one. As a consequence, they were unable to continue the creation and
placement of Moai, they could not build boats anymore and construction of new
houses became impossible. This coincided with armed conflict between the clans.
The civilisation of Easter Island collapsed because of its unsustainable nature. But
why did nobody do anything about it? It could be that the inhabitants were not as
aware as we are now of the harm they inflicted upon the ecosystem of their island.
But it is not realistic to assume that a civilisation as highly developed as Easter
Island's did not recognise that its last tree was being cut. However, in the competitive
struggle among clans, not cutting a tree implied leaving it to the axes of a competing
clan. This is an example of a prisoner's dilemma.

6.3

Unsustainable Societies collapse

The example of Easter Island shows that man cannot act as if his resources are
without limit. But Easter Island is not an isolated example: various civilizations
throughout the world collapsed because of unsustainable forms of agriculture. To
mention only a few examples:
When irrigation is used, agriculture yields good harvests in the beginning.
However, when the water evaporates on the land, it leaves salt in the top layer.
Salt is a poison to plants. Soon the yields start declining and the land cannot be
used for agriculture anymore. There is evidence that this phenomenon contributed
to the collapse of the civilizations of the Maya in Central America, the Indus Valley
in Southern Asia and the Mesopotamian cities in the Middle East.
When agricultural land is drained, saline water can rise from lower layers by
hydrostatic pressure. This can also lead to salt poisoning.
Figure 1 Moai at Easter Island

Irrigation often contributes to erosion, removing the productive top layer of soil from the agricultural land. Erosion might also lead to dust storms, which can in themselves pose a threat to farms and villages. In the 1930s, the so-called Dust Bowl, caused by years of low rainfall and extensive agricultural production, created a severe problem in the Midwest of the USA. Throughout the world, communities have been destroyed by land erosion.
When population grows, the increasing need for food might lead to extensive mono-cultural (single-crop) land use. This can lead to epidemics of plant disease (especially without any crop rotation or fallow). The resulting famine might destroy civilizations. Probably the famine that caused Abraham's descendants to leave the land of Canaan for Egypt was due to wheat rust. Well known is also the Irish Potato Famine between 1845 and 1850. Ireland was a densely populated nation in 1845. To produce enough food, it depended almost completely on the potato crop, as potatoes had very high yields per area. Crop rotation, to prevent outbreaks of soil-related plant diseases, was hardly possible. The blight (Phytophthora infestans) first struck in 1845. It left the potatoes rotting in the fields. Stores were also affected. The blight struck again in 1846 and 1848. About 1 million Irish died of famine and many fled the country, causing a drop in the Irish population from 8 million to 5 million.

6.4 But how about us?

Nowadays, world oil consumption is about 30 billion barrels annually. The proved reserves of oil are about 1150 billion barrels, which is sufficient for 25-35 years, depending on the increase in energy consumption. New reserves are explored, especially when oil prices rise. However, there will be a moment when no new reserves are available. Probably, if we continue consuming oil at the current pace, we will run out of fossil oil somewhere between 2050 and 2100. Oil is crucial for our transport systems, electricity supply, heating and materials supply. We may recognize the problem, but will we be able to take action in order to develop the alternatives we need? Technologies that use tar sands or coal might provide alternative fuels, but what about climate change? Scientists gradually reach consensus on the phenomenon, but can the global community reach consensus on actions to take? The aftermath of the Kyoto conference on climate change shows that consensus on worldwide measures is hard to reach.
The Easter Island story contains another lesson: using the dichotomy between technology and society is not very productive. The Easter Island technologies were aimed at statues which were important in the culture and social organisation of the island. Changing the technologies would have had serious consequences for the island's culture and organisation. Change of technologies always needs to be attuned to social processes, while the change of social structures will have consequences for technology. Therefore, we need to address socio-technical change.
The crisis which we face today has both environmental and social components. Many people are not concerned about the crisis because, on the one hand, they have an insufficient perception of its magnitude, especially in the developed countries; and on the other hand, the consequences are hard to accept.
"We have acquired the capability to disturb the Earth's natural systems, but we do not want to accept the responsibilities of this practice."
"Perhaps more dangerous for the environment's integrity is our way of perceiving the threats than the threats themselves. Most people resist accepting the crisis' extreme seriousness."
Nowadays, we have access to a great amount of information about the world
situation. If humanity does not act, it is not due to a lack of information. The UN
Environment Programme regularly publishes data sets that describe the state of the world. The Worldwatch Institute publishes its annual State of the World and the bimonthly World Watch magazine. International conferences like:
- The United Nations Conference on Environment and Development in Rio de Janeiro, 1992, and in Johannesburg, 2002,
- The International Conference on Population and Development in Cairo, 1994, and New York, 1999,
- The Fourth World Conference on Women, Beijing, 1995,
- The World Food Summit, Rome, 1996,
- The World Summit on the Information Society, Geneva, 2003,
- The Conference on Human Settlements (Habitat) in Istanbul, 1996,
are widely covered by the media.

6.5 Sustainable Development?

The natural environment is the source of all substances that sustain human life. We take from it food, water, fuels, minerals and metals, and we use it as a receptor ('sink') of our waste. The general attitude towards the environment changed sharply during the past decades. In the 1960s, most people perceived the natural environment as infinite. In 1962, Rachel Carson showed in her book Silent Spring that severe problems had been created by the use of agricultural chemicals. A modern chemical like DDT, widely used to kill insects precisely because of its low toxicity to humans and animals, turned out to have disastrous effects on wildlife. The cause was found to be the accumulation of this substance in the fat tissue of predator animals. Various incidents, like the soil contamination of the Love Canal site in Niagara Falls (USA), where a school had been built upon a chemical dump site, and the enormous oil spill following the first grounding of a supertanker, the Torrey Canyon, near the British Scilly Isles in 1967, contributed to public attention for the environment.
In 1972, Dennis Meadows and three co-authors took a wider perspective in their Limits to Growth. The book marked a change towards a finite vision of the world. It therefore focused not just on waste and emissions but also on resource consumption. Limits to Growth observed that population and consumption grew exponentially while resource production could only grow linearly. Factors like population growth, resource consumption, food production and pollution were integrated in a single model. The model predicted a collapse before the year 2000. The pessimistic tone of the book triggered a reaction by the American futurist Herman Kahn, who emphasized a bright future:
200 years ago almost everywhere human beings were comparatively few, poor and at the mercy of the forces of nature, and 200 years from now, we expect, almost everywhere they will be numerous, rich and in control of the forces of nature.
The basis for this vision was technological improvement. Kahn predicted that mankind would leave the track of exponential growth and end up at a final level at which every world citizen could live a prosperous life. Given the environmental crises described above, this bright future is by no means certain. It is a challenge to direct our efforts towards.
In the 1980s, the UN World Commission on Environment and Development, presided over by Norway's prime minister Gro Harlem Brundtland, introduced the term Sustainable Development to designate the challenges for the future development of our planet:
Sustainable Development is a development that meets the needs of the present without compromising the ability of future generations to meet their own needs.
In the further text of the report, the commission proposed to reconcile the development issue with the protection of the planet's resources. Sustainable Development is about reaching new equilibria:
- between the poor and the rich
- between current and future generations

- between mankind and nature


Sustainable Development is a direction to proceed in. Nobody can claim the wisdom of defining an ultimate solution for the world.
Sustainability does not imply that everybody will end up living in exactly the same way as everybody else on this planet. Needs differ among cultures and among individuals. Cultural diversity is worth preserving as it is our living heritage. However, why should the richer part of the world population be able to fulfil even the most absurd needs while many others starve, are unable to attend any school, or are dying of diseases that could easily be cured in richer parts of the world? Sustainable Development is a moral issue built on the assumption that every human is born with equal rights to build up the life that he or she chooses (without harming the rights of others). Human rights, gender issues, employment and cultural traditions cannot be separated from the sustainability issue. Countries, regions and local communities should develop their own path towards sustainability, as there is not only one route. Any attempt to develop sustainable directions of development for others should therefore be judged with suspicion.

6.6 What does sustainability mean more concretely?

Although we are not able to define a sustainable society as a final situation to aim for, some more concrete principles can be described:
- Resource consumption should be minimized.
- Cycles of consumption of non-renewable materials should be closed.
- Renewable materials and energy sources should be preferred.
- The development of human potentials like communication, creativity, cooperation, intellectual development and love should be stimulated.
- One should contribute to the common good, not just to the private good.
Unsustainable activities:
- require a constant consumption of non-renewable resources or consume more renewable resources than the Earth system can generate;
- cause degradation of the environment;
- require such quantities of resources that these will never be available for everybody;
- bring species to extinction;
- stimulate selfishness;
- create the risk of a disaster.
Sustainable Development implies a redefinition and review of concepts such as production, wealth and interest. Economic theory should find ways to include the assets of nature and human development in its calculations. National and international law, taxation and commercial practice need considerable change.
Can we afford to switch to a sustainable development? There is no alternative to sustainable development. The world has to proceed in that direction or it will face chaos and decline. Moreover, the rich world will have to make a start. The rich world should be aware that its affluence is built on an ecological debt.
A sustainable development will create more employment. Energy conservation and organic agriculture, for instance, are far more labour-intensive than industrialized agriculture and fossil-fuel or nuclear power production. However, individual countries or regions might suffer considerably if they completely opted for Sustainable Development on their own. Sustainable fisheries might imply fishing much less to maintain a rich and productive ecosystem. However, foreign fishermen might take over fishing grounds and fish markets, thereby preventing sustainable fisheries from taking effect and making the sustainable fishermen lose their jobs. Such a course of "sustainable development" would be a dead-end street.
For Sustainable Development, we therefore need cooperation and international agreements: agreements that take into account the rights and needs of all.

6.7 The role of technology: Factor X

Technology will have an important role in Sustainable Development. However, the sustainable technology is not going to be the one that chops the trees faster and more efficiently, but the one that is able to increase the useful lifetime of the trunks.
What improvements in the environmental efficiency of technologies do we need? In the 1970s it was debated which factors mainly contributed to the problems we were facing: consumption growth, overpopulation or the state of technology. A rough relationship between these factors can be described by the so-called IPAT equation:

I = P * A * T    (1)

where:
I = Total environmental impact of mankind on the planet;
P = Population;
A = Affluence, the number of products or services consumed per person, i.e. for economists the annual Gross National Product per capita;
T = Environmental impact per unit of product or service consumed. This factor is often called 'technology efficiency'. Note, however, that T diminishes as technologies become more efficient. Moreover, T also reflects more or less non-technological issues like product re-use and the organisation of production.
The IPAT equation might be used to get some more clarity on the magnitude of the environmental technological efficiency improvements that we have to reach in the long term. Therefore we need estimates of the various factors:
- Environmental impact. Our current use of natural resources is unsustainable. Suppose we want to cut it by half.
- Population. Population growth has been exponential. In the year 2000, world population was approximately 6 billion. In the past decade, we have seen declining population growth rates. This is especially due to the devastating effects of the HIV epidemic. In large parts of Africa, and also on an increasing scale in Asia, up to half of the teenagers are HIV positive. Not only the direct death toll is important, but especially the fact that youngsters do not reach the age of reproduction. Population growth is hardly affected by government policies. Even large wars hardly influence it. Only long-term policies might stabilize the global population. A growing affluence contributes considerably to a decline of population growth. The global population in the year 2050 is predicted to be between 8 and 11 billion people. Therefore, a rough estimate of population growth is a factor of 1.5.

- Affluence. The richest 20 % of the world population is roughly consuming 80 % of the world's resources. This leaves only 20 % for the remaining 80 % of the world population. The rich people in the world therefore consume on average 16 times more resources. The economies of the rich world are growing on average by 2 % annually. Over a 50-year period this implies a growth factor of 2.7. If the poorer nations want to catch up with the richer nations, they need to grow by a factor of 16 * 2.7 = 43.2, which means an annual growth of 7.8 %. The combined growth of the poorer and richer parts of the population can then be calculated. Let's assume total consumption now is 100. The rich consume 80; with a growth factor of 2.7 their consumption in 50 years is 216. The poor consume 20 now; with a growth factor of 43.2 their consumption in 50 years is 864. This leads to a total consumption of 1080, or 10.8 times the starting level.
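As a quick check, this arithmetic can be reproduced in a few lines of Python. This is only a minimal sketch of the calculation above; the 2 % growth rate, the 50-year horizon, the 80/20 split and the rounding to one decimal are the assumptions stated in the text, and the variable names are ours.

# Rough check of the affluence estimate above (all inputs are the text's assumptions).
rich_growth = round(1.02 ** 50, 1)         # 2 % annual growth over 50 years -> 2.7
poor_growth = 16 * rich_growth             # catching up from 1/16 of the rich level -> 43.2
poor_annual = poor_growth ** (1 / 50) - 1  # required annual growth rate -> ~7.8 %

# Consumption index: 100 today, of which the rich consume 80 and the poor 20.
total_2050 = 80 * rich_growth + 20 * poor_growth  # 216 + 864 = 1080
print(rich_growth, round(100 * poor_annual, 1), total_2050 / 100)  # 2.7 7.8 10.8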
We can substitute these estimates into the IPAT equation. For the year 2000, we can set all factors at a reference value of 1:

I2000 = P2000 * A2000 * T2000 = 1 * 1 * 1 = 1    (2)

Now we are able to calculate the value of T2050 (= X * T2000) by substituting the estimates that were made above:

I2050 = P2050 * A2050 * T2050 = 1.5 * P2000 * 10.8 * A2000 * X * T2000 = 0.5 * I2000    (3)

We can now calculate X:

X = 0.5 / (1.5 * 10.8) = 1 / 32.4    (4)

Therefore, based on the above assumptions, technology should become 32.4 times more environmentally efficient than it is today.
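The factor-X calculation itself can be written as a small helper function. The sketch below is our own illustration (the function name is hypothetical); it simply rearranges the IPAT equation to solve for the allowed change in T, using the halving target and the growth factors 1.5 and 10.8 assumed above.

def required_efficiency_factor(impact_target, population_growth, affluence_growth):
    """Return the factor by which technology must become more efficient.

    Rearranging I = P * A * T gives X = impact_target / (population_growth *
    affluence_growth) for the allowed change in T; the efficiency improvement
    is then 1 / X.
    """
    x = impact_target / (population_growth * affluence_growth)
    return 1 / x

# Assumptions from the text: halve the impact, population x1.5, affluence x10.8.
print(required_efficiency_factor(0.5, 1.5, 10.8))  # ~32.4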
Naturally, these calculations should not be interpreted literally as a target for each separate technology:
- First of all, technological change always leads to social changes. Needs are dynamic, influencing technology, and new technologies change needs.
- Secondly, the world will never be 'finished'. Sustainable Development must be an open process, in which future generations (and underprivileged people that have no voice now) will join and set new targets. Therefore, there will always be new problems and new challenges, even if we have improved the environmental efficiency of technology by a factor of 32.4.
- And thirdly, it will probably turn out that some technologies can be improved only marginally while other technologies can be improved radically, yielding much more than a 32-fold improvement because they do away with the consumption of resources or with emissions altogether. Marginal and radical technological improvements will both be important for Sustainable Development. However, we have to stimulate the radical ones, because only if we produce sufficiently large leaps in technological efficiency can we reach the orders of improvement that are needed.

6.8 Questions, Discussion and Exercises

1. The IPAT equation describes the environmental impact as a linear function of the factors Population, Affluence and Technology.
a. Is this description in your opinion realistic? Why do you think so?
b. Consider the case in which the Impact is dependent on the square of the
Population. How does this affect factor X?
c. Suggest which of the factors Population, Affluence and Technology are
interrelated (i.e. a function of one of the other factors) and give reasons why you
believe the interrelation exists.
2. Calculate the required improvement in the environmental efficiency of
technology in 2050, under the requirement that the affluence of the whole world
population would be at the level of affluence in the rich countries of the year 2000.
3. Short supply of oil is claimed to be a major threat to the world's future. Describe another threat that could potentially lead to catastrophe, and analyze its consequences.
4. Search the Internet for great collapses of cultures other than the one mentioned in the text. What were the reasons for the collapse? Was unsustainable behavior the cause of the problem? How was this behavior embedded in the culture?
5. There were various environmental disasters that received media coverage only in the nations in which they occurred. Try to find out which environmental disaster was especially important in creating environmental awareness in your country.


PART 2: Texts on Philosophy of Science and Technology

Text 1: "Scientific Explanations". In: Samir Okasha (2002), Philosophy of Science: A Very Short Introduction; Oxford: Oxford University Press, Chapter 3 (pp. 40-57).
Text 2: "Technical Artefacts" and "Technological Knowledge". In: Pieter Vermaas, Peter Kroes, Ibo van de Poel, Maarten Franssen, and Wybo Houkes (2011), A Philosophy of Technology: From Technical Artefacts to Sociotechnical Systems, Ft. Collins/Princeton/Bonita Springs/Seattle: Morgan & Claypool, Chapters 1 and 4 (pp. 5-20 and 55-66).



One of the most important aims of science is to try and explain what
happens in the world around us. Sometimes we seek explanations
for practical ends. For example, we might want to know why the
ozone layer is being depleted so quickly, in order to try and do
something about it. In other cases we seek scientific explanations
simply to satisfy our intellectual curiosity - we want to understand
more about how the world works. Historically, the pursuit of
scientific explanation has been motivated by both goals.


Quite often, modern science is successful in its aim of supplying explanations. For example, chemists can explain why sodium turns yellow when it burns. Astronomers can explain why solar eclipses occur when they do. Economists can explain why the yen declined in value in the 1980s. Geneticists can explain why male baldness tends to run in families. Neurophysiologists can explain why extreme oxygen deprivation leads to brain damage. You can probably think of many other examples of successful scientific explanations.
But what exactly is scientific explanation? What exactly does it
mean to say that a phenomenon can be 'explained' by science? This
is a question that has exercised philosophers since Aristotle, but our
starting point will be a famous account of scientific explanation put
forward in the 1950s by the American philosopher Carl Hempel. Hempel's account is known as the covering law model of explanation, for reasons that will become clear.

Hempel's covering law model of explanation

The basic idea behind the covering law model is straightforward. Hempel noted that scientific explanations are usually given in response to what he called 'explanation-seeking why questions'. These are questions such as 'why is the earth not perfectly spherical?', 'why do women live longer than men?', and the like - they are demands for explanation. To give a scientific explanation is thus to provide a satisfactory answer to an explanation-seeking why question. If we could determine the essential features that such an answer must have, we would know what scientific explanation is.
Hempel suggested that scientific explanations typically have the logical structure of an argument, i.e. a set of premisses followed by a conclusion. The conclusion states that the phenomenon that needs explaining actually occurs, and the premisses tell us why the conclusion is true. Thus suppose someone asks why sugar dissolves in water. This is an explanation-seeking why question. To answer it, says Hempel, we must construct an argument whose conclusion is 'sugar dissolves in water' and whose premisses tell us why this conclusion is true. The task of providing an account of scientific explanation then becomes the task of characterizing precisely the relation that must hold between a set of premisses and a conclusion, in order for the former to count as an explanation of the latter. That was the problem Hempel set himself.
Hempel's answer to the problem was three-fold. Firstly, the premisses should entail the conclusion, i.e. the argument should be a deductive one. Secondly, the premisses should all be true. Thirdly, the premisses should consist of at least one general law. General laws are things such as 'all metals conduct electricity', 'a body's acceleration varies inversely with its mass', 'all plants contain chlorophyll', and so on; they contrast with particular facts such as

'this piece of metal conducts electricity', 'the plant on my desk contains chlorophyll' and so on. General laws are sometimes called 'laws of nature'. Hempel allowed that a scientific explanation could appeal to particular facts as well as general laws, but he held that at least one general law was always essential. So to explain a phenomenon, on Hempel's conception, is to show that its occurrence follows deductively from a general law, perhaps supplemented by other laws and/or particular facts, all of which must be true.


To illustrate, suppose I am trying to explain why the plant on my desk has died. I might offer the following explanation. Owing to the poor light in my study, no sunlight has been reaching the plant; but sunlight is necessary for a plant to photosynthesize; and without photosynthesis a plant cannot make the carbohydrates it needs to survive, and so will die; therefore my plant died. This explanation fits Hempel's model exactly. It explains the death of the plant by deducing it from two true laws - that sunlight is necessary for photosynthesis, and that photosynthesis is necessary for survival - and one particular fact - that the plant was not getting any sunlight. Given the truth of the two laws and the particular fact, the death of the plant had to occur; that is why the former constitute a good explanation of the latter.

It is easy to see why Hempel's model is called the covering law model of explanation. For according to the model, the essence of explanation is to show that the phenomenon to be explained is 'covered' by some general law of nature. There is certainly something appealing about this idea. For showing that a phenomenon is a consequence of a general law does in a sense take the mystery out of it - it renders it more intelligible. And in fact, scientific explanations do often fit the pattern Hempel describes. For example, Newton explained why the planets move in ellipses around the sun by showing that this can be deduced from his law of universal gravitation, along with some minor additional assumptions. Newton's explanation fits Hempel's model exactly: a phenomenon is explained by showing that it had to be so, given the laws of nature plus some additional facts. After Newton, there was no longer any mystery about why planetary orbits are elliptical.

Schematically, Hempel's model of explanation can be written as follows:

General laws
Particular facts
=>
Phenomenon to be explained

The phenomenon to be explained is called the explanandum, and the general laws and particular facts that do the explaining are called the explanans. The explanandum itself may be either a particular fact or a general law. In the example above, it was a particular fact - the death of my plant. But sometimes the things we want to explain are general. For example, we might wish to explain why exposure to the sun leads to skin cancer. This is a general law, not a particular fact. To explain it, we would need to deduce it from still more fundamental laws - presumably, laws about the impact of radiation on skin cells, combined with particular facts about the amount of radiation in sunlight. So the structure of a scientific explanation is essentially the same, whether the explanandum, i.e. the thing we are trying to explain, is particular or general.

Hempel was aware that not all scientific explanations fit his model exactly. For example, if you ask someone why Athens is always immersed in smog, they will probably say 'because of car exhaust pollution'. This is a perfectly acceptable scientific explanation, though it involves no mention of any laws. But Hempel would say that if the explanation were spelled out in full detail, laws would enter the picture. Presumably there is a law that says something like 'if carbon monoxide is released into the earth's atmosphere in sufficient concentration, smog clouds will form'. The full explanation of why Athens is bathed in smog would cite this law, along with the fact that car exhaust contains carbon monoxide and Athens has lots of cars. In practice, we wouldn't spell out the explanation in this much detail unless we were being very pedantic. But if we were to spell it out, it would correspond quite well to the covering law pattern.

Hempel drew an interesting philosophical consequence from his model about the relation between explanation and prediction. He argued that these are two sides of the same coin. Whenever we give a covering law explanation of a phenomenon, the laws and particular facts we cite would have enabled us to predict the occurrence of the phenomenon, if we hadn't already known about it. To illustrate, consider again Newton's explanation of why planetary orbits are elliptical. This fact was known long before Newton explained it using his theory of gravity - it was discovered by Kepler. But if it had not been known, Newton would have been able to predict it from his theory of gravity, for his theory entails that planetary orbits are elliptical, given minor additional assumptions. Hempel expressed this by saying that every scientific explanation is potentially a prediction - it would have served to predict the phenomenon in question, had it not already been known. The converse was also true, Hempel thought: every reliable prediction is potentially an explanation. To illustrate, suppose scientists predict that mountain gorillas will be extinct by 2010, based on information about the destruction of their habitat. Suppose they turn out to be right. According to Hempel, the information they used to predict the gorillas' extinction before it happened will serve to explain that same fact after it has happened. Explanation and prediction are structurally symmetric.
Though the covering law model captures the structure of many actual scientific explanations quite well, it also faces a number of awkward counter-examples. These counter-examples fall into two classes. On the one hand, there are cases of genuine scientific explanations that do not fit the covering law model, even approximately. These cases suggest that Hempel's model is too strict - it excludes some bona fide scientific explanations. On the other hand, there are cases of things that do fit the covering law model, but intuitively do not count as genuine scientific explanations. These cases suggest that Hempel's model is too liberal - it allows in things that should be excluded. We will focus on counter-examples of the second sort.



The problem of symmetry

Suppose you are lying on the beach on a sunny day, and you notice that a flagpole is casting a shadow of 20 metres across the sand (Figure 8).

8. A 15-metre flagpole casts a shadow of 20 metres on the beach when the sun is 37° overhead.

Someone asks you to explain why the shadow is 20 metres long. This is an explanation-seeking why question. A plausible answer might go as follows: 'light rays from the sun are hitting the flagpole, which is exactly 15 metres high. The angle of elevation of the sun is 37°. Since light travels in straight lines, a simple trigonometric calculation (tan 37° = 15/20) shows that the flagpole will cast a shadow 20 metres long'.
This looks like a perfectly good scientific explanation. And by rewriting it in accordance with Hempel's schema, we can see that it fits the covering law model:
General laws: Light travels in straight lines
              Laws of trigonometry
Particular facts: Angle of elevation of the sun is 37°
                  Flagpole is 15 metres high
=>
Phenomenon to be explained: Shadow is 20 metres long

The length of the shadow is deduced from the height of the flagpole and the angle of elevation of the sun, along with the optical law that light travels in straight lines and the laws of trigonometry. Since these laws are true, and since the flagpole is indeed 15 metres high, the explanation satisfies Hempel's requirements precisely. So far so good. The problem arises as follows. Suppose we swap the explanandum - that the shadow is 20 metres long - with the particular fact that the flagpole is 15 metres high. The result is this:

General laws: Light travels in straight lines
              Laws of trigonometry
Particular facts: Angle of elevation of the sun is 37°
                  Shadow is 20 metres long
=>
Phenomenon to be explained: Flagpole is 15 metres high

This 'explanation' clearly conforms to the covering law pattern too. The height of the flagpole is deduced from the length of the shadow it casts and the angle of elevation of the sun, along with the optical law that light travels in straight lines and the laws of trigonometry. But it seems very odd to regard this as an explanation of why the flagpole is 15 metres high. The real explanation of why the flagpole is 15 metres high is presumably that a carpenter deliberately made it so - it has nothing to do with the length of the shadow that it casts. So Hempel's model is too liberal: it allows something to count as a scientific explanation that obviously is not.

The general moral of the flagpole example is that the concept of explanation exhibits an important asymmetry. The height of the flagpole explains the length of the shadow, given the relevant laws and additional facts, but not vice-versa. In general, if x explains y, given the relevant laws and additional facts, then it will not be true that y explains x, given the same laws and facts. This is sometimes expressed by saying that explanation is an asymmetric relation. Hempel's covering law model does not respect this asymmetry. For just as we can deduce the length of the shadow from the height of the flagpole, given the laws and additional facts, so we can deduce the height of the flagpole from the length of the shadow. In other words, the covering law model implies that explanation should be a symmetric relation, but in fact it is asymmetric. So Hempel's model fails to capture fully what it is to be a scientific explanation.
The shadow and flagpole case also provides a counter-example to Hempel's thesis that explanation and prediction are two sides of the same coin. The reason is obvious. Suppose you didn't know how high the flagpole was. If someone told you that it was casting a shadow of 20 metres and that the sun was 37° overhead, you would be able to predict the flagpole's height, given that you knew the relevant optical and trigonometrical laws. But as we have just seen, this information clearly doesn't explain why the flagpole has the height it does. So in this example prediction and explanation part ways. Information that serves to predict a fact before we know it does not serve to explain that same fact after we know it, which contradicts Hempel's thesis.

The problem of irrelevance

Suppose a young child is in a hospital in a room full of pregnant women. The child notices that one person in the room - who is a man called John - is not pregnant, and asks the doctor why not. The doctor replies: 'John has been taking birth-control pills regularly for the last few years. People who take birth-control pills regularly never become pregnant. Therefore, John has not become pregnant'.

Let us suppose for the sake of argument that what the doctor says is true - John is mentally ill and does indeed take birth-control pills, which he believes help him. Even so, the doctor's reply to the child is clearly not very helpful. The correct explanation of why John has not become pregnant, obviously, is that he is male and males cannot become pregnant.
However, the explanation the doctor has given the child fits the covering law model perfectly. The doctor deduces the phenomenon to be explained - that John is not pregnant - from the general law that people who take birth-control pills do not become pregnant and the particular fact that John has been taking birth-control pills. Since both the general law and the particular fact are true, and since they do indeed entail the explanandum, according to the covering law model the doctor has given a perfectly adequate explanation of why John is not pregnant. But of course he hasn't. Hence the covering law model is again too permissive: it allows things to count as scientific explanations that intuitively are not.
The general moral is that a good explanation of a phenomenon should contain information that is relevant to the phenomenon's occurrence. This is where the doctor's reply to the child goes wrong. Although what the doctor tells the child is perfectly true, the fact that John has been taking birth-control pills is irrelevant to his not being pregnant, because he wouldn't have been pregnant even if he hadn't been taking the pills. This is why the doctor's reply does not constitute a good answer to the child's question. Hempel's model does not respect this crucial feature of our concept of explanation.

Explanation and causality

Since the covering law model encounters so many problems, it is natural to look for an alternative way of understanding scientific explanation. Some philosophers believe that the key lies in the concept of causality. This is quite an attractive suggestion. For in many cases to explain a phenomenon is indeed to say what caused
it. For example, if an accident investigator is trying to explain an aeroplane crash, he is obviously looking for the cause of the crash. Indeed, the questions 'why did the plane crash?' and 'what was the cause of the plane crash?' are practically synonymous. Similarly, if an ecologist is trying to explain why there is less biodiversity in the tropical rainforests than there used to be, he is clearly looking for the cause of the reduction in biodiversity. The link between the concepts of explanation and causality is quite intimate.
Impressed by this link, a number of philosophers have abandoned the covering law account of explanation in favour of causality-based accounts. The details vary, but the basic idea behind these accounts is that to explain a phenomenon is simply to say what caused it. In some cases, the difference between the covering law and causal accounts is not actually very great, for to deduce the occurrence of a phenomenon from a general law often just is to give its cause. For example, recall again Newton's explanation of why planetary orbits are elliptical. We saw that this explanation fits the covering law model - for Newton deduced the shape of the planetary orbits from his law of gravity, plus some additional facts. But Newton's explanation was also a causal one, since elliptical planetary orbits are caused by the gravitational attraction between planets and the sun.
However, the covering law and causal accounts are not fully equivalent - in some cases they diverge. Indeed, many philosophers favour a causal account of explanation precisely because they think it can avoid some of the problems facing the covering law model. Recall the flagpole problem. Why do our intuitions tell us that the height of the flagpole explains the length of the shadow, given the laws, but not vice-versa? Plausibly, because the height of the flagpole is the cause of the shadow being 20 metres long, but the shadow being 20 metres long is not the cause of the flagpole being 15 metres high. So unlike the covering law model, a causal account of explanation gives the 'right' answer in the flagpole case - it respects our intuition that we cannot explain the height of the flagpole by pointing to the length of the shadow it casts.
The general moral of the flagpole problem was that the covering law model cannot accommodate the fact that explanation is an asymmetric relation. Now causality is obviously an asymmetric relation too: if x is the cause of y, then y is not the cause of x. For example, if the short-circuit caused the fire, then the fire clearly did not cause the short-circuit. It is therefore quite plausible to suggest that the asymmetry of explanation derives from the asymmetry of causality. If to explain a phenomenon is to say what caused it, then since causality is asymmetric we should expect explanation to be asymmetric too - as it is. The covering law model runs up against the flagpole problem precisely because it tries to analyse the concept of scientific explanation without reference to causality.
The same is true of the birth-control pill case. That John takes birth-control pills does not explain why he isn't pregnant, because the birth-control pills are not the cause of his not being pregnant. Rather, John's gender is the cause of his not being pregnant. That is why we think that the correct answer to the question 'why is John not pregnant?' is 'because he is a man, and men can't become pregnant', rather than the doctor's answer. The doctor's answer satisfies the covering law model, but since it does not correctly identify the cause of the phenomenon we wish to explain, it does not constitute a genuine explanation. The general moral we drew from the birth-control pill example was that a genuine scientific explanation must contain information that is relevant to the explanandum. In effect, this is another way of saying that the explanation should tell us the explanandum's cause. Causality-based accounts of scientific explanation do not run up against the problem of irrelevance.
It is easy to criticize Hempel for failing to respect the close link between causality and explanation, and many people have done so.

In some ways, this criticism is a bit unfair. For Hempel subscribed to a philosophical doctrine known as empiricism, and empiricists are traditionally very suspicious of the concept of causality. Empiricism says that all our knowledge comes from experience. David Hume, whom we met in the last chapter, was a leading empiricist, and he argued that it is impossible to experience causal relations. So he concluded that they don't exist - causality is a figment of our imagination! This is a very hard conclusion to accept. Surely it is an objective fact that dropping glass vases causes them to break? Hume denied this. He allowed that it is an objective fact that most glass vases that have been dropped have in fact broken. But our idea of causality includes more than this. It includes the idea of a causal link between the dropping and the breaking, i.e. that the former brings about the latter. No such links are to be found in the world, according to Hume: all we see is a vase being dropped, and then it breaking a moment later. We experience no causal connection between the first event and the second. Causality is therefore a fiction.
Most empiricists have not accepted this startling conclusion outright. But as a result of Hume's work, they have tended to regard causality as a concept to be treated with great caution. So to an empiricist, the idea of analysing the concept of explanation in terms of the concept of causality would seem perverse. If one's goal is to clarify the concept of scientific explanation, as Hempel's was, there is little point in using notions that are equally in need of clarification themselves. And for empiricists, causality is definitely in need of philosophical clarification. So the fact that the covering law model makes no mention of causality was not a mere oversight on Hempel's part. In recent years, empiricism has declined somewhat in popularity. Furthermore, many philosophers have come to the conclusion that the concept of causality, although philosophically problematic, is indispensable to how we understand the world. So the idea of a causality-based account of scientific explanation seems more acceptable than it would have done in Hempel's day.

Causality-based accounts of explanation certainly capture the structure of many actual scientific explanations quite well, but are they the whole story? Many philosophers say no, on the grounds that certain scientific explanations do not seem to be causal. One type of example stems from what are called 'theoretical identifications' in science. Theoretical identifications involve identifying one concept with another, usually drawn from a different branch of science. 'Water is H2O' is an example, as is 'temperature is average molecular kinetic energy'. In both of these cases, a familiar everyday concept is equated or identified with a more esoteric scientific concept. Often, theoretical identifications furnish us with what seem to be scientific explanations. When chemists discovered that water is H2O, they thereby explained what water is. Similarly, when physicists discovered that an object's temperature is the average kinetic energy of its molecules, they thereby explained what temperature is. But neither of these explanations is causal. Being made of H2O doesn't cause a substance to be water - it just is being water. Having a particular average molecular kinetic energy doesn't cause a liquid to have the temperature it does - it just is having that temperature. If these examples are accepted as legitimate scientific explanations, they suggest that causality-based accounts of explanation cannot be the whole story.

Can science explain everything?

Modern science can explain a great deal about the world we live in. But there are also numerous facts that have not been explained by science, or at least not explained fully. The origin of life is one such example. We know that about 4 billion years ago, molecules with the ability to make copies of themselves appeared in the primeval soup, and life evolved from there. But we do not understand how these self-replicating molecules got there in the first place. Another example is the fact that autistic children tend to have very good memories. Numerous studies of autistic children have confirmed this fact, but as yet nobody has succeeded in explaining it.
Many people believe that in the end, science will be able to explain facts of this sort. This is quite a plausible view. Molecular biologists are working hard on the problem of the origin of life, and only a pessimist would say they will never solve it. Admittedly, the problem is not easy, not least because it is very hard to know what conditions on earth 4 billion years ago were like. But nonetheless, there is no reason to think that the origin of life will never be explained. Similarly for the exceptional memories of autistic children. The science of memory is still in its infancy, and much remains to be discovered about the neurological basis of autism. Obviously we cannot guarantee that the explanation will eventually be found. But given the number of explanatory successes that modern science has already notched up, the smart money must be on many of today's unexplained facts eventually being explained too.


But does this mean that science can in principle explain everything?
Or are there some phenomena that must forever elude scientific
explanation? This is not an easy question to answer. On the one
hand, it seems arrogant to assert that science can explain
everything. On the other hand, it seems short-sighted to assert that
any particular phenomenon can never be explained scientifically.
For science changes and develops very fast, and a phenomenon that
looks completely inexplicable from the vantage-point of today's
science may be easily explained tomorrow.


According to some philosophers, there is a purely logical reason why science will never be able to explain everything. For in order to explain something, whatever it is, we need to invoke something else. But what explains the second thing? To illustrate, recall that Newton explained a diverse range of phenomena using his law of gravity. But what explains the law of gravity itself? If someone asks why all bodies exert a gravitational force on each other, what should we tell them? Newton had no answer to this question. In Newtonian science the law of gravity was a fundamental principle: it explained other things, but could not itself be explained. The moral is generalizable. However much the science of the future can explain, the explanations it gives will have to make use of certain fundamental laws and principles. Since nothing can explain itself, it follows that at least some of these laws and principles will themselves remain unexplained.
Whatever one makes of this argument, it is undeniably very abstract. It purports to show that some things will never be explained, but does not tell us what they are. However, some philosophers have made concrete suggestions about phenomena that they think science can never explain. An example is consciousness - the distinguishing feature of thinking, feeling creatures such as ourselves and other higher animals. Much research into the nature of consciousness has been and continues to be done, by brain scientists, psychologists, and others. But a number of recent philosophers claim that whatever this research throws up, it will never fully explain the nature of consciousness. There is something intrinsically mysterious about the phenomenon of consciousness, they maintain, that no amount of scientific investigation can eliminate.
What are the grounds for this view? The basic argument is that conscious experiences are fundamentally unlike anything else in the world, in that they have a 'subjective aspect'. Consider, for example, the experience of watching a terrifying horror movie. This is an experience with a very distinctive 'feel' to it; in the current jargon, there is 'something that it is like' to have the experience. Neuroscientists may one day be able to give a detailed account of the complex goings-on in the brain that produce our feeling of terror. But will this explain why watching a horror movie feels the way it does, rather than feeling some other way? Many people believe that it will not. On this view, the scientific study of the brain can at most tell us which brain processes are correlated with which conscious experiences. This is certainly interesting and valuable information. However, it doesn't tell us why experiences with distinctive subjective 'feels' should result from the purely physical goings-on in the brain. Hence consciousness, or at least one important aspect of it, is scientifically inexplicable.

Though quite compelling, this argument is very controversial and not endorsed by all philosophers, let alone all neuroscientists. Indeed, a well-known book published in 1991 by the philosopher Daniel Dennett is defiantly entitled Consciousness Explained. Supporters of the view that consciousness is scientifically inexplicable are sometimes accused of having a lack of imagination. Even if it is true that brain science as currently practised cannot explain the subjective aspect of conscious experience, can we not imagine the emergence of a radically different type of brain science, with radically different explanatory techniques, that does explain why our experiences feel the way they do? There is a long tradition of philosophers trying to tell scientists what is and isn't possible, and later scientific developments have often proved the philosophers wrong. Only time will tell whether the same fate awaits those who argue that consciousness must always elude scientific explanation.

Explanation and reduction

The different scientific disciplines are designed for explaining different types of phenomena. To explain why rubber doesn't conduct electricity is a task for physics. To explain why turtles have such long lives is a task for biology. To explain why higher interest rates reduce inflation is a task for economics, and so on. In short, there is a division of labour between the different sciences: each specializes in explaining its own particular set of phenomena. This explains why the sciences are not usually in competition with one another - why biologists, for example, do not worry that physicists and economists might encroach on their turf.
Nonetheless, it is widely held that the different branches of science are not all on a par: some are more fundamental than others. Physics is usually regarded as the most fundamental science of all.

Why? Because the objects studied by the other sciences are ultimately composed of physical particles. Consider living organisms, for example. Living organisms are made up of cells, which are themselves made up of water, nucleic acids (such as DNA), proteins, sugars, and lipids (fats), all of which consist of molecules or long chains of molecules joined together. But molecules are made up of atoms, which are physical particles. So the objects biologists study are ultimately just very complex physical entities. The same applies to the other sciences, even the social sciences. Take economics, for example. Economics studies the behaviour of corporations and consumers in the market place, and the consequences of this behaviour. But consumers are human beings and corporations are made up of human beings; and human beings are living organisms, hence physical entities.


Does this mean that, in principle, physics can subsume all the higher-level sciences? Since everything is made up of physical particles, surely if we had a complete physics, which allowed us to predict perfectly the behaviour of every physical particle in the universe, all the other sciences would become superfluous? Most philosophers resist this line of thought. After all, it seems crazy to suggest that physics might one day be able to explain the things that biology and economics explain. The prospect of deducing the laws of biology and economics straight from the laws of physics looks very remote. Whatever the physics of the future looks like, it is most unlikely to be capable of predicting economic downturns. Far from being reducible to physics, sciences such as biology and economics seem largely autonomous of it.
This leads to a philosophical puzzle. How can a science that studies entities that are ultimately physical not be reducible to physics? Granted that the higher-level sciences are in fact autonomous of physics, how is this possible? According to some philosophers, the answer lies in the fact that the objects studied by the higher-level sciences are 'multiply realized' at the physical level. To illustrate the idea of multiple realization, imagine a collection of ashtrays. Each individual ashtray is obviously a physical entity, like everything else in the universe. But the physical composition of the ashtrays could be very different - some might be made of glass, others of aluminium, others of plastic, and so on. And they will probably differ in size, shape, and weight. There is virtually no limit on the range of different physical properties that an ashtray can have. So it is impossible to define the concept 'ashtray' in purely physical terms. We cannot find a true statement of the form 'x is an ashtray if and only if x is ...' where the blank is filled by an expression taken from the language of physics. This means that ashtrays are multiply realized at the physical level.
Philosophers have often invoked multiple realization to explain why psychology cannot be reduced to physics or chemistry, but in principle the explanation works for any higher-level science. Consider, for example, the biological fact that nerve cells live longer than skin cells. Cells are physical entities, so one might think that this fact will one day be explained by physics. However, cells are almost certainly multiply realized at the microphysical level. Cells are ultimately made up of atoms, but the precise arrangement of atoms will be very different in different cells. So the concept 'cell' cannot be defined in terms drawn from fundamental physics. There is no true statement of the form 'x is a cell if and only if x is ...' where the blank is filled by an expression taken from the language of microphysics. If this is correct, it means that fundamental physics will never be able to explain why nerve cells live longer than skin cells, or indeed any other facts about cells. The vocabulary of cell biology and the vocabulary of physics do not map onto each other in the required way. Thus we have an explanation of why it is that cell biology cannot be reduced to physics, despite the fact that cells are physical entities. Not all philosophers are happy with the doctrine of multiple realization, but it does promise to provide a neat explanation of the autonomy of the higher-level sciences, both from physics and from each other.

CHAPTER 1

Technical Artefacts
We start in this chapter by analysing the nature of technical artefacts. What sorts of objects are they? What are the typical characteristics of technical artefacts? Given that technical artefacts play such a predominant role in our modern world, these are all important questions. The way in which people interact with one another and relate to nature is determined, to a very high degree, by the technological resources they have at their disposal. In order to gain a better understanding of the nature of technical artefacts, we shall first compare them with natural and social objects. We shall also stop to consider the moral status of technical artefacts, which will involve us in the question whether they may be seen as good or bad, in the moral sense of the word. We will come to the conclusion that technical artefacts are physical objects designed by humans that have both a function and a use plan. If one adopts this view, then technical artefacts themselves can actually be morally significant.

1.1 A WORLD MADE BY HUMANS

Human intervention in the material world has taken on such huge proportions that one might safely assert that we live, nowadays, in a human-made world. Technical artefacts have come to dominate our environment and affect almost every facet of our behaviour. They have virtually superseded natural objects in everyday life, to the extent that we now even feel obliged to protect nature. The most familiar types of technical artefacts are common objects of use such as tables, chairs, pens, paper, telephones, calculators, et cetera. There is a huge variety of technical artefacts: from very small to very big, from simple to complex, from component part to end-product and consisting of chemical materials, et cetera. What all of these things have in common is that they are material objects that have been deliberately produced by humans in order to fulfil some kind of practical function. They are often described as technical artefacts in order to emphasise that they are not naturally occurring objects. Here, we do not classify artistic works as technical artefacts because they do not fulfil any practical kind of function (the producing of art typically also draws on different creative skills and qualities than those demanded of an engineer).
The notion of technical artefacts and the distinction between natural and artificial objects
presents many philosophical problems. Take the assertion that technical artefacts exist. I t is indisputably true that humans enrich the world i n which they live with all kinds o f objects such as cars,
calculators, telephones and so on. None of those objects existed a couple of centuries ago, but today
they do, thanks to our technological ingenuity. What, though, do we precisely mean when we state
that such technical artefacts exist? Technical artefacts are artificial (or man-made) objects. But the
word 'artificial' also has the connotation of 'unreal' or 'not actual'. Does this mean that the very

1. TECHNICALARTEFACTS
existence o f technical artefacts, as artificial objects may be put into question? Take, for Instance, a
knife. For anyone who believes that the world is made up of only physical objects, i n the sense that
all the properties of objects can be fiilly described i n terms of physics, a knife does not exist as a
knife, that is, as a technical artefact, since 'knife' is not a physical concept. A l l that can be truly said to
exist from that point of view is a collection of interacting particles - atoms and molecules - of which
the knife is made up. Some metaphysicians therefore deny technical artefacts an existence of tiieir
own. For Peter van Inwagen, for mstance, a knife is nothing more than atoms that are organised
i n a knife-Hke way.^ Yet other metaphysicians, like Lynne Rudder Baker and Amie Thomasson do
grant technical artefacts an existence of their own, separate from the atoms of which they are made
up.^ The distinction between artificial and natural objects is also problematic. I t maybe taken to be
an artificial distinction i n itself i n the sense of being 'unreal', since one could point out that people
themselves are natural organisms and that everything that a human being, as an integral part o f
nature, produces is inevitably natural (in much the same way that a beaver's dam may be termed
a natural object). By attributing fimdamental significance to the natural-artificial distinction, i t is
almost as though we humans are incHned to somehow view ourselves as external to nature. That,
in turn, instantiy gives rise to the question of how precisely humans may be -newed as non-natural
beings.
These are not the kinds of questions that will preoccupy us here. In this chapter, we shall confine ourselves to a 'conceptual analysis' of the term 'technical artefact'. We shall endeavour to elucidate the term by comparing the way in which we describe technical artefacts with the way in which we describe natural and social objects (what exactly is meant by a social object will become clear in Section 1.3). It will be presumed that similarities and differences in the methods of description tell us something about the nature of technical artefacts. It is hoped that in that way, we will gain more insight into that nature, into how technical artefacts differ from other objects in the world, in particular from natural objects and social objects.

We take the modern aeroplane as an example of a technical artefact. Planes have now existed for more than a century, and they are deployed for various ends. Our focus will be on those used for civil aviation purposes, such as the Airbus A380. There are many different kinds of descriptions for such planes, ranging from very general descriptions of the type given in encyclopaedias and advertisements to extremely extensive technical descriptions of the kind given in, for instance, maintenance manuals. In those descriptions, one may distinguish three different aspects, each of which is important when answering the question of exactly what kind of an object a civil aviation aircraft is. The first aspect relates to the question of the purpose served by planes (for example, they are used for transporting people). The second aspect pertains more to the aeroplane's structure and to how it is built up (comprising a fuselage, wings, engines, a tail, a cockpit, and so on). Finally, the third aspect has to do with how a plane is used (describing, for example, what a pilot does to get it airborne).

^Van Inwagen, P. [1990].
^Thomasson, A. [2007].


In general, there are always at least three relevant questions that can be asked about technical artefacts:

Question 1: What is it for?
Question 2: What does it consist of?
Question 3: How must it be used?

The answers to these questions will describe the following respective matters:

Aspect 1: The technical function of the technical artefact.
Aspect 2: The physical composition of the technical artefact.
Aspect 3: The instructions for use accompanying the technical artefact.

These three aspects are not independent of each other because the physical composition must be such that the technical function can be fulfilled, and if that function is to be realised, the user will have to carry out certain actions as described in the instructions for use as laid down in, say, the user manual. A technical artefact cannot therefore be considered in isolation of the accompanying instructions for use. These instructions specify a certain use plan,^ a series of goal-directed actions to be carried out by the user with respect to the technical artefact to ensure that the function is realised, presuming that the technical artefact is not broken. Each technical artefact is, as it were, embedded in a use plan that is closely attuned to the relevant function.

In the light of what has been stated above, we may therefore define a technical artefact as a physical object with a technical function and use plan, designed and made by human beings. The requirement that the object must be designed and made by humans is added to ensure that any natural objects that happen to be used for practical purposes are not also termed technical artefacts. For instance, a shell can be used for the purpose of drinking water. In such a case, it has a function and a use plan. Nevertheless, since it does not also meet the requirement of being human-made, it is not termed a technical artefact.
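To make the three aspects and their interdependence a little more tangible, the following minimal sketch (our own illustration, not part of the original text; all names and values are made up) records a technical artefact as the combination of a function, a physical composition and a use plan, together with the requirement of being human-made:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TechnicalArtefact:
        technical_function: str          # Aspect 1: what is it for?
        physical_composition: List[str]  # Aspect 2: what does it consist of?
        use_plan: List[str]              # Aspect 3: how must it be used?
        human_made: bool = True          # designed and made by humans

    airliner = TechnicalArtefact(
        technical_function="transporting people by air",
        physical_composition=["fuselage", "wings", "engines", "tail", "cockpit"],
        use_plan=["board passengers", "taxi", "take off", "cruise", "land"],
    )

    # A shell used for drinking has a function and a use plan, but it is not
    # human-made, so on the definition above it is not a technical artefact.
    shell = TechnicalArtefact(
        technical_function="holding drinking water",
        physical_composition=["calcium carbonate shell"],
        use_plan=["scoop water", "drink"],
        human_made=False,
    )

The sketch is of course only a mnemonic for the conceptual analysis; it does not capture the way in which the three aspects constrain one another.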


We shall now first consider how technical artefacts, as physical objects with a function and a use plan, differ from natural and social objects. We shall see that the first question (What is it for?) cannot be sensibly applied to natural objects and that the second question (What does it consist of?) has no relevance to social objects.

1.2 TECHNICAL ARTEFACTS AND NATURAL OBJECTS

The most obvious difference between natural objects and technical artefacts is the fact that the latter result from purposeful human action whilst the same cannot be said of natural objects; a natural forest is one that has spontaneously evolved, without any kind of human intervention. Nature is that which has in no way been subjected to or disrupted by any kind of human intervention. The essence of this difference between nature and technology, which is in many respects problematic, takes us back to a distinction first made by Aristotle between things that evolve through their very 'nature' and things which owe their origins to other causes.^ Things that exist by nature - natural things - possess in themselves their principle of change. The nature of a thing is an intrinsic developmental principle directed at realising the aim or end of that same thing. The 'aim' of a birch seed is to develop into a fully-grown birch tree, and the principle of growth lies within the birch seed itself. A fully-grown birch tree, which is the result of (caused by) that growth principle, of the nature of the birch seed, is thus a natural thing.

^Houkes and Vermaas [2010].
^Physica, Book II, 192b.
A bed made from birch wood is not, by contrast, a natural object because the wood of the birch tree has no internal principle of change that is geared to turning it into a bed. A bed is not a naturally occurring object because it does not possess, as a bed, an intrinsic principle of change. As Aristotle rightly observed, if you plant a piece of wood taken from a bed, it will not automatically grow (change into) a new bed. The reason for the existence of the bed lies outside the bed. It lies, of course, in its designer and maker, who create it with a certain purpose in mind. The bed itself has no aim or end, and insofar as a bed has a function, it has that function only in relation to an external aim of the designer or user; here one sees, again, how close the relationship between function and use plan is. It is no natural object, but it is the result of human skill (τέχνη or technè in Greek); it is a technical artefact.
A bed, as a bed, therefore has no nature in the Aristotelian sense of the word. This is the reason that, over the course of time, a bed requires maintenance if it is to continue to fulfil its rightful function. Wear and tear and natural processes occurring in the material of which a bed is made, like rotting, can eventually lead to the damaging of the technical artefact. In general terms, the very fact that certain objects require maintenance or repair is a strong indication that one is actually dealing with technical artefacts. From that point of view, biological objects such as gardens or dairy cattle are also technical artefacts. They are a direct result of human intervention and cannot remain in existence without continued human maintenance and care. Natural objects do not share the same need for maintenance and care. For those objects, it is rather the case that all forms of human intervention may be directly seen as having a disruptive effect on their very nature.
For Aristotle, not only living (biological) objects have a nature, but physical objects also have a nature; it is their intrinsic principle of motion that endeavours to realise the ends of physical objects. The motion principle of a heavy stone, for instance, is to gravitate to its natural place, and the natural place for all heavy bodies is in the centre of the universe.^ Whenever one throws a stone up into the air, one forces it to make an unnatural upward motion when, in fact, the intrinsic principle of motion of any heavy stone is to perform a downward motion towards its natural place in the centre of the universe.

It is not easy to relate the Aristotelian concept of nature to the notion of nature that lies at the basis of our modern natural sciences. That is mainly because the idea that physical objects may conceivably have an intrinsic end is one that has been abandoned. Yet curiously enough, a core element of the Aristotelian notion is still present in the natural sciences. The natural sciences are devoted to the study of natural objects and phenomena, in other words, objects and phenomena whose properties are not human-made but are rather determined by physical, chemical and biological laws. Those laws may be seen as the intrinsic principles of change for those objects and phenomena and thus as their nature (or as the natural laws governing them). Examples of such natural objects and phenomena are electrons, the chemical reactions that turn hydrogen and oxygen into water, hurricanes, and so forth. The properties and the behaviour of these objects and phenomena are not the result of direct human action.

^For Aristotle, the centre of the earth is also the centre of the universe.


With this modern view of natural objects in the back of our minds, in which physical objects are also included as natural objects, we return to our analysis of the similarities and differences between technical artefacts and natural objects. We will concentrate upon two categories of natural objects, namely physical and biological objects. First, we will compare technical artefacts with physical objects. The most striking difference between an aeroplane and an electron is that the first has a function and use plan whilst the second does not. Physical objects, such as an electron, have no function or use plan; there is no place for functions and use plans in the description of physical reality. This does not mean that electrons may not perform functions in technological equipment. It just means that from a physical point of view, such a function is irrelevant because that function has no consequences whatsoever for the properties and behaviour of an electron as a physical object. The function fulfilled by a plane, by contrast, is an essential property of that thing as a technical artefact; if we ignore the relevant function and the use plan, then we are merely left with a physical and thus natural object and not a technical artefact.

Perhaps, at first, it looks strange to say that an aeroplane is a natural object when one abstracts from its function and use plan, because even if you simply view it as an object without a function, the plane is something that is fabricated by humans and thus an artefact. However, it is a natural object in the sense that all its properties (the mass, form, et cetera) and its behaviour may be traced back to the laws of physics and to the physical (natural) properties of the atoms and molecules of which it is composed. Whenever we take a random material object and contemplate and study it as a physical object, then the history of that same object becomes irrelevant. It is, for instance, irrelevant whether the object was manufactured for a certain purpose or whether it has come about spontaneously. Neither does the technical function of that material object, indicating for what and how people may use it, alter any of its physical properties. Much the same can be said of physical phenomena. Many of the phenomena that physicists study nowadays do not occur 'in nature' but are instead produced by the physicists themselves in their laboratories. Nevertheless, they remain natural phenomena.


A second important difference between technical artefacts and physical objects, and one that is closely allied to the first, is that technical artefacts, unlike physical objects, lend themselves to normative claims. 'This is a good/bad aeroplane' or 'This aeroplane should be in good working order' are sensible assertions, but the same cannot be said of assertions such as 'This is a good/bad electron' or 'This electron should work'. Normative assertions about technical artefacts indicate that they function to a greater or lesser degree or that they simply fail to fulfil their function. Normative claims about technical artefacts must be carefully differentiated from normative claims pertaining to the way in which they are used. A technical artefact can be said to be incorrectly used in an instrumental sense, which amounts to the claim that the artefact is used in a way that does not correspond to its use plan (which typically means that the function is not realised). Alternatively, it can also be said to be wrongly used in a moral sense. In this second case, the normative claim is, however, not one about whether or not the use plan is followed; it rather is a claim about the goals that are realised with the technical artefact and the moral status of those goals.
Let us now consider biological objects. How can an aeroplane be said to differ from a bird 'in the wild'? Here, too, the answer seems obvious: a plane has a function, a bird does not. Yet the differences are more subtle and complex than in the case of physical objects. As a rule, it is true to say that biologists do indeed not attribute functions to plants and animals. What they do is to attribute functions to parts of plants, to organs of animals or to specific behavioural patterns. This is where we encounter natural objects and phenomena with a biological function. In a number of aspects, though, biological functions are clearly different from technical functions. In the first place, biological functions are usually ascribed to the component parts and behavioural patterns of biological organisms, not to the organism itself.^ Technical functions are ascribed to the parts of technical artefacts but also to the technical artefacts in question as a whole. In the second place, the biological functions of organs, for instance, are not related to use plans as in the case of technical functions: a bird does not use its wings or have a use plan for its wings. A third point of difference (linking up to the first point) lies in the fact that it is impossible to make normative assertions about organisms as a whole. The wing of a bird may be said to be malfunctioning but not the bird itself, as a biological organism. To conclude, insofar as functions do arise in nature, these functions would appear to be different from technical functions.
Generally speaking, neither physical objects nor biological organisms have functions, which is why in the case of these kinds of objects, unlike in the case of technical artefacts, it does not make sense to pose the question 'What is it for?' The differences between technical artefacts and biological organisms diminish when, for practical purposes, specific varieties or organisms are cultivated or engineered by means of genetic manipulation (like, for instance, the Harvard OncoMouse of the 1980s that was developed for cancer research or, in more recent times, transgenic mice such as the 'super mouse' that has four times as much muscle mass as a normal mouse, thanks to a couple of alterations in its DNA). Just like technical artefacts, such organisms do have a technical function. They, therefore, more closely resemble technical artefacts and, consequently, just as with technical artefacts, endeavours are made to obtain patents for such organisms (natural organisms cannot be patented). In turn, such endeavours give rise to much controversy, which partly arises from the fact that, in cases of this sort, it is not so clear whether we are dealing with technical artefacts or simply with biological objects that have been modified by humans.
The foregoing brings us to a general remark about the distinction between technical artefacts and natural objects. There is no sharp 'natural' dividing line between both kinds of objects. Certain natural shells can, without in any way being changed by people, be used as drinking cups. However, the mere fact that a shell is used in such a way does not mean to say that it is a drinking cup or a technical artefact. It is not a technical artefact because it is not a human-made material object. Undoubtedly, though, one could call it a technical object. A tree can be deliberately planted in a certain place so that when it reaches maturity, it serves to shade you from the sun. That does not make the tree a parasol, but it gives it a technical aspect. Similarly, it is difficult to say how much a stone has to be altered by humans for it to be turned into a hand axe, or a plant or animal for it to be termed a technical artefact. There is no unequivocal answer to that question. The dividing line between the natural and artificial worlds is a sliding scale; there is no clear-cut division between the two. Yet that does not mean that there is no clear difference between paradigmatic examples of natural objects and technical artefacts. As we have seen, those differences do exist and, to sum up, those differences relate especially to the status of having a function and a use plan, and to the accompanying possibility of making normative assertions.

^Ecologists attribute functions to plants and animals but only as components of ecosystems, and ecologists, again, do not attribute functions to the ecosystems as a whole.

1.3 TECHNICAL ARTEFACTS AND SOCIAL OBJECTS

The social world we inhabit is also, to a high degree, one of human making. Just as an aeroplane is the result of direct human intervention in the material world, so are a new traffic rule or a new legal body like a firm the results of deliberate interventions in the social world. As in the case of technical artefacts, such social 'objects' have a function. The primary function of a new traffic law is, for instance, that of regulating the rights and obligations of people participating in (public) traffic.

In this section, we shall compare technical artefacts with social objects. Such a comparison is not simple because of the wealth and variety of different kinds of social objects. A law, government, state, marriage, border, road user, driving licence, driving offence, traffic policeman, organisation, contract, and so on, are all part of social reality. Roughly speaking, all these things play a role in ruling the behaviour of humans, their mutual cooperation and the relationships between humans and social institutions. All of that is done by a fabric of formal and informal rules. Here the main focus will be on a certain kind of social objects, namely objects such as money, driving licences or passports. From a technological perspective, they are interesting because technology usually tends to play a prominent part in the producing of these objects. At first glance, one might thus be tempted to categorise them as technical artefacts. But, as we shall see, they are not 'real' technical artefacts but rather social objects. We shall take as our example the ten-euro banknote. Its social function is that it constitutes legal tender - everyone can pay with it.
In order to explicate the difference between a ten-euro banknote and an aeroplane, we shall conduct the following thought experiment. Imagine that you are seated in a well-functioning aeroplane, that is to say, if we use (or operate) the aeroplane well, then it will fly properly ('well' and 'properly' meaning: according to specifications). Then just imagine that you and your fellow passengers suddenly become convinced that the plane no longer works, that it is not fulfilling its function (without the good-functioning specifications/criteria having in any way changed). What will be the consequences of that for the functioning of the aeroplane? Will it instantly no longer operate? Will it suddenly break down? Such conclusions seem absurd. Whether or not aeroplanes function (meet the specifications) is not something that depends on what the users or any other parties might happen to think but rather on the plane's physical properties, since it is its physical structures that have a bearing on the aircraft's functioning.
In the case of the ten-euro note, matters are quite different. With that note, precisely the same fate could befall it as that which befell the Dutch ten-guilder bank note on 1st January 2002 (and the German ten-mark note, or the Italian 10.000 lira note, et cetera) when it lost its status as legal tender. Despite the fact that the physical properties of the note had remained unchanged, it simply lost its power to serve as legal tender from one day to the next (indeed, much the same may be said of a driving licence or passport that expires; without undergoing any physical change, it loses its function). Evidently, such means of payment do not depend, for the fulfilling of their function as legal tender, upon their physical properties. Naturally, such bank notes do, however, need to be provided with ultra-modern security systems if forgery is to be avoided, and that is why technology is so vital when it comes to the producing of such monetary units. Nevertheless, the physical features which, from a practical point of view, are necessary if forgery is to be prevented are not in themselves sufficient to realise the function of legal tender.
Unlike an aeroplane then, a ten-euro bank note does not fulfil its function on the basis of its physical properties. On the basis of what does it perform its function? In broad outline, the answer amounts to the following. If a ten-euro bank note is to fulfil its function as legal tender, it has to be generally accepted as legal tender. Whether the latter actually holds is all a matter of whether or not people see it as being legal and official. In other words, it is all down to their believing that it is legal tender. If they do not view it as such, then the ten-euro note is unable to fulfil its legal tender function and becomes relegated to the status of being merely a valueless piece of paper. It is therefore only on the grounds of collective acceptance that a ten-euro bank note is able to fulfil its function of legal tender. As soon as such collective acceptance disappears, it is no longer able to fulfil its function (as was seen to occur in the case of the Dutch ten-guilder note when the euro was adopted).
We may therefore conclude that there is an important difference in the way in which technical and social objects fulfil their functions. Technical artefacts fulfil their function by virtue of their physical properties, whilst social objects depend for their function upon their social/collective acceptance. In the case of social objects, such as money, the actual physical form taken is really immaterial, which is why money may take very many different forms (from salt to digital information, e.g., 'zeros' and 'ones' on a chip card). In the case of a social object, such as money, the question 'Of what does it consist?' does not really make sense if one bears in mind that it does not depend for its function on a given physical manifestation. The foregoing explains why, when it comes to designing social objects (laws, institutes, rules, et cetera), it is vital to possess knowledge of people's social behaviour (including matters such as what gives rise to social acceptance and how it can be promoted). By contrast, when it comes to designing new technical artefacts, knowledge of the physical phenomena is required.



Despite the fact that there is a crucial difference between the ways in which technical and social objects fulfil their functions, it is not always possible to unequivocally classify objects as being either technical or social. We shall further elucidate this point by taking the example of the problem of regulating vehicular traffic at a crossroads junction. That may be resolved in a purely 'social' fashion by endowing somebody with the function (status) of traffic policeman or woman and allowing them to direct the traffic (in the way frequently done in the early years of the car). Another social method is that of introducing traffic laws. Such methods only work properly if all or virtually every vehicle driver either recognises the authority of the traffic policeman or woman or abides by the traffic laws that have been laid down. In both cases, it is a form of collective acceptance that is required. The problem can also be resolved in a totally technological fashion (as in the way attempted through the developing of automatic vehicle guiding systems) by equipping cars and road junctions with information systems so that the traffic can be regulated without the intervention of any form of action taken by drivers or whomsoever. Collective acceptance is thus no longer required but rather technological systems that are completely reliable and possess the necessary physical properties.
Certain practical problems may therefore be tackled from either purely technological or purely
social angles, but a mixture of the two approaches is also conceivable. The way that traffic Hghts
operate is a case in point. When we describe the fimction o f traffic lights by stating that they serve
to regulate traffic at a crossroads, then i t is clear that traffic lights can only fiilfil that fimction i f
there is evidence of both the relevant technological and social systems fimctioning properly. The
technological (or material) system comprises such components as switch boxes and lamps mounted
on poles and positioned at junctions w i t h red, amber and green colour filters sections. The switch
boxes have to ensure that the lamps situated in the three different compartments go on and off in the
correct sequence. Even i f that technological system works perfectly well, one still cannot guarantee
that the traffic light is really fulfilling its function. For the whole system to work perfectly well, road
users also have to adhere to traffic regulations in which such matters as the significance of the red,
amber and green lights are legally fixed. The function of the traffic light system may therefore only be
said to have been fulfilled when both the technological and the social processes i n question operate
i n the correct way (and are properly attuned to each other).
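To illustrate how only half of the traffic-light system lends itself to a technological description, here is a minimal sketch (our own, not from the text; the sequence and durations are made-up values) of the switch-box logic that cycles the lamps in the correct order. The social half - road users collectively accepting what red, amber and green mean - lies outside anything this code can express:

    # Illustrative switch-box logic only; the legal meaning of the colours and the
    # drivers' adherence to it are social matters that no program can supply.
    SEQUENCE = ["green", "amber", "red"]
    DURATIONS = {"green": 30, "amber": 3, "red": 30}  # seconds, illustrative

    def run_switch_box(cycles: int = 1) -> None:
        """Cycle the lamps in the fixed order."""
        for _ in range(cycles):
            for colour in SEQUENCE:
                print(f"lamp on: {colour} for {DURATIONS[colour]} s")

    run_switch_box()

Even if this technological part works flawlessly, the traffic-regulation function is only fulfilled when road users also follow the corresponding traffic regulations.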
If one asks whether traffic lights, together with the rules concerning the behaviour of road users in response to the different colours, are essentially a technical artefact or a social object, there is no clear-cut answer to that question. It is not entirely a technical artefact, but by the same token it is not a straightforward social object either. It is a mixture of the two; traffic lights are an example of a sociotechnical system (see Chapter 5). Given that a solution to the regulation of traffic at crossroads may be found to a greater or lesser degree in the technological or social areas, it again seems that there is no sharp dividing line between technical and social objects. Here again a word of warning: even if there is a seamless transition between technical and social objects, the distinction has its significance. Even if a set of traffic lights cannot be unambiguously classified as either a technical or a social object, it is perfectly feasible to characterise different aspects of the whole as being either technological or social.



1.4 TECHNICAL FUNCTIONS

We have described technical artefacts as physical objects that have been designed and made by human beings and that have both a function and a use plan. Moreover, we have noted that the function bears some relationship to the physical structure of the technical artefact and to the use plan. Now we shall more closely examine the question of what a function is, and we shall do that by analysing just how engineers use the term 'function'.

In technological practice, there are two particular ways of describing technical artefacts that are important. Those ways are descriptions from a structural point of view and from a functional point of view. A structural description of a technical artefact simply describes it in terms of physical-chemical and geometrical properties. That is how a physical scientist, who knows nothing about the technical artefact's function, would describe the thing after having analysed it in detail (in answer to Question 2 in Section 1.1). A functional description looks, in contrast to a structural description, at what the technical artefact is intended for, without saying anything about the physical-chemical properties (in answer to Question 1 above). A typical example of a structural description would, for instance, be: 'Object x has such and such mass, form, colour, and so on'; a typical functional description would be: 'Object x is for y', in which y is a relevant activity. Both kinds of descriptions are indispensable in technological practice, especially when it comes to the matter of designing technical artefacts. Schematically speaking, the designing of a technical artefact commences with a functional description of the object to be designed and ends with a structural description of that same object (see Chapter 2). A complete structural description has to be given if the technical artefact is to be subsequently produced.

Both descriptive methods are essential if a technical artefact is to be fully described. A structural description only describes a technical artefact from the physical object viewpoint; it does not take into consideration the functional properties whilst, conversely, a functional description omits all the structural features. They may not therefore be termed rival descriptive modes; they are complementary because they supplement each other.
Regarding the question as to what a technical function is, two interpretations can be broadly distinguished. The first interpretation closely links functions to the objectives of human actions, to what a technical artefact is expected to do, to 'the goal served'. If one goes into more detail, functions are then described in terms of 'black boxes', for which only the input and the output is described (see Figure 1.1). The function of the technical artefact, of the black box, is thus to transform input into output. One looks, as it were, at the technical artefact merely from the 'outside' and describes what precisely it should do. The function of the technical artefact can only be said to have been realised at the moment the goal is achieved. This is typically a user's view of technical artefacts and one in which functions are closely allied to use plans and the related goals that users have in mind. It is a view of functions that actually plays a crucial role in the early phases of the design process when all the functional requirements of the artefact that is to be designed are established. This is a thoroughly normative characterisation of a function; if the technical artefact, represented as a black box, fails to transform the input into output, then it may be said to function badly or not at all.

Figure 1.1: The black box interpretation of function (a black box transforming input into output).

The black box representation is insufficient for the designing, making, maintaining or repairing of technical artefacts, all of which belong to the engineer's central tasks. To that end, the black box must be opened and viewed from the inside. In designing, the black box still needs to be given content. What then comes to the fore is the link between functions and the physical properties and capacities of the technical artefact as a whole, together with all its component parts. In these activities, we see that engineers often interpret a function as a desired physical property or capacity of the technical artefact. When functions are viewed from this internal perspective, the goals of users disappear from sight, and the emphasis comes to lie on the structural aspects of the technical artefact in question. As long as a technical artefact has a desired capacity, it fulfils its function regardless of whether the aims of human actions are realised (see also Section 2.1). Take note that although this second, more internal functional description is linked to the structural description of the artefact, they are not the same. This second functional description is a description of desired structural properties of the artefact, whereas a purely structural description focuses on structural properties that the artefact actually has.
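The contrast between the two interpretations of a function can be made concrete with a small sketch (our own, hedged illustration; the water-heater example and all numbers are made up and do not come from the text). The abstract class states the black-box view - only the input-output transformation is specified - while the concrete class adds a desired physical capacity through which that transformation is realised:

    from abc import ABC, abstractmethod

    class WaterHeater(ABC):
        """Black-box (user's) view: only what the artefact should do is specified."""
        @abstractmethod
        def heat(self, litres: float, start_temp_c: float) -> float:
            """Return the water temperature after heating; the 'how' is left open."""

    class ImmersionHeater(WaterHeater):
        """Internal (engineer's) view: a desired physical capacity realises the function."""
        def __init__(self, power_watts: float = 2000.0, minutes: float = 10.0):
            self.power_watts = power_watts  # desired structural property
            self.minutes = minutes

        def heat(self, litres: float, start_temp_c: float) -> float:
            # Delivered energy raises the temperature via water's specific heat
            # capacity of roughly 4186 J/(kg*K), with 1 litre taken as 1 kg.
            energy_j = self.power_watts * self.minutes * 60.0
            return start_temp_c + energy_j / (litres * 4186.0)

    print(ImmersionHeater().heat(10.0, 20.0))  # about 48.7 degrees C

Whether about 48.7 degrees counts as 'functioning well' is, as the text argues, not settled by the structural description alone; it depends on the use plan and the goals of the user.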
Just like the stractural and fimctional descriptive methods, both interpretations o f functions
are indispensable i n technological practice. A technical fimction is inextricably intertwined with the
physical quahties and capacities o f a technical artefact, tiiese quahties and capacities ensure that a
function is realised i n practice. A n d the function of a technical artefact and tlie artefact itself, cannot
be detached from the use plans and f r o m the goals of human actions. I t is only in relation to those
goals that-technical artefacts have functions and that normative claims, pertaining, for example, to
the good or bad fimctioning of technical artefacts, make any sense.
Since the functions of technical artefacts cannot be contemplated i n isolation o f t h e goals of
human action and i f one presumes that from the moral angle those goals can be evaluated as good
or bad, one may well wonder whether technical artefacts can, i n themselves, be seen as morally good
or bad. We are not tahdng here about technical artefacts being good or bad i n an instrumental sense,
that is, about technical artefacts reahsing their technical fimction; we are bringing up the question
of whether i t is meaningful to assert that they are good or bad i n a moral respect This is where we
corne up against the problem of the moral status of technical artefacts.


1.5 THE MORAL STATUS OF TECHNICAL ARTEFACTS

'Guns don't kill people, people kill people.' This slogan, once produced by the American National Rifle Association, is perhaps the most succinct way of summarising what is known as the neutrality thesis of technical artefacts.^ What this thesis asserts is that from a moral point of view a technical artefact is a neutral instrument that can only be put to good or bad use, that is to say, used for morally good or bad ends, when it falls into the hands of human beings. People can use weapons to defend themselves from robberies but also to mount armed robberies. The weapon in itself can never be qualified as either good or bad in the moral sense of the word. The way in which the neutrality thesis specifically comes to the fore is in the notion of the 'dual use' of technical artefacts. It is a term that is used to indicate that technologies that can be used for peaceful ends can often equally be deployed for military purposes. A case in point is radar systems, which can be used to follow aeroplanes. On the one hand, radar can be used to make civil aviation safer, but it can equally be used in times of war to track and shoot down enemy aircraft.
The neutrality thesis has direct consequences for our views about the moral responsibility of
engineers as designers and as makers of technical artefacts. The thesis asserts that technical artefacts
do not in themselves have any moral implications, thus implying that engineers are involved in
merely designing or making morally neutral instruments or means. As engineers can generally bring
no influence to bear upon the way in which the technical artefacts they design are actually put to use,
they cannot therefore be held morally responsible for the way in which their artefacts are ultimately
used.
There are various arguments that can be levelled against the neutrality thesis, a few of which
we shall briefly examine. In the first place, it may be remarked that in the design phase, engineers
already anticipate not only what form their product might take but also how it may be used. In other
words, they do not just design something that can be randomly put to good or bad use, but they
design rather for a specific use. The whole idea that during the design phase engineers anticipate
use is something that, for instance, emerges from the use plan notion. Whenever engineers design
any artefact, they simultaneously contemplate the goals of the kind of user they have in mind, who
will subsequently have to be able to fit that artefact into a certain use plan. Such use plans are
invariably not morally neutral. A macabre example of a morally reprehensible use plan is that of
the gas chambers of the Second World War in which a vast number of Jews were killed. In such
circumstances, it is hard to defend the argument that such purpose-designed gas chambers can be
construed as a neutral instrument that only became morally horrible in the hands of the Nazis.
In reply to that kind of argument, one might claim that whilst engineers are perhaps able to anticipate certain kinds of uses, they cannot in fact determine them. An artefact can always be put to a different use than that which was originally intended, be misused or simply not used at all. Though this may be true, this does not mean to say that a designed artefact is always suitable for any given form of use. A radar system that has been designed for civil aviation purposes has to meet other requirements and will possess other technical features than one designed for military ends. Not all techniques can be easily implemented for dual use. Certain techniques are often so designed that it is presumed, stipulated or tempting to believe that they will be used in a certain way. A good example of this is the traffic slowing-down devices in the form of speed bumps. Such speed bumps are there to 'encourage' traffic participants to slow down in an endeavour to raise traffic safety levels. In this case, a certain degree of moral behaviour may also be said to have been built into the design of the technology in question. Obviously, there is good reason why, in English, speed bumps are also sometimes alternatively termed 'sleeping policemen'. If such kinds of behavioural influencing can be built into the technology, we then speak in terms of scripts. In Chapter 3, we shall examine the moral aspects of scripts in a little more detail.
A second argument against the neutrality thesis lies in the fact that the phases of designing and using cannot always be clearly differentiated. This applies especially to situations in which sociotechnical systems, such as the civil aviation system, come into play. The functioning of sociotechnical systems is not only dependent upon the proper operating of technological systems but also upon properly functioning social systems (see Chapter 5). A company such as Airbus designs a component of this system, namely the aeroplanes, whilst the existing civil aviation system is fully in use, and this system is one of the elements that determine the constraints for the planes that are to be designed. Sometimes sociotechnical systems can be so all-embracing that the 'use' thereof becomes almost unavoidable, like, for instance, in the case of the electricity system. The built-up environment or a city, which may also be seen as a sociotechnical system, has become so commonly accepted in everyday life that it would be strange to speak in such cases of 'use'. Likewise, in the case of sociotechnical systems, the notion of design can sometimes also be problematic; all kinds of components of sociotechnical systems are designed, but often there is no single organisation that is responsible for designing the system as a whole. On top of everything else, the way in which use is effected can sometimes also change the system itself. In these kinds of cases, where the dividing line between use and design becomes hazier, it is difficult to sustain the thesis that technology is only invested with morality when it is put to use.
A third argument that can be levelled against the neutrality thesis resides in the fact that new technical artefacts invariably lead to new options for acting, in other words, to ways of acting that did not previously exist and which could give rise to ethical questions. For instance, the invention of the aeroplane made it possible to speedily transport goods to distant destinations and that, in turn, gave rise to new moral obligations when it came to the matter of dealing with disasters and famine in distant regions. From time to time, new options for acting that are opened up by technology can be very controversial, precisely because of the inherent moral considerations - like in the case of the cloning of animals and people - and can therefore constitute grounds for wishing to oppose certain technologies.
In such cases, supporters of the neutrality thesis could argue that it is up to the users to either avail themselves of the new options for acting or not. However, in certain situations, the mere existence of new options to act is, in itself, something that is morally relevant. A good example of this is prenatal testing of the sort that can indicate whether an embryo has some kind of defect. The detection of such defects can be reason enough for certain people to decide for an abortion, a decision that may be questioned morally. However, apart from any moral issues related to abortion, the mere possibility of carrying out a prenatal test will force parents to make a morally relevant decision, namely to either go for testing or not. In these kinds of cases, it is simply the creating of new options for acting that is morally loaded.

^For a defence of the neutrality thesis, see Pitt, J. [2000].


In our final argument against the neutrality thesis, we might point out that technical artefacts do not just fulfil functions but that they also bring with them a whole host of undesired side-effects and risks. The use of aeroplanes leads, for example, to noise hindrance, environmental pollution and sometimes even to accidents in which people are killed. Such kinds of side effects and risks clearly have moral significance. Adherents to the neutrality thesis could assert that also these side effects primarily depend upon the manner of use. That is not, however, always the case. The amount of noise hindrance created by an aeroplane does not, for instance, only depend on how it is used but also upon how it is designed. The occurrence of side effects also indicates that the designing of technical artefacts is not only about their efficacy and efficiency. One should also bear in mind that it is not just through being used for a certain purpose that technical artefacts influence the world but also through their side-effects. Such side-effects have to be accounted for in the design phase. The sorts of issues one may think of in this connection are safety, health, sustainability and privacy. These are all moral values that can already, as it were, be built into technical artefacts in the design phase. If such moral values are already inherent to technical artefacts, then one could level this as yet another argument against the neutrality thesis.
All in all, proponents of the neutrality thesis are very much inclined to detach the functions of technical artefacts from the specific aims and objectives of human dealings and to conceive of technical artefacts as objects with particular physical properties or capacities. In other words, they conceive of technical artefacts as physical objects. Though it is true to say that these physical objects are designed and made by people with a view to their particular physical properties, it is also true to assert that, just like the physical properties of a natural pebble or electron, these particular physical properties cannot be evaluated as good or bad in a moral sense. In other words, technical artefacts in themselves cannot be seen as either good or bad. Given such a view of technical artefacts, the neutrality thesis may be said to be applicable to them. However, if we think of technical artefacts as physical objects that have been designed and made by human beings and that have both a function and a use plan, as we propose, then the neutrality thesis can no longer be said to hold. The function and the use plan link technical artefacts inextricably to human goals, and since such goals have moral significance, the same has to be said of the technical artefacts to which they are related.

1.6 CONCLUSION: THE DUAL NATURE OF TECHNICAL ARTEFACTS

Our endeavours to conceptually analyse the notion 'technical artefact' have resulted in the following three key notions: 'physical object', 'function' and 'use plan'. The characterisation of an object as a technical artefact has to refer to a physical object, a function and a use plan (symbolised in Figure 1.2 by the arrows with continuous lines). We have furthermore established that the function of a technical artefact is, on the one hand, related to the physical object and, on the other hand, to the use plan (symbolised by the arrows with dotted lines). We have also concluded that technical artefacts are not morally neutral because their functions and use plans pertain to the objectives of human actions, and those actions are always morally relevant.

Figure 1.2: A conceptual anatomy of the notion of technical artefact.
Our conceptual anatomy of the notion of technical artefact leads us to the conclusion that technical artefacts must be a very special kind of objects. We have already established that technical artefacts are different from physical (natural) objects and social objects. According to the interpretation given above, technical artefacts are hybrid objects that incorporate characteristics of both physical and social objects. An aeroplane is, on the one hand, a physical object with all kinds of physical features and capacities required for fulfilling its function. On the other hand, though, the function of a plane may not be termed a purely physical feature because it also pertains to a use plan or, in more general terms, to a context of human action. In that context of human action, goals have a crucial role to play, and it is only in relation to those goals that physical objects can be said to have functions. Just like the functions of social objects, the functions of technical artefacts are related to the purposeful (intentional) actions of people, but they cannot be termed social objects because the realisation of technical functions is something that comes about in a completely different way. To conclude, it may be asserted that technical artefacts have a dual nature: they are objects that belong both in the world of physical (i.e., natural) objects and in the world of social objects.

1.7 A FEW MORE ISSUES

We have compared and contrasted technical artefacts with natural objects and social objects. Other comparisons are also possible, so we would ask you to consider the following questions. What is the difference between technical artefacts and waste, such as the sawdust that is created when wood is sawn or the carbon dioxide produced from burning? What is the difference between technical artefacts and art forms such as sculpting? What is the difference between technical artefacts and chemical substances? Is there, for instance, a difference between artificial vitamins that are chemically produced and the natural vitamins that are obtained from fruit and plants? With the aid of the terms 'physical object', 'function' and 'use plan', you can clarify your answers to these particular questions.

A question that is more difficult to answer is the question of whether the objects that animals produce can be described as technical artefacts. You may perhaps be inclined to maintain that certain objects are technical artefacts, like, for instance, the dams created by beavers or the twigs fashioned by apes to get ants out of anthills. Simultaneously, other products, such as cobwebs, might seem less likely candidates. Where will you draw the line? Is it possible to draw a vague line? Does it make sense to assert that animals let objects fulfil functions by implementing use plans for those objects?

CHAPTER 4

Technological Knowledge
In this chapter, we return to engineering practice to discover what forms of knowledge are relevant in that area. The idea underlying much philosophical work on this subject is that technology is nothing other than applied science. We shall demonstrate that this idea is wrong: engineers do more than simply use scientific or applied scientific knowledge; they develop their own forms of knowledge. We will give a number of examples of such specific technological knowledge and present a number of relevant specific features and characteristics.

4.1 ENGINEERS AND KNOWLEDGE

In the previous chapters, we stressed how important knowledge is for engineers. In Chapter 2, we described what kind of knowledge is needed to be able to design a suitable artefact: one has to know what the user wants, what is feasible, how artefacts are generally used, what other artefacts are used, how materials behave, et cetera. Lack of such knowledge is one of the many reasons why a design can be doomed to failure.
Engineers therefore need to have knowledge and to know how to apply it, but it would seem that accumulating new knowledge is not their main objective. Sometimes, this has led to the conclusion that engineers do nothing other than find practical applications for knowledge amassed by others. This is the image of technology as applied science: scientists gather knowledge and create theories, and engineers apply that knowledge in order to design artefacts. The result of all that effort is often practically useful or even indispensable, but it does not lead to new knowledge about the world. We label this the 'applied-science-view'.

A particularly arresting description of that view is encapsulated in the slogan for the 'Century of Progress' exhibition staged in Chicago in 1933: 'Science Finds - Industry Applies - Society Conforms'. This conveys the image of a perpetual flow of knowledge trickling down from the realms of science to industry, where the engineers create products and society adapts its behaviour to those artefacts. The slogan is essentially an ideal that is presented as a description: industry need not consider the demands and behaviour of users, just as scientists must be free to develop knowledge without burdening themselves with potential applications. Engineers are supposed to be consumers, not producers of knowledge, just as all other members of society are supposed to be consumers, not producers of new devices.
The applied-science-view cannot be separated from the way in which engineers perceived themselves. From the dawn of the Industrial Revolution until far into the last century, traditional engineering disciplines such as civil and mechanical engineering shifted increasingly from combinations of practical and traditional craftsmanship to the scientific end of the spectrum. Even the relevant education was, in the process, reformed. Instead of providing practical, profession-oriented training, engineering curricula were organised in such a way that students were taught, above all else, just how to apply the theories gained from applied scientific research. The underlying notion was that academically trained engineers were not craftsmen but scientists, and that they should thus be formed as much as possible in the mould of such academics.

^Kroes and Meijers [2006].
After about 1960, a clear reaction to this school of thought could be detected. Increasingly fewer people, engineers and others, believed that engineering disciplines and practice could be or even needed to be reshaped according to some kind of scientific model. Herbert Simon influentially pointed out that engineering disciplines are 'sciences of the artificial', just like, for instance, computer science, and that they differ inherently from the natural sciences.^21 According to Simon, the core competence of natural scientists is their ability to understand, describe and explain reality whereas scientists of the artificial are in the business of changing the world for practical purposes. An even stronger reaction to the 'scientification' of engineering was voiced by Donald Schön.^22 Schön argued that the emphasis on formalisation and explicit procedures - also by Simon - creates a false impression of the work of many professionals, including engineers. He stressed the importance of personal experience and training, especially in the design process, because crucial skills could only in that way be learned. To a large extent, Schön therefore wanted engineering to go back to its craftsmanship roots.
After 1970, the applied-science-view had been abandoned by most authors who reflected on the nature and methods of engineering. One important counterargument to the view was that there had been all kinds of technological developments which only much later - if at all - could be scientifically substantiated. The first generations of steam engines were, for instance, built without scientists even knowing how to describe how those particular artefacts worked. Similarly, transistors and, more recently, high-temperature superconductors were created before scientists were able to precisely comprehend what physical processes were responsible for the successful operation of such innovations. In these cases, revolutionary industrial applications came long before scientific understanding.

For this reason, it is impossible to defend the claim that engineers always apply what was first theoretically mapped out by scientists. That does not mean that science is not a good or even optimal basis for technological developments. Very recent developments, for example in the fields of nano- and biotechnology, cannot be understood in isolation from fundamental scientific knowledge; and the curricula of most engineering courses, especially at technological universities, are to a large extent made up of core scientific subjects such as mechanics and thermodynamics.
Moreover, it may be an exaggeration to say that engineers only apply scientific knowledge, but
that does not imply that engineers actually develop knowledge themselves. There are, nevertheless,
enough reasons to presume that there is such a thing as 'technological knowledge', that this form of
²¹Simon, H. [1967].
²²Schön, D. [1983].



In the next section, we will provide two examples of technological knowledge that has come to play a part in different engineering disciplines. In Section 4.3, we shall then go on to describe a number of important features of technological knowledge.

4.2 WHAT ENGINEERS KNOW: TWO EXAMPLES

One way in which the applied-science view can be successfully undermined is by showing in detail how artefacts are designed and technically developed, and by explaining which kinds of knowledge are required in the process. Some classical case studies of this type have been conducted by Walter Vincenti. His book What Engineers Know and How They Know It has formed the basis for many studies, philosophical and otherwise, of technological knowledge since 1990.

Vincenti describes various types of engineering knowledge, especially knowledge which is used and developed during the design process. In Section 2.2, we already referred to this in our characterisation of technical designing. Two particular examples described by Vincenti show how broad and varied the knowledge of engineers is: in both examples, a different type of knowledge is used and developed - the first fairly theoretical, the second more practical - and both types are different from scientific knowledge as well.

The first example is a calculation method, 'control-volume analysis', that is widely used by engineering scientists and designers. In this method, any spatially well-defined volume may be chosen through which fluid or heat flows, or work is transmitted. Applying the laws of physics to this volume and its surface then yields integral equations for both the internal changes in the volume and the transport of liquids, heat or work across its boundaries (see Figure 4.1).
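
To make this concrete, here is a standard textbook rendering of such balances (our illustration, not a formula quoted from Vincenti): for an arbitrarily chosen control volume $CV$ bounded by a control surface $CS$, conservation of mass and energy take the integral forms

$$\frac{d}{dt}\int_{CV}\rho\,dV+\oint_{CS}\rho\,(\mathbf{v}\cdot\mathbf{n})\,dA=0,$$

$$\frac{dE_{CV}}{dt}=\dot{Q}-\dot{W}+\sum_{\mathrm{in}}\dot{m}\Big(h+\tfrac{1}{2}v^{2}+gz\Big)-\sum_{\mathrm{out}}\dot{m}\Big(h+\tfrac{1}{2}v^{2}+gz\Big),$$

where $\rho$ is the density, $\mathbf{v}$ the flow velocity, $\dot{Q}$ and $\dot{W}$ the rates of heat and work exchange, $\dot{m}$ the mass flow rate through an inlet or outlet, $h$ the specific enthalpy, $v$ the flow speed and $gz$ the gravitational term. Everything that happens inside the volume is summarised by the storage and boundary terms, which is precisely why the engineer's choice of where to draw the boundary matters so much.
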
Since the equations are derived more or less directly from the laws of physics, control-volume analysis seems just a useful instrument for engineers who wish to keep their physical bookkeeping in good order - the method generates no new knowledge about the physical world.

Ultimately, though, this method involves much more than handy bookkeeping. The choice of the control volume is of crucial importance for engineering purposes, although it is completely arbitrary from the perspective of physics. Vincenti, W. [1990, pp. 113, 115] quotes from the Reynolds and Perkins textbook on thermodynamics, which advises engineers to put the boundaries of the volume 'either where you know something or where you want to know something'. In other words, the method produces knowledge: not knowledge that lies essentially beyond the bounds of physics, but certainly knowledge that is of direct relevance to engineers and irrelevant to physicists. The importance of that knowledge is revealed in actual applications of the method, in which the volumes usually follow the contours of various artefacts or components, such as steam engines, propellers, turbines and tubes. By choosing the control volumes in this way, engineers are able to keep technically relevant records of, for instance, fluid flows in and out of a tube system, without having to worry about all kinds of internal physical factors like turbulence. Control-volume analysis therefore operates like a filter for useful, technological knowledge.

Figure 4.1: The control-volume analysis of a propeller [McCormick, B., 1979, p. 343].

The method is furthermore unique to engineering science. It is presented, together with all kinds of applications, in virtually all courses in and textbooks on engineering thermodynamics. But whereas students in applied physics or chemical technology are supposed to master the technique, it is unknown to most students in physics or chemistry. Vincenti gives the example of Zemansky's classical Thermodynamics: the only edition of that work that contains a description of control-volume analysis was especially written for engineers and co-authored by an engineer; there is no trace of the technique in other editions.

A second form of knowledge that Vincenti describes at length reveals the importance of technical specifications in the design process. How, as a designer, do you arrive at these specifications? If you want to design a car that performs just as well as a current model but consumes 10% less fuel, the description of the specifications is relatively simple - even though they will undoubtedly present you with some difficult trade-offs later in the design process. But not every design process can be described so exactly; sometimes, it is a major problem to draw up the specifications in the first place.

Vincenti illustrates this with an example from aviation. In the early days of aeronautics, the primary concerns were direct performance features such as lift, speed and maximum altitude. After the 1920s, however, the attention of designers shifted to other, more qualitative aspects, such as stability and manoeuvrability. It was no easy feat to formulate specifications for those characteristics. Vincenti describes, in detail, how engineers searched for suitable variables. The task was to convert subjective perceptions of pilots, notably that of having control over the plane, into concrete specifications. Engineers did that in cooperation with test pilots: by ensuring that certain variables could be influenced by the pilot, the engineers could check whether the feeling of being in control had increased. Around 1940, a large number of relevant quantities had been defined on the basis of that knowledge. One of these, 'stick-force per g', was to become the most important specification. Vincenti adds that nowadays aeroengineers find it hard to imagine that other specifications were ever used, and that it did, in fact, take years to recognise the importance of the particular quantity they use. That proves just how important it is to have comprehensive and precise specifications: once they have been established, they are as 'natural' and 'self-evident' for engineers as Maxwell's laws are for physicists. In addition, the example demonstrates that identifying a new specification amounts to developing new knowledge. Such knowledge cannot simply be deduced from physical knowledge about the designed artefact, and it is indispensable to successful designing. Even if aeroengineers in the early 1930s had described their plane design in all possible physical quantities, they would not have realised that 'stick-force per g' was the most relevant factor for designing a manoeuvrable aeroplane. One might even go as far as to posit that 'stick-force per g' is not a physical quantity but rather a technical one. If one views an aeroplane as a physical object, like a stone, then that factor has no special physical significance whatsoever. It only becomes of interest when one regards an aeroplane as an artefact that serves a certain goal and that people want to use in a certain way.
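
As a gloss in modern notation (ours, not Vincenti's): 'stick-force per g' is the gradient of the stick force $F_s$ that the pilot must apply with respect to the normal load factor $n = L/W$ (lift divided by weight),

$$\frac{dF_s}{dn},$$

that is, how much additional stick force is needed to pull one extra g in a manoeuvre. That a single derivative of this kind can capture much of what pilots vaguely describe as 'having control over the plane' is exactly the kind of hard-won specification knowledge Vincenti has in mind.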

4.3 FORMS AND FEATURES OF TECHNOLOGICAL KNOWLEDGE

Compared to scientific knowledge, technological knowledge has not received a lot of attention from philosophers. Nonetheless, it has a number of distinctive and interesting features. We shall discuss just four of them here. In so doing, we shall systematically examine one or more specific forms of technological knowledge that show these features. In this way, we demonstrate not just the uniqueness but also the diversity and scope of technological knowledge.

a. Technological knowledge is artefact-oriented.

Knowledge is often specified on the basis of the subject or, more precisely, the domain to which it pertains. Many descriptions of scientific areas are similarly domain-oriented. Physics, for instance, is described as (the system of) knowledge of matter and energy, and economics as knowledge of the production, distribution and consumption of goods and services.

An obvious way to characterise technological knowledge would therefore be to specify its domain. Technological knowledge could, for instance, be said to be all about artefacts, technical or otherwise, or about objects designed by engineers for practical purposes (see Chapters 1 and 2). Some technological knowledge might be confined to one aspect of an artefact whilst other knowledge will relate to a whole collection of artefacts. Control-volume analysis can essentially be applied to any technical system that contains fluids. Knowledge about the navigability of aeroplanes was, initially, only related to the one model type on which the tests were carried out. Later, especially through defining a relevant quantity, it was extended to other types of aircraft.

Two kinds of technological knowledge that clearly concern artefacts are knowledge of operational principles and knowledge of the standard or normal configurations of artefacts. Knowledge of an operational principle is, simply put, knowledge of the way in which a particular thing works, that is to say, how it fulfils its function. One operational principle of an aeroplane may, for instance, be described as creating upward force or 'lift' by moving through air. A good knowledge of operational principles is indispensable to engineering practice and is always oriented towards a certain type of artefact. Much the same goes for knowledge of normal configurations, that is to say, the organisation of components in a given artefact with a certain operational principle. Anyone who sets about designing an aeroplane will base his or her ideas on existing designs by thinking, for instance, in terms of a structure that has two engines attached to the wings. This is not to say that an engineering designer cannot in any way deviate from existing patterns, but he will have a repertoire of successful configurations. In combination, operational principles and normal configurations dictate the normal design context, the way in which a certain problem is typically resolved. Such solutions have often been able to prove their merits in practice and, therefore, also reflect designers' and users' experience.

b. Technological knowledge is directed at usefulness, not at truth.

You cannot specify technological knowledge just in terms of its domain, although that may certainly be helpful. Most artefacts are also physical objects, so part of our knowledge about them is not exclusively technological but also scientific. Engineers are not interested in all this scientific knowledge. This, for instance, becomes clear in their use of control-volume analysis: they ignore practically irrelevant phenomena such as turbulence in a water pipe system. They furthermore introduce knowledge that is not part of or derivable from any natural science, as emerged from the discussion on technological specifications.

The selection of relevant knowledge is closely linked to the aim of acquiring technological knowledge. Technological knowledge is not, like scientific knowledge, directed at finding the truth or increasing understanding. It is more directed at usefulness. In Vincenti's words: '...the objective of the mission of the engineer [is] to design and produce useful artefacts. By contrast, the scientist seeks to gain a knowledge of the workings of nature' [Vincenti, W., 1990, p. 131].

Underlying this is the idea that engineers are never concerned with knowledge-for-knowledge's-sake. Many engineers and philosophers who contemplate the nature of the work of engineers subscribe to this idea, which seems more defensible than the idea that for engineers the developing of knowledge is immaterial (see Section 4.1).

Three complications arise for the idea that there is a fundamental distinction between "usefulness-first" technological knowledge and "truth-first" scientific knowledge.

The first is that it is customary at many technological universities to distinguish engineering research from engineering design. This distinction demonstrates that there are engineering disciplines in which the central concern is not to design artefacts, but rather to generate knowledge about artefacts through research. Perhaps all such knowledge ultimately finds some practical application, but there need not be a direct link with a particular useful artefact. To return to an earlier example, control-volume analysis is a general method that cannot be directly judged on the basis of, for example, its usefulness for designing propellers. Rather, one of the reasons why control-volume analysis is deemed so useful is that it is based on physical knowledge that we take to be true; one might say that it is, in effect, a tool for making true knowledge useful.

This holds for many more examples of useful knowledge, which are based on - but not identical with - true claims about the world. Knowledge can, of course, be useful without being true, as all sorts of unrealistic but convenient models demonstrate. But it is undoubtedly true that engineers strive after knowledge that is both useful and true, even though they might find usefulness more significant if they cannot have both usefulness and truth.

A second complication is that usefulness also plays an important role in scientific knowledge. According to so-called 'instrumentalists' in the philosophy of science, scientific theories are not accepted because they might be true but rather because they are useful for predicting and possibly describing and explaining the results of measurements. Even those who do not want to go this far cannot deny that nowadays possible applications play a major part in assessing scientific knowledge, even of the most fundamental kind. When writing proposals for research grants, scientists are currently required to indicate the scientific and societal significance of the proposed research project. Allegedly, only some 25 years ago, mathematicians could fill in 'not applicable', but in this day and age such disregard for possible applications would result in rejection of the proposal.

If you are still convinced that there is a fundamental divide between scientific and technological knowledge, then here is a third reason to doubt that idea. Science and technology have, over the last two centuries, become ever more closely intertwined. One of those intimate links is seen in the area of experimentation. In the year 1820, Oersted was still able to produce pioneering results with a compass needle and a battery. Perhaps low-tech groundbreaking research is still possible, but the vast majority of present-day experiments are multi-million feats of engineering with many of the complications that also plague, for instance, large-scale infrastructural projects (think, for instance, of the Large Hadron Collider, the CERN particle accelerator in Geneva). Another place where science and technology are hard to tell apart is in the storing, processing and distribution of information, and in the communication and cooperation between academics. Here, an ever-increasing use is made of ICT, which has sometimes been especially developed or adapted for these particular purposes. Beyond this, an increasing number of technologies are being used to resolve scientific problems or at least facilitate the search for solutions. Some examples are simulation techniques that are used to test hypotheses and models, and approximation methods that can only provide sufficiently accurate results if they are processed on powerful computers. Institutionally, too, science and technology are, to an increasing degree, becoming intertwined. A case in point is the practice of applying for patents for scientific outcomes.

In philosophy, history and sociology of technology, many choose not to write of 'science' and 'technology' anymore but instead of 'technoscience'.²³ The term refers to any kind of scientific research that is conducted in a technological context and that cannot be understood in isolation from that context, and to any technological achievement that cannot be separated from scientific research. Work on technoscience has, for instance, charted its history and recent development into large-scale technoscience projects such as particle accelerators; others have examined how advances in science after World War II were made possible by financing from military institutions and how they, in turn, contributed to advances in military technology.
²³Influential descriptions of 'technoscience' are to be found in the work of the philosopher of technology Ihde, D. [1979, 1991] and in the writings of the sociologist Latour, B. [1987].


There is no fundamental distinction between scientific and technological knowledge in this research; in fact, trying to make such a distinction is regarded as counter-productive.

All the same, the idea that truth and usefulness play different roles in technological and scientific knowledge is very credible. Still, the above-mentioned problems show how hard it is to formulate the idea as a one-liner. Technological knowledge might be less concerned with truth, which is why engineers are prepared to settle for knowledge that 'works' whereas scientists are supposed to pursue knowledge that is 'true'. In addition to this, technological knowledge has a more direct link with practical concerns than scientific knowledge. If scientific knowledge is connected with a certain artefact or has some direct practical implementation, one might describe it as technological knowledge 'in disguise'. That it was developed by a scientist does not make it scientific knowledge, just as problems in economics do not become physics when physicists attempt to solve them. Control-volume analysis shows that the connection with a specific artefact or a specific practical goal can be loose. The best conclusion might be that there is a sliding scale with nuanced differences, depending on the domain and goal.

An inventory of the theories and models that engineers use provides evidence for this sliding-scale view. A certain portion of those theories, such as classical mechanics and thermodynamics, is purely scientific. Other theories, such as control-volume analysis, are derived from scientific knowledge but simultaneously reveal interests in practical applications that are the province of engineers. These interests are, for instance, notable in certain details that tend to be ignored. Still further away from the scientific end of the spectrum, one can find phenomenological theories. Such theories are based on presumptions about a given system that are indispensable for calculation purposes but which are clearly incorrect. Engineers (and scientists) turn to phenomenological theories especially when no 'correct' theories are available. That is, for instance, the case in situations where turbulence plays a prominent role. Finally, some technological knowledge is systematic but can no longer be described as theory. In propeller design, it is impossible to rely on information drawn from theoretical models. Still, designers need to have information about the behaviour of propellers in order to make design choices. Such data is obtained from extensive tests done with real propellers and with scale models, in which use is made of parameter variation and dimension analysis in order to 'translate' the test data to real data on the actual performance of propellers. Here, more prominently than with phenomenological theories, practice dictates how knowledge is accumulated. A scientist would probably not wish to stick his or her neck out by making claims about how a propeller might behave under complex circumstances, but an engineer does not have a choice. If a scientific theory were to produce practically applicable data, no engineer would, of course, refuse to use such a theory. As it is, as long as no such theory is available, design choices have to be based on other sources. It would be sheer scientific chauvinism not to describe such alternative sources as 'knowledge'.
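
To indicate what such 'translation' involves, consider a standard illustration (ours, not Vincenti's own worked example): propeller test data are usually expressed in dimensionless groups such as the advance ratio and the thrust and power coefficients,

$$J=\frac{V}{nD},\qquad C_T=\frac{T}{\rho\,n^{2}D^{4}},\qquad C_P=\frac{P}{\rho\,n^{3}D^{5}},$$

where $V$ is the flight speed, $n$ the rotational speed in revolutions per second, $D$ the propeller diameter, $\rho$ the air density, $T$ the thrust and $P$ the shaft power. A geometrically similar scale model tested at the same advance ratio $J$ yields, to a good approximation (setting aside Reynolds-number and compressibility effects), the same $C_T$ and $C_P$ as the full-scale propeller, and that is what allows data gathered on models to be carried over to the behaviour of the real artefact.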


c. Technological knowledge has a 'know-how' facet.


Philosophers, following Ryle, G. [1949], often distinguish two types of knowledge. One is the kind of knowledge that can be expressed entirely in terms of assertions; the other is not or, more to the point, cannot be expressed in such a way. This differentiation is formulated in at least two different ways. In many languages, the word 'know' is used to express two different kinds of things. First, there is 'know that', as in 'I know that a plane flies because its wings create lift', and 'know how', as in 'I know how to construct an operational plane'. This linguistic distinction is often extended by proposing that 'know how' assertions cannot be entirely traced back to 'know that' assertions. The standard example of this is cycling. Perhaps you are reasonably good at describing, in language, what is required to ride a bicycle, but could you teach someone to do it via a correspondence course? Probably not. Cycling is typically something that is learnt by doing. This suggests that 'know-how' or 'practical knowledge' cannot be completely reformulated in the theoretical or propositional knowledge²⁴ that fills textbooks.

A close cousin of know-how, perhaps even the same thing by a different name, is implicit or tacit knowledge.²⁵ This knowledge is also gained from personal experience and cannot be verbally conveyed to others. Imagine, for a moment, that you are an experienced pilot who has flown all types of aeroplanes. You can partly make those experiences explicit, also in the form of assertions, but communicating them precisely to others is difficult and will always remain sketchy and fragmentary. Teaching someone else how to fly will only work by personally supervising him or her - but perhaps even that is not enough, as you can only become an experienced pilot by flying a lot yourself. This partly explains why it was so hard to draw up technological specifications for the navigability of a plane: pilots found it hard to verbalise their experience in this field.

The ideas on 'know-how' and implicit knowledge have drawn a great deal of attention, also in the literature on knowledge management. If you wish to transfer the knowledge that is available within an organisation or company from one member to another, for instance because of staff changes or because information has to be shared with other branches, then you would do well to know how such knowledge transfer works and how limited written instructions might be as a means of communicating all the relevant knowledge. The role of tacit or implicit knowledge could clarify these limitations.

Psychologists, management scientists, economists and philosophers therefore enter into extensive debates on the part played by 'know-how' and tacit knowledge in all kinds of processes. High expectations have also been raised for clarifying the 'tacit dimension' of technological knowledge. Some authors, such as Schön, have emphasised that designers perpetually make use of highly personalised and difficult to transmit forms of knowledge and experience (see also Section 2.3). Besides, just as the example of cycling makes explicit, the use (and design) of artefacts contains an element of 'know-how', which people rightly accept cannot be completely put into words.

²⁴A proposition amounts to the content of an assertion, for instance that 'the earth rotates around the sun'. 'Know-how' cannot be termed propositional knowledge if it cannot be completely expressed in terms of assertions.
²⁵This idea was elaborated by the chemist Polanyi, M. [1966], who also stressed the importance of operational principles.


There are at least two forms of technological knowledge that have a large implicit component. The first is the type of knowledge that literally cannot be expressed in words because it is captured in images. Means of visualisation, from sketches to CAD simulations, are vital to engineers and cannot be replaced by text. It is unthinkable that a new aircraft might be designed without the aid of visualisations in every step of the process. Another type of knowledge that is highly implicit is common sense, or the capacity to make practical judgements. Since engineers solve practical problems, they are involved in complex, changing circumstances, they work under time pressure and they face uncertainties. Artefacts cannot be tested under all conceivable circumstances, and sometimes there is no theory to adequately describe, let alone predict, their behaviour. Implicit knowledge, gained from practice in the field, can help engineers - and is often essential - when it comes to deciding what risks to take or uncertainties to accept instead of carrying out further tests or developing more accurate models.

This is not only true for engineering: 'know-how' and practical judgement play a major role in virtually all terrains, also in the natural sciences. Just like engineers, scientists gain all kinds of practical experience during their education and when conducting research. That experience is put to good use when they set up experiments (which can be seen as a technical aspect of scientific research) but also when it comes to solving mathematical problems and constructing models and theories. Research conducted into scientific problem-solving strategies has, for example, shown that scientists very often visualise their problems and draw on analogies with more familiar problems.
d. Technological rules and use plans.

The final feature builds on the previous two and adds a further crucial element. Technological knowledge must be primarily useful (Section 4.3 b), and it is in part practical 'know-how' and knowledge gained from experience (Section 4.3 c). So perhaps it can be partially described in terms of the way in which it contributes to successful actions. Just think, for example, about who would count as a technological expert, as someone who possesses extensive technological knowledge. Those who possess scientific knowledge are predominantly highly educated scientists who pass on bits and pieces of their knowledge (often without the necessary substantiation) to scientifically uneducated 'laypersons'. With technological knowledge, the roles are different. Designers and makers of artefacts need to possess this knowledge, but it is also indispensable for users. Moreover, the roles of designer, maker and user are defined in terms of actions and not primarily in terms of whether or not they possess knowledge or a particular method of acquiring knowledge - unlike the role of scientific expert. This suggests that actions might also constitute the content of technological knowledge: such knowledge is all about the means (actions) required to achieve certain ends.

One way of fleshing out that idea is by asserting that the engineering sciences produce technological rules, that is to say, instructions to carry out a certain series of actions in a certain order with an eye to achieving a certain goal. Aviation specialists produce, for example, instructions on how to fly and maintain a plane by carrying out certain actions.

People perpetually give each other instructions. For instance, you can explain to someone else how they can get home by means of the public transport system or how to get rid of a mole in the garden.


What might be special about the rules that engineers produce? Perhaps it would be useful to bring back the applied-science view and defend that technological rules, unlike common-or-garden tips or the contents of the railway timetable book, are indeed based on applied scientific knowledge.

The idea is not very viable, though, partly because many technological rules lack such a scientific basis. The designers of the first steam engines could not depend on scientifically grounded rules; in designs where turbulence has to be accounted for, that is still impossible. Furthermore, engineers use all kinds of rules-of-thumb whereby interim results are construed and checked. Sometimes such rules are later substantiated or replaced by substantiated rules, but definitely not always.

Another way to distinguish technical rules from more everyday instructions is offered by the use-plan analysis that we presented in previous chapters. The characterisation of technological rules given above is very similar to our earlier characterisation of use plans: goal-directed series of actions, which include the manipulation of a technical artefact. And as was previously outlined, designers do not just produce artefacts; they also develop use plans, which they communicate to potential users. Every artefact is embedded in such a plan. Knowledge of use plans is of crucial importance to users: without a plan you might know what an artefact is for but not how you must operate it to achieve that goal. A plan therefore has an instructive (prescriptive) character: it contains suggestions for actions that are, to put it mildly, strongly recommended to users.

By communicating use plans, designers therefore transfer practical, indispensable knowledge to others. This is not factual knowledge but knowledge about rules, about required or recommended actions. Designers do not need to ground such knowledge in scientific theories. Engineers can test prototypes and adapt them on the basis of parameter variation or trial-and-error; in that way, they might design a 'tried-and-true' combination of artefact and use plan that is spectacularly successful without being able to explain this success with scientific knowledge.
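
To make the structure of a use plan a little more tangible, here is a deliberately toy sketch (our illustration; the Python representation, the names and the kettle example are ours, not the authors'): a use plan can be represented as a goal plus an ordered series of actions, some of which manipulate the artefact, and it can be rendered for a user as prescriptive instructions rather than as factual claims about the artefact.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    description: str                     # what the user should do
    manipulates_artefact: bool = True    # does this step involve handling the artefact?

@dataclass
class UsePlan:
    artefact: str                        # the artefact the plan is embedded in
    goal: str                            # the end the user wants to achieve
    actions: List[Action] = field(default_factory=list)

    def instructions(self) -> str:
        # Render the plan as prescriptive instructions: it tells the user what
        # should be done, not what any actual user has in fact done.
        steps = "\n".join(
            f"{i}. {a.description}" for i, a in enumerate(self.actions, start=1)
        )
        return f"To {self.goal} with the {self.artefact}:\n{steps}"

# A grossly simplified example plan for an everyday artefact.
kettle_plan = UsePlan(
    artefact="electric kettle",
    goal="boil water",
    actions=[
        Action("fill the kettle with water"),
        Action("switch the kettle on"),
        Action("wait until it switches itself off", manipulates_artefact=False),
    ],
)
print(kettle_plan.instructions())

The only point of the sketch is that what gets communicated is a rule-like, ordered recipe for action embedding the artefact, not a body of factual claims about it.
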
Use plans show that, to a certain extent, technological knowledge must be procedural or prescriptive. As a designer, you have to do more than just describe an existing situation or a new artefact. You have to know how that artefact should work and what actions people should undertake to realise their goals with the help of that artefact. Instructions on how to land a passenger aircraft do not constitute descriptions of actual landings, though most landings will hopefully be in line with the instructions. Instructions dictate how something should happen and therefore constitute a standard for assessing the behaviour of artefacts and people. An engine that misfires does not operate optimally, just like a pilot who does not maintain a safe distance from other aircraft. Even rules-of-thumb are often described in prescriptive terms: if a design does not comply with rules-of-thumb, then it is usually concluded that a calculation mistake has been made or that a wrong decision has been taken in the design process.

By contrast, natural scientists produce no instructions or rules, and the standard examples of scientific knowledge are not prescriptive. Newton's laws of physics do not dictate how bodies should move; they describe what they actually do. Technological rules and use plans could thus well be a unique component of technological knowledge, which helps to distinguish it from scientific knowledge.


4.4 CONCLUSION

We have shown that engineers develop knowledge and that this technological knowledge has a number of specific features and forms. Technological knowledge is partly artefact-oriented, directed at usefulness, tacit and prescriptive. It contains elements that resemble scientific knowledge yet differ in subtle ways, such as control-volume analysis, and elements that cannot be found anywhere in the natural sciences, such as knowledge about operational principles and rules for use and design.

This demonstrates that the traditional 'applied-science' image is wrong: engineers cannot and do not simply apply scientific knowledge; they are knowledge producers as well as consumers. Yet it is difficult to make a fundamental distinction between scientific knowledge and technical knowledge. The similarities seem to be too great for that. Moreover, science and engineering are nowadays so interwoven that it would not be realistic or even possible to keep apart the knowledge produced in these activities. Still, it might be fruitful to analyse in more depth and detail the features of technological knowledge, such as its 'know-how' nature and the part played by rules and use plans: if all our knowledge had all those features, it would only become more necessary to improve our understanding of these features and thus of our own knowledge, technological or scientific.

4.5 A FEW MORE ISSUES

We have examined a number of the features of technological knowledge, such as its connection with actions and with usefulness, yet we have not fundamentally differentiated it from scientific knowledge. Establish whether that latter idea is right by looking for concrete examples of scientific knowledge that share some of the characteristics of technological knowledge. What knowledge about artefacts can, for instance, be said to be scientific? What kind of scientific knowledge has a large tacit component but at the same time a clear link with rules and actions? In what kind of scientific knowledge does usefulness play a major part? Determine whether these examples of knowledge should be termed 'scientific' or whether it would be more apt to call them 'technological' (even though they are perhaps part of physics).

A more difficult issue is whether the particular characteristics of technological knowledge can be emphasised without implying a fundamental distinction with scientific knowledge. One way this might be done is by placing the two forms of knowledge on a sliding scale going from 'very scientific' to 'purely technological' without drawing a sharp dividing line somewhere on the scale. Try to see if that actually works by placing the following types of knowledge on such a sliding scale: fluid dynamics, the knowledge of Vincenti's pilots, the knowledge of a good model plane builder, the knowledge of a team of aeroplane constructors at Boeing and knowledge about fibre materials such as Glare. Do these kinds of knowledge feature on points of the scale or in intervals? Is one scale sufficient?

Furthermore, think about why it might matter to distinguish between scientific and technological knowledge, or science and technology. Why might people want to do this? Do they really need a fundamental and sharp dividing line, or would a sliding scale work just as well?
