
Epistemological Grounds for Cybernetic Models

Yves J. Khawam
École de Bibliothéconomie et des Sciences de l'Information, Université de Montréal, C.P. 6128, succursale A, Montréal (PQ), Canada H3C 3J7

Received July 20, 1989; revised January 31, 1990; accepted March 26, 1990.
© 1991 by John Wiley & Sons, Inc.

This study addresses the problems involved in adapting cybernetic models to operational realities. More precisely, three epistemological views are investigated in turn so as to determine the problems regarding information transfer between a model and the real world. Of the three epistemologies under investigation (realism, a priorism, and phenomenology), the last shows the most promise in opening up operational possibilities for the model, but it introduces problems of adapting the model to that reality.

Preface

The purpose of the present work is to discuss concepts underpinning the building of truly intelligent machines: to offer meaning to biological systems in psychological settings. Such an endeavor is ipso facto within the scope of Artificial Intelligence (AI), yet it is an area which remains largely ignored by AI researchers, whose efforts are instead concentrated on the development of faster and larger rule-based systems.
In tracing the emergence of AI-related paradigms, one finds that world views have shifted from the Newtonian-mechanics model of explaining phenomena, to information theory, to the information-processing level of modeling which presently characterizes cognitive science (McCorduck, 1979). Indeed, with each shift in paradigm, cybernetics/AI research has turned to philosophy in order to secure a new paradigm. Once the paradigm was secured, however, researchers strangely ceased further investigation into the philosophical grounds of the work at hand. Even though the claim is not yet sustained by research, it is this author's contention that such alienation hinders research in AI.
The information community would benefit greatly
from such exposure since it may lead to rethinking
some of the basic aspects of intelligent systems, which

in turn would bring a fresh perspective to areas of stagnation where progress is sought solely through the development of more efficient algorithms.
This study attempts to readdress philosophical
grounds for artificial intelligence, based heavily on the
work of the last school to have systematically investigated such issues: the genetic epistemologists.
Introduction

All sciences at one point have to confront a problem which is usually dismissed as belonging to metaphysics or ontology: the relationship between subject and object.
Taking cognitive science as an example, this problem is particularly obvious in the study of perception. Even a very objective cognitive scientist who would strictly record observations without ever interpreting them would retain in these experiments the systematic deformation that perception inflicts on the objects perceived. If, from the experiments, and still without making the slightest hypothesis as to what the subject perceives, the cognitive scientist reconstructs an image of the universe, only the subjective geometry used to organize the measurements on objects would be obtained. Even if the differences between the two geometries appear negligible, seeing that one emanates from the subject's perception of the measurements and the other from the measurements obtained from the object itself, one can only ponder which of the two images is the more adequate. Which of these two images corresponds to the real object? Such investigations into the origin, structure, methods, and validity of knowledge are the constituents of epistemology.
Through time, an array of potential solutions has been offered, none of which can be demonstrated to be correct, since truth cannot be approached beyond disintegrating semantics; still, one has to understand the limitations and the different views in order to use the medium based upon these first principles of knowledge.
This is especially true with cybernetics, where one will try to situate a model within a reality where perception takes place. Crucial to developing such a model is investigating epistemological prospects in order to shed some light on the manner in which to build its internal structure, the purpose of which is to react within a defined reality.
In order to approach possible solutions to this fundamental problem, the present study discusses in turn the prospects for intelligence based on the three most popular epistemological views of the Occident: realism, a priorism, and phenomenology.
Prospects Based on Realism
As the first view addressed, realism postulates that the inner image is a true reflection of the exterior reality. This implies total passivity on the part of the subject, who should not transpose any inputs through reflection for fear of destroying the balance between his or her knowledge and reality. From this solution, which centers on the object, it follows that in the domain of intellectual knowledge, thoughts, numbers, and space are defined as existing materially in the universe (i.e., Aristotelian realism) and are simply read by the subject in the same manner as the objects perceived. This interpretation, however, is unrealistic in the sense that no reader or instrument of measure is at the same time sensitive to all aspects of the medium and completely free of error.
Retrieving information from the world implies that a choice has to be made between stimuli, that is to say, a filtering and translating operation which assigns to each stimulus, or class of stimuli, a symbol belonging to the internal language of the machine in which the image is constructed. Therefore, a completely passive system is only conceivable at the price of total paralysis: it is not even capable of producing an image. Also, all activity carries a probability of error which destroys the fidelity of the image. Even machines, which deal much more accurately with information than their human counterparts, have a built-in system which constantly checks the inputted information by temporal or spatial redundancy. This reconstitution of information in a functioning system is a problem which has preoccupied many theoreticians and practitioners in recent years: Shannon (1948), von Neumann (1958), Pierce (1964), etc.
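As a minimal sketch of such redundancy checking (an illustration of the general technique, not a mechanism described by any of the authors cited above), temporal redundancy can be pictured as reading the same stored bit several times through a noisy channel and taking a majority vote:

```python
import random
from collections import Counter

def noisy_read(bit: int, p_err: float = 0.05) -> int:
    """Read a stored bit through a channel that flips it with probability p_err."""
    return bit ^ (random.random() < p_err)

def redundant_read(bit: int, copies: int = 3, p_err: float = 0.05) -> int:
    """Temporal redundancy: read the same bit several times, then majority-vote.
    Spatial redundancy would instead vote over several physical copies."""
    votes = [noisy_read(bit, p_err) for _ in range(copies)]
    return Counter(votes).most_common(1)[0][0]
```

With three reads and a 5% flip probability, the voted result errs only when at least two reads fail (roughly 0.7% of the time), which is the economy such redundancy buys.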
The dynamic epistemology of realism, where the
subject is left to reflect passively on the exterior reality,
is termed empiricism. In this hypothesis, the subject
is modified by the active induction of the medium: a
constructive role is attributed to experience in the
sense that knowledge becomes the result of a process,
for it no longer contains constituents innately. The primacy of the object, however, persists since it is the
medium which generates the process, modifying the
system by experience and allowing it to store these
modifications in a more or less durable manner. The
image constructed by such a system would probably be

very faithful in reflecting reality. It would contain only


the minor errors introduced by the internal mechanism
in processing the information. However, it would never
permit itself to go beyond experience in any form other than statistical extrapolation, and it would therefore be powerless in dealing with contradiction. That is, logical necessity would be alien to it, and transformations within the medium would only result in maintaining its simple laws (on this subject see Piaget (1970), where all these questions are dealt with in detail; only a few notions essential to establishing the signification of a biological system in a psychological setting are introduced here).
Grey Walter's analogue machine CORA (Conditioned Reflex Analogue) is a good example of such a system. It demonstrates that even a passive recording controlled by the medium will entail a far-from-negligible operating activity within the subject (the machine), and that the knowledge obtained will have no way of acquiring meaning with respect to the medium if the machine cannot use it. No recording of information is therefore passive, and such information cannot have any meaning unless it can be read in a certain manner.
The first operation that the machine must undertake on the signals given to it by the medium is one of filtering which, as was indicated, is performed by its sensory organs. The sensory organs have yet another function, which is to translate (code) the signals into a language acceptable to the memory of the machine. Since the memory is material, and therefore discontinuous and finite, the signals have to be adapted to this format: even if they are continuous, they have to be cut into discrete units, the machine having to decide when a unit of memory is full before going on to the next available one. One also has to plan for what will happen when all the memory is filled. For example, a possible solution is a sequential process which systematically erases a unit to make room for each new recording. From the measurements of variations in the medium it receives, the machine will produce an animated internal picture which reflects them in an incomplete (due to the filtering process) but faithful manner, that is, void of initiative (provided mechanical and operational errors are rare, which can be achieved by applying redundancy to the circuits or the codes; on this subject see Winograd and Cowan (1963)).
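A minimal sketch of this coding-and-erasure scheme (the class and its behavior are illustrative assumptions, not a description of CORA or of any machine discussed here): stimuli are translated into discrete internal symbols, and once the finite memory is full, the oldest unit is erased for each new recording.

```python
from collections import deque

class SequentialMemory:
    """Finite, discrete memory for coded stimuli: when all units are full,
    the oldest recording is systematically erased for each new one."""
    def __init__(self, capacity: int):
        self.units = deque(maxlen=capacity)  # deque discards the oldest item once full

    def record(self, stimulus, code=hash):
        """Filter/translate a stimulus into an internal symbol, then store it."""
        self.units.append(code(stimulus))
```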
For the exterior observer who is able to dissect the machine, the image would be found to exist in the form of electric charges located in the memory. This dissection and exploration can be likened to taking an output from the machine; yet the machine takes no output into its memory, which means that, to it, the image does not exist. This is a little troubling in terms of a theory of knowledge (the machine does perform an output from its memory, but the information involved strictly concerns whether or not a charge is present, the content of the registers being overlooked). The machine is therefore aware of its memory, but it does not have any access to the content. One could deduce that a simple solution would be to build into the machine a reader of the internal image, but this new machine would be in the same position as the first, since it would only translate one image into a second one which would remain just as inaccessible to it (this being true even if the regression is infinite). The empirical machine is therefore incapable of giving meaning to the information it absorbs.
To bypass this problem, one can obtain a slightly less pure empirical machine by introducing reflex into it. As with CORA, reflex means an activity on the part of the subject which allows the model to act within the medium. This article will not go through the steps of how reflex can be achieved (on this subject see Grey Walter (1953)), but it is quickly seen that one is far from simple associations. What is gained in psychological simplicity is lost in the structural complexity of the empirical model.
One last point to raise prior to discussing the a priori concept is the distinction between the significant and the signified. If the machine is to function effectively, it has to have a link tying its internal world to the external one, which would then allow it to manipulate the relationships (this does not mean that the machine itself has to make this distinction). For the machine, all inputs and all treated symbols are of the same nature, even though some may originate from its internal structure. They are all treated alike, as objects, by the machine, which does not discriminate between them. For example, in CORA the mechanism which avoids an obstacle after a collision contains a feedback between the amplifier of the photoelectric cell (the eye of the machine) and its input. That is to say that even a signal not caused by an external object (i.e., without signification) is still interpreted as an object. All signals are translated into CORA's language, where no distinction can be made as to the origin of the signals; which is to say that to an empirical machine, everything is considered to be an object. The machine does not build meaning for itself, but does so only for the observer.
Prospects Based on a Priorism
The next approach investigated is the one suggested by a priorism, or conventionalism. This view, which stands at the opposite of empiricism, states that it is the internal structure of a subject which determines the image created of an exterior reality. Piaget cites Poincaré with respect to the construction of space, the mind elaborating a three-dimensional mathematical continuum: "but it does not build with nothing, it needs materials and models. These materials and models preexist within it. However, there is not a unique model which imposes itself on it; it has choice; it can choose between three-dimensional or four-dimensional space. What is the role of experience? It is that which offers the indications by which it makes a choice" (Piaget, 1962, p. 191). Poincaré states that the subject chooses only the most convenient models, that is, those which mesh best with experience. Therefore, one is to make of a successful action a criterion which will guide this choice: a convention being only useful if it facilitates the accomplishment of an action (Piaget, 1962, p. 196).
In order to establish the type of machine described by conventionalism, it is convenient to make a first distinction. So that conventionalism is not reduced to a pure a priorism, the choice of a model for a determined experience has to be totally free, that is to say, equiprobable within all possible models. This is what Poincaré suggests when writing on the notion of a group: "This notion preexists, or rather what preexists in the mind, is the power to create this notion. To us, experience is only one method of affirming this power" (Piaget, 1962, p. 193). The notion of a group should therefore be established in the structure of the machine, as are the relationships between response and stimulus which suggest an a priorism. Therefore, in conventionalism all possible responses to one stimulus will be equiprobable, and it is only in the process of functioning that the machine chooses its convention (according to the criteria employed to weigh the probabilities of responses), from which it then defines the relationship between its inputs and outputs.
However, the a priori can reappear at a higher level: if the criteria of choice depend on the structure of the machine, it loses the ability to choose. Conventionalism therefore defines a machine having all of its outputs equally convenient to use, the only possible structure being one permitting it to produce all possible cases, which have to be equiprobable at all levels of analysis. The output of the machine is therefore independent of the input, the same output being able to correspond to any input. The internal structure of the machine then has to define itself by a progressive construction (while functioning) which gradually restrains the general combinations.
At this point one sees the problem of empiricism inverted. The empirical machine was stable and capable of accommodating the medium, but since its activity was not reflected upon, it was incapable of restructuring a contradiction: the machine was not versatile. The conventionalist machine, on the other hand, is too versatile and, being independent of its input, cannot call upon the regularities of the medium to stabilize itself. Since this hypothesis excludes all internal a priori structure which would guide the machine, there is no aleatory source (internal or external) to which it could apply its outputs. The purely conventionalist structure becomes quite useless, since nothing obliges the machine to conserve the same rules during a specified activity, nor can it distinguish between different activities.
Friedberg, who is cited by Green (1963), has shown that a computer cannot resolve a problem of this type if it works randomly. His experiment consisted of a computer producing, from certain instructions, a combination which in turn permitted another computer to produce a unique output for every two or three inputs; for example, to invent the rules of binary addition. After 10,000 attempts the appropriate program had still not been produced, even though the combinations were free. Everything occurred as though the input did not exist for the machine. Its output being aleatory, the image created of the universe was constantly changing and therefore bore no constant relation to the configurations of the inputs at a given time. As was seen with the empirical machine, meaning can only be established by considering symbols and objects in the same manner. This method cannot be applied here due to the nature of the initial hypothesis: it is a principle of conventionalism not to bring meaning to objects. For the machine, this means that all manipulations are performed on insignificant symbols: given an object (an input), the machine will invent an undefined quantity of names, without ever conserving one, during the sequence of operation.
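A minimal sketch in the spirit of the experiment just described (the encoding and the search loop are illustrative assumptions, not Friedberg's actual machine): candidate input-output tables, standing in for randomly generated programs, are drawn blindly and tested against binary addition.

```python
import random

def random_table():
    """A random mapping from every pair of 2-bit numbers to a 3-bit output,
    standing in for one randomly generated candidate program."""
    return {(a, b): random.randrange(8) for a in range(4) for b in range(4)}

def is_adder(table):
    """Does the table implement binary addition on 2-bit numbers?"""
    return all(table[(a, b)] == a + b for a in range(4) for b in range(4))

hits = sum(is_adder(random_table()) for _ in range(10_000))
# With 8**16 (about 2.8e14) possible tables and only one correct one, 10,000
# blind attempts virtually never succeed: the input might as well not exist.
```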
One could build a set of rules of stability into the machine. This is what the conventionalists suggest when they speak of constituents of structures, a structure being by definition resistant to transformations. The machine essentially produces combinations and, insofar as some are repeated or varied, it could produce rules for the combinations (programs). From these programs it could produce one that would stabilize the rules during a certain activity. However, this stabilizing program would itself be submitted to the activity of the machine and consequently would have to be regulated by a superprogram, and so on. Here one finds again the same indefinite regression encountered with the empirical machine, and it arises from the same attempt at introducing into the system a property that the system itself has to build up in order to exist in that manner.
Another approach would be to make the conventionalist machine reliable, that is, to impose a structure upon it that would not allow it to transform an input into a random output: depending on the conception of this structure, the transformation would make it either an a priori machine or an empirical machine.
It has been indicated that, in order to establish a link between the subject and the medium, empiricism has had to attribute a certain activity to the subject. In pragmatism (a mild form of conventionalism, since the consequences of actions are considered), the problem is inverted: it is one of the subject regulating its own activity, the medium being unable to regulate it from the exterior. If one presupposes these internal difficulties resolved (which, in the case of psychology, would mean turning the problem over to biology), and if one imagines a machine capable of inventing and following rules, it becomes evident that one has still not dealt with the adequacy of these images with respect to the exterior reality. For a pragmatic machine that does not have to deal with the effects of its actions on reality, a criterion is a mere convention; but once the criterion is chosen, the machine will have to keep its outputs consistent with its inputs, either by intervention within the medium or by modifying its internal rules. One sees that even in its most attenuated form, conventionalism does not allow for a link between the significant and the signified: the object does not exist independently of the will of the subject, since the idea of convenience is only part of its internal conventions.
Prospects Based on Phenomenology
The last solution investigated is derived from phenomenology (the descriptive analysis of the subjective process). This relativistic view links the subject and object together through an interactive process within a preestablished harmony, which removes the drawbacks noted in realism and a priorism. The relativism or interactionism which Piaget (1970) discusses resolves these drawbacks by coupling the empirical and pragmatic machines, that is, by recognizing their complementary aspects and insisting upon the need for psychology to base itself on a machine which already contains an elementary structure with an internal dynamic (the reflexes tied to the needs of the organism), the cognitive construction constantly assimilating past and present actions. This coupling of the two machines, however, does not stipulate a fusion of the two preceding theories (the logical addition of two static systems), but the interactive coupling of two dynamic systems, which will involve new properties different from the ones previously mentioned. Intuitively, an example of this process would be the coupling of a motor and a regulator, which results in a stability and control that did not preexist in either of the components. It follows that these properties depend on the type of transformations produced by the linked components, and on the disposition of the communication channels between the elements of the system. But these questions, involving the nature of the organic needs of a structure and the coordination between the reflex mechanism and the structuring of the circuits (permitting the elementary operations of the combinative part and of the sensory organs), will not be dealt with in this work since their essence is more of a biological nature.
Without analyzing the physical links between the elements of the two machines, one will therefore make the broadest possible hypothesis: that each element is in interaction with all the others. This brings to light some of the consequences of this interactionism.
From the point of view of adaptation, the machine no longer contains a single input and a single output, as was the case with those described in the preceding theories. In particular, inputs resulting from the activity of the machine within the medium are reintroduced into the circuits, so that the machine is informed, through the variations within the medium, of its own actions. These internal links also suggest that the machine could simulate an action internally, without applying it to the medium to verify the response. This definitely opens up the operational possibilities.
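A minimal sketch of such a closed loop (the class, method names, and scoring model are illustrative assumptions): the machine rehearses an action against an internal model before committing it to the medium, and the medium's response is fed back in as a new input.

```python
class InteractiveMachine:
    """Closed-loop sketch: actions are tried against an internal model first,
    and the effect of the chosen action returns as the machine's next input."""
    def __init__(self, model):
        self.model = model  # internal forward model: (state, action) -> predicted value

    def simulate(self, state, action):
        """Rehearse an action internally, without touching the medium."""
        return self.model(state, action)

    def act(self, medium, state, actions):
        """Apply the action whose simulated outcome scores best; the observed
        result re-enters the circuit as the next input."""
        best = max(actions, key=lambda a: self.simulate(state, a))
        return medium.apply(best)  # the (assumed) medium returns the new state
```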
These new properties are important in two respects. First, since the information cycle is closed, the machine becomes capable of regulating and controlling itself. As Ashby (1960) notes in Design for a Brain, a regulating process that realized itself without the machine being informed of the variations of the medium and of the effects of its reactions would result in the continuous creation of information, for the machine would correctly answer questions to which it does not know the answer; yet the theory of information guarantees that it is possible, in a finite number of operations, to decode all the information contained in a message, but nothing more. Second, the possibility of internal activity, that is, the ability to react in different manners to the same stimulus, is necessary for creating regulation. Indeed, it is only if the machine can choose, from among different reactions to one perturbation, which one is best according to a determined criterion, that it can begin to regulate itself. This is what Ashby (1963) calls the law of requisite variety in An Introduction to Cybernetics.

One can now see that the effect of interactionism dissolves the discrepancies in the two preceding views, the regression of the observations being stopped by the interactive process. However, the construction of a cognitive system implies the adaptation of the system to the medium, which raises the problem of the origin of this adaptation.
Conclusions
Recapitulating, it has become apparent that interactionism has the effect of reestablishing the circulation of information between the organism and the medium. This forms a complete circuit, whereas empiricism and conventionalism each consider only one distinct and complementary channel. However, interactionism raises problems involving the adaptation of the organism's construction to the constraints (the structure of the medium). This problem was nonexistent in the two preceding theories since, in their pure forms, empirical adaptation identifies the organism with the medium, whereas conventionalist adaptation postulates a metaphysical organism totally independent of the medium. These theories imply that an organism containing a complete sensory system would be perfectly informed as to the variations in the medium but, being deprived of a motor system, would be totally paralyzed when it is time to respond. At the opposite extreme is the organism gifted with unlimited motor capability but lacking a sensory apparatus, which means that it will never escape blind groping, since it would not be notified, on the adaptation level, of the results of its actions. Finally, in interactionism, sensory and motor modes are void of any adaptive signification when one is isolated from the other.
Three different views were discussed here, each of which runs into a major problem when attempting to reproduce a living organism using alternative, inanimate materials. This investigation has, however, noted the basic limitations of the three epistemologies, delineating a branchpoint from which subsequent theories can develop.

Epilogue
Due to the difficulty of addressing epistemological problems, present AI research has by and large opted to circumvent these first principles of knowledge. Some have even gone so far as to claim that AI is foremost a subbranch of engineering and can therefore not be a philosophy (Putnam, 1988). Others have pointed out that in order to build machines that are as intelligent as people, we must first establish a science of cognition, since presently "we have only fragments of the conception, and some of those are certainly incorrect" (Waltz, 1988). While Churchland (1986) contends that classical AI is much less likely to yield conscious machines than neurophilosophy, Searle (1990) argues that AI can never give rise to minds, since computer programs merely manipulate symbols whereas a brain attaches meaning to them. Nevertheless, it is only upon the systematic expounding of the grounds for knowledge that the field of AI will realize, if not resolve, its proper limitations.
If one simply wants to build an expert system which will draw a few inferences from a knowledge base, such a system is executable in a few relatively simple procedural steps; but if the goal of AI is to create truly intelligent machines, one cannot simply leap over the barrier of epistemology. Instead, one has to deal with it, since it is that barrier which eventually dictates the future progress of the system. Creativity in approaches to the grounds for knowledge, such as Turkle's (1988) proposed alliance between psychoanalysis and AI, will be the determinant factor regarding the feasibility of creating artificial intelligence. Since the mind does not behave as a series of definable symbols, it may well do to return to the branchpoint of placing what is presently known of the symbols within the context of an epistemological framework.


References

Ashby, W. R. (1960). Design for a brain. London: Chapman & Hall.
Ashby, W. R. (1963). An introduction to cybernetics. London: Chapman & Hall.
Campbell, J. (1982). Grammatical man. New York: Simon and Schuster.
Cellérier, G. (1968). Cybernétique et épistémologie. Paris: Presses Universitaires de France.
Churchland, P. S. (1986). Neurophilosophy: Toward a unified understanding of the mind/brain. Cambridge, MA: MIT Press.
Crosson, F. J., & Sayre, K. M. (Eds.) (1967). Philosophy and cybernetics. Notre Dame, IN: University of Notre Dame Press.
Evans, C. R. (Ed.) (1968). Key papers: Cybernetics. Baltimore: University Park Press.
Gallie, W. B. (1952). Peirce and pragmatism. Harmondsworth, Middlesex: Penguin Books.
George, F. H. (1961). The brain as a computer. Oxford, New York: Pergamon Press.
George, F. H. (1979). Philosophical foundations of cybernetics. Kent, England: Abacus Press.
Green, B. F. (1963). Digital computers in research. New York: McGraw-Hill.
Grey Walter, W. (1953). The living brain. London: Duckworth.
Helvey, T. C. (1971). The age of information: An interdisciplinary survey of cybernetics. Englewood Cliffs, NJ: Educational Technology Publications.
Machlup, F., & Mansfield, U. (1983). The study of information: Interdisciplinary messages. New York: John Wiley and Sons.
McCorduck, P. (1979). Machines who think: A personal inquiry into the history and prospects of artificial intelligence. San Francisco: W. H. Freeman and Company.
Parsegian, V. L. (1973). This cybernetic world. New York: Anchor Books.
Piaget, J. (1962). Introduction à l'épistémologie génétique. Paris: Presses Universitaires de France.
Piaget, J. (1970). L'épistémologie génétique. Paris: Presses Universitaires de France.
Pierce, W. H. (1964). Redundancy in computers. Scientific American, January 1964.
Putnam, H. (1988). Much ado about not very much. Daedalus, 269-281.
Searle, J. R. (1990). Is the brain's mind a computer program? Scientific American, 26-31.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423, 623-656.
Tamine, J. (1970). La cybernétique. Bruxelles, Paris: Humanisme An 2000.
Turkle, S. (1988). Artificial intelligence and psychoanalysis: A new alliance. Daedalus, 241-268.
von Neumann, J. (1958). The computer and the brain. New Haven: Yale University Press.
Waltz, D. (1988). The prospects of building truly intelligent machines. Daedalus, 191-212.
Wiener, N. (1950). The human use of human beings. Cambridge, MA: The Riverside Press.
Winograd, S., & Cowan, J. D. (1963). Reliable computation in the presence of noise. Cambridge, MA: MIT Press.

