
GameSetMatch II Conference

International conference 2006 on computer games, advanced geometries and digital technologies
TU Delft, March 29th – April 1st, 2006

From e-Factory to Ambient Factory


(Or What Comes After Research?)
Philippe Morel / EZCT Architecture & Design Research

Abstract
This paper presents a few elements and hypotheses regarding the use of mathematics in
contemporary architecture. Its purpose is not to establish direct relations between the discipline
of architecture and particular mathematical fields. On the contrary, it
aims to show that the complexity of the contemporary mathematical landscape does not allow us
to create and use generic links and titles such as “architecture and geometry” or “architecture and
computation” as if these notions were well delimited. The very idea of architecture is of course
already problematic in itself, but no less are the ideas of geometry and computation since
architects have dozens of geometries and mathematical tools at hand. Therefore, this paper
should be considered as a call for comprehensive research, yet to come, dedicated to a new
epistemological approach to contemporary architecture. Such an approach would need, for
example, to deal with the impact of mathematical and computational knowledge and technologies
through the idea of “polyformalism”, i.e. the multiplicity of mathematical formalisms in use in every
computer-based technology (including architecture).

“But to a sound judgment, the most abstract truth is the most practical. Whenever a true theory
appears, it will be its own evidence. Its test is, that it will explain all phenomena.”
R. W. Emerson, Nature, 1836

“Everything is number”
Plato’s saying, used by P. Bézier as the epigraph of his doctoral thesis

1. Introduction

In recent architectural debate, two main themes have been explored by architects and
theoreticians. The first one dealt with the very idea of computation, the second one with the idea
of a new kind of organization for contemporary production. While the first theme is deeply
influenced by algorithmics and computer science – through the use of, alternately or
simultaneously, shape grammars, genetic algorithms, parametric and interpolation techniques,
etc. – the second one is influenced by economic and organizational paradigms: “on-demand
production”, “build to order”, “postponed differentiation”, “e-business”, “e-factory” or “e-
manufacturing”. The issue I would like to address here has to do with an integrative approach to
those new paradigms. This integrative approach to contemporary global production – a
production that I call the “Ambient Factory” – has to deal at the same time with the issues of
production and computation, through optimization techniques and 100% computational
production. Then, through a small step back into the history of mathematics and science, we will
see how the contemporary global optimization process was made possible in the realm of
industry.

2. Geometry, Transformation of Shapes and “Polyformalism”

When the word geometry is put to you (and in the discipline of architecture we could say that it is
always put to you), your first reaction is to look for its meaning. To search for the meaning of
geometry for an architect, let us first ask what it means for a mathematician. For H. Poincaré, the
answer seemed relatively clear: geometry is the study of the laws governing the displacement
and transformation of objects. When only displacements are involved – that is to say, the
movement of an object that does not at the same time undergo a transformation – the geometry
is named Euclidean. When, on the contrary, the object deforms itself (or is deformed) as it
moves, we say that we are dealing with non-Euclidean geometry. This distinction, which H.
Poincaré made more than a century ago, is so influential that almost all recent architectural
discourse and practice seem implicitly defined by it. Let us take one of the architects who most
challenged the relations between architectural bodies, movement and deformation: G. Lynn. In
New Variations on the Rowe Complex1, G. Lynn envisioned a critique of geometry as the study
and arrangement of fixed objects, in favor of “an alternative mathematics of form; a formalism
that is not reducible to ideal villas or other fixed types but in its essence freely differentiated”. The
main target of this criticism, in fact, is not C. Rowe (who himself rejected his early analyses) but
the first wave of computational architecture and its humanistic combinatorics, based on a tried
and tested repertoire of forms. G. Lynn’s criticism did effectively open up new paths, but I refer to
it here in order to open the formalism it proposed onto something different: a “polyformalism”.2
What could a definition of “polyformalism” be? In concrete terms, polyformalism concerns the
multiplicity of models active in any computational process of simulation. In processes of
simulation, we need to deal not only with mathematical models but also with complex
applications of them. Since in our world everything is either the support (e.g., climatology) or the
consequence (e.g., the aeronautical industry) of simulations – simulations that “come from what
we can call an intrinsic polyformalism” and that create a kind of “ecology of objectified
formalisms”3 – understanding the very concept of polyformalism can lead to a better
understanding of this world.
What could the meaning of polyformalism be for the discipline of architecture? What does it
concern? Something not fundamentally different from what we just saw, but at least something
other than variations, whether they are New Variations on the Rowe Complex or non-standard
variations (besides, that would be a rather indigent way of understanding non-standardization).
It concerns the fact that in a computer, and in any computer-based architectural practice, you
cannot really speak about “non-Euclidean geometry” unless you use the term in a metaphorical
or analogical way – the way chosen by G. Lynn. More precisely, it concerns something other
than the creation of a new visual language in contemporary practice. First, polyformalism is of
course related to linguistic problems, but the languages it “speaks” are abstract, being
logic-mathematical. Second, if the concept of polyformalism and that of simulation are language
related, they are not languages by themselves. “Simulation is not a language, for its signs are
heterogeneous: that means that simulation uses increasingly different types of mathematical
formalisms that it brings together in a non-mathematical way, that is to say, in a way that cannot
be reduced to a shared axiomatic. Simulation is not a discourse or a language, not only because
it cannot be summarized in fact, but also and above all because it cannot be summarized – and
therefore thought – by right, because of the heterogeneity of its conceptual tools. The
contribution of simulation is due not only to the practical or theoretical limits of calculability of
certain mathematical problems. It is also due to the fact that the IT-based infrastructure allows
for the cohabitation of totally heterogeneous formalisms, that is to say, formalisms that cannot be
reduced to each other or governed by the same axiomatic. […] Simulation gives rise to the
history of an ecosystem of composite and heterogeneous mathematical beings”.4 Then, what is
it that allows the “cobbled-together and heterogeneous coexistence” in a computer of “diverse
mathematical and computerized entities”5? Simply, it is what we can perceive thanks to “discrete
states,” i.e., “synchronization through step-by-step treatment. At each step of the treatment,
formalisms cohabit and communicate. That is even the only way they have of becoming
homogenized together: by successive instants, by sampling. The step thus serves not only to
process algorithmic incompressibility – the case of digital simulation – but also and above all the
formal heterogeneity of coexisting objects – the case with computer simulation. Computer
simulation, then, is not a language because the signs in it are governed by diverse axiomatics,
which indeed are perfectly mutually contradictory. This is also the reason why a simulation
cannot be expressed mathematically. In this regard, it has no grammar, let alone a generative
grammar – even if simulation can make occasional and local use of such grammars, as in the
case of L-systems – it is not a system of signs.”6
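
To make concrete what “synchronization through step-by-step treatment” can mean, here is a
minimal sketch in Python. It is my own illustration, not Varenne’s, and all names and numerical
values in it are hypothetical: a continuous model (a differential equation integrated step by step)
and a symbolic model (an L-system, i.e. a generative grammar) share no common axiomatic, yet
they cohabit and communicate at each discrete step of the loop.

    def lsystem_step(word, rules):
        """One parallel rewriting step of a symbolic L-system (a grammar)."""
        return "".join(rules.get(symbol, symbol) for symbol in word)

    def simulate(steps=5, dt=0.1):
        word = "A"                      # discrete, grammatical formalism
        rules = {"A": "AB", "B": "A"}
        resource = 1.0                  # continuous, differential formalism
        for t in range(steps):
            # Continuous model: one Euler step of dr/dt = supply - cost * len(word)
            resource += dt * (0.5 - 0.01 * len(word))
            # The two formalisms "communicate" only at the sampling instant:
            if resource > 0:            # the equation gates the grammar
                word = lsystem_step(word, rules)
            print(f"step {t}: resource = {resource:.3f}, word = {word}")

    simulate()

Neither formalism is reducible to the other; the discrete step is the only place where they are
homogenized – an “ecology of objectified formalisms” in miniature.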

3. Geometry and Algebra. Algebra as Industrialization of Geometry

Let us consider that thanks to “discrete states,” i.e., “synchronization through step-by-step
treatment”, the heterogeneous coexistence of different mathematical models is rendered
possible. In this case, what precisely allows geometrical facts to be reduced to non-geometrical
ones? The following: any geometry can be expressed in the language of algebra. Algebra being a
sort of alphabet, any new construction is then possible. “It would seem that geometry can contain
nothing that is not already contained in algebra or analysis, and that geometrical facts are nothing
but the facts of algebra or analysis expressed in another language. It might be supposed, then,
that after the review that has just been made, there would be nothing left to say having any
special bearing on geometry. But this would imply a failure to recognize the great importance of a
well-formed language, or to understand what is added to things themselves by the method of
expressing, and consequently of grouping, those things.”7 (“In mathematical sciences a good
notation has the same philosophical importance as a good classification in the natural
sciences.”8). We must admit here that it is easy to understand why, if geometries can be reduced
to algebra, logicians at the end of the 19th century should have thought that there was no longer
any need to bother themselves with a language like geometry; and we ourselves, as architects
dealing with computers and formal languages, may wonder whether it would not be equally
advantageous for us to leave the language of geometry to one side and move towards
completely new types of architectures, “100% algebraic”. This is of course already happening in
many cases, for example with the use of emergence or complexity-related phenomena; but,
first, this use does not do without geometry when it is effected in the architectural context, and,
secondly, the point is to think conceptually about the problem of geometry as an unnecessary
language, not in practical terms. If, practically speaking, we could build anything without the use
of geometry (having in mind that the assembling of laser-cut pieces cannot be called
“geometrical”), conceptually, in most cases, we see little more than deformed Platonic solids
ending up as free-form surfaces. Accordingly, a more accurate conceptual context has to be
established. To accomplish this task we of course need to keep in mind the most important
conceptual and practical shift in recent history: through the chain geometry → algebra →
algorithm → program, geometrical facts could be implemented in a computer. Geometry then
became an industry.
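
As a minimal sketch of this chain – my own illustration with arbitrary sample points, not a
historical reconstruction – consider Poincaré’s distinction rendered algebraically and then
executed: a displacement becomes a matrix, and the Euclidean property becomes the purely
algebraic fact that distances are preserved by the matrix product.

    import math

    def apply(matrix, point):
        # Geometry as algebra: a plane transformation is a 2x2 matrix product.
        (a, b), (c, d) = matrix
        x, y = point
        return (a * x + b * y, c * x + d * y)

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    p, q = (1.0, 0.0), (0.0, 2.0)
    theta = math.pi / 6
    rotation = ((math.cos(theta), -math.sin(theta)),
                (math.sin(theta),  math.cos(theta)))   # a displacement (rigid)
    shear = ((1.0, 1.5), (0.0, 1.0))                   # a deformation (non-rigid)

    print(distance(p, q))                                    # 2.2360...
    print(distance(apply(rotation, p), apply(rotation, q)))  # 2.2360... (preserved)
    print(distance(apply(shear, p), apply(shear, q)))        # 2.8284... (not preserved)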
We also need to keep in mind that no movement towards a higher conceptual level can be
established without going beyond the interfaces, and even beyond the software itself, at least up
to a certain point. In a computer, and especially in heavy animation or CAGD software, you
never really know (especially if you use the software via an interface and do not enter the “core”
of the process) the kind of mathematics you are using. This does not mean that you do not know
about mathematics at all, or about, e.g., engineering; it only means that the mathematical and
“formal language games” “played” by the computer are not revealed to you. We could take as a
basic example the same curve produced by the Bézier functions or by the De Casteljau
algorithm (a minimal comparison is sketched at the end of this section); we could more generally
think about the multiple mathematical categories and geometrical fields which always coexist.
Therefore we could consider that what already defines the idea of simulation also applies to the
idea of architectural modeling. We could also recognize that the separation between modeling
(which appears as essentially “form oriented”) and simulation (which, in the discipline of
architecture, appears as being oriented towards engineering) does not really hold either. This
separation, at least to me, seems very traditional in its approach. My feeling is that this is closely
related to the fact that architects speak about non-Euclidean geometries mainly in cases where
shapes appear to be non-Euclidean, while behind the computational process various disciplines
such as affine geometry, computational geometry or discrete geometry could be used. Of
course, at this moment, everything is translated into numbers, i.e. algebra. If I want to bring
attention to these facts, related to the way mathematical models communicate inside a
computer, it is because I believe they allow a better understanding of contemporary research,
industrial production and, naturally, architecture (all being based on simulation and optimization).
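
The Bézier/De Casteljau comparison mentioned above can be made explicit. The sketch below is
my own illustration (the control points are arbitrary): it evaluates the same cubic curve through
two formalisms that coexist inside any CAGD kernel – the functional model, which sums control
points weighted by Bernstein polynomials, and the algorithmic model, which performs nothing
but repeated linear interpolations.

    from math import comb

    def bezier_bernstein(control, t):
        """Functional model: weighted sum of Bernstein polynomials."""
        n = len(control) - 1
        weights = [comb(n, i) * (1 - t) ** (n - i) * t ** i for i in range(n + 1)]
        x = sum(w * px for w, (px, _) in zip(weights, control))
        y = sum(w * py for w, (_, py) in zip(weights, control))
        return (x, y)

    def bezier_casteljau(control, t):
        """Algorithmic model: repeated linear interpolation (De Casteljau)."""
        points = list(control)
        while len(points) > 1:
            points = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                      for (x0, y0), (x1, y1) in zip(points, points[1:])]
        return points[0]

    control = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
    print(bezier_bernstein(control, 0.4))   # (1.552, 1.728)
    print(bezier_casteljau(control, 0.4))   # the same point, by another route

Behind a single interface gesture there are thus two mathematically distinct descriptions of one
object – exactly the situation the notion of polyformalism is meant to capture.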

4. Geometry and Ideal Objects

But let us come back to the discipline of architecture and to G. Lynn, who wrote: “As geometry is
immanent to any architectural interpretation, even a displaced relationship between form and
formalism must pass through a geometric description.”9 In a previous work, called Notes on
Computational Architecture, I mentioned the need to add to any geometrical description of form at
least an algorithmic one. I was trying at that time to define more accurately the shapes produced
by CAGD software and to introduce the idea of polyformalism beyond any kind of idealism – an
idealism which has no reality regarding the true nature of CAGD-based objects. We could think
like D’Arcy Thompson, or before him R. W. Emerson, that “each creature is only a modification of
the other”, but such a thought, implemented in mathematics, just doesn’t work. If it works when
implemented in a computer, it is precisely because we can use multiple mathematical models:
polyformalism makes it work. Ultimately, this means that beyond the interface of any animation or
CAGD software used in the field of architecture we discover a strong logic that is not yet fully
analyzed on a conceptual level.
Speaking about idealism, G. Lynn precisely establishes a junction between a certain kind of
geometry and some kind of idealism (through ideal mathematical objects). If, as mentioned above
by Poincaré, geometry does not deal with “natural solids” but with “certain ideal solids”, with the
latter remaining transcendent and not subject to historical or other contingencies, there is nothing
to say that transcendence and ideals are mechanically and automatically linked to any kind of
reality. What Lynn reveals in Rowe’s thought is not so much an abusive association between
ideal and transcendence as the manipulation of the belief in such a link: “The differences between
pairs of villas are thus marked as the extraneous manifestation of constructional, cultural, or
spatial customs of a certain historical period. The underlying geometric proportions that remain
are timeless.” If “‘natural’ architectural order is timeless because it refers back to antiquity where
an exact geometry was first distinctly perceptible”, it is also due to the fact that there was an
equivalent of this exact geometry somewhere in nature. The architectural order that came out of
the mathematical order is “natural” only because that mathematical order was “natural” before it.
C. Rowe’s transcendental idealism is thus not, strictly speaking, idealism, but more a realism;
realism meaning here “realism of ideas.” Compared to this and further back in time, H. Poincaré
tells us something about our contemporary architectural practice, not by reining in the ideal (as a
“mental image” of mathematical truth) or giving a critique of its unwarranted moral extrapolation,
but by a recognition of an inevitable multiplicity of new ideals: every geometry has a
corresponding perfect and abstract object. Without those objects, geometry would not even be a
science but only a chaotic and accidental analysis. For H. Poincaré, mathematical science, like
geometry, is in general “safe from any kind of revision; no experiment, however precise, can
overturn it. If it were possible, it would have been done long ago.”10 Then, the question is not to
criticize past and ideal mathematical models but to criticize what remains idealistic in our
contemporary analysis of the mathematical landscape.

5. Applied Mathematics and Precision

Behind the pure mathematical argument of what is Euclidean or not, behind the metaphorical
argument of what looks Euclidean or not and what looks idealistic or not, are there other
arguments that could be considered regarding the fact that we use multiple mathematical models
in any computer-based design process? If we again follow H. Poincaré, the answer is yes. Here
is what the French mathematician said about his preference for Euclidean geometry, introducing
the criterion of simplicity: “Let it not be said that Euclid’s system seems simpler to us because it is in
conformity with some pre-existing ideal which already has a geometrical character; it is simpler
because certain of its displacements are exchangeable, which is not true for the displacements
corresponding to Lobachevsky’s system. Translated into analytical language, that means that
there are fewer terms in the equations, and it is clear that an algebraist who didn’t know what
space or a straight line were would look on that as a condition of simplicity.”11 The importance
given to simplicity here will of course remind us of two things. 1) The question of elegance in
present-day programming. 2) The question of the optimization of algorithms. The first, which
corresponds to criteria that are in part subjective, is indeed discussed with great precision by
Poincaré: “[…] the sentiment of mathematical elegance is nothing but the satisfaction due to
some conformity between the solution we wish to discover and the necessities of our mind.
[…] This aesthetic satisfaction is consequently connected with the economy of thought.”12 The
second, which obeys strictly scientific criteria, is also dealt with in the sense that an economy of
thought is, for Poincaré, something that makes it possible to “increase the output of the scientific
machine.”13 Here, each of us will have noted how particular Poincaré’s way of mixing purely
scientific and more personal, philosophical arguments is (e.g. the number of “terms in the
equations”, the “sentiment of mathematical elegance” or the “aesthetic satisfaction”). This
combination of arguments, but most of all his emphasis on subjectivist arguments, is probably
what explains A. I. Miller’s comment: “Poincaré’s physics is a reflection on his philosophy”.
In any case, if we consider our knowledge-based economy as a scientific machine, we have to
recognize H. Poincaré’s awareness in trying to maximize the productivity of this machine.
Around the end of the 19th century, this awareness seems, however, to have been widely
shared. Poincaré was not the only mathematician preoccupied with the criterion of simplicity
associated with a true mathematical and machinist rigor. Of course we all know about the
concept of Economy of Thought created by E. Mach, but we know less about the German
mathematician F. Klein’s engagement with new approaches to efficiency. The latter, recognizing that “most of
of them (students in mathematics) study mathematics only for their practical applications”, tried to
avoid “the danger of a split between abstract mathematical science and its scientific and
technological applications. Such a split is bound to be deplorable; its consequence would
inevitably be to found the applied sciences on an uncertain base and isolate the scientists who
work only with pure mathematics.”14 Like Poincaré, Klein agrees that nothing should contradict
the mathematical laws, but unlike the French and the aforementioned Austrian scientists he
poses the problem from a less philosophical and even from a very practical angle.
Thus, in Chicago, on the 30th of August 1893, after discussing in a lecture the contribution of S.
Lie bearing on “contact transformations”, via the example of its application “found in astronomy
and the theory of perturbations”, he evoked a true problem of engineering and optimization
“regarding the meshing of cogs. Knowing the profile of the tooth of one wheel, we have to find
the profile of the other one.” To make sure that everyone would clearly understand the
importance of the relation between abstract mathematical knowledge and practical applications,
Klein had already evoked this problem a couple of days earlier in Chicago, “by explaining it using
models sent to the Chicago World’s Columbian Exposition by German universities”. In fact, in
order to facilitate the practical use of pure mathematics without any loss of
rigor, Klein first asked himself if it would not be “possible to create what one could call an
abridged system of mathematics, adapted to the needs of the applied sciences, without the
necessity to go through the whole field of abstract mathematics”15. “Such a system (the abridged
system) should, for example, contain Gauss’ research into the exactitude of astronomical
calculations or the more recent and extremely interesting research concerning interpolation, for
which we are indebted to Tchebychev.”16 No less than Poincaré’s comments on the productivity
of the scientific machine, F. Klein’s intuition about the importance of interpolation techniques is
thoroughly remarkable. To verify this, one need only go through the contemporary literature on
applied mathematics to get a sense of the multitude of abridged systems that have emerged.
Another of Klein’s remarks is also of the highest importance: it is related to the very idea of
precision, through which Klein proposed a classification of the diverse sciences according to
their capacity for, and need of, numerical precision. If we consider the quantum leap operated
via the computer regarding the importance of precision in every scientific and technological field
(e.g., nanotechnologies, which are fully transversal – biological, chemical, physical or
bio-chemico-physical – this quantum leap is what today makes sciences and technologies
horizontal), if we consider the distinction made by von Neumann between analog and digital
processes, which relies precisely on the concept of precision, and if we consider P. Bézier’s
remarks quoted at the end of this paper, we have to admit that Klein’s intuition is, in this case
again, beautiful.
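
Klein’s intuition about interpolation can be illustrated by a small numerical experiment – mine,
not Klein’s; the function, the number of nodes and the test grid are arbitrary choices.
Interpolating Runge’s function 1/(1 + 25x²) through eleven equidistant nodes produces large
oscillations near the ends of the interval, whereas nodes placed at the Chebyshev points keep
the error small: precision depends here less on the quantity of data than on the mathematical
formalism chosen.

    import math

    def runge(x):
        return 1.0 / (1.0 + 25.0 * x * x)

    def lagrange(nodes, values, x):
        """Evaluate the Lagrange interpolating polynomial at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(nodes, values)):
            weight = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    weight *= (x - xj) / (xi - xj)
            total += yi * weight
        return total

    n = 11
    equidistant = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    chebyshev = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]
    grid = [-1.0 + 2.0 * k / 400 for k in range(401)]

    for name, nodes in [("equidistant", equidistant), ("chebyshev", chebyshev)]:
        values = [runge(x) for x in nodes]
        error = max(abs(lagrange(nodes, values, x) - runge(x)) for x in grid)
        print(f"{name}: max error = {error:.4f}")

With these values the equidistant maximum error is on the order of 1.9, against roughly 0.1 for
the Chebyshev nodes – the Runge phenomenon in miniature.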

6. Approximation and Optimization

Coming back to interpolation techniques: whereas Poincaré puts forward a single tool, the
differential equation, Klein puts forward what seems more like a notion, something still in
gestation, which would be of the greatest importance for applied mathematics in general, and in
particular for those techniques that concern us architects. Thus interpolation from polynomials
was to develop considerably in the 20th century, often due to the agency of “applied
mathematicians” or engineers. If S. Bernstein was a “pure” mathematician who continued the
work of Tchebychev (he really was a great mathematician, who at 24 solved Hilbert’s Nineteenth
Problem in his doctoral dissertation), P. Bézier, who used S. Bernstein’s work in approximation
theory, was an engineer at Renault. P. de Casteljau also worked as an engineer with Citroën. S.
Coons was a consultant at Ford, J. Ferguson worked for Boeing in the same capacity and W.
Gordon for General Motors. As for Schoenberg, he too introduced B-splines in applied
mathematics works, for the “smoothing of ballistic tables”17. Of course, these persons
sometimes had very different mathematical capabilities. P. de Casteljau, who was trained as a
mathematician at the Ecole Centrale, had a much stronger theoretical background than Bézier, and
this is probably the reason why, when the car manufacturer Citroën asked him to solve the
problem of numerically translating a hand-made model, he came up with an answer earlier than
P. Bézier did at Renault (around 1958 vs. 1964): he directly used the Bernstein polynomials.
As P. Bézier mentions, “Most likely, Citroën people are the first ones in Europe to have worked on
the numerical definition of surfaces […] I understood that the conception of the types of curves
and surfaces representation was born in the brain of mathematicians, namely M. de Casteljau
and M. Vercelli, whose capabilities I admire. Right from the start, they thought to use the
properties of Bernstein functions, while I ignored their existence, instead of doing, as I did, a
heavy analytic study of the properties of the functions I wanted to use for the curves and surfaces
representation. Finally, I ended up at the same result, but by using a very bumpy way”.18
So, alongside the history of mathematics that thinks in terms of great discoveries, seminal
theorems and major figures, there is a more local history, one which is grounded in the actual
practice of applied mathematics, one that develops through the effort to surmount temporary
difficulties in this or that branch of industry – efforts which, in going beyond such difficulties, bring
to light a whole host of new applications. It has to be mentioned here that contemporary
architecture owes its revolution to these persons far more than to any theoretical discourse
within the discipline of architecture itself. This history was, as we know, very influential in
post-war industry; it had the deepest influence on contemporary architecture and made it what we know
today. Here, one may say: “obviously, these mathematical questions of approximation had a
great influence on contemporary practice, but where is the link between those questions and the
aforementioned concept of polyformalism?” The link is everywhere, since different models of
approximation exist (e.g. the algorithmic model of P. de Casteljau and the functional model of P.
Bézier). The practical importance of these elements is everywhere as well since, for example,
using one type of polynomial curve rather than another has strong practical impacts (and
therefore economic, social and even political ones). Here is, for example, what you can read in
CAGD-related books (not related to de Casteljau): “an important question of performance has
been resolved: Bézier’s form is numerically more stable than the monomial form. Farouki and
Rajan have observed that numerical inaccuracies, which are inevitable when one uses
finite-precision computers, affect the curves in the monomial form in a considerably more
significant way than in Bézier’s form”19. Therefore, what is stated above amounts to the
following: the word approximation is not opposed to the idea of precision, as its ordinary
definition might suggest. On the contrary, those two notions work together. An approximation is
a mathematical tool, and this is precisely what can be seen every day in industrial processes.
The whole CAGD-related industry has to deal with the ideas of simulation, optimization and
interpolation and/or approximation. These things are at the same time abstract mathematical
concepts and practical notions. They are also in most cases closely related to one another. As
implicitly said above, the stability of a calculation (related to the interpolation technique we
choose) is related to performance, and therefore to the very idea of optimization.
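
Farouki and Rajan’s observation quoted above can even be checked numerically. In the sketch
below – mine; the cubic and the evaluation point are arbitrary – the same polynomial is written in
the monomial and in the Bernstein basis, and the condition sum (the sum of |coefficient| × |basis
function(t)|, divided by |f(t)|), which bounds how strongly relative coefficient errors are amplified
in the computed value, turns out to be several times larger in the monomial form.

    from math import comb

    # f(t) = (t - 0.3)(t - 0.6)(t - 0.9), written in two equivalent bases:
    power = [-0.162, 0.99, -1.8, 1.0]           # monomial coefficients c0..c3
    bernstein = [-0.162, 0.168, -0.102, 0.028]  # Bézier ordinates b0..b3

    def f(t):
        return (t - 0.3) * (t - 0.6) * (t - 0.9)

    def cond_power(t):
        # Error amplification of the monomial form at t.
        return sum(abs(c) * t ** i for i, c in enumerate(power)) / abs(f(t))

    def cond_bernstein(t):
        # Error amplification of the Bernstein (Bezier) form at t.
        n = len(bernstein) - 1
        return sum(abs(b) * comb(n, i) * (1 - t) ** (n - i) * t ** i
                   for i, b in enumerate(bernstein)) / abs(f(t))

    t = 0.45
    print(f"monomial basis condition:  {cond_power(t):.1f}")      # ~105
    print(f"bernstein basis condition: {cond_bernstein(t):.1f}")  # ~13

The stability of the Bernstein form is thus not an implementation detail: it is one of the concrete,
economic reasons why one formalism rather than another ends up inside industrial CAGD
software.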

This shift, from an intuition-based practice and a human-based concept of precision towards a
computational one, is a shift from one kind of “economy” to another. This economy could be
defined as a sort of “objective economy of thought”. Its criteria would no longer be about “the
sentiment of mathematical elegance” and “aesthetic satisfaction” but fully, this time, about
something as objective as the number of “terms in the equations”. If we understand this shift for
what it is, a global movement towards the use of a more and more objective knowledge, then we
could hope for a better understanding of our scientific and technological civilization. There, we
could understand the meaning of scientific concepts in extra-scientific contexts, keeping in mind
that these concepts, while becoming cultural, still remain scientific unless they are replaced by
new ones (of course, in saying this I believe that concepts are first scientific, then cultural. As F.
Nietzsche said, “the airship alone is enough to toss overboard all our concepts of culture”). If we
want to grasp the complexity of the different uses of mathematical models, if we want to
understand the importance of concepts such as simulation, optimization or approximation in
contemporary production and economy (capitalism), we need not only to criticize past models, but
to understand what a model is. We need to bring into the discipline of architecture an
understanding of the complexity and of the whole “economy” already evoked. This economy, which is at the same time a
“real” one and a “metaphorical” one, a conceptual and a practical one, already seemed present in
this small autobiographical note from one of the godfathers of CAGD methods, P. Bézier. After
first working as a tool-and-die maker, P. Bézier moved to the tool research department. Here is
how he relates his experience: “The department had to choose, design, and implement the
production facilities for mechanical components. All the surfaces that required some precision
were planes, cylinders or cones, i.e. to define them we only needed straight lines and circles. The
only exceptions were the gear tooth sides of the pinions but they were generated by specific
machines, which cut them by using appropriate kinematics combinations. The limits were in
thousandths of a millimeter because the tolerance was a matter of hundredths, and even less
sometimes. Generally, arguments with controllers were over one or two thousandths, and the
measurement instrument, known as screw gauge or comparator, was referred to in the workshop
jargon as ‘the magistrate’ […]. As for the car body, it was just the opposite: an artistic air
prevailed. The stylist was the referee: his judgment could only be subjective and sometimes
would vary with time. There was no requirement for mathematical knowledge for anybody […].
The drawings were not particularly accurate, and for example, once, a car […] ended up with its
two sides differing by several millimeters. In terms of aesthetics and aerodynamics, it did not
matter. But during manufacturing, it turned out to be problematic: sometimes there were gaps of
several millimeters between parts that would have abutted, and these gaps had to be filled in with
tin welding, which was expensive. The definitions were transmitted from one department to
another in the form of drawings, whose precision, already bad, worsened at each stage […].”
Here, of course, we can easily imagine that a person whose work is about precision could feel
uncomfortable with such a “production” process. And it was exactly the case with P. Bézier, who
then decided to find alternative techniques to hand-drawings and wax models for generating
objects. “The problem though (even with relatively stable organic resin models), was that
accuracy was not perfect, and it could even undergo changes in time, which is problematic for a
standard. […] This state of things tends to offend a mechanic used to uncompromising rigor, and I
felt that an unquestionable definition, which would remain the same and would be easy to pass
on, was required; the definition should be fixed by the industrial designer himself and then passed
on as numerical data to every department […] involved in the process.”20

7. Conclusion

We could ask, with P. Bézier, “Why, in 1960, did no research worker involved in aeronautics find
this straight away?” The answer is clear: “They were fixed on the idea of reproducing a model
instead of directly creating a shape and refining it”. Today, if we take a close look at the
contemporary architecture scene, it is easy to see that most architects are still using a
combination of analog and digital processes, while in fact there is no longer any need for such an
approach. This approach may consist of creating a model and then scanning it as a 3D point
cloud, or of producing stereolithographic representational models; it could of course also take
other forms. At the beginning of this paper, we saw that, mathematically speaking, there was no
longer a need for geometry. We then asked the following
question: would it be advantageous for us to leave the language of geometry and move towards
completely new types of architectures, “100% algebraic”? Here we could again ask the same
question since, as we saw with P. Bézier, there is no more need for geometry in the concrete
fabrication process than in the abstract world of mathematics (at least when implemented into a
computer). If we follow H. Poincaré, this is not something which can happen unless you
underestimate the fact that “in mathematical sciences a good notation has the same philosophical
importance as a good classification in the natural sciences.” Possibly. Let us temporarily accept
that geometry taken as a notational system keeps its validity. But let us also accept that a
notational system, as a language, is closely related to evolutionary processes. Therefore, as a
hypothesis we could think of an evolution of our own cognitive capabilities. We could think about
this fact: what seems obvious and handy as a language in a certain historical period can appear
extremely complex and cumbersome in another. If we accept this, we have to recognize
that geometry could completely disappear from the field of architecture. We have to recognize
that this is no longer a matter of which geometry, but of something entirely different. We simply
cannot say, even regarding any necessary algorithmic description, that “as geometry is immanent
to any architectural interpretation, even a displaced relationship between form and formalism
must pass through a geometric description.” Geometry is not immanent to any architectural
interpretation, and saying this is highly essentialist – it could even be considered a foundational
discourse – and that is precisely why no modification of the idea of Form or Shape ever really
succeeded, since the new mathematical context was not considered in its complexity and for
what it is: a landscape filled with thousands of mathematical concepts. As a result, almost no
recent architecture really moved beyond a metaphorical relation between the new mathematics
(considered as non-Euclidean) and the architecture produced via CAGD software.
On the contrary, my point is that we could do without geometry. We could do things in completely
different manners and frameworks. If this is not the case, it is because of a lack of mathematical
knowledge in the discipline of architecture itself. But as R. Musil said (80 years ago), “today you
cannot refuse to be a mathematician”. As for architecture, if we keep thinking about research as
a human activity while it is already a matter of computational production and global optimization,
it is because we do not understand the implications of the concept of polyformalism. In fact, we
do not yet have a clear understanding of the overall logic that drives the world. But this logic
exists, out there, all around. If in the past it was enough to understand the visible logic of
physical machines to develop a comprehensive reading of production, this is of course no longer
the case. Today’s computational logic is abstract and linguistic. Things rely on logic-mathematical
bases, and the further we go the more mathematical they become. Even standards have moved
towards abstraction. Standards did not disappear; they are more present than ever in the global
industrial logic. The new standards have simply become abstract and computational; they are
logic-mathematical. They are playing “formal language games”, which is precisely the definition
of non-standardization. No classical values, no categorizations and no borders remain between
what has to be considered mathematical and what does not. Everything is mathematical except
concepts. Art does not exist anymore. The artistic man does not exist, since “the scientific man is
the ulterior development of the artistic man”. Everything is shaped by evolution, by our own
evolution. If a few things still remain obvious to us, this could change. We just have to wait and
“see how far knowledge and truth can be ASSIMILATED BODILY – and to what extent a
metamorphosis of humankind takes place, if, in the end, his sole reason for living is ‘to know’”21.
1. ANY 7/8, July/October 1994.
2. I am using F. Varenne’s term in order to link it to my own hypotheses on “formal language
games”, defined as computational/linguistic constructions.
3. F. Varenne, La simulation comme expérience concrète, in Le statut épistémologique de la
simulation, actes des 10èmes journées de Rochebrune (février 2003), Rencontres
interdisciplinaires sur les systèmes complexes naturels et artificiels. Supported by the European
Conference on Artificial Life (ECAL).
4. Ibid.
5. Ibid.
6. Ibid.
7. H. Poincaré, Science et Méthode, Flammarion, Paris, 1908.
8. H. Poincaré, Savants et Ecrivains.
9. G. Lynn, Ineffective DESCRIPTions: SUPPLEmental LINES, in RE: WORKING EISENMAN,
Academy Editions, 1993.
10. H. Poincaré, Des fondements de la géométrie, The Monist, January 1898.
11. Ibid.
12. H. Poincaré, Science et Méthode, Flammarion, Paris, 1908.
13. H. Poincaré, La science et l’hypothèse.
14. F. Klein, On the Mathematical Status of the Intuition of Space and on the Relations Between
Pure Mathematics and Applied Science, Lecture VI, Chicago World’s Columbian Exposition, 2nd
of September 1893.
15. Ibid.
16. Ibid.
17. I. J. Schoenberg, Contributions to the problem of approximation of equidistant data by
analytic functions, Quarterly Journal of Applied Mathematics 1946; 4(1).
18. P. Bézier, Petite histoire d’une idée bizarre, Bulletin de la Section d’Histoire des Usines
Renault, 1982; 4(25).
19. G. Farin, Courbes et surfaces pour la CGAO, Masson, Paris, 1992. Original: Curves and
Surfaces for Computer Aided Geometric Design, Academic Press, Inc., 1990. The translation
here is mine, from the French version; the original sentences may differ slightly.
20. Letter from Pierre Bézier to Christophe Rabut, Paris, 4th of November 1999. In Christophe
Rabut, On Pierre Bézier’s Life and Motivations, Computer-Aided Design 34, 2002, 493–510.
21. F. Nietzsche, posthumous fragments on the eternal return. I am taking this quotation from the
Master’s thesis of Felix Agid at the Ecole des Hautes Etudes en Sciences Sociales (EHESS,
Paris), whose work is dedicated to a new reading of the architecture of F. Borromini.
