

Second International Conference on Critical Digital:

Who Cares(?)

17-19 April, 2009

Harvard University Graduate School of Design,
Cambridge MA 02138 USA

The dependency on technology in contemporary design practice has raised critical

questions of identity, authenticity, and responsibility at least on the side of the
designer. It may be claimed that the use of digital technologies in design, as well as
in everyday activities, has deep and profound effects not only in the way thoughts
and ideas are conceived, understood, and communicated but also in their intrinsic
value, merit, and validity. Digital techniques have become determinant factors,
perhaps hidden, upon which the designers, practitioners, or critics base their ideas,
thoughts, or even ideologies. Who is the designer today and how important are one's
own ideas versus the techniques provided in an increasingly digitally dominated
world? How has the identity or brand of design firms been affected by the use of
technology? Is it possible to design without a computer today? How important is it for
designers to know the mechanisms of software and hardware, and therefore the limits
that these technologies impose on design, and does that even matter anymore?

The emergence of new social, political, and economic concepts, conditions, and
practices, such as globalization, ubiquity, outsourcing, or design/social
networking, together with their corresponding technologies, has shaped our world in
a way that has no precedent. This new realm has emerged so rapidly, globally, and
overwhelmingly, and yet appears so promising, enticing, and convenient, that very few
care to address its long-term consequences. Meanwhile, as it appears from the current
discourse, "anything goes" in design as long as it uses some new technology. Fancy
images generated through computer graphics have replaced reality and, in turn,
reality has been dominated, altered, and adjusted to fit a technological utopia. It is
as if the world of technology is expanding out of control and yet, as humans we
remain the same. Who can take a position in the midst of this situation? Who cares
enough to question the mainstream?

The second critical digital conference is ambiguously titled "Who Cares(?)" as a

gesture of reaching out for critical positions about design in an age of responsibility,
identity, and authenticity. Arguments for and against the theme as well as projects,
designs, essays, or proposals have been used to manifest a critical viewpoint.

Who Cares(?) in terms of work, process, and thought is curated, published, and
debated in an open format at the Graduate School of Design of Harvard University on
April 17 to 19, 2009.


Critical Digital was an idea originally conceived by doctoral students in the Advanced
Studies Program at Harvard University's Graduate School of Design. Doctoral students (now
graduated or graduating) Carlos Cardenas, David Gerber, Jie Eun Huang, Scott Pobiner,
Yasmine Abbas, and Neyran Turan started the idea of a conference as a means to address
the various and diverse issues raised by the digitalization of architecture. Their efforts led
to a series of symposia held at the GSD in 2005 and 2006. The first international
conference was held in 2008. I would like to thank them all for their invaluable contribution
to the intellectual foundation of this effort.

This year, some of the current students in the Doctor of Design program took on the idea
and continued it in the form of a recurring conference. Jan Jungclaus, Dido Tsigaridi,
Nashid Nabian, Zenovia Toloudi, and MDes student Stephen Schaum are the original
organizers of this conference, who have contributed numerous hours of meetings,
discussions, and ideas on organizational matters, administrative issues, and presentation
formats. I thank them all for their spirit, work, and inspiring ideas. Sotirios Kotsopoulos
and Simon Kim, both researchers and instructors at MIT, joined the group recently, and I
thank them for their enthusiastic support. Two faculty members of the Department of
Architecture, Ingeborg Rocker and Anna PlaCatala, have been offering fresh and innovative
ideas to the conference and joined the group as moderators and organizers. I would like to
thank them for their time, effort, and inspiration.

Session 1a: Process

Moderators: Sotirios Kotsopoulos and Jan Jungclaus

Ingeborg Rocker irocker@gsd.harvard.edu

Computation in command? Fading Flamboyant Architectural Aesthetics 9

Thorsten M. Lömker thorsten.loemker@tu-dresden.de

Do you care? About the Fictive Influence of Parametric Modeling on Critical Thinking 13

Orkan Telhan otelhan@MIT.EDU

Towards the Designer's Agency 19

Josh Lobel josh@joloinitiative.com

Ends-Means: creating value through digital design 27

Josh Dannenberg josh.dannenberg@gmail.com

Pliancy 35

Session 2: Systems
Moderators: Simon Kim and Dido Tsigaridi

Timothy Jachna sdtim@polyu.edu.hk

Mediating planning 45

Harald Trapp trapp@tuwien.ac.at

Towards a System of Architecture 51

Torben Berns torben.berns@mcgill.ca

The Model and its Referent 59

Athanassios Economou Athanassios.Economou@coa.gatech.edu

Congruent spaces 65

Arno Schlüter schlueter@arch.ethz.ch

Incorporating Reality: From Design to Decision Making 71

Session 3: Digital Condition

Moderators: Kostas Terzidis and Nashid Nabian

Francisco Gonzalez de Canales Francisco.GonzalezDeCanales@aaschool.ac.uk


Emmanouil Vermisso evermiss@fau.edu

Seeking an inherent historicism in digital processes: who care(d)? 85

Edgardo Perez Maldonado cuajocompike@hotmail.com

Sedated Algorithmia: Five Rhetorical Questions about Digital Design 95

Robert Flanagan Robert.Flanagan@ucdenver.edu

Wall-E's prophecy: or How to do cleanup of the toxic residues of the Digital Age 101

Aaron Sprecher aaron@o-s-a.com

From Formal to Behavioral Realities 109

Session 4: Philosophical
Moderator: Kostas Terzidis

Mark Lindquist mark.lindquist@ndsu.edu

Is this what we are so afraid of? Digital Media and the Loss of Representative Power 121

Sha Xin Wei shaxinwei@gmail.com

From Technologies of Representation to Technologies of Performance 129

David Gersten dlgersten@earthlink.net

Material Imagination: In the Shadow of Oppenheimer 137

Yasmine Abbas abbas.yasmine@gmail.com

Neo-Vernacular | Non-Pedigreed Architecture 145

Aghlab Al-Attili Al-Attili@ed.ac.uk

The Familiarity of Being Digital 153

Erik Champion E.Champion@massey.ac.nz

Pretty Polygons Or Experiential Realism: 159

Session 5: Identity
Moderators: Ingeborg Rocker and Zenovia Toloudi

Jose Cabral Filho cabralfilho@gmail.com

Beyond Transgression: a playful future for digital design 171

Mirja Leinss mirja.leinss@googlemail.com

Making meaning of technology - technology as a means 177

Han Feng H.Feng@tudelft.nl

Quantum Architecture: An indeterministic and interactive computation 183

David Celento dcelento@gmail.com

(Digital) Rock, Paper, and Scissors and Stork? 193

Roel Klaassen klaassen@premsela.org

Mind the Mainstream! 203

Session 6: Normative
Moderators: Anna PlaCatala and Stephen Schaum

Gun Onur ogun@kpf.com

The Handbook for Avoiding Computational Design Fallacies, Vol. 1 213

Paolo Fiamma paolo.fiamma@ing.unipi.it

< Firmitas - Utilitas - Venustas .Digital-as (?) > 221

Shohreh Rashtian shohreh@optimum-environments-for-all.com

Academia Should Care: Moral and Ethical Obligations 227

Rachelle Villalon rvill@MIT.EDU

Digital Design Tools that Talk: Integrating Design and Construction Knowledge 233

Jules Moloney jmoloney@unimelb.edu.au

Time, Context and Augmented Reality 239

Session 1: Process

Moderators: Sotirios Kotsopoulos and Jan Jungclaus

Ingeborg Rocker
Computation in command? Fading Flamboyant Architectural Aesthetics

Thorsten M. Lömker
Do you care? About the Fictive Influence of Parametric Modeling on Critical Thinking


Josh Lobel
Ends-Means: creating value through digital design

Josh Dannenberg

Working title:

Computation in command?
Fading Flamboyant Architectural Aesthetics

Ingeborg M. Rocker

The paper will be a critical assessment of the role of patterns in the analysis and
synthesis of architecture. The most recent exuberance of dysfunctional affect- and
effect-less pattern-making in architecture has hit a wall with the financial crisis
looming. The digitally generated and produced aesthetics of the past years seem at
once flamboyant and flambéed given the financial crisis we now face.

My paper looks at the relationship between computation and patterns, drawing a
close link between the logics of computation and the information aesthetics that
arise from them; at the same time, it critically assesses by what methods the evolving
patterns could enable a responsible use of the limited resources at hand. In the
current environmental and economic situation it is not only thoughtless but
irresponsible to merely tack the label "algorithmically generated" or "sustainable"
onto the patterns which the computers spit out, in a fruitless effort to imbue
senselessness with reason and sensibility.

Historically, this paper traces the development of patterns, as opposed to ornaments,
in architecture, and locates the contemporary rise of interest in patterns as the direct
result of computational logics. Patterns, as opposed to ornaments, could be
considered aesthetic maps imaging the computational process. (I will discuss
von Neumann, Ulam, Cellular Automata, Stephen Wolfram, Langton.)
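The claim that patterns are aesthetic maps of a computational process can be made concrete with a small sketch (mine, not the paper's): an elementary cellular automaton in Wolfram's sense, whose one-line update rule deterministically grows a textile-like pattern from a single seed cell. The rule number and grid size are arbitrary choices for illustration.

```python
# Illustrative sketch: an elementary cellular automaton in Wolfram's
# sense. A single update rule, applied row by row, "weaves" a complex
# pattern out of one seed cell.

def step(cells, rule=30):
    """One update: each cell's next state is looked up from the bits of
    the rule number, indexed by the 3-cell neighborhood (left, self, right)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def render(width=31, generations=8, rule=30):
    """Grow the pattern from a single centered seed; return it as text rows."""
    cells = [0] * width
    cells[width // 2] = 1
    rows = []
    for _ in range(generations):
        rows.append("".join("#" if c else "." for c in cells))
        cells = step(cells, rule)
    return rows

print("\n".join(render()))
```

Changing the rule number changes the generative logic and therefore the pattern, much as Paisley, Glencheck, and Herringbone each encode a different generative logic of the weave.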

Traditionally, patterns were comprehended as the repetition of space- and
surface-constituting elements which generated surfaces, based on either material
quality or fabrication techniques (weaving, knitting, printing). Different patterns
thereby arise from different generative logics (Paisley, Glencheck, Herringbone).
In this sense the pattern, the literal in-formation of material, is always already
indicative of the logic of its generation, of the algorithms that generated it.

Image: Weaving Patterns

Indeed, not only do current patterns flow out of algorithmic logics, but the beginning
of mechanized computation itself sprang from the visual logic of the weave. In 1801
in Lyon, France, Joseph-Marie Jacquard automated the loom and thereby simplified the
process of manufacturing textiles with complex patterns. The loom was controlled
by cards with punched holes, each row of which corresponded to one row of the
design. The Jacquard loom was, therefore, the first machine to use punch cards to
control a sequence of operations. Although it did no computation based on them, it is
considered an important step in the history of computing hardware. The ability to
change the pattern of the loom's weave by simply changing cards was an important
conceptual precursor to the development of computer programming.

Informationaesthetics Past

With the development of computation in postwar Germany, a theory called
Informationaesthetics developed, which drew a close link between patterns and
algorithms. Considering the computer a laboratory of Informationaesthetics,
algorithms were developed that analyzed and synthesized aesthetic phenomena.
As a result, the first computer graphics and architectures in Germany emerged. Not
only were patterns frantically analyzed and synthesized, but even human perception
was re-conceptualized as a computation of patterns, as nothing other than pattern
recognition. The commonly shared presumption was that human perception requires
recognizable structures, patterns, in order to recognize at all. Consequently,
perception had to be based on a minimum of repetition (periodicity) and redundancy.

It is unsurprising that these assumptions have had aesthetic repercussions: as the
obsession of Informationaesthetics (in art and architecture) with patterns that could
emerge from an unordered materiality (noise) suggests, artificial patterning
generates structures that are not only similar to natural patterns but suggest
that both emerge from one and the same principle: computation.

Informationaesthetics Present

Today we see a revival of the architect's fascination, even obsession, with
algorithmically generated complex patterns.

From what does this fascination stem? And more importantly, how do patterns
inform architecture, theoretically as much as pragmatically?

Are patterns produced for the sake of patterns, for the sake of affect?

Are patterns the result of a senseless play of signs and code, detached from materiality
and functionality: mere grafts, with or without symbolic impetus?

Or do patterns rather stem from the logic of making, the logics of materiality?

What role does computation have in this discussion, and when does code matter?
And who cares?

Patterns as they are used today can be extravagant and irresponsible, but this need
not be the case: if architects care to imbue their logic with sense, computation
could allow the architect to handle an overwhelmingly complex set of parameters and
arrive at a logic that flows with and derives from its circumstances, instead of merely
being imposed upon them: an architecture of the milieu that not only pretends to be
sustainable but actually makes sense.

Do you care?
About the fictive influence of parametric modeling on critical thinking

Thorsten Michael Loemker

Technical University of Dresden, Germany

Katharina Richter
Bauhaus-Universität Weimar, Germany

With the introduction of parametric modeling techniques into the architectural profession,
stunning buildings appeared, designed by architects all around the world.
These buildings attest to the enormous potential of a technique that provides the designer
with sophisticated tools to achieve an expression of form that could not be accomplished
before. However, it seems that, especially in combination with building information
modeling, the capabilities of parametric technologies are misconceived. Even if architects
make use of these chic technologies, many of the buildings erected aren't much good. They
suffer from structural damage, cost overruns or faulty designs. Maybe the use of the tools
was aimed at form exploration only, leaving other design aspects out of consideration.
Maybe the software is not a quarter as good as praised. Maybe the designing architects
were simply short on experience in using the software. Or maybe, they just don't care.

1. Introduction

Ever since the beginning of their careers, many architects have thought that experience is
one of the most important factors in architectural practice. Architects gained experience
from their studies at university, from internships in architectural offices, from traveling
or from discussions with others. Every day they had good and bad occurrences in their
offices and thus learned their lessons. Those who also went into academia tried to pass
down their experiences to students, and for quite a while the whole system worked pretty
well. Even with the beginning of the digital revolution, experience was one of the most
important factors for progress in architectural firms. The interconnection of those who had
worked with manual techniques throughout their whole careers on the one hand, and the
younger ones who were experienced in the use of computers on the other, formed a kind of
positive dependency that always aimed at the successful realization of a project. And in
fact, the objective of integrating the computer was to make the most out of these
projects, even if its capabilities were often not as sophisticated as needed. At the time
when computers were used as drafting devices, the idea was born to map the experience of
the designer into the machine. Most of these attempts failed. This might have happened
because of technological shortcomings, but also because of the methodological problem of
defining what an experience actually is, what it contains, how it can be encoded and how to
make use of it in a computer program. Nowadays one might think that both the
technological progress and the methodology have proved to be a real enrichment of the
design process. But for some reason this has not happened. How else can it be explained
that so many buildings suffer from structural damage, cost overruns or faulty designs? Is
it that the idea doesn't count anymore, or is it due to inadequate technology that
architects have at hand? The latter is hard to believe, as the latest developments in
Building Information and Parametric Modeling prove the enormous potential of these
applications. The former would imply that architects do not make use of experiences
anymore, which is obviously not plausible. Nonetheless, many up-to-date projects
demonstrate a lack of quality that allows only one

conclusion: architects don't care. Isn't the fact that the windows of the Galeries Lafayette
drop like leaves in the fall1 an indication of this complacency? Isn't a project like the
building of the new Philharmonic in Hamburg, which exceeds its estimate by about 300
percent, an example of an architect's insensibility (or of his immorality, but this is another
story)? Isn't the fact that cracks have emerged, leaks have sprung, drainage is faulty, mold
is growing, and that snow and ice fall dangerously from the many curved surfaces and sharp
edges2 of a building by one of the world's leading architects an indicator that something is
going wrong? Indeed, everything that has been said applies to many other buildings built
by less well-established architects. It happens every day and it happens everywhere. The
difference is that the high-flyers make use of sophisticated technology, and that their use
of technology appears restricted to merely achieving stunning form. To make it clear, we do
not argue against breathtaking form, nor do we argue against the use of technology. Quite
the contrary: we argue for a use of technology that supports the designer in achieving
breathtaking form whilst ensuring a functioning building. Quite a few examples demonstrate
that these demands are not mutually exclusive. But what is needed is an approach to make
the knowledge about the how-to commonly available. In other words, it is time to learn
from the mistakes of the past and to integrate this knowledge into the systems architects
have at hand, whether they are named BIM, CAD or whatever. Conversely, the architect's
demand for originality, which seems to lack all rationality, contradicts the idea of making
use of experiences, especially those experiences that were made by others. But by calling
the architect's attention to the fact that systems based on experiences support them
through nothing more than those design strategies they would have used anyway, it might
be possible to resolve this contradiction. The use of rules, which derive from the
exploitation of experiences, a priori implies a mechanistic aspect that might never gain
recognition from designers. This is all the more disconcerting as recently popular design
strategies, such as generative design, are nothing more than the usage of such rules.
These, however, are accepted, as they are judged not to be dangerous for the creative
design process. The upshot is fun per contra earnestness, originality per contra verifiable
quality. The potential that lies in the utilization of rules, i.e. the generation of assessable
solutions, is neither detected nor exploited. But because it might be possible to integrate
experiences, i.e. knowledge, into today's systems, it is time to give architecture a reality
check. It is time to ask: Who cares?

The paper introduces two protagonists, each of whom takes a different view and stems
from a different research background. From these perspectives, current trends in the use of
information technology in architecture will be examined against the potential of traditional
approaches to knowledge engineering.

2. The first protagonist - design rules

Whilst discussion about the architect as master builder currently enjoys a renaissance3,
there is also a considerably noticeable negation of the architectural profession's ambiguity.
This manifests itself in a rejection of rational approaches in favor of the individualistic
and esthetic aspects of the design process. The examination of unpopular topics like
revitalization, cost and project management, or the divergences of architecture and
computing demonstrates the omnipresent fluctuation between the dichotomies in
architecture. And since architecture presents itself as science, art and technology, and
in some cases even as mercantilistic action, the planning process of a building is also
subject to such dichotomies. If architecture follows the principles of Vitruvius, then it is
the felicitous aggregation of duality and the mastery of complexity that characterizes the
architect's work, as well as his primordial position as a generalist4. But how is it possible to

walk this tightrope and create a balance between the disciplines? What kind of tools do we
have at our disposal and how do we codify emerging digital technologies in this context?

To dwell on the extraordinary writings of the ancient world: Vitruvius5 already described the
first principles of an approach that we name rule-based or generative design. In his work
The Ten Books on Architecture he reflected on the particular social importance of
architecture and proposed rule-based procedures, whose contemporary reinterpretation
seems difficult for today's architects to follow. He also complained about the emerging
dilettantism in architecture, and it is conjecturable that the aim of his rules was to enable
less talented master builders to accomplish their work. Potentially he already treated
the use of rules as a means of quality assurance. Without doubt he made use of a system of
rules and reverted to his knowledge of other disciplines in the design and construction of his
buildings.
A leap into the present: in his capacity as master builder, the architect can hardly
accomplish the application of a countless number of building codes, standards, ordinances
or statutes. Ancient dilettantism is nowadays opposed by the architect's role as a
generalist. Therefore, it is necessary to develop appropriate methodologies to resolve this
controversial antagonism. The architect needs to have tools at his disposal that guarantee
the successful processing of the overall planning. These have to provide the
problem-specific knowledge, in terms of rules, that leads to functional and plausible
solutions in the planning process6. Generally speaking, rules are implements that
derive from experience. It is generalized knowledge, coming with experience, that is
mapped into a single rule. Even if many architects deny the existence of design rules, it is
pretty obvious that design is geared to the use of rules. Alongside objective rules that
represent indispensable normative elements of the design process, subjective rules most
notably influence the peculiarity of a design solution in terms of esthetic characteristics.
Not only Vitruvius' canon of rules but also design theories such as Feng Shui or Vaastu
document the existence and use of rules7. Many architects also made use of rules
throughout their careers. Not only ancient architects like Vitruvius but Renaissance
architects such as Andrea Palladio prove their existence as well. Even today, or in recent
history, one can find prominent examples. Camillo Sitte8 described them in 1889, as did
Viollet-le-Duc and Jean-Nicolas-Louis Durand9, who developed systematic and rational
design methods based on rules. Quadratic proportions in the buildings of Oswald Mathias
Ungers, curved forms in the work of Zaha Hadid, organic elements in the work of Santiago
Calatrava or the use of material in the work of Shigeru Ban are nothing more than the use
of rules. Today's projects that were significantly developed through the use of computers
also demonstrate the use of rules. Kees Christiaanse uses them in urban planning10, and the
buildings of MVRDV, NOX, Asymptote Architects or Frank O. Gehry are also clearly defined
by the use of rules. How else would it be possible to recognize the work of a specific
architect other than through the extraction of its intrinsic properties in the form of rules?
Not all architects admit to the use of rules. An exception is Christopher Alexander,
whose Pattern Language was beloved and berated in equal measure11. But it appears that
the ideas of those architects who employed rules not only in the form-finding process but
also in the detection of constructive, functional or economical solutions no longer exist, or
at least no longer play a prominent role.

However, current technology seems to provide every possibility for architects to define rules
and to benefit from their usage in the design process. A new way of thinking and designing
has been established that is particularly driven by the software industry. These protagonists
of a good idea and their products theoretically allow the architect to integrate building
design, construction and management processes in the planning through the use of a single
tool, which has been named BIM. In conjunction with parametric modeling, a powerful tool
emerged that promises to leave nothing to be desired. But is that really so? Indeed, BIM
seems to satisfy the claims of those who want to work on an integrated model that maps all
the different aspects of the design process throughout all stages of the planning. But who
integrates the various rules we were talking about earlier? Where are the rules that
map fire regulations, material properties, lighting conditions, aspects of building services
engineering, planning law, building standards, ordinances, guidelines and the many other
items we have to deal with? Indeed, all these things can be scripted and integrated into the
programs. But do we want the average architect to script local building codes into his
software? And do we then have to script these codes for each and every state in
which we design a building? It is almost cynical: industry leaves us with a
declaration of intent, provides an empty framework, and architects cheer.
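As a rough sketch of what "scripting a building code into the software" could amount to, consider encoding one requirement as a checkable rule over a toy room model. This is a hypothetical illustration: the thresholds, field names and the rule itself are invented, and no real BIM API or actual building code is used.

```python
# Hypothetical sketch: one code requirement encoded as a checkable rule
# over a toy room model. Thresholds and field names are invented for
# illustration, not taken from any real building code or BIM product.

def check_egress_width(room):
    """Invented rule: total exit width must be at least 0.3 cm per occupant
    and never below 90 cm."""
    required_cm = max(90.0, room["occupants"] * 0.3)
    return room["exit_width_cm"] >= required_cm

def run_checks(model, rules):
    """Apply every rule to every room; report (room, violated rule) pairs."""
    return [
        (room["name"], rule.__name__)
        for room in model
        for rule in rules
        if not rule(room)
    ]

model = [
    {"name": "lobby", "occupants": 700, "exit_width_cm": 180.0},  # needs 210 cm
    {"name": "office", "occupants": 12, "exit_width_cm": 95.0},   # needs 90 cm
]

violations = run_checks(model, [check_egress_width])
print(violations)  # [('lobby', 'check_egress_width')]
```

The point of the sketch is the scaling problem the text names: each jurisdiction's codes would need their own such rule set, which is exactly the work the empty framework leaves to the architect.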

3. The second protagonist - design experience

At the beginning of the 80s, a simple but plausible idea made waves among researchers
who dealt with the development of knowledge-based computer systems. The crucial factor
was the observation that experts in different domains intensely made use of their own
experiences during problem-solving. Researchers came to the conclusion that artificial
expertise could not be modeled solely through a system of rules12,13,14. Thinking and
problem-solving processes were considered as reminding processes, during which decisions
were made through a comparison of a new situation with one or more concrete instances
deriving from past situations. This assumption reformed the approach of knowledge-based
systems with regard to the representation of knowledge. Until then, researchers had tried
to record knowledge as abstractly and generally as possible, in the form of rules or models,
to be capable of applying this knowledge to a wide variety of different situations15. With
the publication of his Dynamic Memory theory, Roger Schank raised a plea against this
course of action16. He argued that knowledge has to be represented in the form of concrete
instances of specific episodes of experience. Thus, a formalism was developed through
which experiences could be described in terms of three parts: the problem description,
the solution of the problem, and the outcome.

Rule-based systems as well as systems that work with concrete instances of experiences
both rely on the usage of experiences from past problem-solving situations, but they differ
in the abstraction of knowledge. Whilst rules are small, independent and primarily consistent
pieces of expert knowledge, concrete experiences contain large pieces of knowledge that are
also often redundant with other experiences, usually denoted as cases19. Rule-based systems
are applicable to disciplines that are well known. A small number of rules is often sufficient
to successfully solve specific tasks. Weak-theory domains such as architecture are better
covered through the use of systems that rely on concrete experiences20. Design is obviously
a process that is profoundly based on experiences. Thus, an amazing amount of research in
the 90s aimed at the development of digital methods to support this course of
action. Surprisingly, the methods that were developed are of no relevance today.
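The formalism described above, cases holding a problem description, a solution and an outcome, together with retrieval by reminding, can be sketched in a few lines. The cases and the similarity measure below are invented for illustration and carry no architectural authority.

```python
# Illustrative sketch of experience-based (case-based) retrieval: each
# case stores a problem description, a solution and an outcome, and a
# new problem is answered by retrieving the most similar past case.
# Cases and similarity weights are invented for illustration.

CASES = [
    {"problem": {"span_m": 8, "climate": "cold"},
     "solution": "timber frame, triple glazing",
     "outcome": "no thermal defects after five winters"},
    {"problem": {"span_m": 30, "climate": "cold"},
     "solution": "steel truss, heated roof gutters",
     "outcome": "ice damming avoided"},
]

def similarity(a, b):
    """Crude similarity: climate must match; spans count more the closer they are."""
    score = 1.0 if a["climate"] == b["climate"] else 0.0
    return score + 1.0 / (1.0 + abs(a["span_m"] - b["span_m"]))

def retrieve(new_problem):
    """Return the stored case most similar to the new problem (the 'reminding')."""
    return max(CASES, key=lambda c: similarity(c["problem"], new_problem))

best = retrieve({"span_m": 28, "climate": "cold"})
print(best["solution"])  # steel truss, heated roof gutters
```

Unlike a rule, the retrieved case carries its outcome with it, which is precisely the redundancy, and the usefulness, that the text attributes to concrete experiences.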

4. Reconciliation and conclusion

Computer programs that aim at supporting the architect in the planning of buildings require
the integration of knowledge that can be formalized. This knowledge rests upon experiences
that to some extent can be implemented in rules. However, many common problems in
planning are too complex to be mapped with such traditional approaches21. This gap
could be closed by means of experience-based systems if their practical application
coincided with the underlying theory22. But the fact that the theory claims to
describe experiences, whereas existing applications merely represent solutions to usually
unknown problems, takes the original approach ad absurdum23,24. Rule-based systems and
experience-based systems prove to be interdependent. Their conceptual methods are
closely related to each other. Both methods demonstrate scope and potential for successful
implementation in the planning process. Whilst rule-based systems seem to be gaining
acceptance on the basis that they can be applied to support the creative design
process, experience-based systems still lead a wallflower existence. This is all the
more surprising as exactly these systems might be able to aid the architect in the obnoxious
territories of the design process. Apparently, not only technical feasibility plays an important
role in the adoption of the methods that come along with these systems, but also their
general acceptance. Furthermore, the use of such systems also entails a debate about a
predefined set of criteria to measure the quality of an architectural design solution.

But what we actually observe is the exact opposite. Parametric modeling and knowledge-
based systems were introduced in precisely those areas that elude systematic assessment
according to objective criteria. In this regard Rambow remarks that "the architectural
profession has to satisfactorily show that it performs services that are useful for the general
public. To provide proof of these services is considerably more difficult than one might
think. That is to say, it necessitates reaching a consensus on what these services actually
are and how to determine them adequately"25. Hence architects avoid verifiable evaluation
of their buildings. And even though technological possibilities have never been more
promising, arbitrariness is the problem, or in other words: "the necessity to make a choice
without helpful criteria"26. In summary, it has to be said that the possibilities that arise from
the use of parametric systems, be they rule-based or experience-based, are inadequately
utilized. Nonetheless, manufacturers of digital tools, and especially of BIM technologies,
ballyhoo their programs with concentrated passion as sustainable products, whereas their
application degenerates into mere visualization and form-finding with the sole advantage of
a common building model. A good idea degenerates into meaninglessness and relinquishes
itself to ridiculousness if manufacturers award a prize for BIM usage and the awardee
commends the software's ability to visualize the design work in early design stages27. But
not only the architect's general attitude towards new technologies has to be questioned; the
same applies to the manufacturers' product lines, the more so as the number of tools
available to support sustainable planning is, to put it mildly, manageable. The architect can
only be indirectly accused of this development. Nonetheless, it is his responsibility to exert
pressure on the industry to develop appropriate tools, and it is likewise his responsibility to
define rateable criteria for the outcome of his work. But as one of its last bastions,
architecture still hides behind the fallacious assertion that no criteria or principles for
architectural quality can be defined. It insists upon an antiquated claim to originality, which
describes buildings as being of good quality only if they conform to the prevalent canon of
architectural form or if they are judged significant by those who carry weight28. Jäger puts
it more drastically: "the success of those architecture stars does not rely on the fact that
they master complexity; it relies on the fact that they negate complexity"29. Therewith he
certainly has a point. Or would you like to work in an ecological high-tech building without
functioning ventilation, or commission an architect who is time and again surprised by the
cost of the buildings he designs? But who cares?

5. Endnotes

1. Jäger F., "Wann ist Architektur gut?" in: Der Architekt: Zeitschrift des Bundes Deutscher
Architekten, 1(2), 2002, pp. 52-56.
2. Faulkner T., http://valleywag.gawker.com/320184/tech/subparchitecture/mit-sues-gehry-for-
negligent-design, 2007, last viewed: 04-02-2009.
3. Glymph J., Shelden D., et al., "A Parametric Strategy for Freeform Glass Structures Using
Quadrilateral Planar Facets" in: ACADIA 2002, Thresholds. Design, Research, Education and
Practice in the Space Between the Physical and the Virtual, Conference Proceedings, Proctor
G. (ed.), California State Polytechnic University, Pomona, USA, ACADIA, Association for
Computer-Aided Design in Architecture, pp. 307-325.
4. Vitruvius M., The Ten Books On Architecture, Dover Publications, 1960.
5. Vitruvius M., ibid.
6. Loemker T.M., Plausibilität im Planungsprozess, PhD thesis, Bauhaus-Universität Weimar,
Germany, 2006.
7. Loemker T.M., ibid.
8. Sitte C., Semsroth K., et al., Der Städte-Bau nach seinen künstlerischen Grundsätzen, Wien.
9. Hearn M.F., Ideas That Shaped Buildings, Cambridge, Mass., MIT Press, 2003.
10. Christiaanse K., http://www.kaisersrot.com, 2004.
11. Alexander C., Ishikawa S., et al., A Pattern Language: Towns, Buildings, Construction, New
York, NY, Oxford University Press, 1977.
12. Kolodner J.L., Case-Based Reasoning, Morgan Kaufmann Publishers, Inc., San Mateo, 1993.
13. Schank R.C., Dynamic Memory: A Theory of Reminding and Learning in Computers and
People, Cambridge University Press, Cambridge, Mass., 1982.
14. Schank R.C., Dynamic Memory Revisited, Cambridge University Press, Cambridge, 1999.
15. Kolodner J.L., op. cit. 12.
16. Schank R.C., op. cit. 12.
17. Kolodner J.L., "Improving Human Decision Making through Case-Based Decision Aiding," in:
AI Magazine, 12(2), pp. 52-68, 1991.
18. Kolodner, op. cit. 12.
19. Kolodner, op. cit. 12.
20. Heylighen A., In Case of Architectural Design: Critique and Praise of Case-Based Design in
Architecture, PhD thesis, Faculteit Toegepaste Wetenschappen, Department ASRO, K.U.
Leuven, Leuven, Belgium, 2000.
21. Gritzmann P., Brandenberg R., Das Geheimnis des kürzesten Weges: ein mathematisches
Abenteuer, Berlin, Springer, 2003.
22. Richter K., Donath D., "Towards a Better Understanding of the Case-Based Reasoning
Paradigm in Architectural Education and Design - A Mirrored Review," in: Communicating
Space(s) [24th eCAADe conference proceedings], Volos, Greece, 2006, pp. 222-227.
23. Richter K., Donath D., "Augmenting Designers' Memory - Revisal of the Case-Based
Reasoning Paradigm in Architectural Education and Design," in: Electronic Proceedings of the
17th International Conference on the Applications of Computer Science and Mathematics in
Architecture and Civil Engineering, Gürlebeck K., Könke C. (eds.), Weimar, Germany.
24. Richter K., Donath D., op. cit. 22.
25. Rambow R., Moczek N., "Nach dem Spiel ist vor dem Spiel - Evaluation und Baukultur," in:
Deutsches Architektenblatt, 3, 2001, pp. 24-25.
26. Fromm L., "Baukultur - Ein architektonischer Diskurs" in: Der Architekt: Zeitschrift des
Bundes Deutscher Architekten, 1(2), 2002, pp. 38-41.
27. Autodesk, http://www.autodesk.de/adsk/servlet/item?siteID=403786&id=11996804, 2009,
last viewed: 04-02-2009.
28. Franck D., Franck G., "Von der poetischen Kraft der Architektur," in: Der Architekt:
Zeitschrift des Bundes Deutscher Architekten, 1(2), 2002, pp. 42-47.
29. Jäger F., op. cit. 1.

Towards the Designer's Agency

Software Mannerisms for Computational Design

Orkan Telhan
MIT Design Laboratory, USA.


Given the broader rhetoric of design paradigms, methods and tools, literacy in
computation increasingly determines the capacity to shape the thinking process behind
architectural design and the designer's agency in comprehending his/her toolset.

In this paper, I intend to identify a need for studying the culture of design software by
bringing to attention a number of perspectives that shape software as a social, cultural
and political artifact. Three software mannerisms are presented to suggest an experimental
and speculative way of thinking about the nature of computational design and to bring to
attention a broader social, cultural and political discourse that implicitly or explicitly shapes
not only design tools but also their underlying technologies.

Instead of prescribing different roles for the designer (e.g., designer as tool-maker,
designer as hacker, designer as tool-consumer, etc.), the primary focus here is on
diversifying computational processes and conceiving different roles for different kinds of
designers as they maneuver their thinking through critical, speculative or purely technical
means in architectural design.

1. Introduction

The lure of the digital undeniably challenges the familiar relationship between designers,
the design practice, and the tools that shape the process. We witness an abrupt transition
from the times when designers were comfortably aware of the capacity and the limits of
their tools to uncanny times when designers find themselves continuously negotiating
almost every step of the design process, from creative thinking and fabrication to control
and documentation, within the language of computation. And as with all languages, if one
is not born into it, it is always hard for the non-native speaker to comprehend the language
of the other in a comfortable way, let alone figure out what he/she can really demand more
from it.

Matthew Fuller, in his editorial work, calls this "software studies": he unpacks the "stuff" of
software through a number of critical texts that investigate the digital objects, languages
and logical structures that form the lexicon of computation (Fuller 2008). Algorithms,
functions, variables, operations, code, control, memory, compression, interfaces and many
other terms become objects of this study, while their cultural, social, political and technical
uses are articulated by artists, hackers, cultural theorists and computer scientists at the
same time.

Here, in a similar vein, I intend to follow a strategy to unveil some of the thinking behind
the computational design process. My scope will be much more limited and may perhaps be
regarded as speculative, as it will not point towards immediately utilizable strategies that
would allow us to conceive critical design tools.

However, by exposing some of the implicit technical and political agendas behind
computation, it might be possible to start identifying ways to penetrate this thick language
and perhaps contribute to the level of literacy with which we comprehend our own design
tools, which may eventually produce a built environment that communicates a critical and
speculative nature through its designers.

After articulating what it may mean to have a culture of design software study, I will
discuss three mannerisms regarding the uses of software that expose how strategies of
critical thinking can utilize the medium of computation and fundamentally challenge the
assumptions we project onto our computational design tools.

2. Critical Thinking, Speculative Tools and Software

If what is at stake is the notion of agency, gained or lost, while we seek more literacy within
the software culture, it is worthwhile to question the role of the designer and identify ways
that he/she can demand more, in order to overcome this intrinsic alienation between the
world of design and the tools and paradigms that are designed for it.

If today's top-down design tools are a brief response to the immediate, prescribed needs of
computational design, which impose a particular socio-economic framework onto the
designer from representation to fabrication, then there is a pressing need to move on to
alternative styles of thinking, perhaps to invent new mannerisms that propose alternatives
to the existing paradigms, techniques, processes and/or machinery prescribed by software
makers.

Instead of casting stereotypical roles onto designers (e.g., designer as tool-maker, designer
as hacker, designer as tool-consumer) and imposing a technocratic evaluation on the
relation between designers and their tools, it is worthwhile to explore the possibility of
forming a culture of design around the study of its software, where varying degrees of
literacy in computation can be combined with social, cultural and theoretical studies
articulated by different kinds of designers.

At the first stage, diversifying the thinking process by computational means might even be
more important than the way a particular design tool may be conceived and used in critical
or speculative ways. Thus, the practices of crafting algorithms for flamboyant forms and
extravagant geometries, scripting graphics on programmable facades, and building
interventionist robotic architectures can each identify their own territory within the broader
rhetoric of computational design culture.

3. Software Mannerisms

The following is a discussion of three peculiar cases about computation that utilize a level of
experimental thinking to explore alternative experiences in the design process. While the
speculative nature of the cases does not allow easy answers or yield definitive remarks for
software studies, the real intention behind them is to bring to attention what we can study
in a quest for critical design software.

Here, the primary intent behind these selections is to reflect on the abstract foundations of
software and to show how this abstraction implicitly veils a number of issues from the
designer.

With the first case, I look at the basic software construct of looping, to ask simply why we
loop, iterate, and generate within the computational design process. Is this because of an
intrinsic property imposed by the language of computation, or do we, as designers,
deliberately utilize our agency and iterate, generate and loop in response to a call from the
design process?

The second case investigates a confrontation between alternative ways of using
computation, in this case for activating building façades. By referring to two radically
different architectural interventions, I intend to bring into question the role of symbolic and
physical transformation and their subtle difference in negotiating the meaning
communicated by the building.

The third case questions a general neglect of the social, cultural and political foundations of
computation and puts into question the generality or universality of computational design
tools, to reflect on the possibility and impossibility of vernacular software.

3.1. Looposcope

Being one of the fundamental concepts of computation, the loop is the underlying logic of
iteration and recursion. It sits at the very core of every programming language as the
theoretical construct that allows a set of instructions to be executed repeatedly: a given
number of times, until a condition is met, or forever.
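The three loop forms named here (a fixed number of times, until a condition is met, forever) can be written down in a few lines; the snippet is a generic Python illustration, not tied to any particular design tool.

```python
# The three loop forms in their simplest shape.

# 1. Repeat a fixed number of times.
cells = [i * 10 for i in range(4)]         # four iterations: 0, 10, 20, 30

# 2. Repeat until a condition is met.
height = 1.0
storeys = 0
while height < 8.0:                        # stop once the tower reaches 8 m
    height *= 1.5
    storeys += 1

# 3. Repeat "forever" -- the generative run-forever case,
#    shown here with a break so the sketch terminates.
ticks = 0
while True:
    ticks += 1
    if ticks == 3:
        break

print(cells, storeys, ticks)
```

Every instance of tiling, patterning, or generative repetition discussed below reduces, at the level of the language, to one of these three shapes.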

A close look at what is being generated, repeated, tiled, patterned, and modulated in
computational design exposes the subtle yet intimate relationship between looping and
algorithmic architecture, where, through repeating its object, the loop weaves an abstract
topology of symbols within the space of computer memory and builds the generative form.
But why do we loop when we design? Do we loop because it is intrinsic to the language, or
is there any motivation coming from the experiential domain that guides our imagination?

While what the loop does is well understood in the symbolic and representational domain, it
is not very clear what it maps onto in the experiential domain of architecture: What does it
mean to loop one or many times? Where does the urge of iteration and repetition come
from?

While the questions remain unanswered, Wilfried Hou Je Bek identifies a need for a mighty
theoretical apparatus, a "looposcope," a computational aid to reveal this unearthly beauty
and the way it maneuvers within the time and space (of computer memory). This
hypothetical device would recreate the (mnemogenic) experience of looping in another
medium and perhaps give us a glimpse of what is going on in the background, beyond what
we know as symbolic manipulation (Fuller 2008, 179).

Once conceived, it should not be very difficult to call this tool to duty to shed more light on
the paths of the cyclical imagination that shapes a large portion of the algorithmic world
around us, as if we ourselves experience looping (Fuller 2008, 179).

Perhaps, being fundamentally detached from such experience (of iteration), the very
material of the generative process, we are quite desensitized to looping and have really lost
contact with why we iterate, repeat, tile and pattern in design practice. Perhaps the
looposcope and its kin would be indispensable for future generations who feel a stronger
need to gain access to the less familiar experience of the language of computation, where
an abstract sense of materiality awaits to be experienced.

3.2. The Crowbar for Turning the Place Over

Gordon Matta-Clark is not Mies van der Rohe. For Fuller, an idiosyncratic formalism
concerned with the structural activation of severed surfaces, ranging across cuts, ruptures
and punctures of buildings, does not easily align with the functionalist urban sublime of the
glass and steel skyscraper practiced by the latter architect (Fuller 2003, 42).

A procedural, structural and material investigation of the malfunctions of urban space
"complexifies" space in terms of its placehood, as an object, and within its social,
chronological and economic contexts (Fuller 2003, 42). This is a different kind of
complexity, based on the experience of time, on human habitation or the lack of it, and on
connotations of life embedded inside the structure of a building.

Matta-Clark's crowbar, which functions as a symbolic, strategic, and systematic tool that
makes it possible to conceive such (complex) malfunctions, does not easily translate into
the dematerialized world of software, where complexification becomes symbolic and
geometric manipulation within a dry-clean synthetic space (Fuller 2003, 43).

However, such a structural and material investigation, in the spirit of Matta-Clark's work,
happens in Richard Wilson's Turning the Place Over. This is a surface intervention in which
a circular section of a building façade rotates 360 degrees and plugs back into its existing
place (Wilson). As the piece rotates around itself, it raises questions about the nature of the
procedural transformation and where it operates. Does the disruption perform within the
physical space, within the symbolic domain, or on the expectations of the viewer?

Figure 1. Views from Turning the Place Over, at the Liverpool Architecture Biennial.

Flare, on the other hand, is a modular, programmable, kinetic "ambient reflection
membrane" that performs surface animations for any building, façade or wall. Acting like a
living skin, it allows a building to express, communicate and interact with its environment
(Flare-facade). Flare transforms the surface pictorially, and what is on display is the
imagery of the functionalist digital sublime that is becoming increasingly prevalent in
billboard-like façade architecture.

Figure 2. Flare, by WHITEvoid interactive art & design.

Turning the Place Over is not Flare. While the software that performs the actuation and
control in both interventions is intrinsically similar in nature, the two are radically different
in their objectives. As both designs are systems of display and animation, either can be
read as a functional or malfunctional disruption with regard to its communicative nature.
The site of transformation, on the other hand, is inherently different for each. While
Wilson's piece is the rupture itself, Flare continually awaits its own, still in search of its
crowbar.

To repeat Fuller's question: what is the crowbar within the lexicon of the computational
design toolbox? Or how long do we have to wait to see a Matta-Clark intervention into
programmable surface culture? What will be the material or immaterial hack to the surface
that destabilizes our conformist beliefs about the nature of the digital ornament that
beautifully flows over the surface and decorates the underlying geometry?

3.3. Cornrow Curves and Vernacular Software

While architecture has a long tradition of negotiating the meaning and use of the vernacular
(tools, materials and methods of construction) in the design process, it is quite rare to see
a similar (critical) attitude towards computational design tools, one that questions their
ethnic, social, cultural and political foundations, which have been shaped in different times
and different geographies.

Tedre and Eglash's work discusses a suite of culturally-situated design tools that are used
to model the computational aspects of indigenous designs (e.g., iterative patterns in
weaving, beadwork, basketry, etc.), simulate geometric transformations and allow the
creation of new designs (Fuller 2008, 95).

Framed under the vision of ethnomathematics and ethnocomputing, this work provides
interesting perspectives on a different kind of anthropology of computing, one that studies
the ways similar mathematical and computational problems have been addressed in
different cultures.
The Cornrow Curves design tool, for example, teaches transformational geometry through
cornrow hair-braiding techniques, an African tradition dating back to 500 B.C. The software
not only allows a number of operations to simulate and observe different braid geometries
and to explore alternative curve equations, but also couples the technical endeavor with
the underpinning cultural context, explaining why such braiding has been politically
important as a sign of resistance among the African-American population during the Civil
Rights era.
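The kind of transformational geometry such tools teach can be sketched generically: each plait of a braid is a scaled, rotated, translated copy of its predecessor. The function and parameters below are arbitrary illustrations, not values or code from the Cornrow Curves software.

```python
import math

# Iterative transformation in the spirit of braid modeling: each step
# applies the same scale/rotate/translate to the previous plait.
# All parameters are arbitrary illustrative choices.

def braid(steps, scale=0.9, angle_deg=10.0, stride=1.0):
    """Return (x, y, size) for each plait along the braid."""
    x, y, size, heading = 0.0, 0.0, 1.0, 0.0
    plaits = []
    for _ in range(steps):
        plaits.append((x, y, size))
        heading += math.radians(angle_deg)       # rotate the heading
        size *= scale                            # shrink each plait
        x += stride * size * math.cos(heading)   # translate along heading
        y += stride * size * math.sin(heading)
    return plaits

plaits = braid(5)
# Plait sizes decay geometrically: 1.0, 0.9, 0.81, ...
```

Varying the scale and angle parameters traces out different curve families, which is the exploratory move the text describes.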

Exposed as a cultural artifact, with this software we not only witness computation on
abstract geometry, but also a procedural intervention into the political agenda of the
design, one that is often hidden from the assumed Western "knowledge worker" (Eglash
2001).

Neither your favorite CAD program nor the script that performs a geometric expression in it
typically cares about what the object of the design is, its material embodiment, its
fabrication technique, or the political agenda it responds to. Computation often safely
operates under deep layers of abstraction that decouple the artifact from its context and
pass the responsibility for criticality to the user, the designer.

Figure 3. Cornrow Curves software.


While the distinction between Western and non-Western may or may not hold for your
desired critical agenda, it is still valid to ask: can there be such a thing as vernacular
software, one that utilizes non-Western techniques, logic, algorithms and data structures?

What would it really mean to conceive a non-Western house in a Western tool? Does it
make any sense? If it did, would the different layers of so-called Western representation
techniques allow you to confront different values, logics and geometric interpretations? Do
we have to wait for the computational process to be over to revisit the social, cultural and
ethnic agenda of the work, that which has been left behind by the abstraction process?

Does the dichotomy between the genericness, generality and fictitious universality of the
computational design tool and its content need to hold forever (Fuller 2008, 155)?

Or does ethnicizing design computation invite a new kind of naivety into architecture
software, one that will inevitably dwell on an intrinsic exoticism of the non-Western,
studying the so-called other's way of computing while affirming a critical distance from the
subject of study, a distance once argued to establish (safer) Western grounds?

4. Conclusion

Given the broader rhetoric of design paradigms, methods and tools, literacy in computation
increasingly determines the capacity to shape the thinking process behind architectural
design and the designer's agency in comprehending his/her toolset.

In this paper, I intended to identify a need for studying the culture of design software by
bringing to attention a number of perspectives that shape software as a social, cultural and
political artifact.

Borrowing a number of strategies from Matthew Fuller's editorial work Software Studies, I
investigated three perspectives, called software mannerisms, which can contribute to the
literacy of computational tool design and diversify the thinking behind it. Such design
software studies primarily address a discourse that extends the agency of the designer and
contributes to a critical design thinking process, as opposed to imposing a technocratic
agenda that would only cast stereotypical roles onto designers, such as designer as tool-
maker, designer as hacker, or designer as tool-consumer.

Given the limited scope of the paper, follow-up work is needed to identify more points of
intervention for design software studies and to form the basis of thinking for new kinds of
computational tools that can address related issues during the design process.


1 Matthew Fuller (ed.), Software Studies, Cambridge: MIT Press, 2008.

2 Matthew Fuller, Behind the Blip: Essays on the Culture of Software, New York:
Autonomedia, 2003.
3 Richard Wilson, Turning The Place Over (accessed April 2, 2009).
4 Flare: kinetic ambient reflection membrane by WHITEvoid.com (accessed April 2, 2009).
5 Ron Eglash and J. Bleecker, "The Race for Cyberspace: Information Technology in the
Black Diaspora," Science as Culture, vol. 10, no. 3, 2001.

Exploring the implications of digital technology for traditional notions of value in design.

Josh Lobel

As design practices struggle to evaluate their roles within an industry dependent on the
production and distribution of digital information, it is important to reflect critically on what
constitutes value in contemporary design. Traditional sources of value were provided by
individuals and design teams in the form of hand-drawn, paper-based deliverables. The
promotion of digital technology to define, store, and integrate the parameters and rules of
a design process suggests that this may no longer be the case. The value proposition of
contemporary digital design systems is to reduce the amount of time designers spend on
repetitive tasks by providing the means to abstract and integrate various sources of design
information and to define logical rules for the management of that information. If we agree
that this is the correct course of development, then it is imperative that the parties involved
understand how these systems are structured in order to manipulate the data they contain
effectively and efficiently. The difficulty in achieving these ends is not technical, but
perceptual. A series of experiments in the expression of geometric forms through natural-
language algorithms was conducted to explore this argument.

1. Introduction

The need to reestablish standards of value is implicit in the negotiations among design and
construction practitioners regarding the impact of digital design technology on their
traditional roles and responsibilities. Traditionally, the designer1 was the originator, main
repository, and manager of design information. The designer carried out the necessary
processes of interpreting design ideas and translating those ideas into graphic design
documentation. I will refer to the various transformations of information during this
translation process as the conceptual states of information. The various conceptual states
and methodologies employed during these translation processes represented a primary
source of value in the design process. A designer's professional value was based upon
his/her knowledge of contractual and professional roles and responsibilities and his/her
ability to incorporate this knowledge during the creation of design documents in accordance
with a set of graphic standards2. The flow of information effectively went from head to
page via the hand, one drawing at a time. Prior to the widespread use of CAD systems, the
instrumentation used in the production of design documentation held relatively little
commercial value. Once the use of CAD systems became viable in professional practice, the
perception of the value of instrumentation changed, and value propositions in the design
industry became more techno-centric. CAD systems have since replaced traditional hand-
drawing as the standard format for the production of design information and
documentation. However, the potential added value of these systems promoted by
developers and practitioners alike has changed very little3.

The most recent development of CAD systems is organized around a conceptual framework
that is meant to provide a comprehensive digital simulation of the physical and functional
elements of the built environment. Arguably, any digital representation which makes
reference to these criteria is generically referred to as a Building Information Model (BIM).
Such systems are meant to function as digital repositories into which geometric, financial,
scheduling, analysis, and other types of project-relevant information can be fed,
manipulated, and regurgitated. These systems are also meant to act as managers of these
digital repositories. The flow of information is now meant to go from head to BIM, and from
BIM to everything, acting as what Dennis Shelden referred to as a "catalytic force" for
"directly repurposing information through various stages of project definition and
execution"4. The general goal is to integrate the various sets of information produced by
project teams and to reduce the amount of time spent reproducing those sets of
information at each moment of exchange between team members or at each phase of a
design project. Many of the impediments to this goal are assumed to stem from the
incompatibility of data formats created by the various proprietary CAD systems used by the
designers, consultants, fabricators, and contractors that comprise project teams.
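The "from head to BIM, and from BIM to everything" flow can be caricatured in a few lines: one shared repository of elements, several discipline-specific readings of it. The fields and views below are hypothetical simplifications, not any vendor's schema.

```python
# A caricature of the BIM idea: one shared repository of building
# elements, repurposed by several downstream views. Fields and views
# are hypothetical simplifications, not any vendor's schema.

model = [
    {"id": "W1", "type": "wall", "area_m2": 24.0, "unit_cost": 80.0,  "phase": 1},
    {"id": "W2", "type": "wall", "area_m2": 18.0, "unit_cost": 80.0,  "phase": 2},
    {"id": "S1", "type": "slab", "area_m2": 60.0, "unit_cost": 120.0, "phase": 1},
]

def cost_view(model):
    """The estimator's reading of the same data."""
    return sum(e["area_m2"] * e["unit_cost"] for e in model)

def schedule_view(model, phase):
    """The scheduler's reading: element ids in a given phase."""
    return [e["id"] for e in model if e["phase"] == phase]

total = cost_view(model)          # 24*80 + 18*80 + 60*120 = 10560.0
phase1 = schedule_view(model, 1)  # ['W1', 'S1']
```

The point of the caricature is that neither view reproduces the data: both repurpose the single model, which is exactly the integration goal described in the paragraph.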

A series of informal, empirical experiments in the communication of design information was
conducted to explore the intermediary processes that effect the translation of information
between conceptual states. I propose that understanding why a particular method of
translation was selected is an important next step in reassessing the value of design. In
section two, I describe the basis of the experiment and how it was carried out. In section
three I present the results of the experiments. I conclude in section four with speculations
on the results and opportunities for future work.

2. Experiment

To explore how people involved in a design process understood design information, I
created an exercise aimed at testing the efficacy of communicating geometric information
as a procedural logic in the form of a natural-language algorithm. The experiment is
composed of two parts and involves two separate groups of participants; each group
participates in only one part. Each individual in the first group is given a hard-copy
document containing a three-dimensional geometric figure along with several orthographic
projections of that figure (see Figure 1). The individuals in the first group are instructed to
create a written procedure in natural language (an algorithm) from which the given figure
can be derived. These procedures are then distributed to the second group, who have not
seen the initial shape. Assuming an equal number of participants in both groups, each
participant in the second group is given a procedure created by someone in the first group
and is asked to derive a shape from it, following the procedure explicitly. Participants in the
second group are asked to note any assumptions they feel they must make in order to
complete the derivation as a result of what they perceive to be insufficient information.
Participants in the first group are told that they may assume CAD software will be used in
the derivation of their procedure, but the use of such software by the second group is not
required.

Figure 1: Geometry used in the experiments.

The experiment was initially conducted in March 2008 with a group composed of graduate
students in the Architecture Department at MIT, information technology (IT) staff in the
same department, and a few practicing architects from various firms in the U.S. A second
experiment was conducted in November 2008 with professionals at Front Inc., a design
consulting firm in NYC specializing in façade systems; the participants included architects,
engineers, and IT professionals within the firm. A third round of the experiment was
conducted in March 2009 with graduate and undergraduate students in the Architecture
Department at Cornell University. The methodology of conducting these experiments was
organic and informal. No claim is made that rigorous, scientific conclusions can or should
be drawn from the results. However, the results are provocative enough to provide
sufficient grounds for speculation and further investigation.
3. Results

The initial experiment at MIT involved six participants in the first group and nine participants
in the second group. Not all of the procedures from the first group have matching
derivations due to a lack of participation by members of the second group. The six
completed procedures ranged in length from 125 words to 1047 words, with an average of
366.5 words and a standard deviation of 339 words. Discounting the minimum and
maximum, the average was 257 words with a standard deviation of 46 words. Five of the nine
derivations were completed by graduate students and four by practicing architects. The
derivations completed by graduate students were all done using 3d CAD software,
specifically AutoCAD and Rhinoceros. Of those completed by professionals, two were drawn
by hand, one was a 2d AutoCAD drawing of a 3d projection, and one was a 3d AutoCAD
model. Based on a visual inspection, three of the student derivations were somewhat similar
in appearance to the original geometry, and two were dissimilar. All four of the
professionals' derivations were very similar to the original geometry (see Figure 2).
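The summary statistics reported here and in the following experiments amount to a mean and standard deviation of the procedure word counts, recomputed after discounting the minimum and maximum (a trimmed mean). As a minimal sketch, the word counts below are hypothetical — the paper reports only the summary figures, not the individual procedure lengths — and are chosen to reproduce the reported minimum, maximum, and averages, assuming the paper's "standard deviation" is the sample standard deviation:

```python
import statistics

def describe(lengths):
    """Mean and sample standard deviation of procedure word counts."""
    return statistics.mean(lengths), statistics.stdev(lengths)

def describe_trimmed(lengths):
    """Same statistics after discounting the minimum and maximum,
    as done in the paper's results."""
    trimmed = sorted(lengths)[1:-1]
    return describe(trimmed)

# Hypothetical word counts (only 125 and 1047 are given in the paper).
lengths = [125, 210, 240, 260, 317, 1047]
mean_all, sd_all = describe(lengths)       # ~366.5, ~339
mean_trim, sd_trim = describe_trimmed(lengths)  # ~257, ~45
```

Discounting the extremes here illustrates why the paper reports both figures: a single very long procedure (1047 words) dominates the untrimmed standard deviation.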

Figure 2: Derivations from MIT experiments (Top row: derivations by students; Bottom row:
derivations by practicing professionals).

The experiment conducted at Front, Inc. resulted in seven procedures and seven
derivations. However, as with the previous experiment, not all of those solicited returned derivations, so not all procedures were carried out. The completed
procedures ranged in length from 137 words to 452 words. The average length was 237
words with a standard deviation of 114 words. Discounting the minimum and maximum
length procedures the average was 214 words with a standard deviation of 70 words. Seven
of the derivations were completed with CAD software, six in 3d and one in 2d. The software
used included AutoCAD, Rhinoceros, SolidWorks, and CATIA. Based on a visual inspection,
five of the derivations were very similar to the original geometry, one was similar, and one
was somewhat similar (see Figure 3).

Figure 3: Derivations from experiment at Front, Inc.

The experiment at Cornell was conducted slightly differently from the previous two. Because
the experiment was conducted as part of an assignment for a design theory seminar, it was
necessary to have all students participate in both parts of the experiment. To accommodate
this a second shape was created for use in the experiment (see Figure 4). The students
were divided into two groups; one with six members, the other with five. Each group was
given a different shape from which to write a set of procedures. The groups then swapped
procedures and carried out the derivations. Again, because the groups were unequal, not all
the procedures from the six-member group were derived, and one procedure from the five-member group was derived twice. The length of the procedures ranged from 203 words to
1454 words. The average length was 480 words with a standard deviation of 365 words.
Discounting the longest and shortest procedures, the average length was 403 words with a
standard deviation of 177 words. Nine of the derivations were drawn by hand and two were
completed with Rhinoceros CAD software. Based on a visual inspection four derivations were
very similar to the original shape, two were similar, one was somewhat similar, and four
were dissimilar.

Figure 4: Alternate geometry used at Cornell University.

4. Conclusion and Future Work

Traditionally the successful materialization of design concepts did not necessitate or rely on
a direct link between the processes of designing and the processes of making. However,
success did rely on establishing an effective means of communicating between these two
processes. The process of design and the process of construction were related by a third
process of translation. The intent of these experiments was to explore this process of
translation. As suggested by the results, the process of translation in design is fraught with
assumptions, interpretations, and ambiguity.

The first part of this experiment required participants to deconstruct the geometry into
constituent elements. These elements were not predetermined and therefore reveal how
each participant deconstructed the shape into a set of constituent parts based on the goal of
communicating the assembly of these elements into a comprehensive whole through a set
of written instructions. The results of this phase of the experiment suggest that for a given
geometric shape there are indefinitely many descriptions. In addition, without a more
specific context for the work, other than the general communication of geometric
information, it is difficult to assess the value of the various descriptions. The context of the
work is imperative to understanding the appropriateness, or relative usefulness of one
description versus another. This suggests that attempting to evaluate specific
methodologies outside of the context in which they were implemented can prove problematic.

The second part of the experiment required a separate group of participants to interpret
those instructions and construct a geometric shape through a series of discrete steps,
without knowing what it was they were generating, or whether or not the resultant shape
was correct. The results of this phase suggest that any form of communication which
requires interpretation is necessarily fraught with ambiguity. Similar arguments have been
proposed by Reddy5 and Goodman6. Further, lacking any means of validation via some feedback mechanism, it is impossible to determine the acceptability of the results. These
exercises highlight the dynamic relationship between process (the creation of a procedural
specification) and product (the derivation) in the communication of design information. In
general, the majority of the derivations were graphically similar. Each specification however
was unique. I believe this explicit juxtaposition of a similar product from a set of dissimilar
processes is the most interesting and important result of these exercises.

CAD systems have since replaced traditional hand-drawings as the standard format for the
production of design information and documentation. The traditional flow of information
from the head of the designer to the page via the hand has been re-routed through the
keyboard and mouse. Where design instrumentation once carried relatively little commercial value, CAD systems and the data constructs they promise to deliver have become a market necessity for contemporary design firms. However, the potential added value of these systems, as promoted by developers and practitioners alike, has changed very little.7 It is the conceit of the design industry to believe that technology can
create value on its own. As the results of these experiments suggest, the means of
achieving certain ends is not simply technical, but perceptual. Design intent is made evident
by understanding not only how a particular goal is reached, but why one particular path out of
many was chosen to get there. It is the incorporation of this information as a key aspect of
design documentation which I believe presents an important next step in reassessing the
value of design.

It is important to acknowledge that design work is often the result of a group of designers
working together within a design team. The term designer will be used throughout this
paper to refer to both a single individual and teams of individuals.
See Ramsey, Charles George, and John Ray Hoke Jr. Architectural Graphic Standards, 10th ed. Wiley, 2000.
In 1975, Vladimir Bazjanac noted that one of the widely held beliefs about the advent of CAD systems was that "Computer applications will free the designer from distracting and unproductive activities and allow him to devote more time to design." Bazjanac V. "The Promises and Disappointments of Computer-Aided Design," in Reflections on Computer Aids to Design and Architecture, Negroponte N. (ed.) New York: Petrocelli, 1975, p.18. Thirty-two years later, in a 2007 textbook on Revit Architecture, a newly released BIM software, the authors, who were also involved with the development of the software, stated, "the intent of BIM is to let the software take responsibility for redundant interactions and calculations, providing you, the designer, with more time to design." Krygiel E., Demchak G., Dzambazova T. Introducing Revit Architecture 2008, Sybex, 2007, p.7.
See Shelden D. "Tectonics, Economics, and the Reconfiguration of Practice: The Case for Process Change by Digital Means," in Architectural Design, vol. 76, issue 4, 2006, p.83.
Reddy, M. "The conduit metaphor: A case of frame conflict in our language about language," in Metaphor and Thought, 2nd ed., Ortony, A. (ed.), Cambridge University Press, 1993.
Goodman, N. Languages of Art. 2nd ed. Hackett Publishing Company, 1976.

Josh Dannenberg
Asymptote Architecture

It must be worked toward anew.

With recent changes in the economy and the academy, certain digital techniques are at risk
of losing legitimacy in the field of architectural design, especially those related to digital
surfaces. Yet the solution might be found as a trait of the digital surface itself. Pliancy, a
quality once celebrated by Greg Lynn, Peter Eisenman and others, offers the ability for
technique to adapt under different circumstances. Pliancy in surfaces is described as an "internal flexibility and a dependence on external forces for self-definition," a description
that suggests a method of digital technique that transforms under challenging external
forces while maintaining an internal structure for meeting academic and market demands.
In this paper, mathematics is proposed as potentially the most universal pliant technique,
where the equations and calculations of a digital surface grant control over the digital medium, and disclose the medium's influence as the agency for technique's continuing transformation. In support of mathematics as a pliant technique, the paper offers two
design projects showing the comparative range of mathematics in digital design and the
effect of pliancy as a transformative apparatus.

Defining technique through the lens of the digital age can only belong to the present
generation. Technique as it is known today appears in journals, books, essays, thesis
projects and symposia. It dominates conversations in the halls of academic institutions, and
captivates our imagination with a range of applications for digital tessellations, structural
modules, transformations and design-based algorithms. Popular use of the term seems to
have reached a record high. But times are rapidly changing. With the recent global
economy in flux our understanding of technique is ever more at odds with the scarcity of
intellectual curiosity and financial capital, both of which fund its present-day popularity.

Like the economy, the future relevance of architectural technique is a difficult factor to
predict. In terms of its past relevance, the term has its earliest associations with the
Vitruvian principle of firmitas, meaning sturdiness or technical assembly, an association that
persisted more-or-less intact until the twentieth century1 when the influence of linguistics
and philosophy shifted the focus from technical assembly alone to the idea that technical
assembly itself could be assembled. By design this reflexive assembly could be the subject
of intellectual and aesthetic appreciation. The digital generation inherited this idea and
shifted it yet again, taking the notion of a reflexive assembly into digital media which were
capable of producing far more intricate assemblies and complex configurations, and this is
still the defining concept behind technique today. It seems that each generation has formed its definition of technique by taking a specific principle from the past and transforming it
according to the dominant issues of the present. This implies a reliance on the past and the
importance of current constraints for shaping the present definition of technique, which
suggests that our present understanding of the term already holds the precursor for its
future transformation.

The present understanding of technique is dominated by topics of adaptability, order and emergence. These are qualities that we also associate with digital surfaces, which, for the
digital generation, are a critical aspect of our present understanding of technique. Because
the digital surface relies upon external forces to give it shape, it is often described as
pliant. Like D'Arcy Wentworth Thompsons diagrammatic drawings showing the
transformation between species of fish, surface forms can assume almost any geometric
configuration depending on the presence and power of external forces. Techniques have a
similar flexible capacity, and thus might also be considerably pliant. They adapt to various
programs, forms and functions and can apply to virtually any flexible, digital surface. But
we have yet to see if the combined effects of the declining economic intellectual interest in
technique will force technique to adapt like a pliant object. If history demonstrates that
technique's definition is embedded in some feature of its recent past, we might then speculate that adaptability, order and emergence will have some effect on technique's transformation in the next generation. The question then is whether this will effect technique's adaptation and thus produce a truly pliant transformation, as will later be discussed.

In the digital era, technique is either qualitative or quantitative in its application.
Qualitative technique is associated with an extensive dictionary of terms, most of which refer to a similar qualitative ideal. Ross Lovegrove's "supernaturalism," for example, proposes that qualitative design exceeds quantitative performance and achieves an excess of form. Farshid Moussavi's "affect" similarly projects that material construction can surpass technical assembly and communicate a bias for its own interpretation. "Performalism" implies the capacity of form to sustain its function over time, while "elegance" dictates a holistic composition, where each part is networked to the others and local changes produce global results. Supernaturalism, affect, performalism, elegance: all allude to the same qualitative application of technique, which in aggregate suggests a sensorial and aesthetic architectural experience. Quantitative technique, meanwhile, is a more succinct category addressing issues
such as cost saving, effort reduction and energy conservation.3 As the demands for savings
and conservation peak, this quantitative brand of technique is effectively immune. But the
same cannot be said of qualitative techniques, which typically require cost, effort and
energy: capitals that are not readily available at present, even in markets such as China and
the UAE where qualitative extremes produce a desired qualitative national representation.4
At the same time, the intellectual capital for qualitative technique seems depleted as well.
After at least two decades of developing and distributing digital skills within the present generation, the abundance of digital methods, software and processes has resulted in a kind of technical monotony.5 As Jane Jacobs once observed, "when a place gets boring, even the rich people leave,"6 a sentiment that suggests that technique is overly applied in architecture today, and as both capital and creativity have been bled from the system, technique's once-pervasive voice is going to grow increasingly silent.7

Of course, it is unlikely that qualitative technique will be abandoned, as we cannot avoid it in some way as a fundamental architectural value. But it is possible that its application will only serve a pixelated rebirth of paper architecture. Kept within this speculative realm,
technique will continue to exist but will be confined to the weightless and networked
environment of the computer, devoid of financial, structural and intellectual constraints.
Technique as such cannot evolve. Like a business plan or invention, it requires application,
testing, refinement and retesting.8 Assuming its capacity for transformation truly relies on
outside forces, we must ask ourselves whether technique should be relegated to the
category of speculation or forced to contend with new constraints and transform like a
pliant surface.

To address this issue in detail, we need to know more about pliancy and its technical
capabilities. Based on Greg Lynn's writing and advocacy of the philosophy of Gilles Deleuze, the term pliancy describes an object's "internal flexibility and dependence on external forces for self-definition."9 Pliancy's central theme is the relation between external forces
and changeable internal orders, a one-to-one relationship where a pliant object adjusts its
overall configuration to changes in the environment, but like a flexible surface its internal
construction remains intact. Pliancy as such describes a flexible, interactive object that
assumes new and unpredictable configurations without sacrificing its internal order. It also
implies an interaction with other objects through reciprocity by a process of feedback.10 The
changes taking place in one object, for example, can affect neighboring objects, producing
effects in that object that can then return to the first object. Antoine Picon refers to this in
architecture as something similar to an event.11

An architectural event, not unlike a social happening, occurs when an external force (such
as an occupant) encounters ordered form (a ledge on a wall). The external force actualizes
the form by giving it shape (interpreting the ledge as a chair), and reacts to it by
responding to its actualization (sitting on it). By reciprocity, the actualized form
reciprocates by providing force for other actualizations (the chair starts to look more like a
bench, which attracts other occupants to join). This process continues until the reciprocal
interactions dissipate or inspire other patterns to develop. When we understand pliancy as
something more than a flexible object, but as a reciprocal event that generates new and
unexpected patterns of behavior, we might recognize that similar developments can take
place within our understanding of technique. Pliancy is therefore a degree of change that is
far more significant than a linear process of adaptation and more in-line with a holistic,
emergent event.

By applying the conceptual model of pliancy we can adapt our understanding of technique
and its relationship to the current realities of the financial crisis and the over-saturation of
digital skill, where the logic and flexibility that exist within our current applications of technique inspire us to find unanticipated results for its future application. Our reliance on
technique throughout history has established its necessity in architecture, as buildings have always required structural stability, and as such, technique has maintained its own internal order over time, just as the pliant object maintains itself through adaptive contortions. But more importantly, we can consider the adaptation of technique an architectural event, where the reshaping of its precepts engenders subsequent reactions and counter-reactions, eventually producing a new understanding of technique, reciprocal with
other architectural principles. Pliancy thereby suggests that we can effectively redefine
techniques conceptual associations, much like a pliant object in a reciprocal event. So in
the case of Vitruvian firmitas, a newly revised technique would exert a force on our
understanding of the Vitruvian triad as a whole, meaning that utilitas and venustas, utility
and beauty, take on new meaning - and thus the event continues to redefine our
relationship with technique and to engender the development of future reciprocities.

Pliancy, as described in this paper, is a universal term describing a transformative strategy,
and is thus generic, while its application is particular to each and every instance in which
technique is effectively transformed. This complicates the process of identifying a single
pliant strategy that is universally effective. Perhaps the most fundamental example can be
found in differential mathematics: the numeric structure that binds the configuration of
surface forms and techniques. Within the computer, all surface forms are defined by a
similar mathematical structure of variables and equations (from NURBS to polygons to subdivisions). By virtue of their variability and sensitivity to external forces, surfaces exhibit a similarity to digital techniques in that both are dependent on differential mathematical calculations for their application and representation. By acknowledging
mathematics as a transformative mechanism for both surfaces and techniques, we
maximize the potential events in which our understanding of technique evolves.
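The paper does not specify the base-surface mathematics behind its projects, so as a generic illustration only, the following toy sketch shows the sense in which a surface's defining equation (its "internal order") can remain fixed while an external force parameter reshapes its overall configuration — the pliant behavior described above. The function name and parameters are hypothetical:

```python
import math

def surface_point(u, v, force=0.0):
    """Evaluate a toy parametric surface height z(u, v).

    The equation itself -- the surface's internal structure -- never
    changes; only the external `force` parameter re-weights it,
    deforming the resulting configuration.
    """
    base = math.sin(math.pi * u) * math.sin(math.pi * v)  # fixed internal order
    return (1.0 + force) * base                           # external force deforms the form

# The same (u, v) sample under two different external conditions:
flat = surface_point(0.5, 0.5, force=0.0)      # -> 1.0
deformed = surface_point(0.5, 0.5, force=0.5)  # -> 1.5
```

The point of the sketch is the separation of concerns: the external parameter may take any value, yet every resulting configuration still obeys the same underlying equation, just as the text's pliant object adjusts its configuration without sacrificing its internal order.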

When we refer to mathematics in architectural discourse, we elicit images of unconventional and abstract design tools, such as text files, algorithms, code blocks and genetics, which allow us to transgress the modeling environment's graphic interface. This, it has been argued, introduces new architectural procedures that enhance our focus on fundamental and analytical aspects of architecture's inner workings, or what Eisenman commonly calls the "loss of the human eye."12 Unlike the human eye, Eisenman's abstract eye escapes the graphic interface and returns to a more critical conception of architecture's assembly, where nothing is free from critique or conceptually stable. Thus with an abstract eye it can be said that everything is mathematically variable and always on the verge of transformation.

Lynn, on the other hand, projects a more stable understanding of mathematics, based on values rather than variables. For his Embryological Houses, he explains that each house is perfect because all obey specific mathematical values within a range of possible outcomes, and as such they offer a so-called perfect range of spatial and tectonic possibility. "I love them equally," he explains of the houses, stressing the value of the collection over any kind of analytical hierarchy.13 The Embryological project is indeed an extraordinary use of mathematics in architectural design, and it documents great achievements of digital technique as a design application. Yet as a result of emphasizing value-driven mathematics, it fails to critique the notion of perfection in architecture, for the reality is that neither designs nor techniques are conceptually stable, and thus they cannot be described as perfect. The perfection of Lynn's project is quite different from the perfection we expect in today's recession home. Thus the Lynn and Eisenman examples provide an interesting
difference in their interpretation of mathematics, suggesting that numeric techniques in architecture can be based upon variables or values. Still, one should understand that pliant mathematics require imperfection and change; they facilitate transformations and adaptations, and they accomplish this with the variable nature of mathematics.

As a demonstration of the potential of this kind of mathematics, two numerically variable projects can be compared. The first project is a golf driving range in Hong Kong, at the end
of the former Kai Tak Airport runway that extends into the city's Victoria Harbor. The design takes the existing prescribed model of the driving range, already considered to be perfect in its prescriptive organization and dimensions, and adapts it according to the
unusual constraints of the site, culture and potential of digital design tools. For its
technique, the project uses a system of flexible base surfaces that generate mathematic data as the basis for making architectural form. As the surfaces and data sets interact with
the constraints, they transform the typical driving range sports model into an atypical
configuration. The new configuration seems to violate the precepts of the perfected sports
model, but because it obeys certain mathematical laws established by the base surfaces,
the reconfigured driving range retains its functionality and redefines our understanding of a
sports model previously considered static. What seems simple in concept for this project
would actually require incredible energy and resources to realize as a built project; and
what once seemed like a potentially valid expense might now raise questions. The idea that
technique can be the validation of architectural form, a once acceptable proposition, is more
problematic than ever.

This dilemma provided the stimulus and perhaps the external force that was required to
transform the technique of the driving range for a second application. The project takes
into account the issues of scale and resources, and without sacrificing the techniques
internal mechanisms, it proposes a new application: a kite designed to fly in Manhattan's
Central Park. The kite's design uses the same mathematic technique as the golf driving
range, using interactive base surfaces and data sets, but it adapts its application to a
smaller scale and a different typology. In lieu of the large sporting hall, using only
mathematics and a few simple rules of aeronautics, the technique now produces an object
that participates in the performance of flight. When ultimately tested, if the kite does not
perform as expected, the technique will again traverse scale and typology for future
applications. If it succeeds, however, it could provide the conditions necessary for a pliant
event: establishing new reciprocities between mathematics and design, structure and flight.

In 1978, Stanley Tigerman completed a collage indicative of its time, depicting Mies van der Rohe's Crown Hall sinking into the depths of Lake Michigan, a starkness of order listing into a watery and textured mass. Tigerman's piece is not just an attack on Crown Hall, or even Mies, but on modernism at large and its supposed shortcomings with the realities of program and accessibility. Just forty years prior, in 1938, after freshly arriving in the United States, Mies may actually have provided the conceptual ammunition for Tigerman's attack, speaking about the importance of establishing a new sense of order ever commensurate with the times. "Things by themselves create no order," he writes; "order as the definition of the meaning and measure of being is missing today; it must be worked toward anew." Armed with this credo, the metaphoric sinking of modernism illustrated the transformation of technique as a method of mere architectural assembly into a conceptual apparatus realigned with its contemporaneous intellectual agendas.

Today, technique is misaligned once again with its intellectual counterparts and has lost a significant amount of pragmatic viability due to the cost of complex constructions and the loss of intellectual energy. Armed with the same Miesian conviction for order renewed, our
current understanding of architectural technique must also be transformed in pedagogical
and practical terms. To secure its future viability as such, this transformation cannot be an
isolated or periodic occurrence. Instead, technique should be pliant like a digital surface: ever reactive to external force, subject to constant transformation, and thus kept
in proportion with the changing situation of its intellectual and financial constraints. The
application of technique has the potential to exceed itself as a determinate process and to
assume its role as a critical and revolutionary architectural apparatus, at all times and for
each application worked toward anew.

Pliancy in Action: above, Golf Driving Range; below, a Kite designed to fly in Central Park.

Notable exceptions include Michelangelo, Giulio Romano and a few other Mannerists.

For a full description of these terms, see: Lovegrove, Ross. Supernatural. (London:
Phaidon, 2007); Moussavi, Farshid & Michael Kubo, eds. The Function of Ornament.
(Barcelona: Actar, 2006); Cohen, Scott, Architectural Acrobatics, Performalism: Form and
Performance in Digital Architecture, eds Y Grobman & E Neuman. (Tel Aviv: Tel Aviv
Museum of Art, 2008); Rahim, Ali, ed. Elegance. (Wiley: London, 2007).
One of the best examples of supernatural form is the Mazda Taiki concept car of 2007.
The form of the Taiki is partly inspired by aeronautic forms, but in actuality its design
supersedes this with a lust for curves and aesthetics.

Florida, Richard. How the Crash Will Reshape America, Atlantic Monthly; March 2009;
Fallows, James, Interesting Times, Atlantic Monthly; April 2009, pp. 54-63; Rose, Steve,
How Dubai's Fantasy Skyline Tumbled to Earth, The Guardian, November 21, 2008.

In a recent interview with the online magazine Arch Daily, Work AC's Amale Andraos states: "I think the obsession with innovation, it's gotten tired. I think the obsession with technique, I mean for us, we need innovation in ideas. It's a bigger piece. To just sit in your corner and figure out a new process to cut wood a certain way is maybe interesting but it's not enough. It's become a little relentless, and I don't think it's very interesting right now."

Florida, Richard. How the Crash Will Reshape America, Atlantic Monthly; March 2009.
See the recent studio offerings for Harvard University's Graduate School of Design. In place of technique-based studios, we see a rising interest in topics like ordinariness, revolution and loopholes.
The most successful example of a strategic recession business model is Apple Computer's introduction of the "i" generation product line. For a complete account, see: Linzmayer,
Owen, Apple Confidential 2.0: The Definitive History of the World's Most Colorful Company.
(San Francisco: No Starch Press, 2004).
Lynn, Greg, Folds, Bodies & Blobs: Collected Essays. (Bruxelles: La Lettre Volee, 1998).
For a complete analysis of complex and emergent interactions between objects see:
Lewin, Roger. Complexity: Life at the Edge of Chaos. (Chicago: University of Chicago Press,

Picon, Antoine. "Architecture as Performative Art," Performalism: Form and Performance in Digital Architecture, eds. Y Grobman & E Neuman. (Tel Aviv: Tel Aviv Museum of Art, 2008).

Eisenman, Peter. "Visions Unfolding: Architecture in the Age of Digital Media," Domus, no. 734 (1992), pp. 17-21.

Lynn, Greg, "Embryological Houses," Contemporary Techniques in Architecture. (London: Wiley, 2002).
Second International Conference on Critical Digital: Who Cares (?) 42


Session 2: Systems

Moderators: Simon Kim and Dido Tsigaridi

Harald Trapp
Towards a System of Architecture

Timothy Jachna
Mediating planning

Athanassios Economou
Congruent spaces


Arno Schlüter
Incorporating Reality: From Design to Decision Making

Mediating Planning

Timothy Jachna
School of Design, The Hong Kong Polytechnic University


In this paper, I discuss recent approaches to the digital mediation of the planning of cities
through a comparative critical analysis of five recent experimental mediated urban planning
proposals. I demonstrate that different applications of digital technologies imply and support
different power structures, different notions of urban public space and different dimensions
of freedom and control. I conclude by positing ways in which applications of digital
technologies in the urban planning process are used to encourage the very leaps of faith
necessary for the legitimization of the principles and claims on which they are based.

1. Introduction: the five projects

Planning practices in cities are generally the purview of a select few. Planners actualize the
possibilities offered by digital mediation of urban planning practice in different ways to
different ends. The following pages outline a selective survey of five projects proposing the
integration of digital mediation into the planning of cities, undertaken by various
foundations and institutes of higher education over the past decade. I undertook this study
with the aim of discerning and critiquing the background of ideology and vested interests
that underlie these projects, the ways in which these biases affect each project's approach
to actualizing potentials of digital technologies in the planning process, and the anticipated
consequences for the city and its citizens. Below, I briefly outline the five critiqued projects.

1.1. System of Technical Methods for Digital Urban Planning

2005 (publication), Centre for Science of Human Settlements, Tsinghua University, Beijing [1]
The project sees digital technologies as new components plugged into an existing set of
principles, not implying a re-evaluation or restructuring of the planning process. It presumes
that all base information relevant to city planning can be expressed digitally, combined in a
comprehensive digital model of the city called the "primary planning scheme" [2], updated
constantly using remote sensing (RS), GPS and GIS. This comprehensive model would be used to achieve the
traditional Chinese goals of physical and social planning of the city and the extended goals
of technical and information planning. The scheme is made available to authorities and the
public for commentary via public participation GIS (PPGIS). The final expression of this
process is the production of digital models, maps and documents.

1.2. Milan Visionary Variations

2004 (exhibition), Generative Design Lab, Politecnico di Milano, Prof. Celestino Soddu [3]
The project is an application of Professor Soddu's generative design software Argenia to
design buildings within Milan. The project proposes an artificial DNA for the city that
encodes its genetic architectural and urban design inheritance. Buildings are derived by
application of nine "Milan Transforming Codes" extrapolated from Milan's architectural
heritage of buildings, styles, movements and designers. The location in the city for which
the building is designed affects the process, as do decisions by the designer operating the
Argenia software. Each building is related to all other buildings in the project yet all are
unique and non-repeatable, analogous to members of a species. This point is underscored
by the proposal of multiple alternative building designs for each site.

1.3. Urban Simulacra London

2006 (publication), Centre for Advanced Spatial Analysis, UCL, Prof. Michael Batty [4]
A hyper-detailed dynamic virtual model of London allows multiple ways of viewing the city
through iconic models expressing physical structures and spaces and symbolic models
allowing visualization of intangible factors. One can experience the virtual city as an avatar,
survey it from above or extract quantitative information about urban processes.
Consequences of events, scenarios and decisions can be visualized in multiple dimensions.
The model would be accessible to urban design professionals as well as laypeople, and is
indeed proposed as a tool to enable the public to understand the urban built environment
and participate in its design through simulating their suggestions within the virtual city,
providing a public virtual forum in which proposals can be aired and evaluated.

1.4. 10_dencies: Tokyo, São Paulo

1997, 1998, Knowbotic Research and ZKM Institute for Visual Media, Karlsruhe [5]
Digitally mediated abstractions of the city are applied to allow modes of citizen expression
and action not typically possible, revealing undercurrents that are suppressed by the power
structures that control the physical form and organization of cities. Strategies of digital
mediation are chosen based on specific urban forces that distinguish each city. The project
aims to enable people to define new forms of urban agency. In São Paulo, local editors
fed annotated images, texts, video and sounds into a database, creating an alternative
version of the city that could be experienced remotely through an interface at the ZKM in
Germany. The Tokyo version of the project proposed physical interventions on a site in the
city, based on a collaborative, digitally mediated mapping of the site by the participants.

1.5. Serve City

2001/2002 (workshops), 2003 (publication), Bauhaus Dessau, Bauhaus Kolleg III [6]

A disused industrial site in Sydney becomes a live/work district for urban nomad
knowledge workers with highly mobile and flexible lifestyles, who use communication
networks intensively and productively. The individual is an autonomous work unit. A
service provider linked to the physical site serves as a medium for regulating spatial use and
social/economic interaction in the district. Every resident is both provider and consumer of
digitally mediated services, constituting the social cohesion for this community. Spatial use
patterns shift constantly. The physical structures of this district are standardised units
conceived as generic "hardware" for which the digital "software" can be changed to
re-purpose any structure for a new user. Inhabitants use ICTs to locate, book and configure
appropriate spaces for their transitory needs.

2. Genealogy of approaches

Each of the projects can be seen as unifying a historical stream of urbanist thought with an
affordance of digital technologies. The System of Technical Methods for Digital Urban
Planning is an example of applying new tools to the aims of current tools, but the other four
projects achieve new framings of pre-existing urban discourses by bringing them into
contact with digital discourses. Milan Visionary Variations brings together the idea of genius
loci with generative design through the concept of the DNA of a place. Serve City builds on
the idea of self-organization of settlements and "architecture without architects" [7] with a
digital control level that records social interaction and actuates the city accordingly.
10_dencies is rooted in subversive urban movements such as the Situationists, achieving a
detournement of the city through digital means of collecting and disseminating personal
narratives. Urban Simulacra London applies digital simulation techniques in service of the
discourse of the city as machine.

Each project is based on a different presumption as to what constitutes the urban question,
and each technological solution imposes a bias through what it can and cannot sense and
represent. Thus the choice of a technological means of sensing or expression is either a
decision as to which aspects of the sense-able environment are meaningful to the project,
or a willingness to subjugate design ideology to technical expedience. In Milan Visionary
Variations, the urban question regards a city's identity as embodied in its unique inheritance
of physical culture. Serve City is a paragon of the functionalist city, which does not mean
that the city is necessarily planned in a way that functions well, but rather that the
organization of the city is shaped by raising a subset of its functions to the level of dogma
and forcing other aspects to follow. 10_dencies espouses a psycho-geographical view of the
city through the personal situated narratives of citizens, to which the
physicality of the city is at worst a hindrance and at best a stage. The urban question here
is psycho-sociological. Urban Simulacra London posits the city as a set of complex dynamic
and mechanical processes. The corresponding urban question is how developments in these
processes can be anticipated and proactively accommodated or influenced. The Tsinghua
University study sees the urban question as a question of social and spatial control.

3. Power relations

Each of the five projects applies digital mediation in the planning process in a way that
implicitly establishes different power relations between urban citizens and urban
governance. System of Technical Methods for Digital Urban Planning is very conservative,
using digital technologies to perpetuate a top-down planning mechanism and reinforce the
centrally-controlled state mechanism of urban planning and governance. At the other
extreme, 10_dencies is based on a radically subversive approach, aimed at disrupting such
top-down power structures by enabling bottom-up influence by ordinary citizens, using the
social affordances of digital systems and presenting calculated challenges to existing forms
of urban planning, governance and representation. In Serve City, on the other hand,
freedom is traded for convenience and inclusion, with every aspect of the inhabitants' lives
mediated by, and dependent on, the monopolistic provider of the mediating network.
Urban Simulacra London proposes a popularization of planning, using digital technologies'
capability for simulation, imaging and dissemination of images to give citizens tools to
understand, and propose action on, complex systems in urban space. Milan Visionary
Variations proposes an alternative to cartographic land-use planning to achieve the goal of
constraining the parameters of individual built interventions within the city through a rule-
based system based on an overview of the greater good of the city.

4. Spaces in-between

Any power structure involves dimensions of control and dimensions of freedom in-between.
Milan Visionary Variations and Urban Simulacra London both propose that every site in the
city is rich with latent information, and digital technologies are applied to extract or
actualize this information. In a sense, there is no between-ness from the point of view of
these systems, for which the city is a continuous field of latent information. Nonetheless,
there would be a distinction between spaces for which each of these systems was used
versus those for which it was not used. It would make a difference whether these systems
were discretionary or mandatory. This raises the same question of legitimacy and privilege
that applies to informal or illegal construction in cities around the world. In Serve City, the
nominally open space between structures is constantly wiped clean by reconfiguration of
units on the site. There is no provision for insertion of elements independent of the system,
as the system relies on the physical city being a tabula rasa, constantly rewritten to mirror
the virtual system. 10_dencies leads one to rethink the typically foregrounded physicality
of the city as background, so that the human lives typically seen as happening in-between
or within the structures of the city become the foreground.

5. Form giving

The five projects take different approaches to digitally mediated form-giving. Knowbotic
Research attempted to create forms in an early manifestation of 10_dencies but withdrew
this aspect in later versions. Urban Simulacra London creates a surrogate city from forms
and processes of an existing city to inform decision-making. In both of these proposals,
formative output is in the form of representations of aspects of the city that would otherwise
remain invisible. These representations are not built forms, but they play an important role
in the city in that they influence decision-making and action. The radical element of
10_dencies is the proposal of alternative representations of the city instead of the plan-
based and policy-based representations used by urban planning and governance. In
projecting the city onto different planes of representation, different urban aspects become
meaningful while many of the aspects of the city recorded in conventional representations
find no expression and therefore do not enter into the system of perceived meaning. The
representations in Urban Simulacra London attain authority by appeal to a supposed deep-
structural isomorphism between the representations it uses and the mechanisms by which
the city is maintained and evolved. Milan Visionary Variations' formative processes are
adopted from the genotype-to-phenotype relation in biological morphogenesis. The DNA
paradigm of generative processes provides an alternative to the typical ways in which
authorities constrain and guide form-giving, in that the developmental processes of
structures and spaces are constrained from within (algorithmically) rather than from without
(teleologically). The physical units of Serve City are arbitrary and interchangeable. The
arrangement of structures on site is constantly changing, driven by activity in the virtual
social network. The physical urban district is in essence an immense digital readout. The
System of Technical Methods for Digital Urban Planning does not address the forms of the
city per se, but is concerned with maintaining control over form-making decisions.

6. Social aspects

The social dimension of digital networks is an element in all of the projects, albeit based on
different understandings of social action. Prof. Soddu's Argenia software would likely only be
widely adopted if it became open-source, such that every designer who used it could
contribute to the DNA and rules. Thus, it would become a social platform for spatial design
professionals, but without obvious provisions for involvement of citizens. For 10_dencies,
the images, texts and videos that made up this collaborative psychogeographic
representation of the city were continuously re-introduced to the city in an exhibition. In the
Tokyo version of the project, participants and visitors used these data to inspire a
collaboratively-constructed alternative plan of part of the city, but in São Paulo these
messages were re-introduced into the public discourse through an exhibition with no
pretence of drawing conclusions or controlling how they may affect decisions on the design
of the city. The social space of Serve City is virtual. Physical space is a by-product of social
and commercial practice in this virtual pseudo-space. Although in a sense communally
constructed, there is no space-making intentionality of citizens behind the physical
arrangement of the components of the city. Serve City negates the public space of the city
as a designed or experienced thing. Public space is provided in the form of fixed "space
providers", which must be rented through the provider, calling into question the legitimacy
of calling these spaces "public". The true public space of this city is online, and physical
public space disappears as an element of experience and an element of design.

7. Leaps of faith and legitimization

A common factor in all five projects is the use of the digital dimension for legitimization and
enforcement of the version of urban morphogenesis espoused by the proposals' authors.
Authority is asserted through the appeal to different potentials of digital mediation such as
speed, efficiency, dimensional transcendence and complexity to obfuscate with a barrier of
algorithmic complexity or technological mystique. In this sense, digital mediation is inserted
at the point of the leap of faith required to legitimize each project. For Urban Simulacra
London, legitimization is supported by the high degree of resolution and complexity of the
digital simulations, which in effect define the future of the city based on assumptions and
parameters that present a mechanistic view of urban evolution. Milan Visionary Variations
seduces through its mimicking of natural processes to generate designs of infinite variety
from finite input of data. The basic principles on which this relies, however, are not natural
laws but a personal view of the architectural heritage of the city and of the processes of
architectural form-making, externalized in a piece of software: an agent for propagating
this point of view. The System of Technical Methods for Digital Urban Planning apotheosizes
bureaucratic efficiency by applying digital technology in its most fundamental way, relying
on the same leap of faith implicit in all technocracies: that technical innovation constitutes
progress in its own right. 10_dencies seeks credibility within a different discourse, within
which subversion of inherited assumptions is valorized in its own right. The leap of faith
from one discourse to another (from memory traces to planning) that was attempted then
abandoned in one manifestation of the project is just as dubious as it is in the other
direction (constructing memory through planning, as in the New Urbanism8). The leap of
faith in the Serve City project lies in the idea of emergence, an assumption that the
information that can be gathered from online action and interaction can provide an order of
information that is relevant and sufficient to generate urban form and organization. The
concept of emergence is expressed or implied in several of the projects, denoting a
characteristic or quality that is meant to emerge from the application of digital technologies
to the planning process. Soddu's claim that the close mimicking of genetic algorithms in his
Argenia software will lead to the emergence of combinational and relational qualities
analogous to those in biological species carries some credence, because of the algorithmic
relatedness of the two processes. However, the claim that spaces conducive to public life
will emerge naturally in Serve City as the spaces in between the atomistic habitation units is
more spurious. The term emergence purports inevitability or naturalness of the emergent
characteristic, with digital mediation serving as the enabling agent, when actually it is a
specific bias towards the formative process being imposed through digital mediation.

8. Conclusion

Discourses of the "digital city" [9] herald an age in which the city is being remade by, or in the
mould of, digital technologies. As with any tools, the influence exerted by digital
technologies depends largely on the hands in which they find themselves and the ideologies
and interests that guide those hands. The choice of the mode of abstraction that is used to
represent the city in digital planning systems is a fundamental ideological choice, which
determines what can and cannot be expressed and discovered through each of these
systems. These representations play the role of metonymies of the city: using an easily
understood isolated aspect of something to stand in for the complex totality of the thing
itself. Just as one strain of modernist urbanists abstracted the aspects of the city that could
best be addressed by rational organizational thought and called the representation thus
created by the sum of these aspects "the city", each of the five mediated urban planning
proposals also abstracts the aspects that are best dealt with by certain applications of digital
technologies (though, one would hope, without pretending that this constitutes the whole of
the city, but rather one player or set of strategies or moves within the milieu). The more
promising aspects of the projects discussed are not those that attempt to apply digital
systems as matrices of totalizing control and constraint, but rather those that take
advantage of digital systems' potential as social technologies, with which planners assume
interactive and conversational, rather than authoritarian and controlling, roles.

1. Dang, A., Shi, H., Han, H., and Wu, L. "Study on the System of Technical Methods for
Digital Urban Planning," ISPRS Workshop on Service and Application of Spatial Data
Infrastructure, Hangzhou, China, 2005.
2. Ding, L. and Sun, J. "Bite city: great changes in urban planning," Urban Planner, vol. 6,
2000, pp. 21-23.
3. Soddu, C. "Visionary Variations of Milano: Generative Projects Designing the Identity
of Milano," Proceedings of the Seminar De Identitate, Rome, 2004.
4. McGrath, B. and Shane, D.G. (eds.) Sensing the 21st-Century City: Close-up and
Remote, AD Architectural Design, 75(6). London: Wiley-Academy, 2005.
5. Knowbotic Research website: http://www.krcf.org
6. Sonnabend, R. (ed.) Serve City: Interactive Urbanism. Berlin: Jovis, 2003.
7. Rudofsky, B. Architecture without Architects: A Short Introduction to Non-Pedigreed
Architecture (paperback). Albuquerque: University of New Mexico Press, 1987.
8. Duany, A. and Plater-Zyberk, E. Towns and Town-Making Principles. New York: Rizzoli.
9. Castells, M. The Informational City: Information Technology, Economic Restructuring
and the Urban-Regional Process (paperback). Oxford: Blackwell, 1991; see also Graham, S.
and Marvin, S. Splintering Urbanism: Networked Infrastructures, Technological Mobilities
and the Urban Condition. London: Routledge, 2001.

Towards a system of architecture

Harald Trapp
TU Vienna, Austria

Thomas Grasl
TU Vienna, Austria

The theory of social systems by Niklas Luhmann could define architecture as a subsystem of
society which would integrate design and use into one process. An architectural system
could then connect movement of objects (inhabitation) to the movement of forms
(habitation). This happens through asymmetrical appropriation of architectural forms, which
are closed on one side. An example of such a system could evolve by simulating the
connectivity between players and hide-outs under the restrictions of the game hide-and-

1. Architecture and Society

What has been lost is architecture's role in society beyond the two extremes of a star-
system, deeply rooted in a society of consumption and spectacle, and an industry based on
the logic of late capitalism. The attempts to use architecture as a means of emancipation
were wiped away by the polemics against late modernism and the advent of postmodernism
with its marketable formalism: there is no such thing as society, only individuals to be
organized in markets based on the pursuit of personal interests. Social theory and
sociology, although implemented as a mandatory part of architectural education in the
nineteen-seventies, have degenerated with the advent of postmodern theories, which
favoured time over space and seemed to be vindicated by globalization and the ubiquitous
accessibility of a media culture. Only recently has the importance of space and spatial
relations returned to public attention, with the increase of migration and the change from
social integration to the difference of inclusion and exclusion, which expresses itself as
spatial segregation.

Using the theory of social systems by Niklas Luhmann could reintegrate society into
architecture. An architectural system consisting of forms that are both object and operation
would connect the movement of objects (inhabitation) in space to the movement of forms
(habitation). Architectural practice already uses generative methods on either side of the
architectural process, to simulate behaviour (agent modelling) or to generate forms (cellular
automata), but has not yet convincingly integrated them into one system. Conceiving
architecture as an autopoietic system would mean to overcome the division architect/user
and to change the focus from an interest in design and control to one in autonomy and
environmental sensibility, from planning to evolution, from structural to dynamic stability [1].

1. Niklas Luhmann, Soziale Systeme: Grundriß einer allgemeinen Theorie, 1. Aufl.,
Suhrkamp-Taschenbuch Wissenschaft (Frankfurt am Main: Suhrkamp, 1987), p. 27.
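The integration that this paper calls for, coupling behavioural simulation (agent modelling) with form generation (cellular automata) in one system, can be illustrated with a deliberately minimal sketch. All names, the grid size and the solidification rule below are illustrative assumptions, not part of Luhmann's theory or of any cited project: random-walking agents (inhabitation, the movement of objects) deposit traces of use on a grid, heavily used cells solidify into built form (habitation, the movement of forms), and that form in turn restricts further movement.

```python
import random

def couple_agents_and_forms(steps=200, size=12, agents=5, threshold=7, seed=1):
    """Toy coupling of inhabitation and habitation: random-walking agents
    deposit traces on a grid; cells whose accumulated trace crosses a
    threshold solidify into 'form', which in turn restricts movement."""
    rng = random.Random(seed)
    trace = [[0] * size for _ in range(size)]      # accumulated use per cell
    solid = [[False] * size for _ in range(size)]  # built form
    positions = [(rng.randrange(size), rng.randrange(size)) for _ in range(agents)]
    for _ in range(steps):
        # inhabitation: each agent takes a random step, bouncing off form
        moved = []
        for (x, y) in positions:
            dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)])
            nx, ny = (x + dx) % size, (y + dy) % size
            if solid[ny][nx]:
                nx, ny = x, y  # form restricts movement: stay in place
            trace[ny][nx] += 1
            moved.append((nx, ny))
        positions = moved
        # habitation: heavily used cells solidify into built form
        for y in range(size):
            for x in range(size):
                if trace[y][x] >= threshold:
                    solid[y][x] = True
    return solid, positions
```

With 1,000 agent-steps on a 12 x 12 grid and a threshold of 7, at least one cell is guaranteed to solidify (1,000 visits cannot all stay below 7 per cell across 144 cells), so the feedback from movement to form and back is always exercised.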

2. Social System Architecture

Luhmann defines society as an open, operatively closed, autopoietic system, which does not
consist of its members, but is produced exclusively by the communicative processes
between them. Communication is necessary because the psychic systems of independent
individuals are mutually inaccessible. The basis of autopoietic systems is the "radical
temporalisation of the term element" due to recursive self-production. Systems consist of
momentary elements and would immediately cease to exist if they did not equip these
with connectivity and thereby reproduce them [1].

To be able to recognize the other as an-other, autopoietic systems have to produce a

difference, the difference of system and environment. This difference is produced by
operative closure, not by any material boundaries. If architecture is a functional system that
emerged within society, it therefore has to be operatively closed on the basis of
communication. What are the operations of architectural communication? The process of
designing and building (the movement of forms) could be called habitation; its product is a
building. In traditional understanding, architecture ends here. But if architecture is an
autopoietic system, it cannot stop with a built structure: there have to be operations of the
type of habitation that are continued. Inhabitation (form of movement) is the appropriation
of space by the movement of objects (e.g. individuals). An architectural system starts either
with habitation or inhabitation, but has to connect one with the other.

3. Communication/Perception

Communication demands form-generation for two reasons: as a precondition for the
participation of discrete communicative systems and as a guarantee for connectivity [2].
Perception operates with formless distinctions; observation wants to pass on information
and therefore has to give it form. Communication in an autopoietic system means to
connect forms to forms; therefore both habitation and inhabitation have to produce forms
that are connectable.

The global space of a social system (social space) primarily consists of communicative
objects, which are able to perceive and move. If this space is devoid of non-communicative
objects (which can transform, but are neither self-moving nor perceptive), it has an almost
limitless number of positions and an equivalent complexity of relations between them
(Fig 1a). Communication demands a reduction of this complexity in the sense that
participants in the communication have to be within a certain range of positions to
participate. Communication therefore depends on the possibilities for movement and the
perceptibility or perception of its participants: contact, encounter, avoidance, control.

If a non-communicative object is put into the complex stream of random perception and
movement of a social space, it blocks or interrupts certain possibilities to perceive,
move and therefore to communicate (Fig 1b). It changes the structure of perception and
movement - and communicates this as soon as perception and movement respond to it (and
become form). "By giving shape and form to our material world, architecture structures the
system of space in which we live and move. In that it does so, it has a direct relation -
rather than a merely symbolic one - to social life, since it provides the material

1. Ibid., p. 28f.
2. Luhmann, Die Kunst der Gesellschaft (Frankfurt am Main: Suhrkamp, 1999), p. 50.

preconditions for the patterns of movement, encounter and avoidance which are the
material realisation - as well as sometimes the generator - of social relations." [1]

Figure 1. Global space (a) with a non-communicative object (b) and an indication (c).

Architectural design, whether by architects or by programs substituting for them, is based
on a succession of decisions. To vary Spencer Brown, the design process starts not with a
fundamental and well-planned act, but with the order to "draw" a distinction. Architecture
thus produces objects in a reference space, whether it is the virtual space of a computer or
a sketch, the space of a cardboard model or the physical space of reality. A basic definition
of space is that it consists of positions. Space in this sense is a medium made of loosely
coupled positions. Distinctions in space create objects by tightly coupling such positions.
Tightly coupled elements in a medium are forms. Objects therefore are spatial forms.
Positions in space can only be occupied once, by one object at a time. Space enables objects
to leave their positions.
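The definition above, space as a medium of loosely coupled positions and objects as forms that tightly couple positions, can be paraphrased in a small sketch. The class and function names here are illustrative assumptions, not terminology from Luhmann or Spencer Brown:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Form:
    """An object: a tight coupling of positions in the medium 'space'."""
    positions: frozenset  # (x, y) grid positions, coupled as one unit

    def translate(self, dx: int, dy: int) -> "Form":
        # Space enables objects to leave their positions: the coupling
        # persists while the occupied positions change.
        return Form(frozenset((x + dx, y + dy) for (x, y) in self.positions))

def occupy(occupied: set, form: Form) -> set:
    """Positions can only be occupied once, by one object at a time."""
    if occupied & form.positions:
        raise ValueError("position already occupied")
    return occupied | form.positions
```

A "cell" occupying several positions can thus move as one unit, while the occupancy rule rejects any second object claiming a position already taken.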

4. Observation

Luhmann suggests changing from what has traditionally been called sensual cognition
(awareness) to the distinction between perception and communication. Perception and
communication are two types of cognitive operations which have in common what he labels
"observation". Observations produce forms by simultaneously making a distinction
(perception) and marking (indication) it. According to Spencer Brown's logic, a distinction is
made by placing a boundary with separate sides in such a way that a point on one side
cannot reach a point on the other side without crossing the boundary [2].

Architectural design operates in the same way: to draw a line means to make a distinction.
As soon as the line is drawn, it distinguishes between two sides. "Being two-sided, a form
presupposes the simultaneous presence of both sides. One side, taken by itself, is not a
side. A form without another side dissolves into the unmarked state; hence it cannot be
observed. Yet, the two sides are not equivalent. The 'mark' indicates this. That asymmetry
is difficult to interpret, particularly if one wants to give it a very general meaning. But this
much is clear: only one side of a distinction can be indicated at any given time; indicating
both sides at once dissolves the distinction." [3] A form therefore is the unity of a
differentiation, as the outside, the side not indicated, remains available: "distinction is
perfect continence" [2].

1. Bill Hillier and Julienne Hanson, The Social Logic of Space (Cambridge; New York:
Cambridge University Press, 1984; reprint, 2003), p. ix.
2. George Spencer-Brown, Laws of Form / Gesetze der Form, 4. Aufl. (Bohmeier Verlag,
1997), p. 1.
3. Luhmann, Die Kunst der Gesellschaft, p. 109f.

A form can only become perceivable as what differs, and can only sustain itself as what
refers to a distinction, self-referentially (for the form) as well as hetero-referentially (for an
observer). Any form must differ from other forms and must be distinguishable by an
observer. If architecture is communication, then architectural forms are generated by
architectural observation. Architectural observation operates in space as a movement across
positions - either by transformation (of a form) or by transgression (of objects). At the basis
of autopoietic architecture lies random movement: perception and objects are endogenously
agitated. Without a random background, architectural forms could not be produced.
Observation is based on the restriction of the random movements of objects.

5. Form

Architecture as a social and therefore autopoietic system consists of momentary elements,
events or operations: movements in space. As the elements of architecture are forms, the
term "form" has to undergo a serious redefinition. Forms are not stable any more, but seen
as something not only exposed to external forces but taking them on, transforming and
displaying them. "Architectural form is conventionally conceived in a dimensional space of
idealized stasis, defined by Cartesian fixed-point coordinates. An object defined as a vector
whose trajectory is relative to other objects, forces, fields and flows, defines form within an
active space of force and motion."4

Objects, according to Luhmann, are forms as a tight coupling of positions in the medium of
space. The architectural object therefore is already a form, the result of an operation, a first
spatial distinction, but not yet an architectural form. What is missing is the indication that
completes the process of architectural observation (Fig 1c). Architectural observation can be
made in two ways. Forming an object by tightly coupling positions in space and indicating one
of its two sides through the movement of the form is habitation. Coupling moving objects,
that is, coupling positions in a bundle of trajectories and indicating one of the two sides of
the resulting global object, is inhabitation. Just as a "cell" is an architectural form, a
"swarm" of individuals which couple their individual movements produces not only a form
but also an architectural object. Hillier cites Thom's example of a cloud of midges. "The
global form, the 'cloud', is made up only of a collection of individual midges (...) This
global form retains a certain 'structural stability' (...) so that we can see it and point to it
in much the same way as we would see or point to an object, even though the constituents
of that global form appear to be nothing but randomly moving, discrete individual midges."5

6. Architectural form

An architectural form is the simultaneity of an object and an operation. It is the result of an
architectural observation, of a distinction and an indication. The distinction is a tight

1. See Spencer-Brown, Laws of Form - Gesetze Der Form, p. 1.
2. See Dirk Baecker, "Die Dekonstruktion Der Schachtel, Innen Und Außen in Der Architektur," in Unbeobachtbare Welt: Über Kunst Und Architektur, ed. Niklas Luhmann, Frederick D. Bunsen, and Dirk Baecker (Bielefeld: Haux, 1990), p. 70.
3. See Hillier and Hanson, The Social Logic of Space, p. 35.
4. See Greg Lynn, Animate Form (New York: Princeton Architectural Press, 1999), p. 11.
5. See Hillier and Hanson, The Social Logic of Space, p. 34.

coupling of positions in space, which forms an architectural object. This object has two
sides, of which one is indicated by an operation called closure. The closure of an object is
the indication of one side of the architectural distinction. It can be either the result of a
form-motion, an object-form called "cell", or the outcome of a motion-form, a form-object
called "swarm". Closure refers to the global situation of unlimited accessibility and
perceptibility in the space of a social system. Any non-communicative object placed into the
space of such global communication reduces the complexity of this global situation by
occupying positions and screening perception and movement. The placing of such an object
is a distinction in space that produces two symmetrical sides. But observation is a distinction
that indicates one of the two sides of a form. Architectural observation does so by closure.
To this end, architectural forms exceed the selective effect of non-communicative objects,
creating an asymmetrical situation. The excess screening of closure indicates the inner side
of the architectural form and produces an interior, an enclosure.

An object-form can be closed according to two principles: by transforming a single, continuous
non-communicative object, or by aggregating discrete ones. The ends of an object-form of any
kind can never form a screen with an angle of 180 degrees or more with anything but a
dimensionless communicative object (the 180-degree rule). No matter how hard one presses
oneself against a straight wall, one remains visible from more than half of the environment.

Figure 2. Forms at varying distance with inscribed angle arcs and closure area.

Closure seems quite obvious for object-forms, but remains more difficult for form-objects.
René Thom's explanation for the formation of a swarm of mosquitoes could be seen as
equivalent to the 180-degree rule: move randomly until half of your field of vision is clear of
mosquitoes, then move in the direction of mosquitoes.1 The closure of a swarm of
mosquitoes responds to global accessibility and perceptibility in space through a
restriction on the random movement of discrete objects, not by changing the form of a
discrete object. The result is a screen (an enclosure) that exceeds the reduction of
accessibility and perceptibility achieved by independently moving objects.
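Thom's rule, as paraphrased here, is effectively an algorithm and can be run as a toy simulation. The following sketch is a speculative illustration of ours, not code from the paper; the population size, step length, and the crude angular-spread test for "half of the field of vision being clear" are all assumptions.

```python
import math
import random

# A toy sketch (not from the paper) of Thom's swarm rule: each "mosquito"
# wanders randomly, but as soon as more than half of its field of vision is
# clear of the others, it steers back toward them.

def step(agents, step_size=0.05):
    moved = []
    for i, (x, y) in enumerate(agents):
        others = [p for j, p in enumerate(agents) if j != i]
        # Directions from this agent toward every other agent.
        angles = [math.atan2(oy - y, ox - x) for ox, oy in others]
        spread = max(angles) - min(angles)  # crude measure of the occupied arc
        if spread < math.pi:
            # More than half of the view is clear: move toward the swarm.
            cx = sum(ox for ox, _ in others) / len(others)
            cy = sum(oy for _, oy in others) / len(others)
            heading = math.atan2(cy - y, cx - x)
        else:
            heading = random.uniform(-math.pi, math.pi)  # keep wandering
        moved.append((x + step_size * math.cos(heading),
                      y + step_size * math.sin(heading)))
    return moved

random.seed(1)
agents = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(30)]
for _ in range(200):
    agents = step(agents)
```

Run over a few hundred steps, the cloud retains the "structural stability" described above: agents that drift outward see the swarm occupying less than half of their view and turn back, while interior agents keep moving randomly.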

7. Architectural system

Form theory is not yet system theory. "The problem of system building lies in the
connectivity, in the recursive reusability of events. Operations (conscious perceptions and
communications alike) are only events. They are neither consistent, nor can they be
changed."2 Architectural systems connect architectural forms to architectural forms,
habitation and inhabitation, object-forms to form-objects. The system exists as long as its
volatile operations, the movements of forms, continue to connect. A building that

1. See René Thom, Structural Stability: An Outline of a General Theory of Models, trans. D.H. Fowler (Benjamin, 1975), p. 319.
2. See Luhmann, Die Kunst Der Gesellschaft, p. 84.

becomes inadequate for inhabitation will be abandoned; its object-form has lost its
connectivity to certain form-objects.

The difference between architecture and its environment is produced by its operations, its
observations, and therefore its forms. An abandoned building becomes an object of nature.
Operative closure in architecture is coupled to enclosure. Because architectural forms are
the result of closures, the continuing architectural observation connects on the side of their
enclosure, and not on their outside. Closure and the resulting enclosure are necessary to
enable operative closure of social communication in dense, functionally differentiated
societies. Architectural communication is neither before nor next to other processes of social
communication but part of them. It selects space for communication and communicates this
selection through form. To communicate in architecture means to communicate with
architecture. If you want to talk to somebody in the next room, architecture makes you
walk, makes you make a distinction in space.

Architectural distinctions are indicated by forms, and the indication selects one distinction
from an endless array in order to limit the connecting operations. This is what a cell
does. It gives the random movements of a wanderer a direction. It makes him move in, and
with that comes a restriction of movement and perception. The appropriation asymmetry of
architectural forms derives from the closure, which creates an interior that provides a
reduction of the complexities of global space. "Forms have to be built asymmetrically,
because their sense is to make their one side (their interior) but not the other (their exterior)
available for following operations (elaborations, increase in complexity etc.)."1

8. Game

According to Hillier, space can operate like a discrete system that produces a composite
object not through spatio-temporal causality, but by discrete entities which follow a rule. On
the other hand, individuals can produce a global structure, whose form is not a product of
their actions, but of a rule independent of these2. If we connect those two modes of form-
production (or architectural observation) to one system and replace "rule" by "restrictions
on the random movement of objects", we start an architectural communication.
Interestingly enough, the formalism of communication theory is close to mathematical game
theory (John von Neumann, Oskar Morgenstern) insofar as both seek solutions to the
equilibrium problem of behaviour strategies.3 If the movement of objects in an architectural
system could be interpreted as behaviour, architectural communication could be designed as
a game. Hillier cites the game of hide-and-seek, but does not fully use its potential as a new
model for a design process: "Given that the child is the active part of the system, it seems
at least as accurate - though still incomplete - to talk of how the environment responds to
the child's imposition of its mental model of hide-and-seek upon it, as to talk about how the
child responds to the environment."4 If "mental model" could be replaced by the "restricted
movement" of the child, which produces a form, and "environment" could be seen as a
"composite object" of discrete entities, both could become part of one system. This system
could design itself by simulating the communication between players and hide-outs under
the restrictions of the game. The design process would shift from control to autonomy, from
planning to evolution, from structural to dynamic stability.
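One way to make this design-game concrete is a toy simulation in which discrete hide-out positions "respond" to a hider through the restriction they impose on sight lines. Everything below (the grid size, the angled screen of obstacle cells, the eight-direction visibility test) is our own illustrative assumption, not a method taken from Hillier or the paper.

```python
# A speculative sketch of the hide-and-seek design-game suggested above:
# discrete hide-out cells are scored by the degree to which surrounding
# obstacles screen them from view, i.e. how much of the view they close off.

SIZE = 9
OBSTACLES = {(4, 3), (4, 4), (4, 5), (3, 5), (2, 5)}  # an angled screen

def blocked_fraction(cell):
    """Fraction of the 8 sight-line directions screened by an obstacle."""
    x, y = cell
    directions = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    hits = 0
    for dx, dy in directions:
        nx, ny = x + dx, y + dy
        while 0 <= nx < SIZE and 0 <= ny < SIZE:
            if (nx, ny) in OBSTACLES:
                hits += 1
                break
            nx, ny = nx + dx, ny + dy
    return hits / len(directions)

# The hider's "mental model" reduces to restricted movement: it settles on
# the cell where closure is greatest.
cells = [(x, y) for x in range(SIZE) for y in range(SIZE)
         if (x, y) not in OBSTACLES]
best = max(cells, key=blocked_fraction)
print(best, blocked_fraction(best))  # (3, 4) 0.625
```

Under these assumptions the best hide-out is the concave pocket of the screen, where five of the eight sight lines are blocked, that is, more than half of the view is closed off (cf. the 180-degree rule discussed earlier).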

1. See ibid., p. 51.
2. See Hillier and Hanson, The Social Logic of Space, p. 36f.
3. See Dirk Baecker, Form Und Formen Der Kommunikation, Suhrkamp Taschenbuch Wissenschaft (Frankfurt a.M.: Suhrkamp, 2007), p. 70f.
4. See Hillier and Hanson, The Social Logic of Space, p. 38.

9. Endnotes

Luhmann, Niklas. Soziale Systeme: Grundriß Einer Allgemeinen Theorie. 1. Aufl., Suhrkamp-Taschenbuch Wissenschaft. Frankfurt am Main: Suhrkamp, 1987.
Luhmann, Niklas. Die Kunst Der Gesellschaft. Frankfurt am Main: Suhrkamp, 1999.
Hillier, Bill, and Julienne Hanson. The Social Logic of Space. Cambridge [Cambridgeshire]; New York: Cambridge University Press, 1984. Reprint, 2003.
Spencer-Brown, George. Laws of Form - Gesetze Der Form. 4. Aufl. Bohmeier Verlag, 1997.
Baecker, Dirk. "Die Dekonstruktion Der Schachtel, Innen Und Außen in Der Architektur." In Unbeobachtbare Welt: Über Kunst Und Architektur, edited by Niklas Luhmann, Frederick D. Bunsen and Dirk Baecker. Bielefeld: Haux, 1990.
Lynn, Greg. Animate Form. New York: Princeton Architectural Press, 1999.
Thom, René. Structural Stability: An Outline of a General Theory of Models. Translated by D.H. Fowler. Benjamin, 1975.
Baecker, Dirk. Form Und Formen Der Kommunikation. Suhrkamp Taschenbuch Wissenschaft. Frankfurt a.M.: Suhrkamp, 2007.

Imaging the unimaginable

Torben Berns
McGill University, Canada

Michael Jemtrud
McGill University, Canada


In this paper the authors argue the following: 1. Political questions are in fact prior to all questions of
substantive and material nature. Humans, in that their world is mediated, accumulate information such
that what they make and do functions both designatively and mimetically in terms of organizing that
information. 2. The nature of our (political) histories is such that we have organized the world according
to 2D vs. 3D models where the image is seen as merely representative of a prior and truer real. This
relationship is manifest in both the substantive nature of the subject/object relationship and in the
tautological need to concretize and test the concepts which underlie the substantive subject and the
concretely given world that validates that subject. 3. Computational modeling, when it does not seek to
simulate the figurative or natural world, produces surfaces which are neither 2D nor 3D but rather
provide a means to dislocate the political from the utopic/geometric to the localized and topological.

1. The Given

The question "who cares?" is a political question: care or concern is a function of interest and implies
relative value. This exists in distinction to validity (an epistemological question of relative truth) and merit
(an ethical question of a relative good, or an aesthetic question of relative beauty). The fact that these are
all relative means that their appearance is given within a community of relations where, through the
multiplicity of those concerned, the appearance of things is given its due measure.

The following discussion ultimately addresses the relation between the conceptual foundation of
understanding and transforming the world, and the models which manifest and make operative these
concepts. It asserts that technologically (digitally) derived artifacts may still be interpretive as well as
designative but need not necessarily be mimetic. Such images and models therefore may problematize
the mimetic foundations whereby we commonly act. The meaning of these images and models must be
considered apart from any mimetic (bi-directional) relation to the real. As such, they acquire a radically
different ontological status and potentially offer an equally radical spatial and temporal condition of
possibility to image the unimaginable.

Appearance by definition refers back to the community through which that appearance creates a possible
future. Its meaning is inseparable from the possibility it presents and to whom it is presented. Appearance
as such is constitutive of not only ourselves as participants in this community, but the presence of our
thoughts, our language, our gestures, and most importantly for our purposes, the artifacts that we make.
Specifically we would like to direct attention to the role and necessity of a mimetic aspect to those
phenomena and what happens to the entire set of relations when that mimetic aspect is challenged by
digital models and fancy images. To do this we would like to address the role of the architectural model
(in this case the preferred architectural tool of appearance) as a means of engaging the larger question of
political responsibility. Politics is used here in the broad meaning of the word, as in the public realm of
appearance, as opposed to the limited definition of statecraft or administration (Arendt, 2005).

The idea of the model in modern political thought as a practical testing ground, whereby the ethical
imperative is explicitly linked to the concept of freedom (the foundation of the political or public realm), is
explicated in the work of Kant and Rousseau. Its historical explication in the Real is in the birth pangs of
the modern nation state: the countless revolutions whereby each formative nation throws off its colonial
or imperial shackle and declares itself naturally, i.e. concretely, given. Each nation-state sees its physical
existence as a kind of proof of concept in that it both exists and is legitimate to the degree that it can be
recognized within the community of nations as a model of ethical action. Thus the model is judged
primarily in terms of its ability to deliver on a tangible consistent with the concept whereby it is brought
into being.

The implicit roots of this idea, however, are much older and may be ascertained already in the pre-Socratic
speculations, where the worldly counterpart of the amorphous forms was termed hyle (Flusser, 1999).
Hyle was a pre-existing Greek term for lumber, essentially the sawed and planed wood to be found in the
carpenter's shop, ready for use as particular projects demanded. The pre-Socratics took the word to
describe material: a material that lends itself to revealing the immaterial. In effect, the idea of form, of
an abstract idea pre-existing its material manifestation, not only cast the world as a kind of standing
reserve waiting to fulfill the demands of the concept; it also rendered any appearance of the world as,
in principle, a model for the realization or fabrication of the concept, a concept necessarily seen to exist
a priori.

Most radical in its consequence was the demotion in status of the image per se to a mere
representation of the concept: a demotion of the image from a magical insertion in the order of the
real to a mere symbol or pointer to an absent, if prior, truth. Conversely, the value of the image, as much
as that of the actual made object to which it might point, lay less in what it was and more in what it might be
as an imperfect surrogate. This is to say, if forms preceded the world of images and things, then things as
models premised a future not revealed in what is or was, but in what could be or should be. Everything in
principle becomes a model towards proof of concept. This does not become an explicit crisis until the
nineteenth century, but regardless, nowhere again will the image have the true status of a divine monster capable of
revelation. Instead it will always be seen as a working model. Predictably, when understood this way, the
idea of a single truth perfectly represented in even a divine model (such as the Bible) soon gives way to
competing models of truth. It is not long before religion is banished to faith, and the battles play
themselves out in terms of the models' own ability to produce relative truths, i.e. useful devices. Those
devices, themselves, are models of applicable concepts.

In all cases, however, the universal concept preceding the model and the empirical testing of the model
serve to expand the pool of materials to be considered as hyle or standing reserve: from wood to stone
to metals to chemical products to people. In effect there is nothing, concrete or imagined, which falls
outside of the concept, and the truth of appearance is in fact the truth of the concept. Most ironically, when the
concept begins to produce models which no longer need to refer mimetically to the world of
appearances (as when the concept allows the technological to become the computational and, in turn, to
produce technical images), it is the concept which is revealed not merely as insufficient or as requiring
work, but as ontologically void.

2. Coding

Gestures, language, thoughts, text, information, knowledge, images, and artifacts are as much a part of our
human condition as the given conditions of our natural existence. Moreover, the appearance of all these
things must exist coherently (i.e. they must make sense together) through the sheer facticity of their
appearing together. Even such supposedly a priori assumptions as time and space must give way in the
face of an expanded set of conditions, thus revealing those supposedly given and unassailable
categories as merely convenient models for reconciling a prior model of an enduring subject revealed
through enduring artifacts of that subject's own making.

What emerges as a pattern is that all of these modes of being (i.e., gestures, language, thoughts, text,
information, knowledge, images, artifacts) have in common an ability to designate as well as fashion the
world out of which they are born and to which they refer. And this designative as well as mimetic aspect
of all modes of being lends itself implicitly to a discursive reordering of the world, by virtue of the creative
aspect of the mimetic function itself. This is not to say that the world dialectically produced as the concept is
by definition true or, by the same token, false. Nor is it to say that it is good or bad, or even beautiful or ugly. It
is only to say that those terms are themselves consistent with the way the relations of model, appearance
and concept play themselves out historically. To paraphrase Nietzsche: things are this way; that does not
mean they had to be this way.

What is relevant to the question posed by the conference here today is how the mimetic aspect, or lack
thereof, inherent in the new models recasts not only the political question (how we recognize these models
and how we think about them) but also their consequence for us. In other words, the question is how we
recognize and think these new models and fancy images, as well as how they change our world despite
what we think or cannot think about them.

As we mentioned, all forms of mediation serve both to designate and to recast the given. We can take these
functions, the designative and the mimetic, and describe them similarly as "explain" vs. "interpret". These
latter terms, for example, are made clear when one distinguishes the material analysis of an artifact from
conjecture regarding its symbolism and significance within a cultural context. One replaces a natural
given being with another in its place, while the other reveals a world and similarly creates a future into
which we humans can act.

The fact that the human is always mediated means that on the one hand both results occur (namely, one
natural given condition replaces another equally natural given condition, and a symbolic universe is
transformed), but both are mutually independent and neither could exist without the presence of the
human per se. This last assertion is best exemplified by trying to imagine the world if one of your forebears
had failed to have progeny. You cannot. Nor could they have failed. At that moment you are brought up short,
face to face with the sheer facticity of the world as well as the impossibility of its appearing at all in any
other way, regardless of the freak chance that it appears this way and not in some other way: entirely

But this is precisely what we need to do: to somehow imagine, or at least image, the unimaginable.
As such, we would like to take the taunt of the "fancy image" seriously.

Fig. 1, 2. Gabriel Lanthier, Urban Object (McGill University School of Architecture, 2008)

3. The image qua image

There is still another stumbling block in our path before we can get at the question of the image: the
nature of the image itself, specifically the way we still conceive of the image as produced by our
Essentially our use of the mathematical (a clearly designative system of coding), and more specifically
our imaging software, do indeed map the numerical onto the figurative as a kind of Newtonian calculus:
the numerical is still a derivative of the real where the real is in fact made up of mediated and codified
information. This is to say the computationally derived image is still attempting to reveal the conceptual
forms of a metaphysically imbued world.

The image at that level can only serve to reveal a natural (sic existing) concept. We can easily dismiss
the image which does not change the given real, since it fails the test of value as much as that of validity. It
may have merit as a pretty picture, but we should remind ourselves that pretty pictures, in the absence of
a relative truth, are equally ugly, or kitsch. Thus we are in double jeopardy by bringing entirely
inappropriate criteria to bear on the image/model. First, we are demanding that it be accountable to the
tautological categories of the concept, which already secures us within the concept. Secondly, we are
creating the image in the first place with a use of number which is itself an approximation of the
conceptually derived image. In this case, our use of number only sublates the conceptual idea of the
world, thus validating through the image only those criteria which situate us naturally within the concept.

Another way of saying this is that human artifice comes more and more to see itself as natural, but where
the natural is no more than the human remaking the world according to its own image: a tautologically
given self-fulfilling prophecy, or a mutually adjusting/confirming means-ends relationship (Darby, 1986).
Ironically, it appears that one way to consider this is to think primarily through the computational tools
themselves; i.e., to continue to operate fundamentally at the level of the image, where the image affords
the means of a partial dislocation out of our substantially validated selves and into the consequences of
the image per se. It is in fact to return the image to its direct instrumental role, monstrous and quasi-divine
perhaps, but only in the sense of being truly other.

But what do these images look like and to what do they refer?

The task at hand is to recognize in the images the possibility not of the so-called real (the one which we
can only imagine as a product of the world in which we recognize ourselves) but of the unreal: those
images which are products of the realm to which they naturally belong, the topological. Flusser
makes the following assertion regarding design:

One should not conceive of the city to be designed as a geographical place (such as a hill
near a river), but rather as a fold in the intersubjective relational field. This is what is meant by
the assertion that the future civilization must become immaterial.

He goes on to say:

This change is not to be underestimated, even if we are getting used to seeing folds in
fields in synthetic images of equations on the computer screen. One must only think how
difficult it was to see the geographic surface as a body surface rather than as a plane.
Strangely, a rethinking in terms of topology rather than geography will not make the city to be
designed utopic. It is utopic (placeless) as long as we continue to think geographically,
because it cannot be localized within a geographical place. But, as soon as we are able to think
topologically, that is, in terms of networked concrete relationships, the city to be designed
allows not only localization, but also localization everywhere in the network. It comes into being
forever and everywhere, where intersubjective relationships accumulate according to a
connection plan to be designed.

To design, of course, is to fashion with immediate ends in sight. It is to render real or natural an artifact-
model. This activity is based on an imposed set of ends designed to reorient the given. Note that the word
"designed" in the above sentence could just as easily have been replaced with the word "intended". Design
has always been a means of mapping the designative, natural, and necessary onto the symbolic
and codified realm of meaning. It does this by allowing one to understand both modes of being through
the facticity of appearance. This does not change. What changes here is the assumption that the
facticity need refer to the given in order to gain legitimacy as meaning, and moreover that the given by
definition is geometric (of the measurement of the earth) and concrete. The conundrum is that the
human at best is a deferral layered on top of the natural, but as we saw above, the natural can only take
the form of the concrete or earthly appearance of the natural.

Let us try a very concrete example.

When we build a physical model we create an object which points to a discernible real, if at a distance.
Now the model which lies concretely in front of us is inevitably of a different material and of a different
scale from the real to which it refers. (Note that the referent is called the real, but it too is a model for another
real, and so on and so forth. The notion of difference in both scale and material is important.) The fact
that we are comfortable with the model as a model stems from both its difference from the real and the
fact that it opens a clearing for us in our relation to the real. The point is that it allows us to imagine the
real differently, while both possibilities are confirmed in the facticity or givenness of both the referred real
and the physical model. This is to say that the very concrete nature of the model opens up an impossible or
immaterial space, a heterotopia, by virtue of the reflexive nature of one reality vis-à-vis the other.
The irony here is not only the immaterial nature of that relationship but the profound degree of freedom
introduced by the static and naturally given. This freedom, implicit in the appearance of the concrete
material, underlies the implicit freedom of the immaterial political realm and is underlaid itself by its
mimetic relation to the real.

Now the digital model that strives to simulate the real does the opposite. By relying on the flat
Renaissance window of perspective to simulate the depth of a solid model, it can at best point to a
unidirectional relation to the real, by denying its own facticity in the need to stress the real as prior to the
model. Having denied its own inherent condition (the condition whereby it comes to be), it erases or
neutralizes the degree to which it stands as a condition of the human per se.

By way of a provisional conclusion we can summarize the preceding in the following way. When the
artifact is produced directly by means of techniques (sic habits), it creates a bidirectional engagement with
the world which by definition is both mimetic and designative. When that artifact is produced
technologically, the designative comes to dominate. Alienation of the human subject results from the
introduction of a division between the means of appearance and the meaning of that appearance, thereby
raising a question as to the necessity of that meaning in the first place. When the technological means of
producing the artifact is computationally based, that alienation of the subject extends to rendering even
the subject and the autonomous object questionable entities in themselves. This is to say that the
computationally derived field of relations manifest in the images/models becomes the potential locus of a
radically different ground of appearance. This ground is still human in that it is first and foremost a
political ground and a political concern. The human now has to rethink the assumptions of material
objectness as well as subjective autonomy, if only because the mimetic nature of the means of
appearance has categorically changed: an opening is cleared by way of a rupture in the real produced by
the new model. Where mimesis has ceased to be the fundamental means of engaging the world, the
immaterial replaces the material as the ground of the human. The non-figurative, non-mimetic digital
artifact does not refer to a geometrically founded definition of depth. It does not refer to a metaphysical
hierarchy of 2D and 3D. Rather, by referencing a concrete network of relations, it refers to a political
ground as the rhetorical authority and demands interpretation of those concrete relations. As Flusser
observes in the quote above, the geometrically based mimetic interpretation can only result in the utopic,
while the topologically based relational field (a surface which is both an image and a model) demands
to be rendered meaningful within the intersubjective realm as a localized condition available everywhere.
It is a ground primarily revealed through the image and not through the language and text of the concept. It is not
tautologically tied to a naturally given subject/object relationship.

It is nothing short of a truly radical phenomenological approach, wherein the facticity of the numeric is
recognized generatively within the human condition.

Fig. 3. Evelyne Bouchard + Anna Rocki, 00G00S (McGill University School of Architecture,

Hannah Arendt, "Introduction into Politics," The Promise of Politics (New York, 2005)
Vilém Flusser, "Form and Material," The Shape of Things (London, 1999), p. 22
Tom Darby, "Reflections on Technology," Sojourns in the New World (Ottawa, 1986)
Vilém Flusser, "Designing Cities," Writings (Minneapolis, 2002), p. 177
Michel Foucault, "Des Espaces Autres," Architecture/Mouvement/Continuité (Paris, Oct. 1984)
Congruent spaces

Athanassios Economou, PhD

Georgia Institute of Technology, USA

Thomas Grasl
Technical Institute of Vienna, Austria

The notion of congruence in formal composition is briefly examined and contextualized within current architecture discourse. A computational tool is presented in the end to illustrate alternative usages of congruence in analysis and synthesis in spatial design.

1. Introduction

Contemporary architecture discourse is driven by extensive research on issues of three-dimensional patterns, space packing, non-periodic three-dimensional tilings, parametric
space modules, three-dimensional Voronoi tessellations and so on. This is not an accident;
three-dimensional spatial vocabularies and transformations have already been at the center
of design inquiry in twentieth century architectural discourse and the recent emphasis on
CAAD related three-dimensional descriptions of architectural form could only foreground this
trajectory even more.

A central theme in this discourse is the theory of congruence and in particular the formal
relationship of module to unit (or commensurable part to whole). In two-dimensional space
this relationship is rather straightforward and easy to deal with and comprehend. In three-
dimensional space this relationship becomes considerably more complex: the number of
transformations that repeat a module increases, the complexity of the interactions of these repetitions increases exponentially, and, to make things worse or better depending on how one
sees this, there is no vantage point to trace correspondences in the same way that these
could be traced on a two-dimensional plane. What exacerbates the problem even more is
the lack of a body of formal knowledge that could effectively help architects and designers
to explore systematically the possibilities afforded in a given three-dimensional setting.

This problem of congruence in design exploration sets up a whole series of questions that
are all viciously entangled one within another. It is not at all clear, to begin with, whether
congruence is something to be desired in design. If it is, or better, when it is, is there any
preferred formalism to address such issues in a constructive way? Does this design space
characterized by congruence provide a good setting for teaching formal composition (formal
in both its spatial, visual sense as well as its mathematical, logical sense) in architecture?
Are shape grammars well suited to address these issues? Are there group theoretical computational constructs to support such inquiry? Do structured experiments in design that are uninformed by any prior understanding of the complexities of transformations and their interactions also have research value, rather than just aesthetic value?

The work here is situated within this critical framework and sets up a computational tool to begin to address some of these questions. More specifically, this work looks closely at a specific set of algebraic groups, the ten abstract groups and their corresponding fourteen geometric types that can describe the symmetry of any finite three-dimensional shape, and provides an automated environment to enumerate and represent all their subgroups and their relationships with lattices in a graph theoretic manner. The key to this inquiry lies in one of the most interesting aspects of symmetry theory and pattern analysis, namely, the part relation of symmetry groups; the elaborate and complex hierarchies of symmetry groups and subgroups, all nested one within the other, point to direct correspondences with complex compositional structures of spatial patterns and suggest a constructive methodology for formal analysis and composition in architecture design.

2. A congruence model for design

Congruence is an equivalence relation between shapes; two shapes are congruent if they
can be mapped one upon the other by an isometric transformation, say a combination of
translations, rotations, and reflections1. An isometric transformation is a mapping that preserves all distances, and hence overall shape and metric dimensions, though it may reverse handedness. Significantly, this mapping can be extended to any point in space and can thus render
notions of shapes and spaces as interchangeable. Within this framework, symmetries of
shapes are equivalent to symmetries of space and symmetries of spaces are equivalent to
symmetries of shape. This key idea was elegantly presented by Weyl in his famous lectures
on symmetry in 1951 discussing a theory of space along the lines of Leibniz, Newton and
Helmholtz.2 Still, for the purposes of this discussion, it is immaterial whether this
mathematical framework can indeed provide an interpretative structure for the study of real
space in general; the theory is significant here only as a study of pattern in general and in
the sense that it can provide a generous framework for new inquiries and insights for a
variety of spatial patterns in design and the arts. In this sense the typical differences
between mathematical patterns and patterns of appearance are duly observed here too: the
former always abstract, infinite, geometrical, numerical, the latter always concrete, finite,
corporeal, subjective.
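As a concrete illustration of the definition above, congruence of finite point sets can be tested by comparing isometry invariants. The following Python sketch (not part of any tool discussed here; the distance-signature test is a necessary condition for congruence, and for generic configurations a sufficient one as well) compares two triangles:

```python
from itertools import combinations
import math

def distance_signature(points):
    """Sorted multiset of pairwise distances; invariant under any isometry
    (translation, rotation, reflection), hence under congruence."""
    return sorted(round(math.dist(p, q), 9) for p, q in combinations(points, 2))

def possibly_congruent(a, b):
    """Necessary test for congruence of point sets; sufficient for
    generic configurations."""
    return len(a) == len(b) and distance_signature(a) == distance_signature(b)

t1 = [(0, 0), (3, 0), (0, 4)]    # a 3-4-5 right triangle
t2 = [(10, 0), (7, 0), (10, 4)]  # the same triangle, mirrored and translated
t3 = [(0, 0), (3, 0), (0, 5)]    # a different triangle
print(possibly_congruent(t1, t2), possibly_congruent(t1, t3))  # → True False
```

Note that t2 has the opposite handedness to t1; as an isometry invariant, the signature treats reflected copies as congruent, matching the definition above.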

In architectural and spatial design congruence is typically employed to create a spatial order
characterized by the repetition of similar figures. This notion of repetition links congruence
to symmetry and its various construals in architecture discourse. It is a rather interesting
fact that the earliest account of symmetry, at least as it is presented in the oldest surviving treatise on architecture by Vitruvius3, considers symmetry as a commensurable relation between a module and a whole; our rather contemporary understanding of symmetry as a sum of congruent parts (a more precise but also somewhat impoverished term) can be traced to the theoretical exposition of architecture discourse by Alberti4. The Vitruvian
definition foregrounds a relationship between a module and some whole cast in arithmetical
terms, for example, the number of ways that a repeated module measures the whole, and
the Albertian definition suggests a relationship cast in geometrical terms, for example, the
kinds of ways that a repeated module can be mapped upon some whole. Nowadays, the
latter relations are typically accounted for in transformational geometry and the former relations in number theory and its various branches, including the theory of proportion and the theory of means, as well as their contemporary equivalents in the theory of modular
Congruence in formal composition in design arises in many and diverse ways; at its
simplest and most straightforward way shapes and their spatial relations can be repeated
throughout the composition to produce highly repetitive designs and equally standardized
parts. Most of the designs exhibiting overt translational, reflectional, rotational, glide
reflectional structure and so forth are typical candidates for this species of the model. The
next two types of congruence deal with different ways of tackling non-repetition and
complexity, if complexity can be associated with the lack of repetition. The second type of designs exhibits repetitive modules with non-repetitive spatial relations while the third type exhibits non-repetitive shapes and repetitive spatial relations. The second class consists of designs that feature identical parts put together in many and diverse ways while the third class consists of dissimilar parts that are connected one to another through repetitive, identical joints. At the other end of the spectrum of the model, non-repetitive shapes and non-repetitive spatial relations are typically reserved for highly expressive architecture designs composed of unique shapes and components. The structure of this congruence model is
shown in Table 1; the rows denote shapes, the columns denote spatial relations and all
entries are structured around the feature of congruence (C) and non-congruence (NC).

Table 1: Interrelationship between congruence and design

                          Spatial relations: C      Spatial relations: NC
Shapes: C                 C / C                     C / NC
Shapes: NC                NC / C                    NC / NC

Most contemporary designs can be cast in one of these four classes. Among these four
classes of designs the first three exemplify various degrees of controlled repetition of shapes
and spatial relations or alternatively of parts and joints, and they provide alternative
contexts for the exploration of congruence and commensurability in design. It would not be far-fetched to claim that a substantial part of the fourth class of designs, those that rely on non-standard parts and non-standard relations, can be dealt with as well, either under parametric rules
based on given symmetry conditions or combinations of layers of symmetry that produce
unique and highly asymmetric compositions.

It is suggested here that the overall emphasis of architecture discourse on issues of pattern
making and parametric variation relies heavily on congruence relations. It is the purpose of
this work to examine questions regarding the fitness of the role of congruence
in formal composition. Currently most formal analysis and generative tools using group
theoretical techniques apply or produce highly repetitive designs that show immediately
their recursive structure. Other formalisms, including shape grammars, and especially one
type of them, the kindergarten grammars, produce highly expressive designs with repeated
parts that do not immediately reveal some underlying structure5; still a great portion of the
visual interest of these designs is conditioned by their ability to foreground the
commensurable part that generated the whole design, in itself an issue not desired in every
design world.

The broader question that is opened up here is whether a complex architecture object, or
part, can be interpreted as a layered object whose parts are all related symmetrically; in
other words whether an asymmetric shape or configuration can be understood in terms of
nested arrangements of some order of symmetry. The obvious advantage of such a
worldview is that spatial patterns that are characterized by congruence are candidates for a
group theoretic approach for both analysis and design purposes. A particular analytical
method founded upon group theory, the subsymmetry analysis, has been quite successful in
showing how various types of symmetries can be revealed in parts of a design and how
various symmetric transformations may combine to achieve the whole design6. A nice series of architectural examples illustrating the method includes the subsymmetry analyses of the Pantheon, the Ward Willits skylight by Frank Lloyd Wright, and the Free Public Library and the How House by R. M. Schindler7. Other examples emphasizing the power of the method for
design purposes rather than analysis are available too8.
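The kind of subsymmetry enumeration described above can be sketched for the simplest non-trivial case, the symmetry group of the square (D4, of order 8). The following Python toy, a brute-force sketch and not the authors' implementation, recovers all ten subgroups by their orders:

```python
from itertools import permutations, product

# The symmetries of a square are the vertex permutations (vertices 0-3 in
# cyclic order) that preserve edge-adjacency on the 4-cycle.
EDGES = {frozenset((i, (i + 1) % 4)) for i in range(4)}

def is_symmetry(p):
    return all(frozenset((p[a], p[b])) in EDGES for a, b in map(tuple, EDGES))

D4 = [p for p in permutations(range(4)) if is_symmetry(p)]  # 8 elements

def compose(p, q):
    """Apply q first, then p."""
    return tuple(p[q[i]] for i in range(4))

def closure(gens):
    """Smallest subgroup containing the generators."""
    elems = {(0, 1, 2, 3)} | set(gens)
    while True:
        new = {compose(a, b) for a, b in product(elems, repeat=2)} - elems
        if not new:
            return frozenset(elems)
        elems |= new

# Every subgroup of D4 is generated by at most two elements, so closing
# all singletons and all pairs enumerates the full subgroup lattice.
subgroups = ({closure([g]) for g in D4}
             | {closure([a, b]) for a, b in product(D4, repeat=2)})
print(sorted(len(s) for s in subgroups))  # → [1, 2, 2, 2, 2, 2, 4, 4, 4, 8]
```

The ten subgroups recovered here are exactly the subsymmetries that a subsymmetry analysis of a square-based design would consider.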

3. The sieve application

This work here generalizes all above approaches and provides a computational framework
for the complete and automated representation of all underlying group structures of three-
dimensional finite shapes9. The application uses a sieve of Eratosthenes strategy to apply a net-like search for designs that comply with given characteristics; here, the search is for parts in a configuration that have the same algebraic group characteristics as those of the whole configuration. The subject matter of the application is the fourteen algebraic structures that can describe the symmetry of any three-dimensional shape with a singular point, and the code has been implemented in Java using the JUNG library10.

The interface of the application consists of three modules that provide alternative
representations of all point symmetry structures of Euclidean space: a) a discursive
representation including the Shubnikov, Coxeter, Schoenflies, Weyl and Thurston notations;
b) a graph theoretical representation in interactive strict order and partial order lattices; and
c) a diagrammatic representation in terms of labeled shapes in orthographic projections,
very much akin to a Wulff net projection. Each symmetry group can be selected from a) the
set of symmetry types allowed in the space; b) the order of symmetry that characterizes the type; and/or c) the order of rotation of the principal axis of the shape. The first choice is
given in the form of a dropdown menu and the other two in terms of variables that accept
typed numerical input from the user. The design of the interface of the application showing
the parallel computations of discursive notations, graph lattices and diagrammatic
representations of all central three-dimensional symmetry structures is given in Figure 1.

Figure 1. Sieve interface

For any selection of a specific symmetry structure the application provides automatically a
set of discursive notations, including the Shubnikov, Coxeter, Schoenflies, Weyl and Thurston
ones. For any such selection the application provides as well a graph representation of all
nested subsymmetry groups in the structure. This graph representation, also known as a Hasse diagram, provides a pictorial representation of the relationship of all subsets of a set;
here the set of all symmetry subgroups of any symmetry group is sorted by a relation that orders all the subgroups in the set, and the corresponding relations are drawn in strict order graphs if this relation can be established for all pairs of elements in the set, and in partial order graphs if this relation is defined for some, but not necessarily all, pairs of items in the set. Finally, for any selection of a specific symmetry structure the application provides as
well a diagrammatic representation of all these symmetry types in terms of labeled shapes
in orthographic projections, very much akin to a Wulff net projection. The diagrams provide
a very intuitive account of the order of symmetry of the shape in terms of labels (the number of labels corresponds to the order of symmetry), but also in terms of their spatial relation one to another and to the overall configuration. Labels associated with the front or
the back face of a shape facing the viewer are denoted as closed or open circles
respectively. In a rather designerly mode, the application allows as well for a composite sum
of several symmetry subtypes in the configuration. In this sense the complete pattern that
emerges may have no symmetry at all but its individual parts have very precise
relationships that are built at will by the user of the application.
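The subgroup ordering that the Hasse diagram pictures can be sketched on an elementary case: the subgroups of a cyclic group Cn correspond to the divisors of n, ordered by divisibility. A minimal Python sketch of the cover-relation computation follows (the application itself uses the JUNG library for this; the code below is only illustrative):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def hasse_covers(elems, leq):
    """Cover relations of a partial order: pairs a < b with no element
    strictly between them; exactly the edges drawn in a Hasse diagram."""
    covers = []
    for a in elems:
        for b in elems:
            if a != b and leq(a, b):
                between = any(c not in (a, b) and leq(a, c) and leq(c, b)
                              for c in elems)
                if not between:
                    covers.append((a, b))
    return covers

# Subgroups of the cyclic group C12 correspond to the divisors of 12,
# partially ordered by divisibility ("is a subgroup of").
print(hasse_covers(divisors(12), lambda a, b: b % a == 0))
# → [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```

The divisibility order on {1, 2, 3, 4, 6, 12} is a strict total order on no chain longer than 1-2-4-12, but only a partial order overall (2 and 3 are incomparable), mirroring the strict order and partial order graphs described above.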

4. Discussion

This work here reports on the computational tool that has been developed to enumerate and
illustrate all possible subgroups of a given three-dimensional finite shape. The design of this
tool to help understand constructively the staggering complexity that can unfold within
three-dimensional symmetry structures has been briefly described. The graph
representation of all symmetry subgroups of a configuration suggests a complex but
rewarding insight into the symmetry structure of a spatial configuration, and the corresponding
discursive and pictorial representations provide conceptual clarity and design intuition in the
inquiry. The import of shape grammars for design applications based on this framework
suggests a very exciting trajectory of systematic studies in the architecture of form.

Conway J., Burgiel H., Goodman-Strauss C., The Symmetries of Things, Wellesley MA: A K Peters, 2008.
Weyl H., Symmetry, New Jersey: Princeton University Press, 1954.
Vitruvius M. P., The Ten Books on Architecture, (Transl.) Morgan M. H., Cambridge MA: Harvard University Press, (1914) 2005.
Alberti L. B., On the Art of Building in Ten Books, (Transl.) Rykwert J., Leach N., and Tavernor R., Cambridge MA: MIT Press, 1991.
Stiny G., Shape: Talking About Seeing and Doing, Cambridge MA: MIT Press, 2006.
March L., Architectonics of Humanism: Essays on Number in Architecture, London: Academy Editions, 1998.
Park J., "Subsymmetry Analysis of Architectural Design: Some Examples," Environment and Planning B: Planning and Design, 27 (1), 2000, pp. 121-136.
Economou A., "Tracing Axes of Growth," in Visual Thought: The Depictive Space of Perception, Advances in Consciousness Research 67, Albertazzi L. (ed.), Amsterdam: John Benjamins, 2006, pp. 351-365.
Economou A., Grasl T., "Unraveling Complexity," in Proceedings of the Third International Conference on Design Computing and Cognition, Gero J., Goel A. (eds.), New York: Springer, 2008, pp. 361-374.
O'Madadhain J., Fisher D., Smyth P., White S., and Boey Y., Analysis and Visualization of Network Data Using JUNG [http://jung.sourceforge.net/doc/index.html], 2005.

Beyond the Visual - Towards Reality in Digital Design

Arno Schlueter
Institute of Building Technologies, Department of Architecture
ETH Zurich / Swiss Federal Institute of Technology

Requirements on building designs related to ecological, economic and social sustainability
have constantly increased. In a future where digital techniques in design will be ubiquitous,
the question is not if but how architects can utilize available computational power at their
hands to go beyond geometry, to integrate matters of increasingly socioeconomic
importance into their designs. To deal with these non-visual aspects results in a high
complexity concerning the decisions to be made during the design phase. Digital techniques
should be utilized to understand the consequences, limitations and opportunities of design
ideas. They should enable design decision making rather than form making. In this paper
we conceptualize a framework utilizing computational methods to foster a more integrated
view on buildings. The framework is based on building information models as a database and
interactive design space. Methods such as knowledge representation, performance
assessment, information visualization and optimization constitute the building blocks of the
framework. They utilize the design space to reduce complexity, aiming to foster intuition
and creativity. Exemplifying the framework on the task of integrating energy and its
utilization as parameters in design decisions, a prototypical implementation of the
framework and related case studies are briefly described.

1. Introduction
The profession of the architect currently faces great challenges, probably the most demanding in its history1. For some years now, the most influential
extrinsic influence is the aspect of climate change and the impact of buildings on global CO2
emissions. The environmental paradigm is often linked to requirements on process
efficiency, tightened budgets and shorter timeframes. These influences are often perceived by architects as an obstruction, as limiting creativity in the design process. In order to meet these increasing demands in the future, architects need to know more and they need to know it
at an earlier stage in their designs.

This socioeconomic tendency is flanked by changing tools of the trade. The tendency
towards digitalization2 is demonstrated in many other fields of personal and professional life.
As the most visible influence in architectural design, digital tools have expanded both how and which forms can be designed and eventually built. Powerful modeling tools have
made the designing of complex geometries almost as easy as drawing a line. In order to get
those geometries built, digital fabrication processes had to be developed. Digital fabrication
has become widely established among researchers as well as in an increasing number of architectural practices.

The perception of architecture and its discourse has traditionally been dominated by its
form3. Geometry is important, but it is just one aspect of architectural design. Focusing on
form as the sole driver when using digital techniques bears the danger of limiting creativity
by making the design dependent on the options certain design software offers. Architecture has always been
a multi-objective task. The non-visual properties of a building such as energy, heat and
mass flows represent increasingly important parameters of a design.

After decades of 2D-CAD drawings representing a building design, the concept of building
information modeling has been rapidly adopted by the building industry. Subject to
extensive research over two decades and originally evolved from mechanical engineering,
the capability of digital models to store multi-disciplinary information4 offers interesting opportunities, also in design. Taking the example of energy and mass flows as parameters in
architectural design, the potential of such models lies in their function as rich data
repositories. Due to advancements in modeling software, it is easily possible to model,
integrate and manipulate not only geometry but also semantic and topological parameters5.
Building components can be represented as objects containing various parameters such as
physical properties, manufacturing constraints or cost information.

We propose a framework to address the non-visual aspects of architectural design. The application of the framework leads to digital design techniques that focus on architectural
design decision making rather than form making. The methods proposed utilize a building
information model to store and access relevant information. The aim of the framework is to
identify strategies to process, visualize and eventually simplify complex dependencies
between different categories of information in order to grasp the complex reality of a design
task. Enabling the designer to survey and evaluate the consequences and dependencies in his or her design eventually results in more room for intuition and creativity, and in better designs.

2. Design support framework

Many of the decisions an architect has to take during design are characterized by
preconditions that are very challenging, as described in research fields related to decision
making6. Interestingly, the preconditions oscillate between not enough information and too
much information. The decisions to make are often of multi-criterion nature and can only be
solved by compromise, by balancing different criteria such as building form and
performance. Very challenging is the complexity arising due to the manifold interdependencies
between form, material and technical systems. In addition, extensive numerical calculations
are necessary for analysis and simulation. Finally, the highly emotional side of design is
works against decisions based on strict reasoning. Intuition and emotions are strongly
influential and not necessarily bound to the most rational solutions of a problem. In order
to support decision making by digital techniques, drivers of human decision making have to
be clearly understood. Two key poles in decision making can be identified: reasoning and
recognition6. Both are closely linked: interactive decision support systems aim at assisting rather than replacing the human decision maker by providing rational models to support
his or her reasoning abilities and by extracting relevant patterns in vast volumes of
overabundant information to support his or her recognition abilities6. The notion of this
framework is to identify digital concepts and methods to support decision making by
addressing the two key poles, reasoning and recognition. These methods can roughly be
grouped into the fields of artificial intelligence, numerical calculation and information
visualization. Each one of these groups, as the research on human decision making itself,
contains an extensive body of research. In context of this paper, only a very rough concept
can be formulated, which is derived from our research work in the fields of performance
related design support tools.

2.1 Embedding knowledge

The architect as a non-specialist needs to decide on matters that require specialist
knowledge, already at an early stage of design. In the case of energy and mass flows for
example, the necessary knowledge to be embedded is related to physics and engineering.
Physical dependencies, thermal behavior and material properties have to be described and linked to objects in the building model. As many expert tools require extensive knowledge in
order to input the data, perform the calculations and interpret the results, it is necessary to
condense and include this knowledge into a design framework to make it available already
in early design stages. Such knowledge can be represented as sets of rules and logics,
describing the interdependencies between parameters. As it is not always possible to acquire
rules and parameters7, knowledge-based methods such as case-based reasoning, among
others, can be used. Case-based reasoning utilizes scenarios successfully solved in the past
to find solutions to a current problem. Every successfully solved problem extends the database; the system learns. The same strategy is used in the fields of biomimicry and
biomimetics: Nature serves as the database with proven solutions to apply to a given
problem. The embedding of domain-specific knowledge into the building model is the key
precondition for further steps.
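A minimal sketch of the case-based reasoning cycle described above (retrieve, reuse, retain); all feature names, cases and the similarity measure are illustrative assumptions, not the authors' implementation:

```python
# Toy case-based reasoning: retrieve the most similar solved case, reuse
# its solution, and retain the newly solved problem in the case base.
case_base = [
    # (problem features, solution) -- contents are illustrative only
    ({"climate": "cold", "glazing_ratio": 0.3}, "floor heating + heat pump"),
    ({"climate": "hot",  "glazing_ratio": 0.6}, "external shading + night cooling"),
]

def similarity(a, b):
    """Share of matching features: a crude nearest-neighbour measure."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / max(len(keys), 1)

def solve(problem):
    best = max(case_base, key=lambda case: similarity(case[0], problem))
    solution = best[1]                     # reuse the retrieved solution
    case_base.append((problem, solution))  # retain: the system "learns"
    return solution

print(solve({"climate": "cold", "glazing_ratio": 0.5}))
# → floor heating + heat pump
```

Each call to `solve` grows the case base, so later queries can match earlier project-specific solutions rather than only the seed cases.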

2.2 Analysis and simulation

The availability of connected information, of rules and dependencies, implies the integration
of analysis and simulation. The largest effort to execute specific calculations, the input and
contextualization of the necessary information, has already been done by establishing a rich
building model serving as the database. When using a building information model in design,
every step in design alters and updates this database. The type of calculation model to be
used defines the rules and parameters necessary to be accessed from the database. Most of
the parameters are already an inherent part of an architect's design, such as material
properties, spaces and volumes, opening surfaces and orientation. Other, more specific
parameters such as physical properties of surfaces need to be designed as part of the
building model. They must be part of the domain specific knowledge representation. The
aim is to keep the number of parameters needed to describe the domain as low as possible. In order
to be seamlessly integrated into the design stage, calculation models have to be able to
deliver fast rather than highly precise calculations. The aim is to show the tendencies a
design is developing in order to evaluate and direct design decisions. When the design stage
is finished, precise calculations by experts still have to validate the final design. However,
making the right decisions already during design avoids tedious and costly changes afterwards and leads to better integrated designs.
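The "fast rather than highly precise" calculations advocated above can be illustrated with a steady-state transmission heat-loss estimate driven by a handful of model parameters; the formula is a textbook simplification and all names and values are illustrative assumptions:

```python
# Rough steady-state transmission heat loss of a single room:
# Q = sum(U_i * A_i) * delta_T (watts), a deliberately coarse model meant
# to show design tendencies, not to replace expert simulation.
def heat_loss(components, t_inside, t_outside):
    """components: list of (area_m2, u_value_W_per_m2K) per envelope part."""
    delta_t = t_inside - t_outside
    return sum(area * u for area, u in components) * delta_t

# A room as a minimal "building information" record: wall, window, roof.
room = [
    (18.0, 0.25),   # opaque wall: 18 m^2, U = 0.25 W/m^2K
    (6.0, 1.10),    # glazing:      6 m^2, U = 1.10 W/m^2K
    (12.0, 0.20),   # roof:        12 m^2, U = 0.20 W/m^2K
]

base = heat_loss(room, 20.0, -4.0)
# Design variant: enlarge the window at the wall's expense.
variant = heat_loss([(14.0, 0.25), (10.0, 1.10), (12.0, 0.20)], 20.0, -4.0)
print(round(base), round(variant))  # → 324 406
```

The point is the comparison, not the absolute numbers: every design move updates the model, and the recomputed tendency (here, the variant loses more heat) can direct the next decision.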

2.3 Information visualization

In order to read the structure of the stored information and knowledge, as well as to interpret the calculation results, suitable forms of visualization are crucial. Only an understanding of the mechanisms behind them makes decision making possible. The architect
must be able to take a certain perspective on a building design, to highlight certain aspects
as well as masking others. Intuitive decision making is dependent on suitable visualization; it can be described as an instantaneous, quasi-automatic decision triggered by an affective, visual or sensorial stimulus6. Many expert tools visualize information as endless rows of
tables hard to interpret without expert knowledge. It is not necessarily the precise numbers
that are important for making design decisions; it is the understanding of the tendencies along which a design is evolving. As designing a building is an iterative process rather than a linear
sequence, visualization should address the process rather than the result.

Another important aspect is relativity: quantitative results are not nearly as easy to
interpret as qualitative statements such as reference states. New forms of visualizations
have to be researched to enable architects to understand the non-visual processes of a
building. Examples can be found in medical applications, for example in the visualization of
CT scans as interactive sections through an object, coupling 2D information with 3D
localization. In engineering, force and flow visualization provide a deeper understanding of
physical processes. However, these apply to a different scale. Physical processes in buildings, for example, need not be considered in such high resolution; they need to be understood on a more abstract level.

2.4 Optimization
The fourth method group addresses optimization as strategy to support the search for
solutions within a vast solution space. Optimization as problem-solving strategy can be used
to narrow down possible solutions according to the designer's criteria and thus facilitate decision making.
The choice of the adequate problem-solving strategy is strongly related to the given
problem. Possible strategies include the use of heuristics to make quick estimates or genetic
algorithms especially to address multi-criterion problems. Agent-based models can be used
to simulate actions and interactions of autonomous individuals in a network, such as user behavior and comfort as well as communication between technical systems and components.
Many methods are closely linked and are often used in combination. They have been
extensively used for problem solving in many fields. Their potential in architecture lies in
addressing the multi-criterion nature of design decisions as well as condensing
overabundant information.
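The narrowing of a multi-criterion solution space described above can be sketched as a simple Pareto filter over candidate design variants; the criteria and numbers are illustrative assumptions, not results from the toolbox:

```python
# Keep only non-dominated design variants: a variant is dominated if another
# one is at least as good on every criterion and different on at least one.
# Both criteria are cast as "lower is better" (energy demand, cost).
def pareto_front(variants):
    front = []
    for name, scores in variants:
        dominated = any(
            all(o <= s for o, s in zip(other, scores)) and other != scores
            for _, other in variants
        )
        if not dominated:
            front.append(name)
    return front

variants = [
    ("compact, small windows", (55, 1200)),   # (kWh/m^2a, cost/m^2)
    ("compact, large windows", (70, 1250)),
    ("articulated, shaded",    (60, 1100)),
    ("articulated, glazed",    (80, 1400)),
]
print(pareto_front(variants))
# → ['compact, small windows', 'articulated, shaded']
```

The surviving variants represent genuine compromises between the criteria; choosing among them remains a design decision, which is exactly the division of labor between optimization and designer argued for above.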

3. Prototypical toolbox and case studies

Apart from optimization, the proposed methods were implemented in a prototypical design support toolbox. Over the past two years, the toolbox was gradually developed and used in
student projects with over 100 students of architecture8. The toolbox exemplarily addresses
a non-visual yet important aspect of building design: physical processes of energy and mass
flows and related thermodynamic building systems. Both parameters highly influence
building design as well as performance. The question of the case studies was whether students of architecture with little knowledge and experience would be able to consider these effects from the beginning of their designs if they had digital means to study and understand the
dependencies and interactions of their design decisions. A building information model, equipped with the necessary parameters, was used from the very beginning of a design. The
model is equipped with parametric components such as walls, windows or room objects. The
modeling tool is object-based, requiring a certain degree of definition of the objects (areas, volumes, material properties). In order to preserve a certain ambiguity at the beginning of the design, generic objects containing the minimum parameters needed to execute the calculations are provided. As the design evolves, these components can be defined more closely and the model
further enriched. In this way, the performance calculations also become more accurate and
complete during the design process.
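The progressive refinement of generic objects described above can be sketched as components that carry default parameters until the designer overrides them; all names and default values are illustrative assumptions, not the toolbox's actual data model:

```python
# Generic components carry conservative default parameters so that the
# calculations can run from day one; the designer overrides values as the
# design becomes more concrete.
GENERIC_DEFAULTS = {"u_value": 0.5, "thickness": 0.3}

class Wall:
    def __init__(self, area, **overrides):
        self.area = area
        self.params = {**GENERIC_DEFAULTS, **overrides}

    def transmission(self, delta_t):
        """Transmission loss in W; runs with defaults or refined values."""
        return self.area * self.params["u_value"] * delta_t

early = Wall(area=20.0)               # generic object: defaults apply
late = Wall(area=20.0, u_value=0.18)  # refined: concrete wall build-up
print(round(early.transmission(25)), round(late.transmission(25)))
# → 250 90
```

Because both the generic and the refined object answer the same query, the performance feedback never breaks as the design evolves; it simply becomes more accurate.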

Figure 1. Design support toolbox in the modeling environment, visualizations


Engineering knowledge of thermodynamics, technical systems and construction was embedded as relatively simple sets of rules and constraints. To alter the parameters, the
designer does not need to enter numbers and values; he or she can decide on a more concrete level, for example, whether radiators are preferred over floor heating for the heat
emission. In the background, related parameters of connected components are altered, for
example the resulting efficiency of the heat pump which itself influences the heating energy
to be delivered. Numerical calculation models (energy, exergy, cost) access the parameters
in the building model as well as in the interface where certain choices can be taken. As the
toolbox is directly embedded into the modeling editor, calculations are executed and
visualized in quasi real-time. Various form of graphical visualizations were tested and
evaluated. While the design evolves, physical building behavior and performance is
visualized and can be related to the design steps. When manipulating form, material or
systems, the results on the physical processes as well as on the aesthetics are immediately
displayed. The direct feedback of the impact of the design decisions enables the drawing of
quick conclusions that influence the following design steps. By using internal rules and
constraints, the amount of additional parameters necessary for the calculations could be
kept at a minimum. The result was a significant reduction in input parameters as well as
reduced complexity for the user
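The rule mechanism described above can be sketched in a few lines. This is a hypothetical Python example with invented names and assumed numeric values, not the toolbox's actual rules: a concrete design choice ("radiators" vs. "floor heating") is mapped by an internal rule onto the numerical parameters the energy calculation needs, so the designer never enters those numbers directly.

```python
# Hypothetical rule table (values assumed for illustration): the chosen
# heat-emission system fixes its supply temperature, which in turn
# determines the seasonal efficiency (COP) of the connected heat pump.
EMISSION_RULES = {
    "radiators":     {"supply_temp_c": 55, "heat_pump_cop": 2.8},
    "floor_heating": {"supply_temp_c": 35, "heat_pump_cop": 4.2},
}

def delivered_energy(heating_demand_kwh: float, emission_system: str) -> float:
    """Electricity the heat pump must draw to cover the heating demand."""
    params = EMISSION_RULES[emission_system]  # rule fires in the background
    return heating_demand_kwh / params["heat_pump_cop"]

# The same demand, two design decisions, immediate numeric feedback:
demand = 8400.0  # kWh/a, assumed value
print(round(delivered_energy(demand, "radiators"), 1))      # 3000.0
print(round(delivered_energy(demand, "floor_heating"), 1))  # 2000.0
```

The point of the pattern is the reduction in input effort the text describes: one qualitative choice replaces several coupled numeric parameters.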

4. Conclusion
Even though the methods of the framework were implemented on a very rudimentary level,
the case studies showed that the chosen computational methods actively support design
decision making in fields of little knowledge and high complexity. The student projects have
shown that considering non-visual aspects such as energy and mass flows as design
parameters is not an obstruction to creativity but an opportunity to discover different design
strategies. For some students this led to a different visual appearance, for some it did not.
For all students, however, it led to more responsible and better performing buildings.
Interestingly, many students developed a competitive attitude towards performance, trying
to optimize certain criteria to the absolute minimum or maximum. Only a short introduction
(about four hours) was necessary to give an overview of basic physical principles. This
introduction, however, was very important for the students to judge the effects they
observed as their designs progressed; it was necessary to avoid the black-box effect of not
knowing what goes on inside. By using the toolbox from the beginning, they were able to
evaluate and experience their actions in a very direct way, becoming more aware of the
influences and consequences. Balancing between performance and aesthetics was possible.
What proved to be very important was that students understand the concepts of the digital
model behind the toolbox. In order to apply such a framework correctly, an understanding
of the mechanisms of the software was crucial: when rules and parameters are not
understood, the results cannot be interpreted. Establishing the model, although already
simplified, proved to be still not intuitive and simple enough. As many students used the
modeling software for the first time, it was still too much about dealing with software
instead of designing. With the increasing use of software for building information modeling,
and with more intuitive approaches to doing so, we believe that this obstacle will diminish.

The proposed framework opens up a large field of research. Digital tools and techniques
should open up the solution space for architects, not limit it. In order to do so, they have
to address the design process, not the result. We believe that this research in digital tools
and techniques is crucial to meet the future demands placed on architects, their work and
their buildings in a changing environment. Or, to use the words of Ludwig Mies van der
Rohe, spoken as early as 1965:

"We are not at the end, but at the beginning of an epoch; an epoch which will be guided by
a new spirit, which will be driven by new forces, new technological, sociological and
economic forces, and which will have new tools and new materials. For this reason we will
have a new architecture."9

5. Acknowledgements
The author would like to thank Prof. Hansjuerg Leibundgut, Frank Thesseling and Volker
Ritter for their inspiration, their collaboration and their support.

1 Et in Arcadia ego. 2007. Sueddeutsche Zeitung, 31.08.2007.
2 Mitchell, W. 1995. City of Bits. Cambridge, MA: The MIT Press.
3 Banham, R. 1964. The Architecture of the Well-tempered Environment. Chicago: The
University of Chicago Press.
4 Eastman, Charles M. 1999. Building Product Models: Computer Environments Supporting
Design and Construction. Boca Raton: CRC.
5 Schlueter, A., Thesseling, F. 2008. Building Information Model Based Energy/Exergy
Performance Assessment in Early Design Stages. Automation in Construction.
6 Pomerol, J.C., Adam, F. 2008. Understanding Human Decision Making - A Fundamental
Step Towards Effective Intelligent Decision Support. Vol. 97, Studies in
Computational Intelligence (SCI). Heidelberg: Springer Verlag.
7 Khosrowshahi, F., Howes, R. 2005. A Framework for Strategic Decision-Making Based On
A Hybrid Decision Support Tool. Information Technologies in Construction.
8 Schlueter, A., Thesseling, F. 2008. Balancing Design and Performance in Building
Retrofitting: a Case Study Based on Parametric Modeling. In ACADIA 2008 - Silicon
+ Skin. Minneapolis, USA.
9 Mies van der Rohe, L. 1965. Acceptance Speech upon receiving the Gold Medal of the
American Institute of Architects.

From Post-Modern to Digital

Internalizing Nostalgia into the Abstract Machine

Francisco Gonzalez de Canales

Architectural Association, London, UK


During the last several years, some of the leading figures involved in the emergence of digital design
have started to question the validity of their earlier assumptions --be it parametric design, digital
determinism, smoothness or continuity in the expanded field. More than simply questioning the validity of
parametric design, this paper considers the importance of redefining our whole understanding of post-
modern architecture, so we can better comprehend the conditions in which current architecture is being
produced. The paper seeks to contribute to the digital debate by comparing the work produced on both
sides --post-modern and digital. It also reassesses the real interests and aspirations that lay behind the
architectural production of the late 1980s and how those aspirations relate to the eventual emergence of
computer- based technologies as the new architectural savior in the early 1990s.

1. Introduction
The more vast the amount of time we have left behind us, the more irresistible is the voice calling us to return to
it. This pronouncement seems to state the obvious, and yet it is false. Men grow old, the end draws near, each
moment becomes more and more valuable, and there is no time to waste over recollections. It is important to
understand the mathematical paradox in nostalgia: that it is most powerful in early youth, when the volume of life
gone by is quite small.1

Milan Kundera, The Ignorance

The fact that the transition from post-modern to the digital speculations of a younger
generation of architects happened so rapidly arouses increasing suspicion today. Since this
apparently neo-(neo-)avant-gardism emerged from some of the most traditional schools of
architecture on the American East Coast, the puzzlement seems to double. Up to now, only
architects and educators related to the American neo-avant-gardes, such as Peter Eisenman
or John Hejduk, were seen as a notable influence; this is the case, for instance, for the idea
of the diagram, a concept of key importance for the emerging generation of designers and
normally credited to the architects mentioned above.2 However, as neo-avant-gardists and
traditionalists became more and more polarized throughout the 1980s, the generation of
architects simultaneously educated under both schools began their careers with the aim of
surpassing the dramatic split between the two opposing ideologies, and claimed some kind
of conciliation between them.

Greg Lynn could well epitomize this mood, introducing his architecture as follows: "Neither
the reactionary call for unity nor the avant-garde dismantling of it through the identification
of internal contradictions seems adequate as a model for contemporary architecture and
urbanism. Instead, an alternative smoothness is being formulated that may escape these
dialectically opposed strategies. Common to the diverse sources of this post-contradictory
work --topological geometry, morphology, morphogenesis, catastrophe theory or the
computer technology of both the defense and Hollywood industry-- are characteristics of
smooth transformations involving the intensive integration of differences within a
continuous yet heterogeneous system."3

Under such a spirit of conciliation, it does not seem too out of the question to suggest that
if there was a transfer of tools and ideas from the neo-avant-gardes to digital architecture,
there should also have been another transfer from the traditionalists and the post-modern
core of the 1980s to the digital emergence of the 1990s. The question would be: how could
personalities a priori so distanced from the digital, such as Michael Graves or Leon Krier,
inform the development of this new trend? As time fixes post-modern architecture in the
timelines of history, this hitherto inconvenient question recurs today as strongly as ever,
namely: how indebted is the research on topological geometry, morphogenesis, and
computer-based design to the legacy of post-modern architecture?

2. Re-reading the Post-modern

During the last forty years, from Diana Agrest and Mario Gandelsonas to Charles Jencks, we
have understood post-modern architecture by examining it through the lens of semiotics,
language and mass communication. Although it is true that semiotics has in many ways
helped the development of computation in architecture --insofar as architecture, once
reduced to particular signs, is ready to be digitally processed-- this reading of post-modern
architecture appears to be insufficient for understanding any social or cultural linkage
between the post-modern and the digital. As a matter of fact, the semiotic link seems to be
the weakest connection, since the new digital generation gained public notoriety by
rejecting any interest in language and meaning. "Abstract machine," "diagram," "minor
practice," and basically quoting Deleuze instead of Chomsky and, later, Derrida became the
scattered references of a generation not yet knowing with certainty the end of their
speculations, but wanting to make really clear that architecture was not about linguistics.

As an alternative to linguistic readings, a younger generation of critics has tried to find some
other explanation for post-modern production. One of the most commendable efforts was
that of Jeffrey Kipnis, who sought the possibility of reading part of post-modern production
as the elaboration of a kind of catalogue of incidental effects in architecture,4 crediting this
tradition to the difficult figure of Philip Johnson. Through Kipnis's ideas, Johnson's eclectic
personality can be read as a continuous and coherent investigation of the idea of effect in
architecture, running from the Glass House to the AT&T building to the Torres Kio in
Madrid.5 As a result, the current aim to define a purely atmospheric architecture (not a
linguistic one, but rather a pure sensation produced by architectural effects that do not hold
any architectural structure at all) should be considered --according to Kipnis-- the legacy of
Philip Johnson. However, although this hypothesis is fairly ingenious, and it helps us grasp
the link between specific post-modern production and some avant-garde positions (Kipnis
here likes to relate Johnson to Koolhaas in particular), it does not seem to help us find any
relation between digital production and the other post-modern personalities mentioned
earlier, such as Michael Graves or Leon Krier.

More recently, new reflection on the emergence of digital architecture by some of its leading
figures may shed light on how to reassess post-modern architecture today. It has become
recurrent that Lynn, Zaera-Polo et al. begin their lectures and talks by admitting that their
early claims of a panacea of smoothness, continuity of the field, and other corporeal
freedoms have to be played down. Those who were then young architect-theorists have
begun to focus on their own practice, and from within that practice they now want to
propose an alternative framework for their recent production.6 This new framework could be
broadly summarized by explaining that the emergence of digital architecture cannot be
understood as an isolated technological revolution, but rather should be comprehended as
part of a general generational shift from a discursive paradigm --as described by the official
post-modern criticism-- to a new material-performative paradigm.7 It is also worth
remarking that this new account of the transit from neo-avant-garde and post-modernism
to digital architecture is precisely related to an apparent internal shift in digital production.
During the last several years, we have witnessed how digital architecture has left the early
pan-utopian anxieties behind --with the exception of Karl Chu, who still stubbornly works at
megalomaniac scales-- and directed itself towards product-driven design and fabrication
processes.

Although this shift from the discursive to the material sounds a bit post-rationalized --in the
fairest tradition of instrumental history-- what is relevant here is that this new focus on
material practices re-opens the debate on how to relate the post-modern to the digital. In
fact, one could ask: was not post-modern architecture one of the greatest achievements in
the development of material culture? The reassessment of the digital as a shift from
discursive to material practices not only pushes aside the legacy of the neo-avant-gardes in
the development of digital architecture, but also suddenly connects digital architecture
more and more to the post-modern legacy. The question would be: is this new
determination a clarification of what the digital has always been, or is it just a deviation
from the original sources?

4. Material Practices: Michael Graves and Greg Lynn

If I break a cup, I am left with fragments. I can re-create the cup by gluing the pieces together again. You would
say that that is going back. That is absolutely correct, and that is what I am doing with architecture. 8

Leon Krier

In order to understand the implications of this new scenario, I will use the particular
example of two individuals, Michael Graves and Greg Lynn. In my view, a certain connection
between Graves and Lynn is of key importance in the transition from post-modern to digital
in the late 1980s and early 1990s, and it might in fact be reasonable to think that a
personality such as Michael Graves at Princeton University could leave an imprint on a very
young Lynn, who was not yet well formed in the discipline and still too obsessed with his
studies in philosophy.

When Lynn arrived at Princeton, Graves's shift from neo-Corbusian speculation to
anthropomorphic and classicizing collages had been established a decade earlier, thus
definitively separating his work from that of the neo-avant-gardes. However, the
significance of Graves's work after this shift is still difficult to grasp. In the early 1970s,
Mario Gandelsonas defined Michael Graves's architecture as "semantic," as opposed to
Peter Eisenman's "syntactic" operations,9 and in fact the use of a language of allusion and
metaphor was to be continuous throughout Graves's career, be it modern or classical. In
1978, when Graves's internal turn to classical language had already happened, Alan
Colquhoun was able to separate his use of classical language from that of Venturi, as
Graves showed no interest in what seems to have been Venturi's chief concern: the problem
of communication in modern democratic societies.10 Further, as Graves developed his
architecture up until the 1980s through commissions for private houses and additions,
Colquhoun found in his work a special idyll between language and a particular structural
system --the balloon frame-- a system that allowed him to solve structural concerns in a
quite ad hoc way. In the hands of Graves, the limit between structure and semantic value
became blurred as the balloon frame became a pure metaphor, free of any instrumental or
utilitarian value, indeed achieving a kind of mythical character in the Barthesian sense.

Nevertheless, with Graves's return to classical languages, most of his commentators have
had to allude to nostalgia to assess his work precisely. Already in 1976, in his appraisal of
the New York Five, Manfredo Tafuri referred to a certain "nostalgia for Kultur," especially
noticeable in the work of Michael Graves. It is that nostalgia for Kultur as a myth of totality
that normally reverts back to extemporaneous European references. According to Tafuri,
this nostalgia is not particular to Graves, Eisenman or Hejduk alone, but is quite
characteristic of American intellectuality in general. From Benjamin Latrobe, to the City
Beautiful Movement, to Louis Kahn (and his followers), there exists a tie that unifies these
different experiences into a principle of value, that is, a tie that entwines them into the
Lukacsian myth of totality.11

However, in Graves's case this nostalgia has been wrongly understood. As Milan Kundera
describes in his novel The Ignorance, nostalgia, contrary to common thinking, is stronger in
the beginning. Thus, it is precisely in Graves's first works that nostalgia is more profound
and also more explicit. Although violated and dismembered, direct references to a particular
period of time are loquaciously shown floating in the anxious imaginary of the young
architect. With the passing of time, nostalgia becomes internalized into everyday life,
diluted, pervading all materiality without any particular crystallization. Nostalgia then
acquires a certain environmental quality. Yet the final fulfillment of this process is not only a
de-historization of language --something so praised by Colin Rowe-- but rather a whole
process of de-signification of language itself. More in his drawings and paintings than in his
built work, Graves alludes to that pleasure in materiality, where the fragments that the
architect is leaving behind no longer refer back to anything else.

In this case, we are not talking about the liberation of free-floating signifiers, as announced
by Barthes and suggested in Colquhoun's interpretation of Graves, but about a circumstance
where meaning has literally vanished, leaving behind a-signified pieces of flesh without a
body to hold them. In fact, in Graves's drawings, shapes do not seem to relate to anything
in particular. If they sometimes become anthropomorphic, it is not because they refer to a
proportional body, but because they can only refer to the very moment of their present, and
in this present the bodies do not re-present but instead present themselves in an extremely
situated human-artifact relation, in which one could be exchanged with the other.

By de-signifying architecture as pure materiality, Graves puts the concepts of bare nature
and architecture on the same level -- both just become flesh, or pure materiality. However,
he would never transgress the limits of nature-artifact or human-artifact. In this sense,
after renouncing the use of proportion and organicity, Graves remains within the classical
tradition, whereby architecture is architecture and nature is nature. To the contrary, what
Lynn posits in the early 1990s is precisely the transgression of these limits. For Lynn, a
totally a-signified architecture does not culturally distinguish itself from whatever surrounds
it. According to him, form can be shaped by the collaboration between an envelope and the
active context in which it is situated.12 Hence, the context may construct the building in its
totality. This is clear if we take, for instance, the New York Port Authority project. Contrary
to some common linguistic readings of the project, Lynn is not taking pieces of information
from the context --the notation system of floating tiny balls does not signify anything at all.
He is taking a-signified and abstract material particles from the context, and it is these that
are actually shaping the architecture. They configure a diagram as a secretion of materiality
in the most radical Deleuzian sense, that is to say, as a-signifying and non-representative
brushstrokes and daubs of color.13 The material
secretions are reunited as possibilities of fact, transforming a fragmented urban
environment into a smooth single surface which becomes the building.

5. Internalizing Nostalgia into the Abstract Machine

As happened with Graves, nostalgia in Greg Lynn is stronger in the beginning. In Lynn's
case, the nostalgia for the myth of totality --as Tafuri described it-- does not admit any
reservation. The internal conflict presented by Graves between nostalgia for Kultur and
anti-historical determination is absolutely negated by the younger architect, who presents
his position as one that surpasses the dialectically opposed strategies. Lynn's is such an
internalized form of nostalgia that it transcends any socio-cultural legacy to regress to a
kind of primitive state of total coincidence between architecture and its milieu, as expressed
in different forms of continuity, smoothness, inside-out biomorphism, and other primitivist
metaphors. The myth of the possibility of returning to a golden age sustained by
technological reason is thereby reenacted at its best.

In the early 1990s, Greg Lynn epitomized the transit from discursive to material practices,
where architecture became purely a-signified material and nostalgia is the glue that puts
the pieces back together. Architecture was definitively de-historicized and de-signified, but
above all it was consciously de-politicized for the sake of individual enjoyment of the new
socio-natural freedoms. Two decades after the emergence of the digital on the American
East Coast, the promises of socio-natural continuity have totally disappeared, but they may
persist, extremely internalized as nostalgia, in the different forms of de-politicization and
disarticulation of the social in the current production of architecture.
1 Kundera, M., The Ignorance, p.77
2 Around 1993, R. E. Somol credited John Hejduk for bringing the idea of the diagram into
architectural culture with his masks of the 1980s, at the same time that Greg Lynn was
creating his own idea of the diagram as derived from Peter Eisenman's indexical processes.
Somol R.E., "One or Several Masters?", in Hejduk's Chronotope, Hays K. M. (ed.), New
York: Princeton Architectural Press, 1996
3 Lynn G., "Probable Geometries: the Architecture of Writing in Bodies", in Folds, Bodies
and Blobs. Collected Essays, Bruxelles: La Lettre volée, 1998
4 Kipnis J., "Throwing Stones --The Incidental Effects of a Glass House", in Philip Johnson.
The Glass House, New York: Pantheon Books, 1993, pp.xxi-xxxii
5 Kipnis J., "Introduction Part I. 1993", in Philip Johnson. Recent Work, London: Academy
Editions, 1996, pp.6-10
6 Paradigmatically, since 2006 Greg Lynn has started to criticize his early position as a
parametric determinist in all his public talks. FOA and Reiser+Umemoto have also drastically
abandoned the positions that gave them public notoriety in the early 1990s.
7 Just to name an example of this new reading: Diaz Moreno, C. and Garcia Grinda, E.,
"Complexity and Consistency. A conversation with Farshid Moussavi and Alejandro Zaera
Polo", in El Croquis 115/116, 2003, pp.7-30
8 Eisenman P. and Krier L., "Eisenman and Krier: A conversation", in Davison C. (ed.),
Eisenman-Krier: Two Ideologies: a Conference at Yale University School of Architecture,
New York: Monacelli Press, 2004, p.31 (originally published in the February 1983 issue of
Skyline)
9 Gandelsonas M. and Morton D., "On Reading Architecture", in Signs, Symbols and
Architecture, Broadbent G. (ed.), New York: Wiley, 1980 (originally published in Progressive
Architecture 53, March 1972)
10 Colquhoun A., "From Bricolage to Myth, or How to Put Humpty-Dumpty Together Again",
in Hays K. M., Architecture Theory since 1968, New York: Columbia Books of Architecture,
1998, p.339 (originally published in Oppositions 12, Spring 1978)
11 Tafuri M., "European Graffiti. Five x Five = Twenty-five", in Oppositions 5, Summer 1976
12 Lynn G., "Animated Form", in Animated Form, New York: Princeton Architectural Press,
1999, p.19
13 Deleuze, G., "The Diagram", in The Deleuze Reader, Boundas C. V. (ed.), New York:
Columbia University Press, 1993, p.197

Seeking an inherent historicism in digital processes: who care(d)?

Tracing an archaeology of digitality within the shifting paradigms in architectural history.

Emmanouil Vermisso
Assistant Professor, School of Architecture
Florida Atlantic University, USA
evermiss@fau.edu, archi_trek@hotmail.com

Within a current philosophical state which maintains the argument for a nonlinear history,1
contemporary digital designers might gain insight into the present condition of their
discipline by looking at the emergence, dominance and abandonment or replacement of
past paradigms. Referring to research in digital fabrication that was inspired by 16th
century projective geometry, Bernard Cache once claimed that the future of the next
generation of CAD software lies somewhere between 1550 and 1872.2
Consequently, should we regard digital design as an emergent field or rather as a re-
interpretation of an existing forma mentis, the result of an older conceptual order? The
critical nature of this conference may warrant paraphrasing its initial premise to ask "Who
Cared?" This paper attempts to track the origins of digital design in the history between the
Renaissance and the end of the 19th century with regard to what theorists have described
as the crisis of modern science.

1. Introduction

Which cognitive processes of architectural relevance are identified within the term digital?
There is naturally a multitude of associations, ranging from the philosophical-methodological
to the formalistic, the organicist-biological and, finally, the scientific-computational. Certain
models of enquiry can be synthesized, and that is where the value of digital design seems
to unfold: in being open-ended, not merely through the way architects expand on its formal
potential, but insofar as it can embrace other professionals under its umbrella. The
manifestation of some of these cognitive models in the design process --on their own or
combined-- (Organicism, Computation) is examined by following their precursors before
and during the European scientific revolution (1750-1900), particularly in France.

2. The modern scientific crisis and its implications in architectural theory

The emergence of non-Euclidean geometries, in ca. 1800, according to Edmund Husserl,
signaled the end of classical geometry, the geometry of the Lebenswelt (the world as lived).
Quoting Alberto Pérez-Gómez:3 Once life itself began to be regarded as process, whether
biological or teleological, theory was able to disregard ethical considerations in favour of
applicability. The questioning of the transcendental dimension of meaning implied a
rejection of the mythos which until then had helped explain the radical ambiguities of
existence. The exclusion of mythos from the logos during the last 180 years of Western
history indicates a certain denial of historical dimensions (Pérez-Gómez 1984).

The Husserlian dimensions of meaning for any given system are the formal/syntactic
dimension and the transcendental/semantic dimension. While the former describes the
relation between the elements of a system, the latter corresponds to the relation of these
elements to the Lebenswelt, thus being a historical dimension.

2.1 Introduction of Specialization

Before the 19th century, form was not the only preoccupation of architects, as is evident
from the consideration of the three Vitruvian categories together, and not as individual
values. It constituted a symbolic reference to the human condition and not a reference to
function! The cosmological associations of scientific branches were closely tied to life
sciences like biology; as Pérez-Gómez notes, Astronomy had evolved from Astrobiology and
was, therefore, full of ontological presuppositions. It is worth noting a distinction made in
the 19th century between true sciences, that is, those sciences which could be explained
mathematically, and some of the natural sciences, e.g. chemistry and biology: Kant
regarded chemistry as a science but not Science --eine Wissenschaft, aber nicht
Wissenschaft-- because its mathematical foundations were not yet developed, and Roger
Bacon before him had called Mathematics porta et clavis scientiarum (D'Arcy Thompson,
1917).4 So, it is evident that the perception of Science in the 19th century went hand in
hand with Mathematics,5 thereby assuming technical rather than ontological and
metaphysical dimensions.

2.2 The loss of symbolism

Bringing forward invariance as a fundamental axiom, the modern conceptual scientific
framework is non-compliant with reality; invariance cannot explain or embrace the value of
symbolic thought. Scientists falsely believed that the specific disciplines could attain a
higher meaning by fully understanding phenomena. Pérez-Gómez criticizes the
specialization promoted within the modern scientific model: can science really understand
life through its self-adopted model, or does this actually require the use of a broader,
cross-disciplinary framework? Biomimetics, an emerging field based on --but not entirely a
result of-- Organicism in architecture and the arts, and Computation, which is a product of
the post-digital mode in design, are possible platforms for the resurgence of the lost
mythical dimension in architecture.

3. The Organicist and Computational paradigms

3.1 Towards multi-disciplinarity: Organicism and Biomimetics

The contemporary field of Biomimicry is an indirect result of the advancement of Biology in
the last two centuries, which influenced Organicist architectural theory. Like the
connotations of Nature6 over time, the development of Organicism is complicated. In the
17th century, the observation of Nature challenged the essence of monarchy and religion,
and was therefore abandoned; during the 18th and 19th centuries, the new epistemological
social model addressed this by justifying the interpretation and classification of natural
systems: attention was placed not on the cause of a system but on the system itself, thus
avoiding any clash with social and religious prejudice. The human body --a recurring
preoccupation in architectural theory-- was viewed as a tool for examination, looking at the
inherent biological systems and their governing principles. In 19th century Organicism, the
body was important insofar as its proportion --in accordance with the context of Antiquity
and the Renaissance-- and was therefore regarded from a stylistic viewpoint (Van Eck,
1994).7 The functionalist interpretation of Organicism which prevailed during the emergence
of Modernism changed the perception of Organicism once more; this interruption of its
scientific biological paradigm caused a dissemination of the values which were relevant
within the biological context, until the recent interest in Biomimetics.

Biomimetics expert Professor Jeronimidis has accepted the need for an inclusive logic as a
prerequisite for the understanding and study of biomaterials: form, structure and material
should all be considered as one system, each of these parameters affecting the other,
making impossible the individual evaluation of any of its constituents8: "Form, structure and
material act upon each other, and this behavior of all three cannot be predicted by analysis
of any one of them separately"9. In addition to this inherent synergy within natural systems,
another aspect which has also been discussed by Prof. Jeronimidis is the multidisciplinarity
required in the analysis of nature, something already argued in the 18th century by French
naturalist Buffon: "Nature, on the other hand, takes every single step in all directions; in
going forward, it spreads sideways and extends above; it travels in all three dimensions and
fills them all at the same time; whereas mankind reaches a point, Nature embraces the
whole volume"10 (Fig.1). The Biomimetic model uses a combined set of skills with regard to
the study of organisms; is this a sort of acceptance of a more universal truth within nature,
which can ultimately draw architects closer to the mythical horizon11?

Figure 1. The breadth of man-made versus natural processes: while man's achievement is
singular, nature simultaneously explores multiple possibilities to reach a threshold.

3.2 Tracing the impact of the modern crisis: Sub-domains of Digitality

Is digital design a result of the functionalization of architectural theory or is it associated
with the earlier use of geometry? It is likely the systematization of Architecture and Building
which gave birth to the methods that ultimately created digital tools, but the development
of these tools through computation extends further back. There seems to be a need to
break down the constituents of digitality: Digital Design-Computation-Fabrication. Is
computation, after all, its truest expression? If so, we should look much earlier than the
Enlightenment. Dealing with the relationship between the invariant and variation, the
work of Objectile (B. Cache, P. Beaucé) makes references to geometry of the 16th and 17th
centuries.

3.3 The associative logic in the work of Bernard Cache

Stemming from Philibert De L'Orme's interest in projective geometry12 as a production
rather than a representational technique, Objectile (B. Cache and P. Beaucé, 1994)
designed a pavilion (the Philibert De L'Orme Pavilion, Fig.3) which examines the process of
associating changes in its own geometry and reflecting this by regenerating the machining
programs required for its manufacturing (projective geometry involves the study of
geometric properties which are invariant under projective transformations).

Further work of Objectile, like the Fast-Wood project, relates back to earlier influences, such
as Girard Desargues's book Rough Draft for an Essay on the Results of Taking Plane Sections
of a Cone, which set out the fundamental concepts of Projective Geometry. Desargues used
the 4-rayed pencil theorem to define specific relations in terms of proportions within the
realm of projective geometry: "The architectural project consists of a model with its primary
elements varying on the basis of invariant relations between them"13.
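This principle of variation under invariant relations is, in effect, the core of parametric modeling, and it can be sketched in a few lines. The sketch below is a hypothetical illustration under stated assumptions: the class, its names and the ratio constraint are invented for this example and are not drawn from Objectile's own software.

```python
# A minimal associative model: a primary element (width) varies freely,
# while a dependent element (height) is regenerated from an invariant
# relation (a fixed ratio). Names and values are illustrative only.

class ParametricPanel:
    """Panel whose height always follows an invariant width-to-height ratio."""

    def __init__(self, width, ratio):
        self.width = width      # primary element: free to vary
        self.ratio = ratio      # the invariant relation

    @property
    def height(self):
        # Height is never stored; it is re-derived on every access,
        # so any change to width propagates through the relation.
        return self.width * self.ratio

panel = ParametricPanel(width=2.0, ratio=0.5)
print(panel.height)   # 1.0
panel.width = 6.0
print(panel.height)   # 3.0 -- the relation, not the value, stays fixed
```

In the same spirit, Objectile's pavilion regenerated its machining programs whenever the underlying geometry changed: the model stores relations, and downstream artifacts are recomputed from them.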

Figure 2. The projective cube used in the Philibert De L'Orme Pavilion14 (Objectile).
Figure 3. The completed Philibert De L'Orme Pavilion15 (Objectile).

Paraphrasing Von Clausewitz16, Bernard Cache has referred to his work as "philosophie
poursuivie par d'autres moyens"17 (philosophy pursued through other means). Architecture
is therefore considered a vehicle rather than an end-product in itself, using geometry as a
means of symbolism.

It becomes clear that Objectile's work can be read twofold: first, in the use of projective
geometry to formulate a parametric modeling logic, and second, in creating an additional
effect of illusion (as in the Philibert De L'Orme Pavilion) due to the method of design; the
pavilion is a pre-distorted object, its shape accentuating even more the perspectival effect.
One may find, in the illusion that is created by such projective geometrical manipulations,
an analogy with the illusions often experienced during the Baroque; the implicit
psychological overtones within Baroque Architecture are undeniable: "Where the
Brunelleschian architecture and the Bramantesque were static, this was dynamic; where
those attempted to distribute perfect balance, this sought for concentrated
movement… Architecture was considered, for the first time, wholly psychologically"18 (G.
Scott, 1914). It is interesting to note the ties of computation with illusion as identified by
Antoine Picon: illusion was also pursued and achieved, through the use of perspective, in
the stage design of the Renaissance and Baroque19. Through the use of anamorphic
perspective, as in the church of Sant'Ignazio in Rome (Fig.4), completed in 1650 and later
frescoed by Andrea Pozzo, architects achieved compelling impressions of illusion and movement.
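The geometric property behind such anamorphic pre-distortion can also be checked numerically: the cross-ratio of four collinear points is preserved by any projective transformation. The following is a minimal sketch using an arbitrary one-dimensional homography; the coefficients are invented, and this is not a reconstruction of Pozzo's or Objectile's actual constructions.

```python
# Cross-ratio of four collinear points: the classic projective invariant.
def cross_ratio(a, b, c, d):
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# An arbitrary 1-D projective map x -> (p*x + q) / (r*x + s).
def homography(x, p=2.0, q=1.0, r=0.5, s=3.0):
    return (p * x + q) / (r * x + s)

points = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*points)
after = cross_ratio(*(homography(x) for x in points))
print(abs(before - after) < 1e-9)  # True: the cross-ratio survives the projection
```

An anamorphic design exploits exactly this: the object is distorted by the inverse of the projection under which it will be viewed, so the invariant relations guarantee the intended image reappears from the privileged viewpoint.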

Through the working process of Objectile, the interpretation of architectural design as
metaphor may possibly augment the mythical dimension, which Pérez-Gómez claims to
have been diffused in modern science, and which was mentioned earlier on.

Figure 4. Frescoed ceiling in the church of Sant'Ignazio, Rome, by Andrea Pozzo.

4. The analogy between past and contemporary methodology: an archetype of the

contemporary architect

In contemplating the premise of this forum (Who Cares? or, for my argument, Who Cared?)
we have identified a number of people: scientists (mainly mathematicians and biologists),
philosophers, engineers and, naturally, architects, who have jointly shaped the architectural
theory of the last 300 years. One may then consider the object of this contemplation:

An obvious deduction from the analysis of the modern crisis so far is the need either to
optimize design or to justify the existential dimension of architecture, which has radically
faded. It is interesting to consider whether both may be achieved at the same time. This
question may be dealt with on three levels: (1) the state of architecture in general; (2) the
impact of digital design on architecture; (3) the development of digital design in isolation.
(Which one of these actually matters within this discourse?)

If we address the second level, within a biomimetic or computational framework, it is clear
that the designers of the digital avant-garde are a kind of hybrid professional: highly skilled
in a specific area, while capable of bringing together and delegating to people from other
disciplines in support of design.

Computation in Architecture is an outcome of the digital designer's quest to minimize the
gap between software and hardware, and therefore between design and fabrication. Digital
Architecture has been accused of dissociating the project from the materiality of what is
actually built20, when in fact it does exactly the opposite. Extending beyond the phase of 3d
modeling, and through the means of CNC fabrication, the object is viewed more consistently
due to its direct situation within the field between conceptual idea and fabrication21.

In some way, contemporary digital architects balance the Renaissance model of the master
builder who coordinated the architectural process, and the specialized architect-scientist of
the Enlightenment. Consequently, the emerging role of the architect as master builder
collapses the separation initiated by Alberti in the Renaissance between theory and
construction, intellectual and manual labor, architect and master builder, and returns to the
previously existing paradigm of architects coordinating the design and construction process.
In doing so, these architects employ large-scale models to investigate design, while also
exploring potential construction issues, in the same way as Brunelleschi22 did in the 15th
century. There is a gradual convergence of the design and construction process.

5. On the character of Computation

5.1 Computational processes in the 21st century

Computation is a field where the architect, not being a specialist, stands somewhere in the
middle, a sort of curious explorer23. This, one might claim, is the essence of Architecture in
the 21st century: an attempt on behalf of the architect to engage various fields; through this
engagement he can extract a satisfaction that goes beyond the normal plane of
realization24. Can we surpass both the divine and mechanistic models of the universe, as
Daniel Botkin25 encourages, towards an understanding of the complexity of nature through
the use of computation? "Nature in the twenty-first century will be a nature that we make;
the question is the degree to which this molding will be intentional or unintentional,
desirable or undesirable"26.

Does man, at last, possess the power to do almost anything with architecture? Should there
be a separation into Form, Function and Performance, in accordance with the logic of
specialization? Let's assume that Performance is the focus, while Form and Function are also
integral to the system's success; then Performance can in fact embrace the other two
preoccupations according to the biological model, where all three are interdependent, and
none of them works properly on its own.

The problem with the digital architect, says Picon, is the non-convergence of the processes
he launches: when is the right moment to pause an evolutionary process within
computation? Perhaps this is where Pérez-Gómez is right: if the designer reverts to
the use of metaphor with the intent to find meaning in design, this action will indicate some
end-point in the process, reaching a result where the object's complexity does not seem
gratuitous but instead integrated out of necessity. The repercussions of this decision-making
can extend beyond the project boundaries, into the way architecture behaves as a
social medium, since the issue of terminating/completing a process has not only technical or
aesthetic, but also philosophical-existential connotations.

There is a paradox in the way the designer engages with computation at a particular level.
The formulas he employs have no importance insofar as the end result is concerned;
satisfaction lies in the broader creative power to generate complexity, particularly in the
notion of regulating a system. However, although these formulas are derived from strict
systematization, and are fundamentally expressed in mathematical form, the architect draws
from them a much more contemplative, notional pleasure. "It implies a kind of cybernetic
experience, having more to do with piloting than with generation in the classic sense", as
Antoine Picon points out27. There may lie here a possible ontological essence of
computation, or, the semantic dimension of the computational model.

5.2 Is digital computation based on pre-Scientific cognitive models?

Is there a need to explain and systematize all operations? According to Picon, there is
something magical in digital transformations; this ambiguity of computation is closer to the
pre-Enlightenment architectural model (Pérez-Gómez). As a result, even if digital design
was formulated between the 17th and 19th centuries, referring to its relations with projective
geometry, its core may be found in earlier times: the work of Bernard Cache on projective
geometry "flirts with randomness under the guise of making room for the almost infinite
variability of possible solutions"28.

The theme of randomness is also apparent in the work of French biochemist Jacques
Monod29. In his work Chance and Necessity: An Essay on the Natural Philosophy of Modern
Biology, Monod attributed significance to proteins over nucleic acids, due to the proteins'
role as teleonomic systems and their ability for self-assembly. His main argument concerns
the contingency of biological systems, making reference to man as a product of chance.

The unexplained within a computational process presupposes the existence of a
symbolism, or, to refer back to Husserl, the semantic dimension of the system becomes
more powerful than the syntactic one.

6. Conclusions

We have tried to show that the different aspects of digitality (digital design-computation-
fabrication) all emerged from the same technological premise, but like all systems, they can
be perceived within different contexts; perhaps they will evolve from different necessities,
and not based on a specific conceptual model. Branko Kolarevic has argued that a
reintegration of building processes is now taking place (see section 4), with the architect at
its centre30; the paradigms of the past, applied within a contemporary framework, are
directing designers towards optimized performance and manufacturing, augmenting man's
inherent essence as homo faber (man the maker).

There remains, of course, the question of the digital-computational framework's historicist
dimension; paradoxically, science itself is re-embracing history to evaluate itself: during the
last fifty years, scientific advances in the natural sciences, and particularly thermodynamics,
have brought forward new models of equilibrium (or lack thereof) through the introduction
of time in the evaluation of physical systems31. Manuel De Landa has written of the current
penetration of science by historical concerns, which is the result of advances in these two
disciplines32 (biology/Darwinism and physics/Thermodynamics). This penetration happens
because the causality in the processes of the explored systems is non-linear and therefore
one would need to track their history to understand their dynamical state at any given
moment (De Landa 2000).

In the 1917 introduction of On Growth and Form, D'Arcy W. Thompson, quoting the Abbé
Galiani, wrote that Science is "plutôt destinée à étudier qu'à connaître, à chercher qu'à
trouver la vérité" (rather destined to study than to know, to seek rather than to find the
truth), hence demonstrating the perpetual evolutionary character of Science, and more
precisely of Biology, without the attachment of any teleological significance. Considering
Monod's emphasis on the accidental within biological behavior, and the allure of the
random within computational processes as identified by Picon, the Biomimetic model seems
a promising platform for the application and development of computation.

Discourse on computational design is still in its infancy compared to other periods of
architectural theory, and it is perhaps early to anticipate its long-term repercussions. It
seems that computational processes can be better appreciated vis-à-vis previous
attempts of similar scope, and the realization that we are at last technologically able to
project their applications in various architectural sub-disciplines. It is interesting to discuss
the future of digital design by examining the past history of architectural science. History
presents a pattern of weaving in the way systems interact, and so designers in the past
have constantly sought precedents (which were more often than not expanded) for the
methods before them. Digital design will no doubt develop within its own dynamic context33,
albeit not without previously re-tracing its cosmological origins. The filtration of paradigm
precedents stimulated by the digital paradigm seems to have unveiled an invisible bond
between biology, history and computation. One cannot help but wonder if the field of
Biomimetics34 can answer this calling for a collaborative effort which simultaneously
negotiates the artistic, scientific, vitalist and philosophical planes of existence.

For a documented analysis of this paradigm see Manuel De Landa, A Thousand Years of
Nonlinear History, Swerve 2000.
Cache, B: 2004, Towards an Associative Architecture, Digital Tectonics, Wiley-Academy.
Pérez-Gómez, A: 1984, Introduction, Architecture and the Crisis of Modern Science, MIT
Press, Cambridge, MA.
Thompson, D'Arcy W.: 1979 [1917], Chapter I: Introduction, On Growth and Form,
Cambridge University Press.
A distinction has to be made at this point, between the perception of Mathematics as
underlying cosmological order in the Renaissance, and that of an instrument during the
For an analysis of the term Nature from the 15th-20th centuries, see Williams, R: 1976,
Keywords: A Vocabulary of Culture and Society, Oxford University Press.
Van Eck, C: 1994, Organicism in 19th century Architecture: An inquiry into its theoretical
and philosophical background, Architectura & Natura Press.
See also, Alexander, C: 1964, Notes on the Synthesis of Form, Harvard University Press,
Cambridge, MA.
Weinstock, M: 2006, Self-Organisation and Material Constructions, in Techniques and
Technologies in Morphogenetic Design, Wiley-Academy, pp. 34-41
Jeronimidis,G: 2008, Bioinspiration for Engineering and Architecture, in Silicon + Skin,
Biological Processes and Computation (Acadia 2008 Proceedings, Minneapolis), pp.26-33
It is interesting to note Claude Perrault's cross-disciplinary background; Perrault's
intentions for revolutionizing architecture were born out of his medical, scientific and
philosophical milieu (this is almost analogous to the biomimetic context of synthesis), and
yet his work contributed to the collapse of the symbolic dimension within architecture which
ultimately led to the modern crisis.
Projective geometry is the study of geometric properties which are invariant under
projective transformations.
Beaucé, P and Cache, B: 2007, Bridging the Gap, Consequence Vol.6: Objectile, Springer-
Verlag/Wien.
Cache, B: Towards a fully associative architecture, in Architecture in the Digital Age,
Design and Manufacturing (Kolarevic, B ed.), Spon Press 2003.
Cache, B: Towards a fully associative architecture, in Architecture in the Digital Age,
Design and Manufacturing (Kolarevic, B ed.), Spon Press 2003.
Prussian general who defined war as a continuation of politics by other means.
See George L Legendre in conversation with Bernard Cache (2007), in AA Projects Review
Scott, G: 1974 [1914], The Biological Fallacy, The Architecture of Humanism, W.W. Norton
& Company, NY, pp.127-140
Picon, A: 2004, Digital Architecture and the Poetics of Computation, Focus: Metamorph:
International Architecture Exhibition, Fondazione La Biennale di Venezia, pp.59-67
For the integration in building process that is lately addressed by the development of BIM
(Building Information Modelling), see Branko Kolarevic, Post-Digital Architecture: Towards
Integrative Design, CDC08 Proceedings, Cambridge, MA.
Garber, R: 2009, Albertis Paradigm, in Closing the Gap: Information Models in
Contemporary Design Practice: Architectural Design, Wiley, London, pp. 88-93
Picon, A: 2004, Digital Architecture and the Poetics of Computation, Focus: Metamorph:
International Architecture Exhibition, Fondazione La Biennale di Venezia, pp.59-67
It is exciting to observe the shift from mathematics to architecture when it comes to the
definition of values within a model: architects define topology in terms of controlled
transformations, whereas it is the invariants that are of interest to mathematicians (Picon
2004). Architecture is concerned with motion while mathematics with invariance!
See Daniel Botkin, Discordant Harmonies, Oxford University Press 1990.
The work of digital designers Ruy/Klein tries to engage Botkin's suggestion: "As we witness
the collapse of categorical distinctions between the natural and the artificial, we can ask if
the blooms and the catastrophes of the inaccessible sublime can now be designed."
Picon, A: 2004, Digital Architecture and the Poetics of Computation, Focus: Metamorph:
International Architecture Exhibition, Fondazione La Biennale di Venezia, pp.59-67
From Jacques Monod's Chance and Necessity: An Essay on the Natural Philosophy of
Modern Biology: "The ancient covenant is in pieces; man knows at last that he is alone in
the universe's unfeeling immensity, out of which he emerged only by chance. His destiny is
nowhere spelled out, nor is his duty", NY 1971.
Kolarevic, B: 2008, Post-Digital Architecture: Towards Integrative Design, What
Matter(s)? CDC08 Proceedings, Cambridge, MA, pp. 149-156
Time had infiltrated science already from the 19th century, but to a much lesser degree.
See Ilya Prigogine and Isabelle Stengers, Order Out of Chaos: Mans New Dialogue with
Nature, New York, Bantam 1984.
De Landa, M: 2000, Introduction, A Thousand Years of Nonlinear History, Swerve 2000.
Branko Kolarevic foresaw, in 2003, that Digital Design (at the time in the foreground of
architectural practice) would assume the role of background in the future (see Kolarevic, B:
2003, Architecture in the Digital Age: Design and Manufacturing, Spon Press).
The term is used broadly here, to include Biomimicry applied in Art, Architecture and


Sedated Algoritmia
Five Rhetorical Questions about Digital Design

Edgardo Pérez Maldonado

University of Puerto Rico

Digital design processes are sustained by what appears to be a resolute rubric. In
many instances, this rubric reveals itself as an ambiguous transaction between
numerical precision and metaphorical constructions. In what appears to be a natural
contract in architecture, structural questions arise when the former is disguised as
the latter and vice versa. This very condition seems to be undermining the
designer's agency, as these exchanges occur within the process of design without a
clear reference beyond the matrix of an architectural image or object. Five questions
are articulated as a device to discuss the nature of this transaction. The questions
are structured by the concepts of sedated algoritmia, digital aesthetics/cloning,
digital craftsmanship, numerical metaphors, and digital urbanism.

How are the transactions between the numerical and the metaphorical
revealed in contemporary digital practices? How is this transaction
undermining the designer's agency?

As the sophistication of digital tools in architecture grows bolder, we designers
continue to experience what could be compared to a state of sedation. The use of
the term sedation, in this case, is a resource to approximate the actual condition of
the architectural designer's agency in reference to the digital device in contemporary
practices. The notion of sedation also describes the recurrent induction of
metaphorical constructions in order to sustain a precise numerical process of design.
The numerical is in many cases structured by scaffoldings of subjective
constructions. In this transaction, the metaphor is a sedative to manage the anxiety
created by the shift from design principles to numerical approximations as the
validation of the architectural object. This condition might respond to the limitation
of designers in representing such a numerical rubric in relation to other architectural
transactions beyond the process of creating form.

Ironically, the self-contained precision of the digital device has liberated a high level
of intuition in the design process. The design process is indeed an intuitive endeavor
and, despite the claims of many contemporary practices, numerical precision is in
many cases a metaphor to subvert the intuitive nature of digital design. Intuition is
exorcised of its connotations by the subjacent numerical principles of the digital. The
numerical logic works, in many cases, as the representation of controlled precision in
the design process. The problematic intersection is not the dialectic between the
numerical and the metaphorical; the history of our discipline reveals this condition as
a millenary dialectic in our trade. Perhaps this is the designer's greatest gift: the
capacity to articulate from within this unresolved spectrum.

Many problems are revealed when the subjacent numerical processes and principles
in digital design are not sufficient to define digital architecture's content in reference
to other architectural transactions apart from its image, its form and its matrix. The
design process and its rubric have overtaken the experience of architecture itself.
Metaphor works as the hallucinogen that provokes the visions simulating those
absent experiences, like the cultural, the ethical or the environmental. Metaphor
works somehow as the narcotic agent that stimulates/simulates those other
subjectivities that are not defined or represented by the numerical. Are these
metaphors informed by alternative ways of inhabitation, people, context or the

In what ways do metaphorical constructions confirm the representational
limitations of the numerical dimension of digital design?

Nature has always been a pervasive image in the History of Architecture. Nature is
the architecture of architecture. This we have seen from Vitruvius to Laugier on to
Bio-mimesis. The image of Nature has always played a part in the validation of the
origin and content of the architectural object. To invent the origin is to create the
authority in the discourse. To articulate the metaphors of the origin is a mechanism
of power.

Bio-mimesis seems to be the reigning discourse in digital design. Bio-mimetic design
is defined as a biological process of emergence where the entity (the building)
evolves as an autonomous organism. The notion of autonomy works only as a
metaphor in this case as well. The rubric of the process is stated as a collection or
taxonomy of species that morphologically iterate into the final design solution. The
genes of these species are provided by the numerical insertions or scripts that
register a numerical image of the building's form. What function does this numerical
image fulfill? How are these numerical images translated into the experiential? Is
there a translation of these numerical values beyond the description of

In Alberti's Descriptio Urbis Romae, the city of Rome finds its first digital
representation1. The coordinates of the Descriptio constructed a numerical image of
the city and its most notable monuments. Even though the Descriptio was a
breakthrough in cartography, Alberti's numerical reconstruction was limited in many
ways because there was no possibility to translate his digital image into the
experiential qualities of the city.
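Alberti's scheme, a graduated disc giving an angle plus a radial distance from a central viewpoint, amounts to a polar-coordinate survey, and its purely positional character is easy to reproduce. The sketch below is hedged: the monument bearings and distances are invented for illustration, not the Descriptio's recorded data.

```python
import math

# Convert Alberti-style polar records (bearing on a graduated disc,
# radial distance from the central viewpoint) into plan coordinates.
def polar_to_plan(angle_deg, distance):
    theta = math.radians(angle_deg)
    return (distance * math.cos(theta), distance * math.sin(theta))

# Illustrative entries only -- not Alberti's actual survey values.
records = {"Pantheon": (30.0, 12.0), "Colosseum": (120.0, 20.0)}
plan = {name: polar_to_plan(a, d) for name, (a, d) in records.items()}
```

The point of the sketch is the very limitation the text describes: the resulting dictionary captures where each monument stands, and nothing at all about how the city is experienced.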

This is perhaps the Achilles' heel of the numerical component in the bio-mimetic
discourse. Its limitation lies in the representational. The numerical is conditioned to
a Higher Metaphor, the metaphor of the Biological, to sustain its validity. Biological
images provide the incarnation of a subjacent numerical image which would
otherwise never be intelligible. This raises the question of the necessity to represent
the numerical image embedded in the digital process. The same could be said of the
structural calculations of the engineer. Their poetry is translated only by the
coherent metaphor of the architectural experience as a whole.

Also, the problem reflects that the Bio-mimetic discourse has distanced design from
the subject. The only autonomy that morphogenetic metaphors present is their
disassociation from the dweller. To prove its validity, the Bio-mimetic must reveal a
consideration of the subject, in all its complexities, as a way to translate the images
of Nature into tangible experiential mechanisms. If such transactions are not
addressed, biological metaphors in the design process will remain just an extension
of the realm of cloning.

How are digital metaphors informing aesthetic content? Is there a pattern

of digital patterns? Can we identify a cloning pattern? Are these cloned
patterns responding to other architectural translations, or are they just
ornamental simulations of a digital metaphor?

The intersection of the metaphorical and the numerical is clearly revealed in digital
aesthetics. It may be too soon to coin such a category, but there is a recurrent
reproduction, if not cloning, of the bio-mimetic imagery within digital practices.

The construction of patterns as transcriptions of Nature's models marks the lead in
the articulation of our spatial entities. There is indeed an inherent beauty in the
formal achievements represented in digital shapes; their facility to associate with the
images of the natural is perhaps their greatest strength. Nature is the quintessential
representation of Beauty, even if we do not understand this in the context of
principles like decorum, composition or other architectural values in tradition.

The structural models offered by Nature have defined applicable methods for
engineering our building components and surfaces. The conceptual translation of
natural models into actual applications in design and construction is an
admirable fact. If the model of Nature is going to inform our procedures it has to be
defined in its applicability as well. Branko Kolarevic's notion of integrative design
presents a more convincing approach to translating the model of Nature as building
performance2. Models are subject to interpretative translations, and to behave like a
natural entity does not presuppose mimicking the image of one. If such conceptual
translation does not take place, then perhaps we designers are just inducing a
superficial rubric into our formal expressions.

Such translations do not inform many mainstream architectural projects today. The
proliferation of the bio-mimetic imagery tends to reveal more a cloning process than
a critical interpretation of the natural or numerical models brought forth by digital
tools. The organic pattern is cloned repeatedly as the latest manifestation of the
cutting edge in our schools and practices. There is no clear statement of how digital
aesthetics are defining contracts between the architectural image and the subject,
place or culture in which it is inserted. The risk of this aesthetic cloning lies in the
complete subjectification of the architectural object.

Ornament is back and so is white. Yet how the associated digital whiteness
translates within the discourse remains to be seen. We could infer that whiteness
responds to the accentuation of the spatial depth of the projects we propose. Or it
might respond to the strategic potential of not defining surface materials. Or is it that
white is the best that our software can do for us with the least amount of ink? If
this condition responds to an extension of the Modern fashion3, what differences are
drawn between the symbolic economies of the Modern white and the Digital white?
Are their meanings equal? We should question the role of whiteness as an
architectural element. We must define what symbolic economies are constructed and
how they operate outside the interface. Otherwise design efforts will remain captive
to fashion. Or white will remain another chimeric side effect.

What is the role of digital metaphors in urbanism? Are digital metaphors
operative within the scale of the urban? Is the Bio-mimetic metaphor an
insufficient image when confronted with the complexities of context?

Perhaps one of the most conflicting issues of digital design is its translation within
the urban. In many instances, the transactions proposed by digital design have not
reflected a clear engagement with urban space. The question of context is evaded, if
not omitted, in the rubric of many digital practices today.

In the case of discourses such as parametric urbanism, the content of the proposals
follows the same disassociation from inherent transactions within the context. The
imagery that recurs in many cases is that of, and I quote, "viral agents,"
"opportunistic morphogenetic entities" or "parasitic self-organizing systems." Are
there buildings in these techno-utopias anywhere? Is this really a techno-utopia, or
just a pandemic?

Nowhere is the digital design metaphor of Nature weaker than in the context of
the city. The transference of the image of the architectural object as an organic
entity has obscured the integration of the building and its context. The imagery of
many bio-technological urban proposals has exacerbated the objectification of
architecture. The building is not proposed as a supporting component of the city,
maybe because the trance of formal achievements is really a claim for protagonism.
It is hard to imagine living in a city composed just of Hadid or Lynn buildings, as
much as we love them. Perhaps Dubai holds more important lessons in this regard
than we may think.

The complexities of nature are revealed as episodes of balance and integration. Even
the chaotic or unintelligible episodes of the natural environment serve a determined
purpose within these two conditions. The induced metaphors of the genetic or the
biological have not been fully considered in their potential applicability within the
city. We might have transgressed typological referents, but to suggest that digital
architecture has resolved the problem of context is a wandering of the mind.

How is the concept of design craftsmanship redefined by the numerical
metaphor? Is the problem of craftsmanship resolved in this transaction? Is
it the capacity of our digital tools that defines design craftsmanship?

The transaction between metaphorical construction and digital design raises a
particular question: how is the role of design craftsmanship redefined by this
transaction?
The digital tools available force the designer to confront a latent issue. Is the role of
craftsmanship defined by the capacity to operate a given computer program?
Is proficiency in computer software, or tools, the latest definition of design
proficiency? Will this be the criterion by which to judge the value of architectural
design, or the architectural object itself?

For many practices today, proficiency in the digital device provides their edge.
Digital craftsmanship provides for the return of the master builder. The architect has
regained a space to articulate his images without the necessity of a precedent. This
freedom grants the opportunity to construct seductive metaphors to validate and
structure the logic of our designs. Metaphors are necessary to invent the new.4 The
digital is just the medium to articulate those metaphors visually and physically.

This brings forth the notion of conceptual craftsmanship. The merit of the digital
perhaps lies in the conceptual capacity of designers to construct their digital
metaphors so as to redefine obsolete dwelling patterns or cultural practices, not just
form. Craftsmanship today lies in the skill to propose potential transactions that
engage the subject, the environment and form as an integral whole.

Metaphors in the digital are Poe's psychotropic substance.5 But metaphors cannot
remain the induced narcosis for the designer's own experience; dwellers want
some of that too! The translation of digital metaphors into experiential content is the
defining factor of craftsmanship amid the digital. If the digital reveals something, it is
that the architect is more a poetic translator than a master builder or a computer
programmer. But then again, there is poetry and there are rhymes.


1. see Carpo, M., and Furlan, F. (eds.), Leon Battista Alberti's Delineation of Rome: Descriptio Urbis Romae, Tempe: ACMRS, 2007.
2. see Kolarevic, B., "Post Digital Architecture: Towards Integrative Design," in Critical Digital: What Matter(s) Conference Book, Terzidis, K. (ed.), Cambridge: Harvard University, 2008, p. 155.
3. see Wigley, M., White Walls, Designer Dresses: The Fashioning of Modern Architecture, Cambridge: MIT Press, 1995.
4. see Bachelard, G., On Poetic Imagination and Reverie, Gaudin, C. (trans.), Dallas: Spring Publications, 1987.
5. see Bachelard, G., On Poetic Imagination and Reverie, Gaudin, C. (trans.), Dallas: Spring Publications, 1987, p. 106.

Wall-E's Prophecy: or How to Clean Up Toxic Residues of the Digital Age

Joseph B. Juhász
University of Colorado USA

Robert H. Flanagan
University of Colorado USA

The 20th Century cultivated the ideal that the synthetic inevitably trumps the original.
Postmodernism pushes this phenomenon to an extreme, claiming to believe that the synthetic
is the original. On Wall Street, the companion legend emerged: that derivatives inevitably
trump securities, just as securities trump the underlying product. It does not take truly
prophetic powers for us, for Wall-E, Pixar, or Wally Disney to see that this house had to
collapse. The 21st Century's decentering calls for enacted plays, and players, to act with
responsibility, identity, and authenticity: developing Main Street, not its synthetic
derivatives. Decommodifying architecture is action; architectural presentation is a form of
action; without action representation is misrepresentation. Architecture tells stories. Stories
are enactments.
1. The Call
The emergence of new social, political, and economic concepts, conditions, and practices
such as that of globalization, ubiquity, outsourcing, or design/social networking together
with their corresponding technologies have shaped our world in a way that has no
precedence (from the call for papers for this conference).
Oddly, the above assertion remains utterly true despite the crash of the digital world in the
meantime: the 20th Century mantras embedded in the sentence were true for all of us
when written, and remain as a toxic residue of the closing days of the 20th Century even
now, much like the devastated toxic planet pictured prophetically in Wall-E. Globalization is
dead. Ubiquity is dead. Outsourcing is dead, at least until variants are resurrected and
repackaged. The developing and developed worlds are now aware of the virulent potential
and the insidious societal toxicity of unbridled capitalism's invisible hand (particularly on
digital autopilot). Adam Smith's Theory of Moral Sentiments is poised to upstage his Wealth
of Nations.

We are here to bury the 20th Century and then to look about.

1.1. Adam and The Fall


Figure 1: The Cathedral of Commerce

Cass Gilbert's Cathedral of Capitalism, the Woolworth Building, seems at first glance to be
the mere unambiguous embodiment of Adam Smith's The Wealth of Nations. Commerce
and unbridled materialism seem manifest as the superior equivalent, the representation if
you will, of the submission of moral sentiments to the brutality of free market forces:

"It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our
dinner, but from their regard to their own interest. We address ourselves, not to their
humanity but to their self-love, and never talk to them of our own necessities but of their
advantages."1 (Wealth of Nations)

Today we witness an implosion of the synthetic Eden. The promises made have never been
delivered, either by the Cathedral of Capitalism or, for that matter, by its evil twin, the
Cathedral of Learning.

Figure 2: The Cathedral of Learning

The fall.

1.2. Adam revisited

Yes, indeed, in part, the Cathedral of Capitalism and the Cathedral of Learning exhibit in
physical form the story of Adam Smith's The Wealth of Nations. But, look again, and this
is equally important: they make room for Smith's antidote, The Theory of Moral
Sentiments, and thus they are not merely temples to pure greed, efficiency, or

"How selfish soever man may be supposed, there are evidently some principles in his nature,
which interest him in the fortune of others, and render their happiness necessary to him,
though he derives nothing from it, except the pleasure of seeing it."2 (The Theory of Moral
Sentiments, p. 3)

But, looka this:

Figure 3: Revit racer3 (or homage to Smith's cathedral of unmitigated greed)

Adam Smith's Theory of Moral Sentiments is based upon the everyday observation that
people do not only do selfish things that maximize profit; they also do unselfish things that
benefit those less fortunate than themselves. Adam Smith sees two fundamental forces
working in opposition to each other: on the one hand the survival of the fittest, on the
other sympathy, empathy, and generosity, as complementary paradoxical opposites. If we
observe the Revit-designed Fantasy Design mobile home racer (above) and compare and
contrast it with the two Cathedrals pictured above it, its one-sided moral degeneracy and
bankruptcy are as evident as a picture of Mr. Madoff. This racer has crashed in flames. No
one cheered.

This graphic is the 20th Century love of synthetics carried to its ultimate and bankrupt
apotheosis. It offers no exit, no excuse, no room for moral sentiments. The Endgame:
whereto from here, and whence? Wall-E. Poor Bobo!

1.3. The Revit Racer: The Cathedral of Unmitigated Greed: The Triumph and
Tragedy of BIM Design.

The progression from AutoCAD to BIM is from the optimization of square inches to that of
cubic inches. BIM's objective is to maximize short-term profits in three dimensions. BIM
controls the market inside the building; there are no decisions to be made; it is the invisible
hand guiding the invisible hand. Market decisions in the first instance, and supervening
synthetic, toxic, imitation market decisions, trump the sentiment of the architect.

In BIM architecture, mechanisms for moral and ethical behavior are circumvented by BIM's
optimization algorithms: the traditional design process is short-circuited by an invisible
decision maker who gives you the illusion of freedom and decision making, but is driven,
like a Revit racer, by sheer unapologetic selfishness and greed. BIM feels neither shame nor
guilt. Only sociopaths find a world divorced from sentiment acceptable.

"With 3-D Drawing Software [Revit], Freedom Tower Architects Put Mind's Eye on a PC
Screen."4 On Wall Street, it was time to celebrate. "[An] industry-wide revolution altering
how architects transfer ideas from their brains to paper"5 had arrived. The architect's
touchy-feely hocus-pocus was decoded by this snappy new technology. Finally, a labor-saving
device to connect the dots: mind's-eye-direct-architecture. No middleman, no
confusing conceptual issues, no separate schematic distraction, no wasted steps in design
development; just the mind's eye, the computer, and out pop construction documents. Pop!

In The Theory of Moral Sentiments, Adam Smith describes human beings who are not
sociopaths, who are capable of shame and guilt; this accounts for their moral behavior in
the actual (pre-synthetic) marketplace. With BIM, there is neither a marketplace nor shame
nor guilt, but there is a reason to excuse amoral decisions (lacking in sentiment): the
overall good of an optimized design process. BIM gives you the excuse to deny moral
sentiment.

We care.

1.4. Poor Bobo!

"Poor Bobo came to a sticky end. He was riding in the Duc de Ventre's Hispano-Suiza when
his falling piles blew out of the car and wrapped around the rear wheel. He was completely
gutted, leaving an empty shell sitting there on the giraffe-skin upholstery. Even the eyes
and the brain went, with a horrible shlupping sound. The Duc says he will carry that ghastly
shlup with him to his mausoleum." (William Burroughs, Queer, p. 40)

Sadly, the Revit architect's fate may be that of hapless Bobo: while some proclaim progress
in new dimensions in design, others mourn its tragic end. But in the end, the blessing is that
he feels no pain; he can no longer distinguish phantom pain in a phantom limb from
phantom pain in a real limb, or from real pain in a real limb. And yet, it hurts.

1.5. Synthetic markets, synthetic buildings, synthetic curricula.

The United States is in the process of becoming a parasite on the rest of the world as it
blithely follows in the footsteps of the British Empire. If The Wealth of Nations and its even
more toxic derivatives were its sole accountability,6 that would be okay; but if real, physical
products and real markets are engaged, it is not.

Paradoxical market forces have surfaced. On the one hand, and despite the peachy-preachy
rhetoric of free marketers, capitalist sentiment is doubly subverted and has flamed out
on Wall Street. Free markets are devised, then subverted; then, in the arcane
shadow-world of derivative markets, phantom fear and phantom greed engage the
gambler's instinct. As for traditional markets, micro as well as macro, clever business
models subvert markets in plain view. Department stores are not competitive, banks are
not competitive, and there is no market in insurance: rates are what the companies charge.

Higher Education professes immunity from the scourge of subverted markets, even as it
blithely engages the new era of outcomes-based learning. In today's architecture curricula,
the design process being taught increasingly propounds the phantom virtual merit of
results-oriented rather than exploratory learning. It is education as end-of-the-road big
business; it is Wall-Mart; it is Wall-E; not a marketplace of ideas, but a greedy, selfish,
short-term-profits based curriculum.

In pursuit of the promise of a superior academic product, the outcomes-based architecture
curriculum works backwards from the goal. In outcomes-based learning, BIM and
AutoCAD are taught to engage building in a greedy way. The University of Phoenix is no
longer perceived to be a social aberration, but is now a model of a legitimate university
education. The poltergeist of the invisible hand of profits shapes the curriculum; curiously,
the curriculum is devised in the image of a market, but since there is no real market, only
the illusion of a market is commodified.

Are architecture and architectural education bifurcating? Or are we all headed for Davy
Jones's locker? Is McDonald's, or more accurately, Smith's racer, the end result of this?

When BIM goes away, what should take its place? BIM empowers the architect, as an
ancient script empowered a priesthood. It is a dense language virtually immune from the
common man's grasp. Once BIM is relegated to the scrap heap, what is the architect
then? Whence and where, Wall-E?

What do we do now? One-third of the architects and firms today are joining the
employment line, and yet we are producing students at an unprecedented rate. With
unemployment at hand, is this unconscionable? Are we graduating students into an illusory,
toxic marketplace with little hope of employment? Are we training people appropriately?
The BIM world is imploding all around the ivory tower.

What kind of skills can the world use under these circumstances, or do we care? How many
BIM-trained people and AutoCAD-based people can the crumbling market absorb? Is this
mode of education pure greed on our part?

Outcomes-based education is founded on measurably setting goals before the journey is
completed; it is a self-destructive mechanism fueled by pure greed. There is perpetual
reiteration and refinement without evolution, thus removing not only any ethical issues but
even the market mechanism. What do we do?

2. What do We Do?

The model of an open-ended design process is not one that works back from the
conclusion. It is writing the narrative from the point of view of the observant crewmember.
BIM is from the point of view of the virtual passenger; AutoCAD is from the point of view of
the paying, embarked passenger; both are passive views, and BIM is the worse. Open-ended
design is from the point of view of the rigging and the sails: the deckhand's view. The
observant deckhand. Today's curricula are training passengers: how to be a passenger on
the cruise ship. Pure consumers. Students are virtual passengers on a virtual cruise ship,
captives of their own devices, celebrating freedom, experiencing confinement. We officers
work for The Man. Overseers.

We are masters of our own destiny. We need not be pawns of the administration
(Nietzsche). Our future is up to us. Our destiny has not been written. It is up to us to
determine our fate.

The narrative allows one to create a theory of moral sentiments: open-ended. It is not
worked backwards; it works forward. It is exploratory and it engages a plan. It is the
journey to The Western Lands.

With and in the narrative, the ending is implicit in the first line. The text is not the
consequence of working backwards to the first sentence.

Figure 4: Photography does not work backward7 (Juhász)

Photography does not work backwards; there is no tripod. It instinctively knows what you
want to do. It is violent and invasive.

Figure 5: The Oxygen House8 (Douglas Darden)

The pencil is a violent instrument that gouges paper.9

3. Neither Back nor Forth.

We cannot go back. We must move forward.

4. In Conclusion, then.
The 20th Century cultivated the ideal that the synthetic inevitably trumps the original.
Postmodernism pushes this phenomenon to an extreme, claiming to believe that the
synthetic is the original. On Wall Street, the companion legend emerged: that derivatives
inevitably trump securities, just as securities trump the underlying product. It does not take
truly prophetic powers for us, for Wall-E, Pixar, or Wally Disney to see that this house had
to collapse.

In the 20th Century, we had deracinated ourselves; the decentering of the Age of Reason
was but a recentering into our sense of being able to decode the fundamental laws of the
universe. The toxicity of this notion sundered us from nature herself.

The 21st Century's decentering involves a structural and functional interaction between
brute reality and its enacted plays, and players, to act with responsibility, identity, and
authenticity: developing Main Street, not its synthetic derivatives. Decommodifying
architecture is action; architectural presentation is a form of action; without action
representation is misrepresentation. Architecture tells stories. Stories are enactments.
Enactments now take place in a world that extends into the global, the digital, and the
virtual. Architecture must emancipate itself from the invisible hand of the derivative.

1. Smith, A., Wealth of Nations, Bullock, C.J. (ed.), New York: P. F. Collier & Sons, 1937, p. 19.
2. Smith, A., The Theory of Moral Sentiments, New York: Arlington House, 1969, p. 3.
3. Image referenced from Autodesk's Revit website, as of February 28, 2009.
4. Frangos, A., "New Dimensions in Design; With 3-D Drawing Software, Freedom Tower Architect Puts Mind's Eye on a PC Screen," Wall Street Journal, July 2, 2004, Business section, pp. 1, 4.
5. Frangos's comments relate to a quotation of Charles Eastman: "In the past, architects carried in their head what the three-dimensional conception of the building was and mentally translated that into two-dimensional drawings."
6. As imagined in the Thatcher-Reagan era.
7. Photograph and comments by Joseph Juhász, 2009.
8. Darden, D., Condemned Building: An Architect's Pre-Text; Plans, Sections, Elevations, Details, Models, Ideograms, Scriptexts, and Letters for Ten Allegorical Works of Architecture, New York: Princeton Architectural Press, 1993.
9. As related by Douglas Darden in conversation with Joseph B. Juhász.

Intensity, Extensity and Potentiality

A Few Notes on Information and the Architectural Organism

Aaron Sprecher
McGill University School of Architecture, Montréal, Canada


The search for guiding principles for a behavioral architecture, meaning one responsive to
the human environment, has created a centennial obsession to give life to architectural
forms. A few displays of this search are Karel Honzík's Biotechnics: Functional Design and
the Vegetable World (1937); Richard Neutra's Survival Through Design (1954); Reyner
Banham and François Dallegret's Environment-Bubble (1965); Superstudio's
Microevent/Microenvironment (1972); Marcos Novak's Transarchitecture (1995) and Karl
Chu's Genetic Architecture (2000). Their theoretical assumptions share a conception of
architectural performance seen in terms of the capacity to reflect and draw from the
complexity of the natural organism. While they emerged in different contexts of
knowledge, these assumptions have in fact generated an approach to architecture that is
intricately associated with life and its power to stream and generate information. Intensity,
extensity and potentiality are three notions associated with the exponential influence of
information on today's architectural production. Their respective attributes have generated
an anxiety that no longer arises from the will to represent life but from the desire to
procreate it. It is here proposed to review some arguments about the reasons why
architecture has always cared to integrate the spheres of information.

1. Abstract organisms

With the dramatic development of information theories and related technologies in the
1960s, the discipline of architecture marked a shift from a mere representation of life
to the development of systems that would ensure the conditions for its emergence and
sustainability. One of the consequences of this shift is the disappearance of the object and
its replacement with abstract systems in charge of creating total environments.1 This "life
without objects"2 has triggered numerous projects and researches that aimed at developing
environments where information would be considered the common currency for all the
constituents of the real, be they organic or non-organic, living or inert, physical or virtual.

Most notoriously, in 1965, with the proliferation and specialization of building systems,
Reyner Banham described the shift from formal to behavioral systems as a "baroque
ensemble of domestic gadgetry [that] epitomizes the intestinal complexity of gracious
living."3 This analogy of mechanical and electrical services with the systems regulating the
living organism is striking because it suggests that the accumulation of energy functions, as
diverse as climatic, wireless and grid-based, implies the disappearance of the form, image
and representation of the architectural object. In A Home is not a House, François
Dallegret's drawings for Banham are a tribute to this conglomeration of mechanical,
electrical and structural systems, with their associated requisites and interactions.4 Here,
the house is no longer a machine for living but a machine for literally procreating life. Its
amniotic envelope preserves and nurtures the human body while streaming and screening
chemical, physical and biological information. Of significance here is the fact that the house
as an object disappears while information systems expand in all directions. Dallegret's
Environment-Bubble5 is in fact not just reduced to a collection of essential components that
aim at supporting the human body. It is above all a baroque system (Banham 1965),
whose existence depends on its ability to collect, associate and connect a multitude of
information. In other words, Dallegret's bubble is first and foremost an operational
ensemble of data-compressed entities. This intensified system is also reactive. It involves a
vast number of processes of adaptation to the external and internal environment of the
human body. The Environment-Bubble as an infrastructural network marks the advent of a
design paradigm of performance for an architecture of life, energy and (de)regulated
behaviors. Similar to a living organism, Dallegret's bubble emerges out of energy fluxes,
organic veins forming a unitary system of interwoven and interacting sub-systems which
combine effectively to form the whole. And yet, beyond its biological analogy, Banham and
Dallegret's home resembles an abstract system of operational devices that are defined by
their behaviors, capabilities, and sets of innate and imparted knowledge.

With François Dallegret, Reyner Banham, Yona Friedman, Claude Parent and Paul Virilio,
architecture undeniably embraced the spheres6 of information. Architecture ultimately
turned into an interface for the streaming of energetic, sensorial and transparent fluxes.
This radical mutation of the architectural object into an abstract system was largely
influenced by the emergence of a new perception of the real.

This perception found its utmost expression in 1972. In Anti-Oedipus, Gilles Deleuze and
Félix Guattari declared: "There is no such thing as either man or nature now, only a process
that produces the one within the other and couples the machines together. Producing-machines,
desiring-machines everywhere, schizophrenic machines, all of species life [...]."7
This celebration of all of species life was certainly in everyone's mind at the opening of
Emilio Ambasz's groundbreaking 1972 exhibition at the Museum of Modern Art, Italy: The
New Domestic Landscape. Achievements and Problems of Italian Design.8 As the
participating group Superstudio declared: "In this exhibition, we present an alternative
model for life on earth. We can imagine a network of energy and information extending to
every properly inhabitable area. Life without work and a new potentialized humanity are
made possible by such a network. (In the model, this network is represented by a Cartesian
square surface, which is of course to be understood not only in the physical sense, but as a
visual-verbal metaphor for an ordered and rational distribution of resources.)"9

Presented on this occasion were two seminal projects that marked a critical position
regarding the surfacing of abstract networks stimulated by the ever-increasing expansion of
information assets, namely No-Stop City10 by Archizoom and Microevent/Microenvironment11
by Superstudio. The latter developed a critical model that announced the advent of an
economy of information as a network that reconfigures both production and consumption
until their final disappearance. Microevent/Microenvironment envisions a boundless
connective system where the dialectical relation between human and nature is reconfigured
into a fused, integrated, and limitless platform of varied informational processes. With
Microevent/Microenvironment, the Italian group described an anti-ideological metropolis
formed by an abstract machine of naturalized technology where information penetrates and
circulates onto the surface. For Superstudio, the city ceases to mediate the social and
political realms, and gets reduced to a concentrated network of energy and
communication.13 It heralds a programmed architectural entity that is no longer a three-dimensional
representation of the real but rather a multi-dimensional model of knowledge.
"In this exhibition," writes Superstudio, "we present the model of a mental attitude. This is
not a three-dimensional model of reality that can be given concrete form by a mere
transposition of scale, but a visual rendition of a critical attitude toward (or a hope for) the
activity of designing, understood as philosophical speculation, as a means to knowledge, as
critical existence."14

Within this proto-realist environment, the object is replaced with a uniform field of fluid
information that supersedes the typology, a spatial figuration of the social structure, with a
spontaneous, non-conflicting social arrangement.15 The collective gives way to the
connective, the causal event to the ubiquitous fiction, the rigid structure to the open
system, the border to the infinite horizon. Microevent/Microenvironment depicts a condition
where information networks supersede hierarchical attributes of quantity and quality. It
marks the emergence of a mode of codification that underlies the ubiquitous character of
information. It envisions a mode of information where each component of the system is a
potential for a particular event that ripples onto the rest of the network.

Banham and Dallegret's Environment-Bubble and Superstudio's
Microevent/Microenvironment express three essential conditions related to the nature of
information. First, these critical projects recognize the intensity associated with information
streams that permeate the deepest structure of matter. Second, the extensity of
information networks has provoked the disappearance of the object or, in other words, its
reconfiguration and displacement across a boundless environment of transparent
operations. Third, conceptualizing the real in terms of information units supposes that the
human environment and nature are directed, controlled and managed by mechanisms that,
when discovered, contribute to the shaping of guiding rules and principles. These
mechanisms stand at the core of laws that regulate the human understanding of reality. The
striking feature of these mechanisms rests on their ability to add, integrate and organize
tremendous amounts of information into systems that are perceived as increasingly abstract
due to their complexity. These informational mechanisms render a reality that is ever more
shaped by potentialities, instabilities and probabilities. Approaching our reality in terms of
information reveals a world of intensity, extensity and potentiality in the image of
Dallegret's mechanical systems and Superstudio's endless field of information.

2. Behavioral organisms

In recent years, Dallegret's diagrams and Superstudio's environments have evolved into
computational codes and systems. In the architecture studio, designers now continuously
acquire terms and languages borrowed from the sciences. This transformation of
the studio into a scientific laboratory was triggered by the introduction of information
technologies. From the beginning, the technological procedures aimed at producing models
that were ever more efficient, accurate and responsive. The change in practice does not
imply that architecture has turned into a new science but rather that its tools have become
scientific. Acting at the core of today's practice, these scientific procedures have gradually
transformed the fixed and idealized condition of the architect into one that activates
behavioral, responsive and adaptive designed systems. A consequence has been the
emergence of architectural models that are now more than ever linked to the notions of
intensity, extensity and potentiality. Some recent debates in the scientific community offer a
perspective on how to assess these three notions and how they influence contemporary
architectural production and its associated perception of reality.

Figure 1. Behavioral and responsive system. The Hylomorphic Project by Open Source
Architecture (MAK Center, Los Angeles, 2006)

At the 39th Rencontres Internationales de Genve in 2003, the perception of reality in

terms of information and technology was addressed by a number of researchers from
various domains of scientific research.16 Among the participants, the philosopher Michel
Serres proposed a model of knowledge based on our technological capability to gather,
associate and connect information. The human ability to stream and screen information has
prompted an exponential development of technologies that projects man to the origins of
life while propelling a future built upon evolutionary processes that are increasingly fast.17
The emergence of new sciences based on the nature of information, such as genomics,
biochemistry and computational biology, has accelerated our evolution and adaptation. This
acceleration results from the ever-increasing specialization of technological tools. As Serres
puts it: "What is technology? A great economy of death and time."18 He considers the tool
as a source of "condensed time."19 In other words, the growing specialization of technical
tools brought about a swift progress in terms of human evolution and adaptation. Compared
to the long and patient evolution of nature, Serres points out that technology has in fact
intensified time to the limits of human comprehension. The author of the book series
Hermès sees technology as a function of intensity: the intensity to memorize and associate
pieces of information that are collected at all scales of reality.

For the French biologist Henri Atlan, this exponential development of technology has
transformed our perception of the world and, in particular, its representation.20 One of the
main consequences of this vision is the blotting out of the distinctions between living and
non-living organisms. Science has in fact established a continuum between conscious and
non-conscious beings by considering them to be made of the same material substance
but with a differentiated organization. While obvious differences exist between living and
non-living organisms as well as conscious and non-conscious beings, science also recognizes
a substantial unity among them; a historical unity if considering their pre-biotic and
biological processes of evolution.21 For Atlan, it is therefore a matter of limits: limits
between the human and the environment. These limits are increasingly blurred by the
mechanical character of information reported across nature. To the scientists who
continuously scrutinize molecular and atomic interactions, nature seems indeed animated by
increasingly mechanical laws. This model of nature does not announce a post-human era as
some would claim but instead the emergence of a model where the human being is no
longer idealized and limited to a particular terrain. In this model, people are more than ever
connected to the world while acknowledging the existence of common traces, genetic or
molecular, that belong to humanity and all other sources of organic or non-organic matter.22
For Henri Atlan, information has therefore erased the limits and rendered a reality in
constant extension, a reality made of abstract and extensive mechanisms that are
continuously self-organizing and recombining.

In response to Michel Serres' model of technologically intensified evolution and Henri Atlan's
model of the extensive nature of human limits, the physicist and theorist Roland Omnès observes
that our models of nature are rooted in a series of laws that are limited in number and yet
tremendously condensed in terms of information.23 With the exponential development of
technology, the laws have in fact generated models that combine both intensive and
extensive sets of operations. These laws tend toward a continuum of guiding principles that
bridge the macro-scale phenomena of our universe and the micro-scale observations of
matter. They make up a world that is inherently mechanical, as we find them everywhere:
classical laws like the principle of relativity, or abstract laws like those of quantum theory.24
For Roland Omnès, the nature of these laws and mechanisms expresses potentials for the
unfolding of dynamic processes that take place in nature. Importantly, this model of
experimental thinking rests on a concept of addition. The addition of a wide variety of
principles induces the emergence of potentials that are combined and merged with the
ultimate goal of rendering a comprehensively organized world. "There is no cause and effect
[…] but a combination of potentials, an addition of all the events that may occur outside of
any constraints."25 Omnès' model of potentiality is particularly interesting considering the
nature of information. With its capacity to flow across all domains of research, information
has in fact increased the confluence of knowledge on a technological platform that is
increasingly integrated but also continuously reconfiguring under the influence of parametric
operations.

3. Spherical organisms

From the point of view of today's informed architecture, the notions of intensity, extensity
and potentiality suggest that the discipline has transformed the nature of its object. This
object has mutated into an organism shaped by intensive computational operations that
continuously inform, influence and modify its nature. The architectural entity is now shaped
by information systems that adapt and evolve at the rate of Moore's Law.26 The capacity to
manage the complexity of information inherent to this entity has become dependent on the
exponential development of our calculation capabilities. Following Michel Serres' model of
evolutionary rate, the architectural organism is now the object of an accelerated evolution;
it has become endowed with an exponential capacity to absorb and process complex sets of
information.
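The exponential rate invoked here can be made concrete with a minimal arithmetic sketch. The `capacity` function, the unit baseline, and the two-year doubling period are illustrative assumptions for the purpose of the example, not figures from the text:

```python
# Minimal sketch of Moore's Law as exponential growth: computational
# capacity doubling roughly every two years. Baseline and doubling
# period are illustrative assumptions.

def capacity(years, baseline=1.0, doubling_period=2.0):
    """Relative computational capacity after `years`, doubling every
    `doubling_period` years."""
    return baseline * 2 ** (years / doubling_period)

# Over the roughly 40-year span discussed below, capacity grows by a
# factor of 2**20, i.e. about a million.
print(round(capacity(40) / capacity(0)))  # 1048576
```

Under these assumptions, the computational capacity available to a designed system grows about a million-fold over four decades, which is the sense in which the architectural organism's evolution can be called accelerated.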

Figure 2. Intensive computation. N-natures by Open Source Architecture (Rhode Island
School of Design, 2009)

This computational capacity does not depend solely on intensive processes of adaptation. It
also embraces a condition of extensity. Pressured by computational tools that were primarily
developed in other domains of knowledge, today's architectural organism is no longer
autonomous but depends instead on a wide range of research domains. One of the
consequences of this transdisciplinary state is expressed by the current proliferation of
design activities in emerging fields such as material and fabrication research, interactive and
immersive media, and most noticeably, biologically-inspired modeling. As Greg Lynn
recently noted, "In the near future, it will not be surprising to see architects designing
molecules in scientific laboratories."27 In other words, the expansion of information and its
associated technologies implies that architecture is increasingly porous to other fields of
knowledge. Its concerns are no longer constrained to a particular dimension but extend across
all scales simultaneously, from the intrinsic structures of materials to the macro-scale of
environmental phenomena. The discipline is consequently confronted with a large number of
parameters that are relayed, processed and re-sampled by sophisticated computational
protocols. Strikingly, these virtual engines are continuously fuelled by a great diversity of
information that blurs the limits between organic and non-organic materials, living and
non-living organisms. Architecture has now embraced Henri Atlan's model of extensity by
creating a continuum of knowledge that expands at all scales.

Figure 3. Architecture as nature. The phototropic installation ParaSolar by Open Source
Architecture (Center for Performing Arts, Tel Aviv, 2009)

This continuum has radically transformed the nature of the practice. By embracing a great
diversity of information and technologies, the architectural entity has moved from a static to a
dynamic condition over the past 30 years. It now resembles an energetic system, meaning
that its existence depends on the addition and association of parameters, each representing
a potential condition for the reconfiguration of its intrinsic nature.28 Above all, technology
has exponentially increased its ability to add parameters, thereby producing models that
are, too often idealistically, qualified as "emergent." This notion of emergence is often used
to describe an architectural entity that expresses a formal complexity produced by
increasingly blurred computational operations. And yet, the redundant use of this notion is
not surprising in view of a contemporary reality that appears more and more unstable and
complex.

Figure 4. Energetic system of information. C-Chair by Open Source Architecture (Belgium)


In the past 40 years, roughly since the advent of information sciences and technologies,
architecture has undergone a profound transformation of its status. And yet, from
Dallegret's Environment-Bubble and Superstudio's Microevent/Microenvironment to today's
morphogenetic desires, it remains fascinated with life and its complexity. Considering the
three notions of intensity, extensity and potentiality that act in the most profound structures
of our informed world, architecture is no longer interested in representing life but rather in
procreating its conditions of evolution, adaptation and duration. The architectural organism
is now sensitive, mutative and responsive to its own existence, or, as Peter Sloterdijk
expresses it, it now embraces the "ambient spheres"29 of our world.

1 See Total Environment, an exhibition curated by Professor Alessandra Ponte and her
students at the Université de Montréal, Canadian Centre for Architecture, Montréal, Canada,
March 27th to August 23rd, 2009
2 Superstudio, "Microevent/Microenvironment" in Rethinking Technology: A Reader in
Architectural Theory, Braham W. and Hale J. A. (eds.), New York: Routledge, 2007, p. 196
3 Banham R., illustrated by Dallegret F., "A Home is not a House", in Art in America, New
York: Volume 2, 1965, pp. 70-79
4 Ibid, p. 77
5 Ibid, p. 77
6 Sloterdijk P., Bulles, Sphères I, Paris: Hachette Littératures, 2002
7 Deleuze G. and Guattari F., Anti-Oedipus: Capitalism and Schizophrenia, London: The
Athlone Press, 1983, p. 2
8 See Ambasz E., Italy: The New Domestic Landscape. Achievements and Problems of Italian
Design, New York: Museum of Modern Art, 1972
9 Superstudio, "Microevent/Microenvironment" in Rethinking Technology: A Reader in
Architectural Theory, Braham W. and Hale J. A. (eds.), New York: Routledge, 2007, p. 196
10 See Branzi A., No-Stop City, Paris: Editions HYX, 2006
11 See Ambasz E., Italy: The New Domestic Landscape. Achievements and Problems of
Italian Design, New York: Museum of Modern Art, 1972

12 Superstudio, "Microevent/Microenvironment" in Rethinking Technology: A Reader in
Architectural Theory, Braham W. and Hale J. A. (eds.), New York: Routledge, 2007,
pp. 195-202
13 Ibid, p. 197
14 Ibid, p. 196
15 Ibid, p. 197
16 See Nivat G. (ed.), XXXIXes Rencontres Internationales de Genève, Geneva: Éditions
L'Âge d'Homme, 2003
17 Serres M., "Nouvelles Limites de l'Humain", in XXXIXes Rencontres Internationales de
Genève, Nivat G. (ed.), Geneva: Éditions L'Âge d'Homme, 2003, pp. 13-26
18 Ibid, p. 19 (translation by the author)
19 Ibid, p. 21
20 Atlan H., "L'Humanité d'Homo Sapiens", in XXXIXes Rencontres Internationales de
Genève, Nivat G. (ed.), Geneva: Éditions L'Âge d'Homme, 2003, pp. 48-60
21 Ibid, p. 49
22 Ibid, p. 50
23 Omnès R., "Les Limites de l'Inhumain", in XXXIXes Rencontres Internationales de
Genève, Nivat G. (ed.), Geneva: Éditions L'Âge d'Homme, 2003, pp. 71-80
24 Ibid, p. 78
25 Ibid, p. 76 (translation by the author)
26 See Moore G., Excerpts from "A Conversation with Gordon Moore: Moore's Law", Video
Transcript, Intel Corporation, 2005
27 See Lynn G., Gen(H)ome Project conference, Los Angeles: MAK Center, curated by
Kimberly Meyer and Open Source Architecture, October 29th, 2006
28 Sprecher A., "Alive and Kicking: Energetic Formations", in Performalism, Grobman Y. and
Neuman E. (eds.), Tel Aviv: Tel Aviv Museum of Art, 2008, pp. 74-81
29 Sloterdijk P., Essai d'Intoxication Volontaire, Paris: Hachette Littératures, 2000, p. 91


Aaron Sprecher is co-founder and partner of Open Source Architecture (www.o-s-a.com). He
completed his graduate studies at the University of California at Los Angeles. His research
and design work focuses on the synergy between information technologies, computational
languages and automated digital systems, examining the way in which technology informs
and generates innovative approaches to design processes. Besides numerous publications
and exhibitions, he has lectured in many institutions including the University of Pennsylvania
(Conversation | Information In-formation N-formations), MIT (In-fluence Af-fluence Con-
fluence | Notes on N-dimensional proxemics) and Rice University (Dissipative Architecture).
Aaron Sprecher is co-curator and co-editor of the groundbreaking exhibition and publication
The Gen(H)ome Project (MAK Center, Los Angeles, 2006). He is a recipient of numerous
awards, among others, Fellow of Syracuse University Center of Excellence. Aaron Sprecher
is currently Assistant Professor at McGill University School of Architecture.

Credits :

Figure 1. The Hylomorphic Project is a project by Open Source Architecture (Aaron
Sprecher, Eran Neuman and Chandler Ahrens) in partnership with engineers Kristina Shea
and Marina Gourtovaia. Image by Joshua White.
Figure 2. N-natures is a project by Open Source Architecture with JBohn Associates. Image
by Kevin S. Deabler.

Figure 3. ParaSolar is a project by Open Source Architecture. Image by Open Source
Architecture.
Figure 4. C-chair is a project by Open Source Architecture. Image by Open Source
Architecture.

Is this what we are so afraid of?

Digital Media and the Loss of Representative Power

Mark Lindquist
North Dakota State University, USA

Until the 20th century, the spatial design professions were a privilege of the aristocratic
class, attainable only to those who could afford the time for pursuits in the visual arts and
education; this remained true for much of the past century. As the arts were
disseminated to the lower classes of society via public school programs, the upper class's
grip on the design professions loosened, allowing a greater diversity of students access to
higher education. A hangover from this historic shift is that, in many schools, design
competence is validated only by the ability to draw (a historically aristocratic pastime).
Digital media and the computer are altering the landscape dramatically, yet many spatial
design professions are slow or reluctant to engage with societal change. With the
emergence of a current generation of digital native students, it is now time to engage with
current societal shifts if we are to understand this new way of thinking and its impact on the
design professions. This paper will evaluate and assess the impact that digital media has,
and will have, on image making and authority in architecture and landscape architecture
practice. The shift in power and control of image making, from the experienced designer to
virtually anyone with a computer, will be examined and the current and future impact of this
paradigm shift will be discussed.

1. Introduction

The correlation between artistic ability, specifically hand drawing, and design ability and
thinking is a much-discussed topic. Paul Laseau asserts the importance of freehand
drawing as an essential tool and way of thinking for architects and designers,1 which is
echoed by Faruque2 in his book of a similar vein. The cognitive connection between
sketching and thinking has been presented.3 Chip Sullivan, a landscape architect, has
recently discussed the importance of drawing, emphasizing its significance for
understanding basic space around us.4 The impact of drawing in relation to society and
culture has been examined by Edward Robbins, establishing further the importance of
analogue techniques for the design disciplines.5 Anecdotally, the most recent conference of
the Council of Educators in Landscape Architecture (CELA 2008/2009) included two panel
discussions: one on the role of drawing in the profession of landscape architecture, the
other on the role of digital media in the profession. The drawing panel drew an audience of
over one hundred participants; the digital panel, fewer than ten. That there is on
the whole more interest in analogue design techniques than in digital techniques within the
spatial design professions is the basis for this paper. The question "what are we so afraid
of?" is at the root of this discussion, primarily seeking reasons that the environmental
design professions are currently more interested in revisiting conventional design
techniques rather than augmenting traditional techniques with digital techniques and the
new skill set the digital native student can offer the disciplines.

The perceived relationship between analogue representation techniques, namely hand
drawing, and a person's ability to succeed in the spatial design professions has inevitably
impacted those able to become architects and landscape architects. Until recently only the
upper class had the time or money for such artistic professional pursuits. That architecture
has been called "the white gentleman's profession"6 indicates the historic race, gender and
class bias of the profession. The importance of drawing for architects has been elucidated by
Edward Robbins in his book Why Architects Draw, stating that within the act of drawing, the
responsibility, standing and influence of architects is embedded.7 Is a possible loss of power
and control at the crux of architects' and landscape architects' fear of the digital? Is the
issue with the designer being displaced by the computer really about the fear that now
anyone can whip up ideas and designs digitally, coupled with a continued belief that not just
anyone can skillfully draw a napkin sketch? This paper will propose reasons for such fear
and argue that digital media has the opportunity to strengthen the professions of
architecture and landscape architecture by engaging the realities of a digital paradigm and
working with it rather than rejecting it.

2. The Digital Native

Students in the western world are being introduced to computers at what Todd
Oppenheimer argues is an alarming rate.8 The current generation of students, born in the
early 1980s through the mid-1990s, has been deemed "digital natives": they have grown up with
the internet, digital media and other devices, and are thus comfortable using this
technology in their daily lives.9 This seems to support Oppenheimer's alarmist position. The
claim has also been made that because of this paradigm shift, digital natives are
neurologically and psychologically different from previous generations.10 On a global scale,
the One Laptop Per Child program places such importance on digital technologies that it
pursues the lofty goal of providing one laptop to each child in developing countries,
empowering and educating them in ways never before imaginable. There is debate
concerning the level of impact that being digital natives has on the current generation.
However, dissenters agree with Prensky that both students and times are changing, and
major adjustments to how education is delivered will be required. The main critiques are
confined to the impact on educators as "digital immigrants"11 or the lack of rigor applied
to investigating the phenomenon thus far.12 However, it can be acknowledged that in
education, where programs in art and other pursuits once existed, they are now replaced or
overshadowed by education that supports and encourages the need for digital literacy and
fluency, and adjustments are necessary to accommodate and educate the current
generation of students.

3. Professional attitudes

The use of digital media to increase design capacity and contribute to intelligently
designed spaces was formally called for in landscape architecture in 2000.13 In
architecture, digital media has been investigated for
decades, as the existence of the Association for Computer Aided Design in Architecture
(ACADIA) since 1981 can attest. However, digital engagement and experimentation have
been the focus of only a small segment of architects and landscape architects. While thin margins
and demands for billable hours could be to blame, perhaps at issue as well is the rhetorical
nature of the view of the completely digital office promoted during the past decade, which
is as one-dimensionally unproductive as the anti-digital stance at issue in this paper.
Perhaps as a reaction to the idea of the purely digital office, the author has observed that
for much of the past decade an anti-digital stance has held sway in architecture and
landscape architecture programs and firms in Canada, the United States, and the United
Kingdom. This paper is not seeking to debate the merits of freehand sketching versus digital
media use; the issue at hand is why there are groups of designers the world over who will
not consider the design possibilities of digital media at all, while seemingly knowing very
little about its implications for design practice or thinking. The attitude that digital
media somehow diminishes the design process, or should simply be relegated to the
presentation stage, is prevalent. Two camps have been observed: on the one hand, those
who claim that the computer stifles creativity; on the other, those who see possibilities for
digital media to enhance design and are accepting and supportive of any tool that furthers
the creative act of designing. Yet within such critical discourse there is a prominent
anti-digital slant: why? This paper proposes that it is the fear of losing control by established
professionals that continues to feed the anti-digital agenda.

4. Power, authority and control: the importance of making images

The ability to create images has long held power. This is particularly true for the
environmental design professions. Differing from the time of the master mason when
buildings were constructed firsthand by those with knowledge of building practices and
techniques, architecture first separated itself by representing the building rather than
constructing it. The importance of the act of drawing and the drawing itself in architecture
is discussed by Edward Robbins in the first chapter of Why Architects Draw, The Social Uses
of Drawing. Beyond the obvious advantages of representation and design thinking, Robbins
identifies the importance of drawing in the social status and hierarchy of the architect within
society, and as the special talent that the architect holds over other professions or
tradespeople.14 This paper will focus on the social status aspects of both digital and analogue
drawing, visualization and representation. Many discussions on drawing versus digital media
concentrate on the technical or cognitive superiority of one medium over another; this
paper will focus on the reasoning behind such fierce debate over conventional and
digital media. It is my hypothesis that the loss of power associated with image making due
to the adoption of digital tools and techniques, and the effect this may have on the
professions, is one reason that digital methods are criticized.

5. Discussion

The anti-digital stance in architecture and landscape architecture, and slow or incorrect
responses to changing societal norms, are not unique to our professions. Current
attempts to keep digital media out of the design process in architecture and landscape
architecture could have parallels to similar attempts by the Recording Industry Association
of America (RIAA) when confronted with the advent of digital music files and file sharing.
How the RIAA continues to deal with digital music represents vast missed opportunities
when compared to the way that network television is confronting and engaging with digital
video sharing. Digital music files, specifically the mp3 format, allowed users to copy and
distribute music on the internet with an ease and speed never before imagined. The RIAA sued
people rather than change its now-outdated delivery model, out of fear of losing control.
Network television, confronted by digital video files and increased internet connection
speeds, which have in turn increased online sharing of movies and television, has slowly
changed its model and experimented to adapt to new realities and markets, offering
television programming online for free through a variety of different models. While not
always successful, the willingness to engage and change is critical to a new way of living
and working. Instead of changing, the RIAA chose to engage with protectionist measures
and lawsuits, none of which addressed in any capacity the issue of illegal music file sharing
and only solidified proponents of free speech against the aggressor.15

As computers and internet access become more widespread, broadening the definition of
who can design will be necessary. However, the problem still remains as to why this is
difficult for many within the professions of architecture and landscape architecture to
accept. I identify the outdated view of the designer as individual artist, and the mentality
this view necessitates, as an aspect which can in part explain this lack of acceptance and
engagement.

5.1 Designer as individual artist

The idea that architecture or landscape architecture is designed by an individual artist is
perpetuated by the continued proliferation of the napkin sketch and the "superstar"
architects of today. Within these notions there is little acknowledgement of the reality of the
intensive team effort involved in contemporary architectural and landscape architectural
design. The individual designer or "star" gets credit for something they may not have
actively participated in, or which simply took a greater number of individuals to create than
is credited. The sole designer model is at odds with a business model necessary for
contemporary design firms. In many instances there is a need to prove to a client that a
multitude of experts work on any project in order to validate the rigor of the process, a
necessity which was identified over 15 years ago.16 Yet the notion of the sole designer
persists, partly out of a perceived necessity of motivating creativity and drive within a
firm.17 In the authors opinion this form of individual motivation is outdated, and may not
represent the most productive means to motivate a digital native, who is accustomed to the
positive recognition of team-based collaboration.

5.2 Engagement versus relegation

In the design professions, the use (or rather relegation) of the computer to producing
presentation and construction drawings at the end of the design process has been
widespread, and is accepted even by anti-digital proponents. In the author's view, this
supports the theory that many fear losing control of image-making power, as digital media
can be dismissed as the tool of the draftsperson rather than as an active component of
the design process. The distinction between the draftsperson and the designer is an
important one, as computers are comfortably relegated to the mere technical rather than
creative side of things. Such relegation could be justified as necessary within the
contemporary office; however, the case has been made that such divisions of labor are
neither positive nor negative for working or learning with or without technology.18 The
notion that digital media necessitates more technical skill than artistic skill is arguable at
best. Sharing information and the interdisciplinary practice afforded by digital design
techniques removes the "black box" of mystery around the design process, demystifying the
"star" designer while enhancing the active, integrative designing that is possible when
incorporating digital technology throughout the design process.

5.3 Convincing-looking design

Another aspect contributing to the fear of adopting digital media methods and techniques is
the convincing look of the representations that novices are able to produce, whether the
design is sound or not. This is often criticized as being worse than a bad hand drawing, as it
somehow seduces the unknowing audience into liking a bad design. There are two
intertwined issues here, namely the subjective nature of design and that design is only
being evaluated based on representation. Weighing a novice design represented well
against a sound design represented poorly is a tenuous debate, and acknowledging this is an
understandably frightening reality for some. The reactionary solution to this issue is to
relegate the seductive digital media to the sidelines, which will no doubt be an unsuccessful
way of dealing with the issue. Designers must learn the new or unconventional ways of
evaluating design within the altered paradigm. That undergraduate students are able to
make images as convincing as, or more convincing than, those of seasoned practitioners is a
positive development. While this potentially undermines the authority, power and influence of
established practitioners, we must engage and change to capitalize on such developments.

5.4 Sketching limitations

If designers and design educators are seriously concerned with the development of design
ability, they should pay closer attention to studies indicating that beginning designers do not
use sketching to develop design ideas because they lack the fundamental skills to do so,19 while
they do have considerable experience with computers that can be capitalized upon. The
notion of drawing as a way of thinking is accepted. So why is this limited to paper and
pencil? Current digital natives are as comfortable with the mouse as with the pen, or more so.
Very little evidence exists to support either side of this debate. While it has been shown
that visual-thinking cognition differs when sketching with pen and paper compared
to sketching with a mouse,20 there is no evidence that this difference is for better or worse. Recent research
aims to mimic the act of drawing in real life as an interface to the computer.21 Seeking to
employ traditional interfaces in an attempt to make technology familiar to those from a
previous generation seems unnecessary for the current generation, which is likely as
comfortable with a mouse as with a pen, if not more so.

5.5 Moving forward

Many of the solutions associated with such a paradigm shift need to address the insular and
protectionist nature of the design professions. I propose four areas that will require
attention if the professions are to successfully engage and employ digital media and, more
importantly, the current generation of students and society in general:

1. Address issues of ego, and the architect as the individual master designer
2. An expanded view of the creation process and collaboration
3. Revise the antiquated concept of "paying dues", an attitude that does not fit with
the current generation
4. Engage with digital media and acknowledge the realities of the current generation of
students

The solutions should not be reactionary in and of themselves, as that would be unproductive
and perpetuate the ideological stances that exist. Rather than the current either/or
situation, as many opponents of digital experimentation advocate, a both/and pluralism that
becomes a hybrid of established techniques and experimentation with new ones needs to
emerge. The increased digital exposure experienced by students today calls for the
professions to accept digital media as a viable and varied tool for design.

6. Conclusion - changing modes of seeing/thinking/designing/learning

With all the power that representation of design vision holds, it is no wonder that there is a
subversive reaction against media and tools that enable the inexperienced person (student,
recent graduate, layperson!) to create visualizations as evocative as, or more evocative than,
those of the experienced design professional. Sophisticated-looking drawings can be produced
quickly, and at an earlier stage than at any time in history, by those lacking analogue
representation skills. That some seasoned, and even young, design professionals object as a
result to the use of the computer, claiming that one cannot design with it, is not surprising. The
common practice of relegating the computer, if accepted at all, to the final stage of design
for final renderings or construction drawings is not a productive way of dealing with
the new paradigm; such treatment is reactionary and unproductive. We must look past the
gloss and become conversant with reading digital media in order to begin to understand
the impact these digital tools have had, are having, and will have. Other disciplines are
discussing the need to adjust teaching to this paradigm shift, while the majority of
architecture and landscape architecture professionals remain engaged with the past.

While there will always be students who can draw well, and those who excel or not at using
digital media, educators and practitioners need to become more adept at looking past the
gloss and not being seduced by the visualization alone, regardless of media. A paradigm shift
from analogue to digital creative expression is underway. This change should be recognized
in the new types of students entering the classroom, and their new knowledge and
different abilities should be appreciated and encouraged.9

Clinging to an outdated mode of self-referential practice ignores both the changes in society
in general and the collaborative, information-sharing propensity of digital natives,
who are the next generation of designers. Isolation from the social and technological context
in which environmental designers operate provides freedom for some, in the form of less
accountability. However, not engaging with the socio-technical context of our time is a
mistake, and stands to relegate designers in general, and architects specifically, to a very
small role in society. This is a far cry from the historic power attributed to the architect via the
special talents of drawing.

7. Notes

1. Paul Laseau, Graphic Thinking for Architects and Designers (New York: Van Nostrand
Reinhold, 1980), 17.
2. Omar Faruque, Graphic Communication as a Design Tool (New York: Van Nostrand
Reinhold Co., 1984).
3. Vinod Goel, Sketches of Thought (Cambridge, Mass.: MIT Press, 1995).
4. Chip Sullivan, "Observation and the Analytical Representation of Space," in Representing
Landscape Architecture, ed. Marc Treib (New York: Taylor and Francis, 2008), 63.
5. Edward Robbins, Why Architects Draw (Cambridge: MIT Press, 1994).
6. Victoria Kaplan, Structural Inequality: Black Architects in the United States (Lanham:
Rowman & Littlefield, 2006), 19.
7. Robbins, Why Architects Draw, 29-31.
8. Todd Oppenheimer, "The Computer Delusion," The Atlantic Monthly 280, no. 1 (1997).
9. Marc Prensky, "Digital Natives, Digital Immigrants Part 1," On the Horizon 9, no. 5 (2001).
10. Marc Prensky, "Digital Natives, Digital Immigrants Part 2: Do They Really Think
Differently?," On the Horizon 9, no. 6 (2001): 5-6.
11. Timothy VanSlyke, "Digital Natives, Digital Immigrants: Some Thoughts from the
Generation Gap," The Technology Source (2003).
12. Sue Bennett, Karl Maton, and Lisa Kervin, "The Digital Natives Debate: A Critical Review
of the Evidence," British Journal of Educational Technology 39, no. 5 (2008).
13. Tom Turner and David Watson, "Dead Masterplans & Digital Creativity" (paper presented
at the Greenwich 2000 Digital Creativity Symposium, Greenwich, UK, 2000).
14. Robbins, Why Architects Draw, 31.
15. J. Brown, "Is the RIAA Running Scared?," Salon.com 26 (2001).
16. Robert Gutman, "Emerging Problems of Practice," Journal of Architectural Education 45,
no. 4 (1992): 198.
17. Larry Hirschhorn, "Developing and Evaluating Talent in Architecture Firms," Journal of
Architectural Education 45, no. 4 (1992): 228.
18. Reed R. Stevens, "Divisions of Labor in School and in the Workplace: Comparing
Computer and Paper-Supporting Activities across Settings," Journal of the Learning Sciences
9, no. 4 (2000).
19. Malcolm Welch, David Barlex, and Hee Lim, "Sketching: Friend or Foe to the Novice
Designer?," International Journal of Technology and Design Education 10, no. 3 (2000).
20. P. H. Won, "The Comparison between Visual Thinking Using Computer and Conventional
Media in the Concept Generation Stages of Design," Automation in Construction 10, no. 3
(2001): 324.
21. Chor-Kheng Lim, "Is a Pen-Based System Just Another Pen or More Than a Pen?" (paper
presented at Education in Computer Aided Architectural Design in Europe (eCAADe 2003),
Graz, Austria, 2003).

8. Bibliography

Bennett, Sue, Karl Maton, and Lisa Kervin. "The Digital Natives Debate: A Critical Review of
the Evidence." British Journal of Educational Technology 39, no. 5, 2008: 775-86.
Brown, J. "Is the RIAA Running Scared?" Salon.com 26, 2001.
Faruque, Omar. Graphic Communication as a Design Tool. New York: Van Nostrand Reinhold
Co., 1984.
Goel, Vinod. Sketches of Thought. Cambridge, Mass.: MIT Press, 1995.
Gutman, Robert. "Emerging Problems of Practice." Journal of Architectural Education 45, no.
4, 1992: 198-202.
Hirschhorn, Larry. "Developing and Evaluating Talent in Architecture Firms." Journal of
Architectural Education 45, no. 4, 1992: 225-29.
Kaplan, Victoria. Structural Inequality: Black Architects in the United States. Lanham:
Rowman & Littlefield, 2006.
Laseau, Paul. Graphic Thinking for Architects and Designers. New York: Van Nostrand
Reinhold, 1980.
Lim, Chor-Kheng. "Is a Pen-Based System Just Another Pen or More Than a Pen?" Paper
presented at Education in Computer Aided Architectural Design in Europe (eCAADe
2003), Graz, Austria, 2003.
Oppenheimer, Todd. "The Computer Delusion." The Atlantic Monthly 280, no. 1, 1997: 45-.
Prensky, Marc. "Digital Natives, Digital Immigrants Part 1." On the Horizon 9, no. 5, 2001.
Prensky, Marc. "Digital Natives, Digital Immigrants Part 2: Do They Really Think Differently?"
On the Horizon 9, no. 6, 2001: 1-6.
Robbins, Edward. Why Architects Draw. Cambridge: MIT Press, 1994.
Stevens, Reed R. "Divisions of Labor in School and in the Workplace: Comparing Computer
and Paper-Supporting Activities across Settings." Journal of the Learning Sciences 9,
no. 4, 2000: 373-401.
Sullivan, Chip. "Observation and the Analytical Representation of Space." In Representing
Landscape Architecture, edited by Marc Treib, 62-73. New York: Taylor and Francis,
2008.
Turner, Tom, and David Watson. "Dead Masterplans & Digital Creativity." Paper presented
at the Greenwich 2000 Digital Creativity Symposium, Greenwich, UK, 2000.
VanSlyke, Timothy. "Digital Natives, Digital Immigrants: Some Thoughts from the
Generation Gap." The Technology Source, 2003.
Welch, Malcolm, David Barlex, and Hee Lim. "Sketching: Friend or Foe to the Novice
Designer?" International Journal of Technology and Design Education 10, no. 3,
2000: 125-48.
Won, P. H. "The Comparison between Visual Thinking Using Computer and Conventional
Media in the Concept Generation Stages of Design." Automation in Construction 10,
no. 3, 2001: 319-25.

From Technologies of Representation to Technologies of Performance

Sha Xin Wei

Topological Media Lab, Concordia University, Canada

1. The problem of representation, and the performative turn.

Typically in our practices as designers, architects, scientists, or artists, we build and respect walls between designer
and maker, maker and user, analyst and analysand, and spectator and actor. Guy Debord argued in The Society of
the Spectacle that it was the separation of life from its products that led to much of the banalization and
anaestheticization of contemporary social experience. This criticism echoed many critiques of life in industrial, and
now technologically mediated, society. A more contemporary, post-Marxian critic could, for example, argue the
same for the separation of life from its affects.

In this very large domain, one of the most fundamental elements of this critique was the problematization of systems
of representation. To pick just one prominent example, Ludwig Wittgenstein famously demolished the presumed
universally stable, intrinsically and abstractly analyzable meaning of words and sentences. In the Philosophical
Investigations, capping an extended series of paragraphs in which Wittgenstein demonstrates the implausibility of the
thesis that the meaning of language can be captured in a system of rules or can be determined by a rule-based
procedure, he writes: "To understand a sentence means to understand a language. To understand a language means to
be master of a technique."i The first statement says that meaning is not entirely localizable. And the second, even
more significantly, says that meaning comes from use and practice.

Despite the ubiquity of the paradigm of the Graphical User Interface, "going graphical" does not yield intrinsic,
context-free meaning. Wittgenstein extends his analysis to the interpretation of graphic pictures as well. In §454, he
writes:

How does it come about that this arrow >>>-----> points? Doesn't it seem to carry in it something besides
itself? "No, not the dead line on paper; only the psychical thing, the meaning, can do that." -- That is
both true and false. The arrow points only in the application that a living being makes of it.ii

Just as in the case of verbal speech, the meaning of a graphic sign lies in its practiced and collective use.

Of course, this is only one, albeit sharp, example of the modern problematization of representations in media ranging
from writing to photographs, film, and digital video. The larger problem naturally leads us to question whether any
technology for the creation and diffusion of representations could possibly be adequate to our social or aesthetic
experience.

Many of our computational tools for design are used explicitly as technologies of representation. Can computational
instruments do something other than separate designers from the products and the experiences of their designs?
As the architectural experimental group Adaptive Actions put it in their documentation:

Architects often prefer photographing/showing buildings at the height of their glory: when the presence
of time is imperceptible and user-trace absent. Some architectural agencies even control representation,
allowing circulation and posting of approved images only. 'Now' is the modus operandi:
priority goes to the image of the building in the present, and very little concern to its progression, to
the future. Much emphasis is given to what must be photographed, honoured, recorded and published
in magazines rather than to users' adaptation of space and appropriation in various forms.iii

Of course this is not an isolated or new sentiment in the critical history of the relationship between architecture and
technology. But what about the even more fundamental gap between our bodies and our cognitive models?iv The
gap I refer to is not so much the old body-mind dualism, but the inadequacy of any theoretical schema to pragma, to
which Wittgenstein pointed. The turn to the body implies a deeper turn to action, and to performance: those
complexes of related, irrepeatable, and intentional actions that create an event. Less anthropocentrically, we can
regard performance as the primordial flow of matter holding in suspension the problematic distinction between the
living and the inert. This materialist ontology has a long history that ranges from Heraclitus to Maturana and Varela,
and Karen Barad. Barad succinctly describes the modern critique of representationalism and the move toward
performance in a chapter on material-discursive practices in her book Meeting the Universe Halfway. Barad writes:

Is it not, after all, the commonsense view of representationalism -- the belief that representations
serve a mediating function between knower and known -- that displaces a deep mistrust of matter,
holding it off at a distance, figuring it as passive, immutable, and mute, in need of the mark of an
external force like culture or history to complete it?v

Barad continues: "A performative understanding of discursive practices challenges the representationalist belief in
the power of words to represent preexisting things."vi Barad refers to words, but my point, along with Wittgenstein's
and Barad's own, pertains to all representations. "The move toward performative alternatives to representationalism
shifts the focus from questions of correspondence between descriptions and reality (e.g., do they mirror nature or
culture?) to matters of practices, doings, and actions."vii

2. The performative turn and technologies of performance

Consider a software application like Photoshop. Here the creator makes an image that another person sees as a
finished work whose form and content cannot be altered by that spectator's activity. The same holds for Adobe After
Effects, AutoCAD, and Rhino. With Adobe After Effects, the creator makes transformations on the video material,
but each time the final video is viewed, the same transformations occur at the same places in the video. Of
course there is the pedantic case of the creator as spectator. But my point concerns whether the intended audience
regards the created entity as it is shaped by the creator, or whether the audience can re-shape the entity by interacting
with it. Whether the entity is itself made of sound or video or plastic, the result is still a representation of
some other thing.

However, there is a whole other category of technologies oriented to live performance and realtime interaction
between the media entity and the spectator. With realtime video processing software like NATO, VVVV, PD, or,
more professionally, Max/MSP/Jitter, a composer creates not a linear sequence of images and sounds, nor even a set
of discrete pieces of media that can be permuted and selected according to some user input, but a set of conditions
for the spatially and temporally continuous modulation of streams of video and sound, simultaneous with the
gestures of the person(s) engaged with the responsive media.viii The analogy is a musical instrument like a violin, or
the bodily apparatus of a singer.
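The contrast can be caricatured in a few lines of code. This is an illustrative sketch only, with invented function names; it is not the API of After Effects or Max/MSP/Jitter. A fixed pipeline applies the same transformation on every playback, while a responsive pipeline recomputes each frame from the concurrent gesture signal, so no two traversals need be identical.

```python
# Illustrative contrast (invented function names; not the API of After
# Effects or Max/MSP/Jitter): a fixed pipeline applies the same
# transformation at the same place on every playback, while a responsive
# pipeline modulates each frame by the concurrent gesture signal.

def fixed_pipeline(frames):
    """Same transform on every traversal, like a rendered video."""
    return [f * 0.5 for f in frames]

def responsive_pipeline(frames, gesture_stream):
    """Each frame scaled by the live gesture, like a bow on a string."""
    return [f * g for f, g in zip(frames, gesture_stream)]

frames = [1.0, 1.0, 1.0, 1.0]
print(fixed_pipeline(frames))                             # identical on every run
print(responsive_pipeline(frames, [0.1, 0.5, 0.9, 0.4]))  # shaped by this gesture
```

The responsive case is closer to the violin analogy: the output depends on the live bowing, not on a stored score of transformations.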

A technology of performance does not have to be immaterial or concerned only with the synthesis and modulation
of time-based media. In fact, to identify the technology of the ephemeral performance as "ephemeral media
processes" would be to commit a grammatical error of a Wittgensteinian sort. Wittgenstein famously said: "Don't
regard a hesitant assertion as an assertion of hesitancy." Similarly, we should not confuse a technology that
represents performances with a technology that mediates performance.

Now, having sketched the distinction between computational technologies as technologies of representation and as
technologies of performance, let us pause and ask why we should make this move. I suggest that this is one way
to claim, or reclaim, something of an ethico-aesthetic relation to our work. Do designers inhabit their own
products? Should they? The economics of the design process as it stands, based on a sequence of transformations of
representations powered at each stage by commitments of ever larger amounts of capital, makes it difficult to make a
living sketch of an environment that could be inhabited by its putative inhabitant-creators. Nonetheless we can ask:
if we had to inhabit the environments that we design, would we design them quite differently?


Returning to questions of technique, what would it mean to make such a sketch? Perhaps it would help to make
explicit another inspiration for the sort of strategy I am suggesting with responsive environments: regarding theater
as a mode of experiential research. In his landmark book, Towards a Poor Theatre, Jerzy Grotowski writes:

The Rich Theatre depends on artistic kleptomania, drawing from other disciplines, constructing hybrid
spectacles, conglomerates without backbone or integrity, yet presented as an organic artwork. By
multiplying assimilated elements, the Rich Theatre tries to escape the impasse presented by movies
and television. Since film and TV excel in the area of mechanical functions (montage, instantaneous
change of place, etc.), the Rich Theatre countered with a blatantly compensatory call for "total
theatre." The integration of borrowed mechanisms (movie screens onstage, for example) means a
sophisticated technical plant, permitting great mobility and dynamism. And if the stage and/or
auditorium were mobile, constantly changing perspective would be possible. This is all nonsense. No
matter how much theatre expands and exploits its mechanical resources, it will remain technologically
inferior to film and television. Consequently, I propose poverty in theatre. We have resigned from the
stage and auditorium plant: for each production, a new space is designed for the actors and spectators.

Thus, infinite variation of performer-audience relationships is possible. The actors can play among the
spectators, directly contacting the audience and giving it a passive role in the drama .... Or the actors
may build structures among the spectators and thus include them in the architecture of action,
subjecting them to a sense of the pressure and congestion and limitation of space....ix

Our situation is not Grotowski's, but there are lessons to be drawn. By training, Grotowski's actors economically
achieve their effects with greater symbolic and physical intensity than can be achieved by representational media
with a hundred times the technical investment. However, pushing the argument of the previous paragraph to
its limit, the designers of an experimental architectural event may be both actors and spectators in that event, so
some actors in the event will be non-expert, or non-rehearsed. Therefore we do not have his pure condition. I
suggest that we take Grotowski's approach more symmetrically (in Barad's sense) between the set's materials and the
human inhabitants of an event.

One of the Poor Theatre's tactics is for the actors to build a set's structures in the course of an event, rather than
introduce elaborate, pre-constructed sets. And to leverage and engage the imagination of the inhabitant, every prop
must be as flexibly re-signifiable as possible. For example, a broomstick can become a horse, a dance partner, a
crucifix. This argues for media that can be fluidly re-shaped by the inhabitant rather than prepared with an elaborate,
pre-composed syntax and semantics. With responsive media techniques, projected image becomes chiaroscuro
illumination; a table becomes a drum. (See Figure 2.)

What of the actors? In an environment built for everyday life, we may not always have actors expertly rehearsed
in performing an event. But the expertise that inhabitants can draw on is the deep and non-articulated sediment of
all the corporeal intuitions built over a lifetime from birth, all the intuitions that every body brings into an event.
And just as a physical set could be re-configured in the course of an event, so could the performative structure, the
roles and relationships between the inhabitants, be re-configured as well. Some inhabitants may be rehearsed and
others not. Moreover, what we construe as an event in everyday situations may not be restricted to one marked
period of time, and certainly may not assume pre-constructed conversational templates. Instead, one could study how
people co-habit an event, how we entrain or engage one another in common fields of matter and media. This
provides a profound motivation for working with non-figurative, responsive media that bear some of the manipulable,
palpable qualities of physical matter. To support that, we would need a palette of computational media that permits
manipulation as freely as ink or sand, yet affords practice with more refined effect.x We will return to this in the
next section.

3. Responsive environments as a technology of performance

Of course, the temporal and energetic scales of media technologies typically do not reach the much greater scales
implied in architecture. Hence the interest in so-called virtual reality visualization systems in some technology-driven
approaches to architectural research. But as we have seen, there is a technological hubris in attempting to
entirely replace the perceptual field with a synthetic one.


I propose that designers of built space using responsive media (rather than interactive media) could take advantage
of certain powerful affordances of emerging technologies of performance for the creation of experimental events in
real-time, responsive environments.

Many different environments have been proposed and built over the past 50 years. For example, Gordon Pask and
Robin McKinnon-Wood's MusiColour system (1953) created light-fields and sound-fields in a physical space that
reacted to the inhabitants' physical activity according to certain response logics.xi The key distinction
here is not so much the sorts of technologies employed but the intent with which they are combined and used, and
by whom. Against the homogeneously electronic cybernetic system, I ask whether we can institute a practice of
experimental sketching of first-person experience in built environments that capitalizes on responsive media
techniques as well as all the technologies of performance, whether computational or not. It seems that, given the
conditioning effects of any technology, a designer would do well to use poetic economy and power with
computational means as well.

The key differences between such environments and a virtual reality system are that there is not necessarily any
attempt to represent some structure that is "elsewhere"; nor is there the intention to wholly replace natural sensory
experience with the synthetic. Instead, we start with the full, thick, embodied experience of everyday materials
and props, and augment it with the judicious insertion of responsive media that sustain the experiential conditions
required for an embodied engagement with an experimental configuration of architectural elements.xii

I close with some more contemporary examples of responsive environments ranging from the TGarden play space,
to the Ouija movement experiments on intentional and collective gesture, and on-going research with the Ozone
media choreography system.xiii

In 1997, I proposed a responsive environment called TGarden (Figure 1) with the following characteristics:
- Inhabitants engage each other not via explicit verbal communication, but via material fields (air, fabric, etc.) and
temporal textures of structured light and sound;
- The media fields are dense enough to put in play where a body ends and the world begins;
- The states of the event evolve and superpose continuously according to quasi-physical dynamics, a challenging
alternative to the discrete finite-state automata model of digital computation.

Together with the art groups Sponge and FoAM, prototypes (TG2001) and related environments (txOom) were
exhibited in 2001-2002 in Linz, Rotterdam, Athens, and Torino.
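The third characteristic can be hinted at with a toy model. This sketch is hypothetical: the names and dynamics are invented for illustration and are not drawn from the actual TGarden engine. Where a finite-state automaton occupies exactly one state and jumps discretely, quasi-physical dynamics let state weights evolve continuously toward an activity-driven target, so states superpose and blend.

```python
# Toy contrast (hypothetical, not the TGarden implementation): a discrete
# finite-state automaton versus "quasi-physical" state dynamics in which
# state weights relax continuously toward an activity-driven target, so
# several states remain co-present in superposition.

def fsm_step(state, event):
    # discrete automaton: exactly one state at any moment
    transitions = {("calm", "activity"): "excited", ("excited", "stillness"): "calm"}
    return transitions.get((state, event), state)

def quasi_physical_step(weights, drive, rate=0.3):
    # continuous evolution: each state's weight moves a fraction of the
    # way toward the drive vector on every tick, like damped relaxation
    return {s: w + rate * (drive[s] - w) for s, w in weights.items()}

state = fsm_step("calm", "activity")          # jumps wholesale to "excited"
weights = {"calm": 1.0, "excited": 0.0}
weights = quasi_physical_step(weights, {"calm": 0.0, "excited": 1.0})
# after one tick both states are co-present rather than switched
print(state, weights)
```

In the continuous case the environment holds a weighted blend of "calm" and "excited" rather than switching wholesale, which is what lets media states cross-fade with the inhabitants' activity.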

Figure 1. TGarden, Ars Electronica and Dutch Electronic Art Festival 2001.

In 2001, I established the Topological Media Lab (TML) to develop certain strands of this work much more
systematically. Concisely put, the TML is an atelier-lab experimentally studying distributed modes of gesture,
agency, and materiality from phenomenological as well as material performative perspectives.

The TML's methods include: wireless sensor networks to incorporate any sort of physical signal, using pattern
tracking and sensor feature extraction techniques; calligraphic video -- computationally synthesized video as
structured light, temporal textures via quasi-physical simulations; gestural sound -- realtime sound re-synthesis; and
wearables and active textiles, together with the traditional crafts of theater: lighting, scenography, movement,
costume, and elements of musical or sonic art. All the media used or created in the atelier are designed to respond
in realtime to gesture and activity. And the approach strategically sidesteps cognitivist approaches to "user
interaction" or "user experience."

Over the last five years, the TML has honed these experimental techniques by working with choreographers,
realtime video performers, sound artists, and musicians. Rather than just building stand-alone artworks, however,
the TML has built experimental apparatuses for studying gesture, agency, and materiality.

One of the most sustained of these experiments was the two-month-long Ouija residency (Figure 2) in the
Hexagram-Concordia research blackbox -- a theatrically equipped working space of 15m x 15m x 7m -- designed to
explore the following questions, refined out of our experiences building responsive environments:
- When is a gesture intentional, and when is it accidental? How would a human or machine system distinguish
between such grades of intentionality?
- When does a set of actions constitute a collective gesture, rather than a set of autonomous gestures? How would a
human or machine system distinguish collective versus solo gesture?

The ancillary question alongside these principal phenomenological questions was: what difference does responsive
media make in these situations?
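As a purely hypothetical illustration of the second question (this is not the TML's analysis method, and all names are invented), a machine system might begin to separate collective from solo gesture by correlating the motion-energy time series of the participants: entrained, collective movement yields highly correlated streams, while autonomous gestures do not.

```python
# Hypothetical sketch: one naive way a machine system might distinguish
# "collective" from "solo" gesture is to correlate each participant's
# motion-energy time series; highly correlated streams suggest entrained,
# collective movement.

def mean_pairwise_correlation(streams):
    """Average Pearson correlation over all pairs of motion signals."""
    import statistics

    def pearson(a, b):
        ma, mb = statistics.fmean(a), statistics.fmean(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        var_a = sum((x - ma) ** 2 for x in a)
        var_b = sum((y - mb) ** 2 for y in b)
        return cov / (var_a * var_b) ** 0.5

    pairs = [(i, j) for i in range(len(streams)) for j in range(i + 1, len(streams))]
    return sum(pearson(streams[i], streams[j]) for i, j in pairs) / len(pairs)

entrained = [[0, 1, 2, 1, 0], [0, 1, 2, 1, 0], [0, 2, 4, 2, 0]]   # moving together
autonomous = [[0, 1, 2, 1, 0], [2, 0, 1, 3, 1], [1, 3, 0, 0, 2]]  # independent
print(mean_pairwise_correlation(entrained) > mean_pairwise_correlation(autonomous))
```

Such a threshold on correlation is, of course, far too crude to capture intentionality; it only marks where a quantitative account of entrainment could begin.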

Figure 2. Ouija experiment: shaping live-processed video as structured light.

An experienced choreographer and researcher devised a series of movement experiments for two to six dancers, plus
zero to six non-prepared participants. These experiments were described in terms familiar to the dancers as
structured improvisation. For the experimental apparatus, the TML created responsive fields with projected video,
sound, and lighting, plus a few pieces of sensor-augmented furniture and props. We are analyzing the results of the
four weeks of recorded activity from different perspectives, and will report on that in other venues.

Based on eight years of experience with technologies of performance, over the past two years the TML has begun to
host architectural research. This research ranges from a year-long studio (with an architectural theorist/practitioner
and three graduate architecture students) focusing on a contested part of Montréal called Griffintown, through
several waves of immigration and industrialization over three epochs of energy economies since the early 1830s, to
temporary actions and urban installation-interventions by affiliates of the atelier.


Over the coming years, the atelier-lab is oriented to designing work in the built environment both as sited
experiments in gesture and temporal texture and as more durable conditioning of events in public space. We expect
that in order to pursue such work with any degree of phenomenological rigor and poetry, we will need to reconstruct
technologies around human, infrastructural, and environmental activities that performatively create events in the
built environment. Taking this thesis seriously suggests that we will need to construe technologies of performance
more ambitiously than has been the case in media, and in design, especially if we pursue the implications of blending
designers' and inhabitants' agencies in a responsive built environment. Not least is the fact that every opening created
by interactive, and now responsive, technologies of performance also presents another opportunity for legal and
socio-economic discipline, expressed as policies implemented in the material as well as computational infrastructure
of our built environment.

4. Acknowledgements

I thank the 60 artists, scholars, researchers and students who have been affiliated with the Topological Media Lab's
experiments, particularly the present affiliates of the atelier-laboratory: Michael Montanaro; the Ozone research
group: Harry Smoak, Tim Sutton, Morgan Sutherland, Michael Fortin, Jean-Sebastien Rousseau; plus the Dedale
architecture studio: Patrick Harrop, with Gregory Rubin, Candace Fempel, Evan Marnoch.


Barad, Karen Michelle. Meeting the Universe Halfway : Quantum Physics and the Entanglement of Matter and
Meaning. Durham: Duke University Press, 2007.
Debord, Guy. The Society of the Spectacle. New York: Zone Books, 1994.
Grotowski, Jerzy, and Eugenio Barba. Towards a Poor Theatre. 1st Routledge ed. New York: Routledge, 2002.
Haque, Usman. "The Architectural Relevance of Gordon Pask." In 4dsocial: Interactive Design Environments, Lucy
Bullivant (ed.), 54-61. London: John Wiley and Sons Ltd., 2007.
Sha, Xin Wei. "The TGarden Performance Research Project." Modern Drama: world drama from 1850 to the present
48.3, Fall, Special Issue: Technology (2005): 585-608.
Sha, Xin Wei. "Poetics of Performative Space." AI & Society 21, Birthday Issue: From Judgement to Calculation. 4
(2007): 607-24.
Simondon, Gilbert. Du Mode D'existence Des Objets Techniques. Paris: Aubier, 2001.
Sponge, TGarden: http://topologicalmedialab.net/xinwei/sponge.org/projects/m3_tg_intro.html
Topological Media Lab, Ouija: http://www.topologicalmedialab.net/joomla/main/content/view/155/11/lang,en/ .
-----, Media Choreography: http://www.topologicalmedialab.net/joomla/main/content/blogcategory/5/21/lang,en/ .
Wittgenstein, Ludwig, and G. E. M. Anscombe. Philosophical Investigations: The German Text, with a Revised
English Translation. 3rd ed. Malden, MA: Blackwell Pub., 2003.


See Wittgenstein, L., and G. E. M. Anscombe. Philosophical Investigations, 3rd ed. Malden, MA: Blackwell Pub.,
2003, §199.
Ibid., §454.
See Prost, J-F, Adaptive Actions, forthcoming, AI & Society special issue on Soft Architecture, Sha, X.W. (ed.)
Berlin: Springer-Verlag.
It is telling that in a recent talk about embodied cognition and the activity of throwing a ball, the speaker
showed fMRI images of the brain's neural activity while the subject threw a ball, but no images of the whole body
playing in its environment.
See Barad, K., Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning,
Durham: Duke University Press, 2007, p. 133.
Ibid., p. 135.
I introduce the term "responsive media" to distinguish this sort of computational media from so-called interactive
software, which is built around the Shannon-theoretic model of communication, in which a sender encodes a
message into a packet that is transmitted over a channel to a receiver, who in turn decodes it. Interactive software
implicitly assumes a turn-taking model of conversation in which only one party speaks at a time, when in fact most
experienced events are densely full of concurrent processes.
See Grotowski, J, and E. Barba, Towards a Poor Theatre, 1st Routledge ed. New York: Routledge, 2002, p. 20.
In a word, virtuosity.
See Haque, U., "The Architectural Relevance of Gordon Pask," in 4dsocial: Interactive Design Environments, Lucy Bullivant (ed.), 54-61, London: John Wiley and Sons Ltd., 2007, p. 56.
By element, I am deliberately drawing on Gilbert Simondon's account of the evolution of technical objects. See Part 1 of Simondon, G., Du Mode D'existence Des Objets Techniques, Paris: Aubier, 2001.
See the Showcase and Research links from http://topologicalmedialab.net, and search for these projects by name:
TGarden, Ouija, Ozone.


Material Imagination: In the Shadow of Oppenheimer 1

David Gersten

I would like to begin by sharing with you a story

A number of years ago I was speaking with an architect who was very excited about his new project in Europe. He was telling me how he was going to do this and he was going to do that, and the mayor said this and the paper said that, and I finally asked, "Oh, what is the program?" He paused and said, "Form Z."
I hesitated, and chose to keep to myself what I had meant by the program. This, of course, could be understood as a simple misunderstanding, or an architectural Freudian slip, and left at that. But from the moment of that exchange, I sensed it was more: that it was an expression of a deeper shift, of a new dimension in our discipline and perhaps in the world. In the years since, I have been haunted by my hesitation; I have wrestled with it, and meditated on it. Today, I would like to share with you a few pieces of what I have found. A good place to begin is the birth of modern incorporation.
Modern incorporations were born out of ships and navigation. Individual ship owners grouped together and incorporated the ownership of their ships, allowing the predictable burden of loss arising from the uncertainty of long voyages to be distributed through the corporate structure. When an individual lost his ship at sea, he was sunk in business; if twenty individuals agreed to bind together and share the ownership of their twenty ships, then when one ship was lost, the incorporation remained buoyant: it still owned nineteen ships and could redistribute the load of this new value among the shared holders (shareholders) of the incorporation.
These early incorporations were promises about time and value. They worked because they were no one individual; they were structures that served as a second self, absorbing the load of losses by distributing them throughout the structure. This displacement of individual risk required a structure of collective judgment: the shared holders were assigned voting rights in the guidance and direction of their incorporations, and these votes were, and still are, linked to the number of shares owned. In linking votes to share distribution, these early incorporations directly linked collective judgment to capital. The ships were no longer navigated from on board the individual ship, but from sitting on the board of the incorporation.
This structure of risk distribution reflects the structure of the ships themselves, which could be understood as physical manifestations of the second selves of incorporation, each a set of promises made in advance from the shore, one in materials, one in words, two expressions of the same will to navigate uncertainty and conquer time. This double structure, half material (displacing water) and half temporal (displacing the individual ship owner), creates two forms of buoyancy transporting value across time.

Large parts of this essay were originally developed as a lecture entitled "Globe Double: Mimetic Capital; Technology," presented in 2007 at the Ineffable conference organized by Bradley Horn and held at the City College of New York (CCNY) School of Architecture, Urban Design and Landscape Architecture. I am grateful for the challenge this conference provided in working out these questions.

The ships and the incorporations mirror and manifest the principles of risk management
and distribution; they are the same project, the same desire to control the unpredictable
and conquer time. This early mirroring of physical and mental structures speaks to the
larger parallel growth of capital and technology.
Like gravity, capital is difficult to perceive: we can feel its pull and see its impact, but not quite recognize its location or physical existence. Perhaps capital moves just outside of our perceptible spectrum, and technology can be understood as visible evidence of capital, offering a glimpse of its force within, or partially within, our perception. The basic algebra of capital is grounded in the mediation of time: in the ability of equity or currency to defer consumption, and the ability of debt or credit to accelerate consumption, to allow us to consume that which we don't have.
These two concepts, deferred consumption and debt or credit, act as valves compressing or extending time; they have a direct impact on the ability of capital to affect our perception of time. These tools do not remain external; they operate on an instinctual level, allowing us to feel at ease and providing a sense of temporal shelter, a sense of well-being. Modern banking has developed a complexity and intensity that has arguably positioned capital as the primary ground for our cognition of time. While the seeds of our current knots of capital and technology lie in the early mirroring of ships and their incorporation, it is clear that today we are within a new dimension.
In the modern era the capital markets constitute a vast calculus of time promises that have become our primary mode of resource distribution and our dominant form of collective judgment. Over the most recent thirty years we have seen a dramatic transformation in the calculus of global capital. Today, the majority of financial instruments are exchanged and traded within vast frameworks of information technologies. A relatively new practice called algorithmic trading employs massive computational technologies emulating neural networks to monitor the markets and carry out billions of trades a day with no human knowledge of the individual positions being bought or sold. The frequency of the positions held is often in milliseconds and would not be conceivable as a human activity. The equity markets are now overwhelmingly traded algorithmically; perhaps as much as 90% is done that way. These developments have opened up the world of trading to a wide range of ideas borrowed from mathematics, computer science, genetics, physics, logic, etc. These ideas make the old "fundamental" and "technical" trading paradigms appear truly limited in their scope and applicability. Even the thought process has experienced an evolutionary leap; certain traders now talk about ideas such as "trade mutation," "echo minimization," and "dark pools."2
Some would argue that the computational networks deployed in algorithmic trading do not emulate neural networks but that they ARE neural networks, as in many cases they employ genetic algorithms which learn to adjust their strategy independently, in real time. Algorithmic trading is one aspect of a new universe of computation-based global capital markets where all manner of new tools allow time to be cut, packed, broken down, and reordered with the same speed and precision as particle physics.

This discussion of the capital markets and algorithmic trading in particular has benefited greatly from extensive
conversations with Farid Moslehi. I am truly thankful for his enormous generosity and patience.

This is from David Harvey in A Brief History of Neoliberalism:

"In so far as neoliberalism values market exchange as an ethic in itself, capable of acting as a guide to all human action, and substituting for all previously held ethical beliefs, it emphasizes the significance of contractual relations in the marketplace. It holds that the social good will be maximized by maximizing the reach and frequency of market transactions, and it seeks to bring all human action into the domain of the market. This requires technologies of information creation and capacities to accumulate, store, transfer, analyse, and use massive databases to guide decisions in the global marketplace. Hence neoliberalism's intense interest in and pursuit of information technologies (leading some to proclaim the emergence of a new kind of 'information society')."3

The new ethic of global capital, which seeks to incorporate all human action into the market exchange, has found its ships in information technologies. The collective judgment born in the binding together of risk and reward within the early shipping routes is now largely held within the neural networks of computational exchanges. Value determination, or "price discovery," is to a large extent no longer based in the fundamental value of any individual enterprise but rather in a computational cloud mirror, a kind of ship made of water, where algorithms read and write each other in the constant navigation of a computational global exchange. For better or worse, the primary determinant of global resource distribution lives within the computational organisms of the global markets. Like Borges' Aleph, this cacophony of inputs/outputs may constitute a new non-human metabolism of collective judgment, one deeply implicated in the double mirror of capital and technology.
Unlike the early double structure of ships and their incorporation, this double structure can be traced to a single defining event, perhaps the most seminal event of the 20th century: the Manhattan Project, the building of the first atomic bomb. Linking capital, politics, technology, life, and death, the Manhattan Project simultaneously produced nuclear weapons and information technologies. The mathematical demands of splitting the atom led to the birth of digital computation. The complexity of atomic catastrophe required numbers that could make numbers.

Perhaps the first verifiable act of alchemy, Robert Oppenheimer's Faustian bargain of number, substance, and energy launched a radically new Janus face of capital and technology. I have often thought of the odd symmetry of the sun (in the sky) being an enormous life-giving fusion event, and fragments of the sun on earth (nuclear weapons) being the direct inverse (enormous life-taking fusion events). This vast arc contains a difficult paradox toward imagining the globe's resources. To estimate the total wealth of the planet, economists must register the planet in its entirety as a closed system, with no addition or removal of any resources into or out of the system, with one detectable exception: the energy of the sun. This gigantic nuclear fusion event is ultimately the only external input that continually adds to the resources of the planet. If the sun is thought of
David Harvey, A Brief History of Neoliberalism (New York: Oxford University Press, 2005), p. 3.

as a source, at the other end of the economy is debt. The cost of debt is not a source but a sink. I find it remarkable that the sum total of the world's debt is secured in a direct line by nuclear fusion devices. World trade, like many forms of consensus, contains agreements that are enforceable; this means that they are maintained by force. In a chilling symmetry to the gift of the sun, these technological fragments of the sun on earth are the collateral, at bottom, securing every loan. The threat of total consumption is the anchor of the entire economy. Globally, the link between nuclear weapons and debt has had important spatial and political consequences; the weapon that ultimately brought about the collapse of the Berlin Wall was debt, a debt, of course, created by the nuclear arms race. The Manhattan Project originated both the collateral at bottom, binding together the globe's debt, and the computational ships, transporting value from computer port to computer port at the speed of light. The Manhattan Project eclipsed all previous epochs. In principle, we could say that we live in the shadow of Oppenheimer.

Comprehending this new double-headed mirror is assisted by looking at the most elemental definition of mimesis or the mimetic, which, to paraphrase Dalibor Vesely, is "the re-enactment of movement as a significant gesture."4 Looking at the current horizons of technology, we can identify a number of significant gestures that may be understood as re-enactments of the movement of capital. From the early mirroring of ships and incorporations, to the inverse technological mirror of the sun on earth found in the collateral of nuclear weapons, to the latest non-human, genetic time trading grounded in information technologies, we see the re-enactment of the movements of capital as a significant technological gesture. It is quite possible that the construction of a technological globe that we are witnessing, that globalization itself, is not solely the result of a reciprocity or mimetic exchange between us (humans) and the world, but rather between incorporations and capital. It is possible that the model which positions humans perceiving the world and engaging in a mimetic exchange has to a large extent been displaced by a model which positions the second selves of incorporations viewing not the world, per se, but capital, with technology as the intermediary plane of exchange. Perhaps technology is the other half of capital, completing it, allowing it to manifest its movements into the world. While the original will to overcome the unexpected by constructing entities that collapse time was certainly human, perhaps the double trajectories of incorporations and capital have become two picture planes that are facing each other, a double mirror to infinity, producing a flickering constellation of station points, focal points, foregrounds and backgrounds, a cacophony of second selves, producing a commingled geography of algorithms and metabolisms that is in fact a technological globe double.

Dalibor Vesely, Architecture in the Age of Divided Representation (Cambridge, Massachusetts: MIT Press), p. 287: "In principle, it is possible to say that mimesis is a creative imitation in which something with the potential to exist is recognized and reenacted as a significant gesture; it may be sound, as song or music; visible reality, as image or picture; or ideas, as an articulated and structured experience."

Recognizing global technology as a mimetic exchange between incorporations and capital, we begin to understand the profound ruptures between the globe and its double, bringing us to ask: Does the world fit? Does the full spectrum of the world fit within the second, technologically constructed globe? Do we fit? Does the full spectrum of our humanity fit within the prism of capital?

The capacities of capital and technology as our primary modes of collective judgment are in serious doubt. This is evidenced not only by the 20th century war continuing now into another century with no sign of slowing, but also by the unprecedented inequities in resource distribution generated by the requirements of global capital. Long before the fragilities of our current financial structures were exposed, the global financial markets had generated unsustainable human and material inequities.

"With its seemingly unlimited growth of material power, mankind finds itself in the situation of a skipper who has his boat built of such a heavy concentration of iron and steel that the boat's compass points constantly at herself and not north. With a boat of that kind no destination can be reached; she will go around in a circle, exposed to the hazards of the winds and the waves."5
Werner Heisenberg

Perhaps the ships have expanded and are encompassing the globe.

To return to the initial question: What is the program? What is the promise of architecture?
Architecture is at root a discipline of mediation, an empathetic discipline with the capacity to mediate an exchange of life and space. Our interior thoughts hold the capacity to construct literate spaces: spaces of participation, inseparable from our memory and imagination, inseparable from our being. The point of the individual imagination becomes a line between the self and the other: empathy; this line becomes a plane binding us all: ethics; a constellation of points, lines, and planes: a social contract. These exchanges may speak in whispers, telling of our fragility, embodying with great nuance the material and spatial empathy of life. It is quite possible that space is the other half of us, that, as Alberto Pérez-Gómez has so beautifully articulated, space completes us and allows us to understand ourselves and others. Point, to line, to plane: the social contract is a form of participation and contribution among our fellow citizens, but it is also a contract with space itself, as the other half of us. A contract to embody the widest, most nuanced spectrum of what it is to be human into our reciprocal spaces. Architecture is a material imagination of the social contract.6

W. Heisenberg, "Rationality in Science and Society," in Can We Survive Our Future? (Bodley Head, 1971), p. 84.
"A Material Imagination of the Social Contract" is the title of a seminar that I have been teaching at both The Cooper Union School of Architecture and the Rhode Island School of Design's Graduate Studies Division for a number of years. I am thankful to both institutions and to the many students whose questions have made significant contributions to my thinking in this essay.

The ubiquitous observation of our time is transformation: cultural, technological, social, and economic. Often risky, always challenging, and occasionally perilous, transformation relies on discipline to manage and mitigate the inherent risks and challenges. It looks to discipline to discover its guiding principles. As atoms, algorithms, and metabolisms commingle in our twenty-first-century geography, information technologies increasingly construct an encompassing mathematical interpretation of the world: of time, of space, of architecture, and of life itself.
In asking how these spaces complete us, and how architecture mediates our inhabitation of these geographies, the debates should not be limited to being for or against computation in architecture. The "for" position is always framed as open to new possibilities or as being part of the inevitable future; the "against" is thought of as nostalgic and holding on to the past. These divisions mute all kinds of dialogue. It seems that talking about the computer as "just another tool" is also not precise enough. This discourse asserts a false sense of neutrality; the porosity of being offers no such shelter from the perceptual and cognitive shifts of our technologically enframing globe. The profound shifts in architectural ideation and production that have come with information technologies must be considered within the eclipsing double mirrors of global capital and technology. The interweaving of information technologies into the structure and content of architecture is perhaps signaling a transformation that we do not yet comprehend. Perhaps it is not that information technology is becoming a significant aspect of architecture, but that architecture is becoming an aspect of information technology. Perhaps architecture itself is "just another tool" within the double mirror of capital and technology.

Today architecture and humanity face a fundamental question:

What is the fundamental nature of space today? If space is the other half of us, a participant in our thoughts and actions, can space conceived wholly within a mathematical interpretation of the world complete us? If architecture opens a communicative exchange between our being and the spaces we inhabit, does the creation of architecture within a mathematical interpretation of the world assert that the world itself could exist within this mathematical interpretation? While it may be increasingly difficult to define exactly what spaces are completing us, I do believe we retain the deeply human desire to construct coherence and the inherent capacity to craft reciprocal exchanges with our geographies. Asking these questions may not be entirely possible from within information technologies themselves. Perhaps navigating and comprehending these new geographies requires an inverse perspective, and begins with the parts that don't fit.

Heinrich Heine once wrote: "every epoch is a sphinx which plunges into the abyss
as soon as its riddle is solved."7
Heinrich Heine, The Prose Writings of Heinrich Heine (Walter Scott, 1887), p. 71

I believe architecture contains a riddle, contains perhaps our best hope of finding north. If we can recognize time and information as the two dominant forces of our current epoch, it is remarkable to me that the inverse opposite of time could be thought of as space, and the inverse opposite of information could be thought of as substance. It seems that architecture, with its capacities to mediate an exchange of life, substance, and space, holds a unique position in our time. I believe that architecture contains deep within its ontological structure the seeds of an inverse perspective. Perhaps, unbeknownst to itself, architecture contains a mode of insurgency. Architecture and humanity share the same predicament: architecture is as irreducible as we are. The spatial/material imagination inherent to architecture is a dimension of human life. In this sense, architecture is a life-sustaining discipline, an empathetic discipline with a life of its own, reciprocal to ours: a nuanced, viscous discipline for a nuanced, viscous humanity. This may be architecture's promise.

As we move into the 21st century, perhaps architecture will discover itself anew, not as a mirroring expression of the capital market exchange but as a deeply human exchange of life and space, a sheltering exchange with the capacity to navigate our current geographies and imagine a social contract with all of the nuance and imagination of life. Finally, when architecture asks, "What is the program?", does not space empathetically call us close to whisper: "The program is life itself"?

Neo-vernacular | Non-Pedigreed Architecture

Design Authorship And Expertise At The Time of Participatory Culture and Crowdsourcing

Yasmine Abbas
PanUrbanIntelligence, France.

This essay focuses on spatial production at the time of participatory culture and crowdsourcing practices. If vernacular architecture is the result of knowledge transmitted from one generation to the next and the specific answer (and deep connection to the ground) to a geographical/geological context, climate, and local resources, neo-vernacular is the peer-to-peer transmission of knowledge through Internet platforms, over a shorter period of time, resulting in the situational design of spaces. The plethora of platforms and tools, for example blogs, wikis, social networking sites, free-to-download software (the Google 3D modeler SketchUp, for instance), and virtual environments such as Second Life, enables the co-construction of various kinds of production. Be it the construction of the self, that of a candidate, a service, or a space, all productions seem to follow self-organizing principles and build on previous knowledge with the goal of reaching perfection. In the absence of a unique owner with a particular knowledge, why and how do things get built? The hypothesis is that today design authorship and design expertise are both being shared, which offers alternatives to traditional STARCHitectural practices. In what seems to be a doomed landscape for would-be practitioners, new design opportunities emerge.

1. From DIY to party crashers.

The election of Barack Obama is a landmark in history of particular interest to designers. We observed that mobilization is a matter of identity sharing and co-construction.1 Obama's strategy to seek solidarity for the struggle ahead aligned with Putnam's observation that inequality and solidarity are deeply incompatible, and led him to victory.2 Various Internet platforms have now emerged calling for participation to press on issues that matter to particular interest groups (urban designers, architects, environmentalists). In establishing a web 2.0 democracy, President Obama challenged two concepts of critical importance to the design field: authorship and expertise.
Authorship has morphed from the individual to the collective, where the power of the individual matters less than the associations the individual builds. In recent years, there is evidence that spatial creation has transferred authorship and expertise from exclusive (STARCHitectural) practices, to individual (Do It Yourself) practices, and finally to communal (Do It With Others) practices. Collaboration is nothing new to architects, but it has happened mainly between professionals. Now, the project emerges from the collaborative effort of experts, semi-experts, and neophytes altogether. A new form of architecture without architects, the outcome is the product of what Howe calls crowdsourcing, "the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call," and of participatory culture, where knowledge is established through peer-to-peer learning.3 This means that no one needs to know everything, but that one has to belong to (or momentarily plug into) the right production line.
If vernacular architecture is the result of knowledge transmitted from one generation to the next and the specific answer (and deep connection to the ground) to a geographical/geological context, climate, and local resources, neo-vernacular is the peer-to-peer transmission of knowledge through Internet platforms, over a shorter period of time, resulting in the

situational design of spaces. There are many examples illustrating the topics of neo-vernacular or non-pedigreed architecture. Do It Yourself (DIY) practices, possibly the premise of social networking, illustrate the rise of the amateur. The following everyday-life observation is an example of the changes in operation: an acquaintance asked for advice from two of his architect friends, independently of each other. He wanted to remodel the new apartment he had bought in the outskirts of Paris. To his surprise, both pieces of advice concurred (which meant something about architectural curriculum and practice). Yet he downloaded the free Google software, SketchUp, and with it visualized his personal space. He made a few design decisions before going to the equivalent of Home Depot in France. There he sought the advice of an in-house interior architect who, with the help of another piece of software, designed the bathroom and the kitchen using standardized pieces that were sold in the store. What one needs to retain from the above story is, one, the fragmentation of the design commission and, two, the social network involved: friends, acquaintances, and strangers. In this instance DIY is more like DIWO, Do It With Others.
Doing things with others is a generous idea, but where is the economic value in it? One lesson to take from the Internet is that there is no such thing as a free gift. Along with the free software SketchUp, which enables users to use and enjoy the benefits of the service (but not to copy or rip the code), Google provides a blog to assist users. In return, SketchUp users are invited to contribute to developing Google Earth by building 3D representations of bridges that exist: "simply model your bridge in Google SketchUp, geo-reference it in Google Earth, and submit the bridge by uploading to the Google 3D Warehouse to earn eternal online glory and, if you win, a handsome prize."4 A gift, SketchUp for example, represents the transaction between companies and users. To be successful, this transaction should benefit both parties. One potential benefit of the transaction is what Jenkins calls the "spreadability" of media. When you offer ideas, you gain publicity for them in exchange. The author and blogger writes that "nothing spreads widely in the new digital economy unless it engages and serves the interests of both consumers and producers."5 Or, as Jenkins writes, "if it doesn't spread, it's dead."6 Spreadability is vital.
With regard to architecture, the spreadability of media is a window of opportunity for identity creation and fame, however temporary; it increases serendipity and social networking. By spreading information and co-creating through the appropriate channels, such as blogs and wikis, otherwise unrecognized architects may gain popularity, share useful ideas, or get a commission. These are all examples of the transactional value of doing things with others. In order to spread, content has to matter to users. More than the mere transmission of information, it is the transformation of information, meaning the appropriation or distortion of content, that has a transactional value. The following example illustrates a desire for malleability of content. Building Space With Words (BSWW), an installation organized by Anne-Laure Fayard and Aileen Wilson, projects posts and comments from bloggers onto an architecture of fabric. Looking for a little more involvement and (heated) debate, one of the participants asked: "where are the party crashers?"7 The installation relied on and took shape through people's participation, and disruptive action.
These everyday-life examples highlight the fact that designing today is a shared activity between stakeholders from various backgrounds and of different levels of expertise. This dispersion of agency can be worrisome to the profession of designers, and challenges the traditional pedagogy of schools and institutions.8 On March 10, 2009, Howe wrote in his blog that designers didn't like the concept of crowdsourcing: "Witness the upheaval currently afflicting the design industry, sparked by the rise of so-called 'spec design' sites like crowdSPRING and 99designs."9 In fact, on the crowdSPRING website you read: "it's easy: just post your project [that of your business], watch the world submit ideas and choose the one you like." If anyone can become a designer, what is then the shape of today's design

practice? The hypothesis is that participatory culture and crowdsourcing practices introduce a valid alternative to STARCHitectural practices.
In this paper we will first address (1) the reasons for people's engagement, then (2) analyze the strategies employed in the communal creation of space, and conclude with (3) principles to follow in order to create alternative design practices.

2. You know what the trouble with life is? There just ain't any background
Saul Alinsky, the community organizer who inspired both Barack Obama and Hillary Rodham Clinton, crudely mentioned a few things to remember: people are starving for recognition, and they seek power.11 This observation holds even truer in today's time of participation. A close look at Barack Obama's presidential campaign tells us that the Internet facilitated the co-creation of Obama's collectively imagined identity. Who is Obama? The question opponents asked reflected the fact that Obama's identity was left a coloring book, for supporters to fill in. We noted that many creations that participate in the construction of Obama's identity also reveal individual or group self-identities. People contributed to his brand with the tools of production that they knew, also hoping to gather momentum and rise above the crowd. As Saul Alinsky writes, "the trouble with life [is that] it is a desperate search for personal identity, to let other people know that at least you are alive" (Alinsky, 1971: 121). Obama's identity becomes a tool to achieve visibility, and thus strengthen one's own identity.12
With regard to architecture, identity creation affords recognition. Recognition leads to more
commissions, but the more people on the market, the fiercer the competition. In a sea of
ideas, where savoir-faire can be acquired through peer-to-peer learning, how do
designers run a business and/or earn a living? Is harnessing ideas from people, as Studio
Wikitecture does, economically viable in a competitive market? Not yet, as the
organizers say on their blog: "we are looking for sponsors who would be willing to provide
much needed funding to carry this project forward."13 One danger with the digital is that
ideas can rapidly be emulated, and recognition can fade just as rapidly. Architects need to
recognize that ideas have lost their transactional value.
crowdSPRING, a business that provides a platform for entities to outsource (graphic) design
jobs, seems to have understood the business value of social networking and competition. One
strategy for people to rise above the crowd is to join, even temporarily, the right crowd and
to network with its members. In a comment on a post by Jeff Howe about crowdSPRING,
Angeline, crowdSPRING's Community Manager, writes:
Our site is fairly new (only 3 months old), and already, we have almost 4,000
creatives at work and 440+ projects. crowdSPRING's goal isn't to take business away
from graphic design firms or established designers. They have their place and are
necessary. There are many designers on our site who are well-qualified, hold graphic
design degrees, and choose to submit entries. We just want to give opportunities to
anyone talented who is looking to create, regardless of where a person lives,
how old he/she is, or formal training.
People sign up for our site and choose to submit entries with the risk of not being
awarded. However, we also offer creatives opportunities to showcase their work to
many potential clients and to interact with fellow Creatives. Many Buyers award a
Creative and continue to work outside of our site with them. We are happy to
connect these people, as it helps continue to keep our community thriving.14
Such a process drives competition and creativity. Participants master skills through
knowledge sharing (based on users' preferences, behaviors, needs and contributions) and
through faster trials (Studio Wikitecture has completed three projects and is rapidly
perfecting the technology needed to implement them in Second Life). In the end, what seems
to characterize neo-vernacular practices is that collaboration is not to the benefit of the
whole but to the benefit of the individual.

2. Just let go. What space is that anyway?

"Is this Space?" Lynne Henderson wrote on the Building Space With Words (BSWW) blog:
I do think that the somewhat relaxed sense that developed in me in the exhibit
seemed related to a sense of being in a community as well as a space. People at the
show could post to the blog, the words moving through the visual area in a rhythmic
fashion, the sound of voices in the background. It was like being able to work or just
be, with a sense of people around you whom you could consult when you wanted to,
but whose voices and words were accessible and connecting...15
Henderson describes her experience of the BSWW installation, a spatial co-creation. The
above quote provides insights into the communal 'production of space.'
One strategy for 'communal space making' is to orchestrate an experience. BSWW users
would change the spatial configuration of the piece through words (Fig. 1). The participatory
nature of the installation is similar to Rafael Lozano-Hemmer's project Vectorial Elevation,
Relational Architecture 4.16 Lozano-Hemmer provided an internet platform where users could
log in and reconfigure the lighting of a plaza, and hence the physical space that people
experienced. Whether with words, as in BSWW, or through the action of placing lights on a
plan, as in Vectorial Elevation, various tools can remotely mediate a change in physical
space. These changes address the senses. We experience the place.
A second strategy for 'communal space making' is to play up the multiple nature of spaces.
Spaces are physical, digital and mental (discourse). In the physical world, building space
happens not only through brick and mortar but also through inhabiting it: in Henderson's
words, "a sense of being in a community as well as a space." Space making in the digital
world is not so different from space making in the physical world. For example, propinquity
and privacy are concepts that belong to both worlds. The space of the BSWW installation
is, for example, multiple: it is physical (with boundaries made of fabric), it is digital
(because of the blog platform), and it is mental, as the discourse is meant to challenge our
understanding of physical and digital spaces.
(Fig. 1) iPhone picture of the BSWW opening taken by Laura Forlano, PhD.
In the configuration described above, the trace of every individual's action does not fade
with time. On the contrary, any comment left on the blog can spur further discussion and
revive a thread: the digital increases the serendipity of encounters. In theory, if the
physical structure of BSWW were to remain in place, the space, a living architecture,
would change shape according to people's involvement, disruptions, disorder and order.
Though built together, under the vigilance of gatekeepers, spatial reconfiguration happens
remotely and at a faster pace than in traditional vernacular architecture.
These examples highlight that when architectural creation is left to the crowd, there is still a
need for a master of ceremonies. They suggest alternative roles for architects: that of the
monitor, the coordinator, or the conductor.

3. Are you talking to me?

Crowdsourcing practices and participatory culture have in some ways democratized the
design industry. Yet we live in a market where consumers are more 'digitally' aware and
where offerings surpass demand. In light of what we have learned throughout, the
following identifies common mistakes and provides a conceptual framework for
inventing an alternative practice:
Crowdsourcing has its limitations. With Alinsky's rules for radicals in mind, we understood
that what drives people to action, and to making, is emulation, a sense of ownership, identity
seeking, power and fun.17 In a gift economy, companies must not forget to give back to
the people from whom they have extracted knowledge! Through activism and appropriation,
consumers will spread (distorted) knowledge about things. In a discussion about spreadable
media, Jenkins writes that:
Spreadability as a concept describes how the properties of the media environment,
texts, audiences, and business models work together to enable easy and widespread
circulation of mutually meaningful content within a networked culture.
This new "spreadable" model allows us to avoid metaphors of "infection" and
"contamination" which over-estimate the power of media companies and
underestimate the agency of consumers.18
What is the agency behind the use of blogs, wikis, social networking sites, and so on? These
media are used to reach people as much as to spread and communicate the work of the
few. The more malleable the tool (a slogan, for example), the faster and wider the
spread. Yet how can one benefit from the design? In a humongous marketplace, where
friendship rhymes with business, the beneficiary of the design is a middleman organizing
the meeting between businesses and designers (this is the role crowdSPRING takes).
Among those who can do design, you find the retired, who hold great experience
and skills, and less skilled workers who, through peer-to-peer learning, manage to rise above
the crowd. The amateur who rises the highest is the one who has mastered the craft, who got
involved the most. Wiki and social networking platforms provide extensive possibilities for
expression and influence. They are portals that are unavoidable if one is to reach one's
goal: to make it in life. The period we are in now resembles a period of transition in
which craftsmen are still learning to perfect models and spaces. Will there be a point where
this will freeze, or do we trust the fluidity of the medium to encourage new vernacular
practices?

To conclude, there is no single formula that prescribes the shape of neo-vernacular
practices. Yet below is a list of suggestions that the designers of today and tomorrow should
be aware of:

1. Who are you building for? Are you building for the ivory towers of designers? This is
not the only market for your offerings.
2. You are building for entities and people in need of your offerings.
3. Look for needs. Diversify your offerings.
4. You offer an experience.
5. Sharing is not free.
6. Ideas are cheap. Ideas do not sell well.
7. What sells is the product of a savoir-faire: not only a computer skill, but the ability
to gather knowledge, manage teams and funnel both toward a precise goal.
8. Find channels of information and distribution; and if they do not exist, create them.
9. Transgress, transcribe, transfer, transform.

Acknowledgment: Thank you to Michael Piper, Principal of DUB Studios, for the revisions.
Abbas, Y. and Dervin F. Technologies of the Self, forthcoming publication 2009
Putnam, R., Bowling Alone: The Collapse and Revival of the American Community, New
York, London: Simon & Schuster paperbacks, 2000; p. 294.
Crowdsourcing, http://crowdsourcing.typepad.com/cs/, last accessed April 6, 2009.
According to the author, this is the white paper definition of the word crowdsourcing.
According to the software terms of use.
last accessed April 6, 2009.
Jenkins, H., If It Doesn't Spread, It's Dead (Part Three): The Gift Economy and Commodity
Culture, http://henryjenkins.org/2009/02/if_it_doesnt_spread_its_dead_p_2.html,
last accessed April 6, 2009.
It is the main title of eight blog posts by Henry Jenkins, who is serializing a white paper
which was developed last year by the Convergence Culture Consortium on the topic of
Spreadable media. http://henryjenkins.org/2009/02/if_it_doesnt_spread_its_dead_p.html,
last accessed April 6, 2009.
Building Space With Words: http://blogs.poly.edu/bsww/, last accessed April 6, 2009.
On the blog of Studio Wikitecture you read: "Studio Wikitecture assumes the principles of
good design are universal enough that they can be learned in one discipline and applied in
some fashion to another. Through Studio Wikitecture, we are trying to provide a channel
where these individuals can apply these skills to the design of a building. This does not
negate the fact that a certain foundational knowledge is still necessary to design a building
that will actually function and stand up, but SW feels that this knowledge can be acquired
through a number of channels and should not be restricted to just architects and their
particular educational path."
http://studiowikitecture.wordpress.com/about/, last accessed April 6, 2009.
Crowdsourcing, http://crowdsourcing.typepad.com/cs/2009/03/is-crowdsourcing-evil-and-
other-moot-questions-.html, last accessed April 6, 2009.
Alinsky, S., Rules for Radicals, New York: Random House, 1971; p. 121: "People hunger
for drama and adventure, for a breath of life in a dreary, drab existence. One of the cartoons
in my office shows two gum-chewing stenographers who have just left the movies. One is
talking to the other, and says, 'You know, Sadie, you know what the trouble with life is?
There just ain't any background music.' [...] But it's more than that. It is a desperate search
for personal identity, to let other people know that at least you are alive."
Michael C. Behrent (2008), "Saul Alinsky, la campagne présidentielle et l'histoire de la
gauche américaine", in La Vie des Idées: http://www.laviedesidees.fr/Saul-Alinsky-la-
campagne.html?decoupe_recherche=obama%20saul%20alinsky (last accessed January 15,
2009). Saul D. Alinsky inspired both Hillary Rodham Clinton and Barack Obama (Clinton
wrote a thesis about the community organizer; the thesis is available online at
http://www.gopublius.com/HCT/HillaryClintonThesis.html, last accessed April 6, 2009). My
thanks to Thomas Watkin for the article.
Abbas, Y. and Dervin F. Technologies of the Self, forthcoming publication 2009.
Studio Wikitecture, http://studiowikitecture.wordpress.com/about/, last accessed April 6, 2009.
Crowdsourcing, http://crowdsourcing.typepad.com/cs/2008/08/crowdsourced-de.html,
last accessed April 6, 2009. My emphasis in bold.
BSWW, http://blogs.poly.edu/bsww/2009/03/24/is-this-space/, last accessed April 6, 2009.
Vectorial Elevation, http://www.alzado.net/, last accessed April 6, 2009.
Abbas, Y. and Dervin F. Technologies of the Self, forthcoming publication 2009.
Jenkins, H., If It Doesn't Spread, It's Dead:
http://henryjenkins.org/2009/02/if_it_doesnt_spread_its_dead_p_1.html, last accessed
April 6, 2009.
The Familiarity of Being Digital

Digital Abstraction and Representation of Embodied Interaction

Aghlab Al-Attili
School of Arts, Culture and Environment, The University of Edinburgh, UK.
Al-Attili@ed.ac.uk ; alattili@attil.org

Maria Androulaki
School of Arts, Culture and Environment, The University of Edinburgh, UK.

This paper addresses issues pertaining to the familiarity of digital abstraction and
representation in architecture. Our investigation into the nature of human interaction with
space, its abstraction and its representation is based on the critical contrast between the
outcomes of interaction with two virtual versions of a physical reality: the first version is a
non-linear interactive graphical abstraction of the space, where no assertions or indicators
are given as to whether or not there is a relationship between the abstraction and its
physical reality, whereas the second is a non-linear interactive 3D virtual environment
clearly representing the physical space in question.

1. Familiarity

Familiarity denotes the dynamic and active relationship between interaction, perception and
reasoning. It is theoretically grounded in conceptual metaphors as introduced by Lakoff
and Johnson, which, in turn, are grounded in correlations within our experience, not only in
language.1 Familiarity is a perceptual tool with which we interact with one spatial realm, in
terms of its dynamic relationships, from a realm of a different kind. This interaction
theorises an obtained knowledge that Johnson argues is relative to our understanding:
"What counts as knowledge, therefore, is relative to our understanding that permits our
more or less successful interaction with our environment."2 We investigate embodied
interaction in digital realms.

1.1. Familiarity in language and action

Space and its various representations and abstractions share qualities and associations that
are specific to space, though sometimes these are shared with other objects. Attributes can
be inferred from them in most cases. Experimental abstraction and representation of space
in many literary works and films offers an interpretation that concedes the prospect of a
phenomenological exploration producing a plurality of alternative critical abstractions, and
therefore representations, of space. However, for the purpose of this experiment (the
consideration of a space independently of its associations or its plurality of qualities and
attributes), an outcome of a process of abstracting space needs to be validated. In order to
be able to evaluate space abstraction and representation, we ought to be normative,
deploying Hume's concept of normativity for its ability to tie together ideas, their origin and
abstraction. Hume makes the distinction between ideas and impressions according to "the
degree of force and liveliness, with which they strike upon the mind and make their way into
our thought or consciousness."3 Ultimately, what is underlying here is that human nature is
not only a neutral attribute; rather, it is a normative principle by virtue of its uniformity.
Human subjectivity is uniform in a manner that permits objectivity to emerge. The
uniformity of the human social condition allows the study of human praxis irrespective of
location. Thus, contextual implications are reduced. This uniformity in the human condition
spells out familiarity.

1 Lakoff, G. & Johnson, M. Metaphors we live by, Chicago: University of Chicago Press, 1980, pp. 147-
2 Johnson, M. The body in the mind: the bodily basis of meaning, imagination, and reason. Chicago,
Illinois: the University of Chicago Press, 1987, p. 209.

1.2. Abstraction of Sensory Stimuli Producing Meaningful Experiences: Gestalt

The task of abstraction or representation is to extract and reinstate attributes, qualities and
associations in a dual process of contesting subjectivity and growing into objectivity. In
executing this process of isolating (disconnecting) subjective elements, we confront
traditional connections between human perception and the understanding of space,
connections that formulate the modus operandi of perception. We confront the Gestalt
principles of proximity, similarity, continuation, closure, prägnanz and figure/ground. Thus,
as the first step in a proposed procedure for space abstraction, we suggest eliminating the
Gestalt principles of perception. The abstraction of familiar space as a physical environment
can be correlated to the degree of elimination of the Gestalt principles of perception.

1.3. Unintentional Signification through to Intentional Imagination: Metaphor

Bachelard in La Poétique de l'espace distinguishes the daily experience of space from the
daydream of an intimate space, in that the latter is detached from familiarity as a domain of
interaction. The entity of being is not Heidegger's Dasein anymore; it morphs into
Bachelard's "des êtres entr'ouverts".4 The power of imagination as an attribute of the poetic
image "becomes a new being of our language, it expresses us in making us that which it
expresses."5 In conjunction, abstraction of the signified conceals its structural relationship
with the signifier. In the effort to associate new attributes we rely on metaphorical
association, which countervails against the fact that "metaphorical systematicity"6 generally
exacerbates the perception of abstracted space, thus admitting spatially divergent
interpretations. The "integration of diversities"7 by metaphor's strict or logically necessary
implication of one proposition by another deviates the abstraction, due to a lack of
attributes, from its space by characterising the abstraction with a "coherent system of
metaphorical concepts",8 which, in turn, admits the inappropriateness that Turbayne labels
"sort-crossing".9 However, Paul Dourish argues that the referent of the metaphor may
possess a set of capabilities that the metaphorical object itself does not.10 This can be
harnessed to the fact that virtual environments allow various interactions that are otherwise
impossible in a corresponding physical environment. Thus, as the second step in a proposed
procedure for space abstraction, we suggest eliminating metaphorical systematicity.

3 Hume, D. A treatise of human nature. [e-book] Scribd.com, 2009, p. 6.

Available at: http://www.scribd.com/doc/52892/A-Treatise-on-Humane-Nature [Accessed 24-04-
4 Bachelard, G. The Poetics of space. Translated by M. Jolas. Boston: Beacon Press. Originally
published: New York, Orion Press 1964, p. 200.
5 Ibid., p. 7.
6 Lakoff, G. & Johnson, M. Metaphors we live by. Chicago: University of Chicago Press, 1980.
7 Stanford, W.B. Greek metaphor. Oxford: Basil Blackwell, 1936, pp. 101-105.
8 Lakoff, G. & Johnson, M. Metaphors we live by. Chicago: University of Chicago Press, 1980, p. 9.
9 Turbayne, C. The myth of metaphor. New Haven, CT: Yale University Press, 1962, p. 11.
10 Dourish, P. Where the action is: the foundations of embodied interaction. Cambridge, Mass.: MIT
Press. 2001, p. 142-144.
2. Abstraction and representation

The proposed steps of abstraction were applied to the plan of the space in question (figure
1), the fourth level of the building that hosts the Department of Architecture at the
University of Edinburgh (abbreviated to L4). The result was a diagram (figure 2), which was
made available for navigation. In parallel, an interactive 3D virtual environment
corresponding to L4 was created and made available for navigation and interaction (figure 3).

Figure 1. Original plan

Figure 2. Abstract interactive diagram
Figure 3. Interactive virtual environment

Finally, in order to examine this hypothesis, a qualitative method of research was used to
probe subjects' experience, focusing on issues related to the abstraction, the
representation and the familiarity of space. Subjects were asked to give a personal account
of their experience, which is supposed to give us an insight into how they think.

3. Eye-tracking the familiar digital

An eye-tracking system was used to track and record subjects' eye movements and points of
fixation in the virtual environments, to further validate the process of experimentation. The
results not only aided in interpreting the subjects' personal accounts, but also gave us an
insight into their points of attention.
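
The per-zone fixation averages reported below are, in essence, simple aggregations of raw gaze samples over screen regions. A minimal sketch of such an aggregation follows; the zone rectangles, coordinates and gaze samples are invented for illustration and are not those of the actual study or its tracker format:

```python
# Hypothetical sketch of per-zone fixation aggregation. A real eye
# tracker would supply timestamped (x, y) gaze points in screen
# coordinates; here we invent a few points and two zone rectangles.

def zone_of(x, y, zones):
    """Return the name of the first zone rectangle containing (x, y)."""
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "other"

def fixation_percentages(samples, zones):
    """Percentage of gaze samples falling in each zone (plus 'other')."""
    counts = {name: 0 for name in zones}
    counts["other"] = 0
    for x, y in samples:
        counts[zone_of(x, y, zones)] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {name: 100.0 * n / total for name, n in counts.items()}

# Invented example: two zones side by side on a 1280x480 canvas.
zones = {"zone1": (0, 0, 640, 480), "zone2": (641, 0, 1280, 480)}
samples = [(100, 200), (300, 400), (700, 100), (200, 50)]
print(fixation_percentages(samples, zones))
# → {'zone1': 75.0, 'zone2': 25.0, 'other': 0.0}
```

In the study itself the zones would correspond to the regions marked in figure 4, and the samples to the tracker's recorded gaze log.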

In the abstraction of L4 (figure 4), the average eye fixation on zone 1 was 68%. This high
percentage suggests that most of the subjects preferred a wider schematic structure rather
than a zoomed-in view of the point of interaction. When asked about the reason behind
this extra attention to zone 1, one subject stated that he did not realise he had paid
attention to that zone specifically. Another subject pointed out that the zone had a lot of
shapes, which he classified as details, and argued that such details "would always draw my
attention". Another subject attributed the increased attention to the moving dot that was
arbitrarily reacting to her actions on the keyboard, and argued that the dot represented her
body in space; but when confronted with the fact that zone 2 provided the same capability
at a different scale, she argued that zone 1 provided an overall view that was easier to
follow. By contrast, the average eye fixation for zone 2 was 17%. Most of the subjects
were of the view that zone 2 was referred to only when trying to work out the orientation
of the dot in the overall diagram. This view came up in many spatial narratives about
embodiment. One subject said that she looked at zone 2 when she was not sure whether or
not she was facing the door. The spatial elements and references used in her account clearly
show that she was treating the abstraction not as a diagram but as a metaphorical space.
Another subject expressed a similar notion when he said: "the space is in my mind."

We argue that the subjects' familiarity with the physical space abstracted in this interactive
environment, and not only the spatial nature of our involvement in space, is the main drive
behind the attention to zone 1. Both zone 1 and zone 2 hold spatial values projected onto
them by each subject, but zone 1, in particular, fulfils the familiar aspect of interaction by
indirectly presenting the familiar space in its totality. By contrast, zone 2 holds the same
spatial value but does not present a familiar space; rather, it presents a fragment of an
unconfirmed space.

Figure 4. a) The 3 main zones, and b) percentage of eye fixation for each zone.

In the 3D virtual environment of L4 we found the pattern of eye fixation generally surprising.
The prominent zones of eye fixation were the floor (figure 5b) and the ceiling, at 53% and
26% respectively. These averages point strongly to the embodied aspect of interaction. The
floor did not hold any kind of information that was required to achieve the task; unlike the
ceiling, which had some lighting details, the floor did not stand out as an object full of
details. When subjects were asked about the zone of attention, most of them reported that
they were enjoying looking at the doors and checking the numbers in search of their own
office. Others pointed out that they were checking details, in particular the details of the
ceiling. While this might explain the average for the top zone, it does not explain the
percentage for the floor. When asked about it, some subjects gave reasons related to
embodiment. For example, one subject reported that when the level of the perspective was
close to the floor, he felt like a mouse wandering around the building; this feeling is induced
by many factors that led the subject to explore other embodiments. Another subject
reported that when the level of the perspective did not match the normal horizontal angle,
he felt drunk; accordingly, he felt the urge to look at the floor and try to adjust his view.
Embodiment is strongly implicated in this account as well: the sense of having an unlevelled
view bore on the embodiment and induced a reaction, which resulted in the eye fixation
on the floor.

All of the accounts provided great insight into the process of design. For example, judging
by the overwhelming averages in this particular example of our investigation, we can
confidently assert that in architectural illustration, and particularly when producing
perspectives, it is important to accentuate the floor or ground, because most viewers will
refer to it as an element of engagement in the space. It also possesses
various implications for embodiment. Accordingly, the choice of the level of perspective
affects the type of embodiment and the attention to other objects within the space, all of
which depend on familiarity and embodiment.

Figure 5. a) The Virtual environment, and b) percentage of eye fixation for each zone.

4. Conclusion

Familiar space, its abstraction and its representation were used to test the definition of
familiarity in order to produce a better understanding of it and of its relationship with virtual
space. The interaction with the abstraction and representation of L4 produced two types of
familiarity: emotional familiarity and spatial familiarity. Both types acted as a
representation of intimacy or as a side effect of intimacy. Familiarity came across as a
reaction to intimacy produced by human embodied interaction with physical space. When
physical space as a concept was virtualised, the underpinning embodied interaction with the
virtual space preserved the familiar emotion and interaction. Familiarity, in turn, produced a
different understanding of virtual space. Intimacy moved from being the character of
interaction with space to being the condition of interaction, a condition that highlights the
knowledge of one specific instance of space. Familiarity, on the other hand, appeared as a
tool to produce a convincing interaction between the user of an intimate space and multiple
instances, copies or modes of this space. Emotional familiarity is a reaction to the embodied
perception of the various modes of space. Spatial familiarity is a physical reaction to
different modes of space. The characterisation of this reaction as physical is due to the
nature of human embodiment. The emotional attributes figured positively in instances
associated with familiar space. When positivity decreased, spatial attributes occupied a
bigger part of perception.

Intimate space is preserved in various modes of space through familiarity. Intimate space
transforms into familiar space. Intimacy, as the character of the knowledge of a space,
transforms into familiarity, which is the character of the interaction with space. The value of
familiarity is preserved through interaction; therefore, by definition, interactive space is a
familiar space. Familiarity is better experienced in representations, but better felt in
abstractions. A successful abstraction preserves the abstract structures of embodied
interactions. Although they cannot convey space, these structures are capable of construing
the emotional experience.

Familiarity is the representation of intimacy in space representations. Hence real space is
intimate and virtual space is familiar.
Pretty Polygons Or Experiential Realism

Meaningful Game-World Design by Architects

Erik Champion
Massey University, New Zealand.


I suggest that neither technology nor evaluation is the fundamental problem in designing
virtual places. Rather, we still have not truly grasped the native potential of
interactive digital media as it may augment architecture, and that is why debate on
the conceptual, albeit thorny, issues of the subject matter is still in its infancy.

Are there specific needs or requirements of architecture that prevent us from relying
on experts in digital media and online worlds? Or is it not so much that the new tools
are currently too cumbersome or unreliable, but rather that our conventional
understanding of place design and of platially situated knowledge and information
needs to change? Here I suggest that the terms Place, Cultural Presence, Game and World
are critically significant. A clearer definition of these terms would enrich, clarify and
reveal the importance of architectural design, not just to real but also to virtual world
design, in terms of interaction, immersion and meaning.

1. Introduction

Architecture is not only about the artifacts we see built around us; it is about the
process of designing and building, about the way we are all embedded, and
embodied, in the practice/praxis that is architecture.1 In a sense, people are not just
physically embodied; they are also socially embedded. Their motives, intentions, and
actions can be fully understood only when referenced to a social perspective that
makes sense of a specific physical environment. Recreating the objects that make up
our society is, however, not recreating the society itself, as some of our cultural
knowledge is not ostensive and is not directly tangible. Undoubtedly, there are also
many cultural and ethical issues. For example, some critics see the purely physical
recreation of traditional societies as a typically Western phenomenon.

Cyberspace is particularly geared toward the erasure of all non-Western
histories. Once a culture has been 'stored' and 'preserved' in digital forms,
opened up to anybody who wants to explore it from the comfort of their
armchair, then it becomes more real than the real thing. Who needs the
arcane and esoteric real thing anyway?2

Current notions of place in Western literature may be ignorant of other cultural
perceptions of place as opposed to space.3 Yet I am not convinced we have gone so
far as to encapsulate culture in a frozen digital form; culture is too vague and misused
a term for it to be fully encapsulated.

I suggest a working concept of culture that may help us answer Sardar, but not as a
system of rules, high culture or pop culture. Cultural geography views culture as
directed integration with the immediate world around us through shared language,
customs, behaviors and thoughts. So culture is in some way socially created, defined
and managed; it is expressed via language and artifacts, but it is vaguely bounded,
and open to (mis)interpretation. Culture is thus a connection and rejection of threads
over space and time. How cultures are spread over space and how cultures make
sense of space is thus interdependent. A visitor perceives space as place, place
perpetuates culture (frames it, embeds it, erodes it) and thus influences the
inhabitant. Architecture is thus a key contributor to cultural presence.

In order to measure how closely culture can be observed, appreciated or understood
through virtual environments, I have suggested that cultural presence be defined as
the feeling of being in the presence of a similar or distinctly different cultural belief
system. This does not mean we must build virtual environments for thousands of
people; after all, we can experience some sense of cultural presence in an otherwise
empty museum. So unlike social presence, which requires other people to appear to
be in the same real or virtual environment, we can experience cultural presence
without (necessarily) having to meet or hear other people. However, it does require a
feeling of layered history as a situated palimpsest, and it does raise the issue of what
sort of interaction would best allow us to understand the mindset of other distant or
exotic societies.

Technology is thus not necessarily the problem; rather, the issue is how digital
media can help afford cultural learning. Virtual environments can be culturally
constrained, and multi-perspectival; there is no inherent necessity for meta-
narratives or Western-biased viewpoints. Virtual environments can also be abstracted,
challenging, and dynamic. They can choose their own form of presentation, interface,
navigation, narration, and goal. For virtual environments can contain more than
objects, and they can do more than offer sensory feedback; they can also constrain
us by the social roles and rituals residing in the environment that has
been digitally simulated.

So technology can throw us into specific situations far beyond the power of museum
exhibits. However, the major issue restricting engagement, in the opinion of many
people, is suitable contextual interaction.4 5 Any technology (such as VRML) or
environment (such as Adobe Atmosphere or Google Lively) that did not provide
intuitive social interaction soon became moribund. There are too many virtual Pruitt-
Igoes, award-winning but disastrously inhabited communities. And so far even the
commercially successful world-building communities in cyberspace have struggled to
afford ritual and community. Certainly there are enthusiastic social groups and
displaced clusters that meet online, but their social transactions do not fully happen
inside the virtual worlds themselves.

World is also a vague and misused concept. It is not just space: users navigate
through space, but they explore worlds. Their exploration is thematic, cognitive, and
motivated; their interaction directly shapes their experience. So a world is not just
the collection of phenomenological boundaries. And here I am reminded of a
definition of a city attributed to Louis Kahn: a boy can explore a city and at the
end of three days know what he wants to do for the rest of his life. Likewise, virtual
worlds can involve spatial, historical, counterfactual or objectively chronological
exploration, or exploration of a character's potential future or past. And a virtual
world may require learning how to translate and disseminate, or even modify or
create the language or material value systems of real or digitally simulated
inhabitants. Participation would hinge on how well culturally appropriate information
can be learnt and developed by the participant and passed on to others.

However, games are not (yet) genuine worlds. Rich social interaction is usually
around rather than inside the games, and individuals and cultural roles cannot be
deciphered from the traces of the participants. We can understand and extrapolate
more about Mesopotamian tax evaders from 5000-year-old tablets than we can from
traces left by modern MMORPG players in giant commercial game-worlds such as
World of Warcraft.

So while I am also not yet sure that the sterility of virtual environments is purely due
to Western mindsets, Sardar's overall point is valid: meaningful situated interaction is
missing in virtual world design. And when companies like Autodesk6 claim they are on
the verge of creating game engines specific to architectural design, they forget to
mention the real-time interaction design skills that architects need to acquire. For if
architects are not trained in usability and interaction design principles, how can they
design engaging and profound interaction in these virtual worlds?

2. do architects understand gamespace, gameplay and virtuality?

I am concerned that architects are using the algorithmic power of computer graphics
to create flights of fancy, rather than personalisable and user-attracting media. In
Embracing the Post-digital, Dominik Holzer wrote: "…for morphogenesis has allowed
designers to learn letting go of total control over their design process and to allow the
computer to surprise them with unexpected results. In addition to this, the more
playful use of design software has enabled us to generate a plethora of design
variations for comparison and selection."7 While a range of design solutions is a plus,
such articles may be misread. Freedom of expression might appear to be
only to do with architectural form, not with the outside environment's ongoing
interaction with architecture, or the end-user's experience, inhabitation, and use of
architectural form. Successful games are played and popular virtual worlds are
inhabited by end-users.

Architects have also seized on commercial games, game mods, and game-editors, but
in his chapter "Gamespace" for the anthology Space Time Play, Mark Wigley has
declared: "The real key to the architecture of game space, like any other architecture,
is the entrance and exit."8 Considering modernism has long dithered over how to
design either, I find this most worrying. Surely one key aspect of architecture is the
movement and perception of movement through and across space; it is not an
either/or, exit or entry. Secondly, many games are hybrids, situated between real
and virtual; presence is not actually a binary phenomenon. Thirdly, I don't believe
enough architects really understand immersion or the effect of games on many
gamers before, during or after the actual game-experience.

Debatably, the first great make-or-break criterion of digitally or chemically mediated
immersion was not Neuromancer, but Philip K. Dick's 1964 science fiction classic, The
Three Stigmata of Palmer Eldritch.9 Set in a climate-changed future world, the
author invites us to consider virtuality as a drugged escape from mortality and
existence on a terraformed but banal Mars; virtual presence is conflated with an alien
religious presence, and the detective anti-hero is never sure whether he left the
virtual reality after chasing the villain (who is god or devil of the virtual realm), or
whether he is actually still trapped inside, but convinced that he is free. Not only did
Philip K. Dick use the term Presence long before Marvin Minsky's famous "Telepresence"
essay of 1980, he also created a parallel to a Turing test: you can tell you have
created a successful virtual reality when you are never sure if you have escaped from
it. Hence, believable immersion does not rely on an exit/entrance, nor is it a simple
binary phenomenon.

Theorists in architectural and game space have not yet created a convincing
overarching theory that both describes and prescribes. For example, in Michael
Nitsche's recent book on the subject,10 his overarching diagram describes game
space as including rule-based space, mediated space, fictional space (what the player
imagines they are seeing), play-space (the physical space the player is in), and social
space (when other players are physically present). Where is the cognitive space? It is
not exactly the fictional space the player imagines, but rather the past experiences
and future projections the player is extracting, collating, interpreting, and predicting.

The diagram also does not feature somaesthetic space. And yet triggering
somaesthetic, proprioceptive and kinaesthetic responses is common in real-world
architecture. There are cantilevers in Egyptian architecture, knee-deep windows in
Arne Jacobsen buildings (no doubt to heighten a sense of vertigo), the path-centre
theory of Byzantine architecture, and of course Baroque architecture, designed to be
experienced by bodies in motion.

Ilinx (vertigo, dizziness, disorientation of direction or movement) is one of the four
categories of game play proposed by Roger Caillois. Vertigo is also a prominent
trigger and factor used in testing immersion in virtual environments. In Nitsche's
book none of these three related terms (involving bodily responses to space and to
movement through space) are mentioned in the index. Ironically, Hitchcock's film
Vertigo is twice referred to, and at least one of Nitsche's projects described in the
book involves a deliberate use of ilinx. So why isn't ilinx an important component of
any theory of gamespace? Players may be sedentary, but their engaged minds are
navigating, orienting and balancing according to virtual cues as if they were actually
in motion. Real or virtual, architecture works not just on the eyes, but also on our
minds, memories and bodies.

3. games are not yet the answer

Turning to architecture specifically, there are games featuring animation of building
processes,11 and new tools such as the gravity gun in Half-Life 2, coupled with Garry's
Mod to write scripts, have helped fans to design games where part of the gameplay is
to actually build walls and fortresses.12 Yet these are really interactive tools; they
don't create worlds. For unlike a typical software package, which ideally is designed to
be easy to learn and easy to master, a virtual place is elusive in boundary and
contrary in nature: humans often wish to experience both the periphery and the
centre, simultaneously. Similarly, a digital game is often designed to be challenging,
difficult to learn and difficult to master.13 Yet if a game is perpetually challenging, it
will not help afford typical symbolic elements of place, such as rest, stability, shelter,
and identity. Digitally mediated technologies can attempt to reproduce existing data,
but they can also modify the learning experience of the user through augmentation,
filtering, or constraining. The following are some issues we will have to resolve.

3.1 phantom space

CAD systems were designed to get buildings built, to quantify rather than qualify the
architectural experience. They show static additions to the environment, rather than
environmental changes acting and interacting over time. There is no fog, no dirt, no
wind, and often not even any people (Figure 1). Yet the real-world experiencing of
architecture is always mediated through a dynamic and imperfect sensory interface:
our minds and our bodies. So for a project recreating a nineteenth-century mining
town, ghost voices and sounds of the bush defined the edge of inhabitation, and of
certain knowledge (Figure 1).

Figure 1: A visualisation of a 19th C mining town

3.2. text is not space

I have to point out the obvious: text is not space. Traditional communities like the
WELL, or a MUD, capture some notion of a platial history, but they typically do so
through text, not spatiality. The virtual communities that offer virtual landscaping and
house design may also remember vandalism by visitors, but the actual social history
of the visitors and inhabitants is still textual, and social interaction is typically outside
of the spatial environment, via forum or email, not a materially embedded part of it.

Other designers may also by default add text to an interactive experience because
text-as-instructions seems self-evident to them. For example, the developers of the
Deva CVE system complained that they could not fit more text onto the screen
interface of their virtual environment; they did not complain that they had to use text
at all.14 They also admitted that reference to the rules was via text logs, not via in-
world activity or research. Information overload can also be a problem for players in
the more complex and powerful multiplayer games.15 Yet attempting to navigate
through a spatial field while reading flashing text is a heavy cognitive load for the
brain. To some extent, the next project mentioned addresses this issue (there was no
keyboard; rather, surround projection, a 3D mouse, and Arduino boards that one
walks on in order to move virtually), but providing information without taxing the
brain through text is a recurring problem.
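The walk-to-move interface mentioned above can be sketched as a simple mapping from footfall readings to in-world movement, with no text in the loop at all. The serial line format, pressure threshold, and step length below are assumptions invented for illustration, not the values of the actual installation (a real setup would read these lines from the Arduino over a serial port):

```python
# Sketch: translating foot-pressure readings (as an Arduino might stream
# them, e.g. "LEFT:512") into forward movement in a virtual environment.
# Line format, threshold, and step size are hypothetical.

THRESHOLD = 400    # hypothetical pressure value that counts as a footfall
STEP_LENGTH = 0.5  # metres of virtual movement per detected step

def parse_reading(line):
    """Parse one 'PAD:VALUE' line into (pad, value); None if malformed."""
    try:
        pad, value = line.strip().split(":")
        return pad, int(value)
    except ValueError:
        return None

def steps_to_distance(lines):
    """Count alternating footfalls above the threshold, return distance."""
    distance = 0.0
    last_pad = None
    for line in lines:
        reading = parse_reading(line)
        if reading is None:
            continue
        pad, value = reading
        # A step registers only when the *other* foot comes down next,
        # which filters out standing pressure held on one pad.
        if value > THRESHOLD and pad != last_pad:
            distance += STEP_LENGTH
            last_pad = pad
    return distance

# Simulated serial stream: four alternating footfalls, then a light touch.
stream = ["LEFT:600", "RIGHT:580", "LEFT:610", "RIGHT:590", "LEFT:100"]
print(steps_to_distance(stream))  # 4 steps -> 2.0
```

The point of the sketch is that the body, not the keyboard, carries the navigation signal; the brain is left free to attend to the environment.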

3.3. interaction fades over time

The expanding interaction abilities of participants may run counter to a decreasing
range of interaction possibilities: as historical knowledge becomes more accurate the
closer in time we come to our own era, the range of possible yet authentic interactions
available to the player will diminish. In order to balance these competing factors, it
may be possible to split off historical and counterfactual or mythical places. Figure 2
shows a virtual environment in which students are rewarded for finding historically
valuable artifacts by being sent to an artists impression of the local cultures mythic
version of the underworld. In this reconstruction of a Mayan temple in Mexico, built
with a game engine, the player's staff has magical properties when near certain artifacts.
The upper-right circle is both a rotating map and a Mayan calendar. Hence, the more
experienced user can do what they like in the aesthetic freedom of the mythic world,
as long as they work out the cultural cues of the archaeological reconstruction.

Figure 2: Palenque Project in Unreal Tournament

3.4. emotional richness

When virtual environments are inhabited, the non-player characters (NPCs)
typically lack emotional depth and social richness. The bots (computer-scripted
agents) found in computer games are often added to virtual heritage environments,
but their most meaningful interaction is to stalk (Figure 3). They suggest a social
agency, but they actually function as an extra cognitive load to make the game more
challenging. Typically bots and avatars lack close-up facial expressions16 and the
environments do not provide fuzzy peripheral senses,17 social role recognition18 or
general social awareness.19 Other researchers have noted that a typical screen interface
can create tunnel vision, which reduces awareness of others.20

Figure 3: A bot with path-finding ability, (UT 2004).

To overcome these limitations, we created a game-design scenario where the bots try
to sniff out human impostors. Immediately the bots' low-realism behavior recedes
into the background, as the player must successfully copy local behavior and
rituals without raising suspicion.
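The core of this impostor mechanic can be sketched as a suspicion score: the player must reproduce the locally expected ritual sequence, and deviations raise the bots' suspicion until the human is unmasked. The ritual actions, scoring weights, and detection threshold below are invented purely for illustration; they are not the values of the actual scenario:

```python
# Sketch of a suspicion-score mechanic: bots compare the player's actions
# against the ritual sequence expected in the simulated culture.
# Ritual, weights, and threshold are hypothetical.

RITUAL = ["bow", "offer", "kneel", "chant"]  # expected local behavior
SUSPICION_PER_ERROR = 25
DETECTION_THRESHOLD = 50

def suspicion(player_actions):
    """Total suspicion from mismatches against the expected ritual."""
    score = 0
    for expected, actual in zip(RITUAL, player_actions):
        if expected != actual:
            score += SUSPICION_PER_ERROR
    # Actions the player omitted count as errors too.
    score += SUSPICION_PER_ERROR * max(0, len(RITUAL) - len(player_actions))
    return score

def detected(player_actions):
    """True if the bots unmask the player as an impostor."""
    return suspicion(player_actions) >= DETECTION_THRESHOLD

print(detected(["bow", "offer", "kneel", "chant"]))  # False: ritual copied
print(detected(["bow", "wave"]))                     # True: wrong + missing
```

The design choice worth noting is that the cultural content (the ritual) becomes the gameplay itself: learning it is the only way to stay hidden.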

3.5. cinematic responsiveness


Typically, virtual environments are not complex in their interactional history; the past
and the present do not intermingle as they do in real places, and the many conscious and
subconscious ways that people leave traces in the world are not conveyed in static 3D
models. Using biofeedback, we could design auras (of color or sound or changing
surface texture) that are affected by the avatar's speed, animation set,
environmentally affected attributes, the player's keyboard decisions (or reaction times),
or their biofeedback.

We have also hooked up biofeedback devices to games (Figure 4), so when the player
draws closer to the spooky places, the eeriness factor increases in relation to galvanic
skin response, and the game shaders are dynamically altered to suggest fading health.

Figure 4: Biofeedback connected to Half-Life 2 game
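The kind of mapping just described can be sketched as a normalization from galvanic skin response (GSR) to an "eeriness" factor that drives shader parameters. The baseline, range, and parameter names below are assumptions for the sketch, not the values used in the Half-Life 2 setup:

```python
# Sketch: mapping galvanic skin response (GSR) readings to shader
# parameters so the environment grows eerier as the player's arousal
# rises. Baseline, range, and parameter names are hypothetical.

BASELINE_GSR = 2.0   # assumed microsiemens reading at rest
MAX_GSR = 10.0       # assumed reading treated as maximum arousal

def eeriness(gsr):
    """Normalize a GSR reading to an eeriness factor in [0, 1]."""
    factor = (gsr - BASELINE_GSR) / (MAX_GSR - BASELINE_GSR)
    return max(0.0, min(1.0, factor))

def shader_params(gsr):
    """Derive shader parameters: drain colour and thicken atmosphere
    as eeriness rises, suggesting fading health."""
    e = eeriness(gsr)
    return {
        "saturation": round(1.0 - 0.7 * e, 2),   # desaturate the scene
        "vignette": round(0.8 * e, 2),           # darken the edges
        "fog_density": round(0.1 + 0.4 * e, 2),  # close in the空 atmosphere
    }

print(shader_params(2.0))   # at rest: full colour, minimal fog
print(shader_params(10.0))  # maximum arousal: heavily faded
```

The loop closes on the player's own body: the more anxious the reading, the eerier the rendering, which in turn feeds the anxiety.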

4. conclusion

Teaching architecture (design, history, theory, construction, user-experience, etc.)
through simulating traditional forms of learning by doing is an incredibly
understudied research area and is of vital importance to a richer understanding of
place.21,22 However, the actual spatial implications of siting learning tasks in a virtual
environment are still largely un-researched, as typical evaluations of virtual
environments have been relatively context-free, designed for user freedom and
forward-looking creativity. It is much more difficult to create a virtual place that
brings the past and present alive without destroying it.

Cultural presence, world, place, and interaction are all inter-related in the real world;
it is up to us as architects and designers to interpret and transfer their power and
potential to virtual environments. In doing so, we should be more critical of superficial
use and application of these critically important key terms.

Be they virtual reality systems or game editors, the digital media technologies available
to architects can produce virtual places that are evanescent, ephemeral, experientially
immersive and atmospheric. In return, architects could show our digital media
colleagues the importance of key architectural concepts of inter-related and
interstitial space, inhabitational wear and tear, territoriality, kinesthetically learnt
narrative, proprioceptive feedback, phototropic signifiers, and head-tail spatial design.

There is much more work to be done in designing procedural decay and user-based
erosion, learning via construction inside the environment, and engaging alternatives
to violence. However, the next phase in virtual world design is before us, and
architects can do much more than design pretty polygons.


Coyne, R. "The Embodied Architect in the Information Age. Richard Coyne Inaugural
Lecture Delivered 16 February 1999 at the University of Edinburgh." University of
Edinburgh, http://www.caad.ed.ac.uk/Coyne/Inaugural/
Sardar, Z. "Alt.Civilizations.Faq: Cyberspace as the Darker Side of the West," in
Cyberfutures: Culture and Politics on the Information Superhighway, edited by Z.
Sardar and J. Ravetz, pp. 14-41. London: Pluto Press, 1996, p. 19.
Suzuki, H. "Introduction," in The Virtual Architecture - the Difference between the
Possible and the Impossible, edited by Ken Sakamura and Hiroyuki Suzuki.
Tokyo: Kenchiku Hakubutsukan, Yonsei University, 1997.
Adams, E. "The Philosophical Roots of Computer Game Design."
Gillings, M. "Virtual Archaeologies and the Hyper-Real," in Virtual Reality in
Geography, edited by P. Fisher and D. Unwin, pp. 17-32. London & New York:
Taylor & Francis, 2002.
Virtual Worlds News. "Autodesk Developing Immersive Game Engine for Architects."
game-engine-for-architects.html Also see Varney, Allen. "London in Oblivion."
Holzer, D. "The Inclusion of Computational Tools", paper presented at the First
International Conference on Critical Digital: What Matter(s)? Harvard 2008, 17-22,
quote from p. 18.
Wigley, M. "Gamespace," in Space Time Play: Computer Games, Architecture and
Urbanism: The Next Level, edited by Friedrich von Borries, Steffen P. Walz and
Matthias Böttger, 484-87. Basel: Birkhäuser, 2007.
Dick, P. K. The Three Stigmata of Palmer Eldritch. 2007 ed. Kent: Orion Publishing
Group, 1964.
Nitsche, M. Video Game Spaces: Image, Play and Structure in 3d Game Worlds.
Cambridge, Massachusetts: MIT Press, 2008.
BBC, Interactive Animations,
http://www.bbc.co.uk/history/interactive/animations/ (not dated). See especially
Newman, G. "Gary's Mod." Mod DB, http://www.moddb.com/mods/4408/garrys-
Brown, B., and Bell, M. "CSCW at Play: 'There' as a Collaborative Virtual
Environment," paper presented at the 2004 ACM conference on Computer
Supported Cooperative Work, Chicago, Illinois, USA, 2004.
Mitchell, W. L., Economou, D., Pettifer, S. R., and West, A. J. "Choosing and Using a
Driving Problem for CVE Technology Development," paper presented at the ACM
Symposium on Virtual Reality Software and Technology, Seoul, Korea, 2000.
Ducheneaut, N., and Moore, R. J. "The Social Side of Gaming: A Study of Interaction
Patterns in a Massively Multiplayer Online Game," paper presented at the 2004
ACM conference on Computer Supported Cooperative Work, Chicago, Illinois, USA, 2004.

Fabri, M., Moore, D., and Hobbs, D. "Mediating the Expression of Emotion in
Educational Collaborative Virtual Environments: An Experimental Study," paper
presented at the Virtual Reality conference, 2004.
Fraser, M., Benford, S., Hindmarsh, J., and Heath, C. "Supporting Awareness and
Interaction through Collaborative Virtual Interfaces," paper presented at the
12th annual ACM Symposium on User Interface Software and Technology,
Asheville, North Carolina, USA, 1999.
Ducheneaut, N., and Moore, R. J. "The Social Side of Gaming: A Study of Interaction
Patterns in a Massively Multiplayer Online Game," paper presented at the 2004
ACM conference on Computer Supported Cooperative Work, Chicago, Illinois, USA, 2004.
Prasolova-Førland, E., and Divitini, M. "Supporting Social Awareness: Requirements
for Educational CVE," paper presented at the ICALT 2003 3rd IEEE International
Conference on Advanced Learning Technologies, Athens, 2003.
Yang, H. "Multiple Perspectives for Collaborative Navigation in CVE," paper presented
at the CHI '02 extended abstracts on Human Factors in Computing Systems,
Minneapolis, Minnesota, USA, 2002.
Roussos, M., Johnson, A. E., Leigh, J., Vasilakis, V. A., Barnes, C. R., and Moher, T.
G. "NICE: Combining Constructionism, Narrative and Collaboration in a Virtual
Learning Environment," SIGGRAPH Computer Graphics 31, no. 3 (1997): 62-63.


Session 5 Identity:

Moderators: Ingeborg Rocker and Zenovia Toloudi

Jose Cabral Filho

Beyond Transgression: a playful future for digital design

Mirja Leinss
Making meaning of technology - technology as a means

Han Feng
Quantum Architecture: An indeterministic and interactive computation

David Celento
(Digital) Rock, Paper, and Scissors and Stork?

Roel Klaassen
Mind the Mainstream!


Beyond Transgression: A Playful Future for Digital Design

Jose dos Santos Cabral Filho

School of Architecture, Federal University of Minas Gerais, Brazil
cabralfilho@gmail.com, jcabral@arq.ufmg.br

This paper argues that interaction and automation present two diverse ways of approaching
the question of being in the world and, therefore, inform key aspects of contemporary
design trends. Interactivity leads architecture to a high degree of openness and brings as its
social appeal the idea of inclusiveness and participation. Automation, on the other hand,
brings to the design milieu concepts such as generative and evolutionary architecture,
where a high degree of mechanical articulation transforms an input with low significance
into a complex and varied output. The uncritical adoption of these two concepts leads to an
alienated profession that seeks to design an objectified world, especially if coupled with
transgressive strategies, which since the Greeks have been the privileged way to deal with the
uncertainty necessary for the emergence of novelty. However, play and games, taken as a
framework for the coexistence of determinism and indeterminism, offer an alternative way
of seeking innovation without resorting to transgression. The paper, then, proposes a wide
adoption of play in digital design as a way to overcome transgressive strategies without
losing hope in the necessary uncertainty of the future.

1. Bodily disengagement and the loss of complicity

To a great extent, the problems of contemporary western culture can be traced back to a
bodily disengagement that permeates most of our activities and our relation to the world.
More than ever we are living in an increasingly abstract realm of technology, which has
been leading us into unusual experiences of time and space, with positive and negative
consequences. The way we now deal with places and people, mediated by digital
technologies, is no doubt more intense and extended if compared with the scenario of a few
decades ago. The idea of expanding our reach of the world is so widespread that the term
'augmented' has caught people's imagination, and the concept of augmented reality replaces
the aged fascination with virtual reality. However, as John Thackara1 points out, we are
facing an innovation dilemma, as there is no direct correlation between the increase of
technological development and the quality of our daily life. There are multiple aspects of
such a dissimilarity, but one of the most fundamental is exactly the fragmentation of our
experience.

The cost of augmenting and expanding our reach of the world is exactly the fragmentation
of our experience of the world, the fragmentation being a necessary condition for such
achievement. Moreover, the requisite condition for this fragmentation is a bodily
disengagement, which has been recently taken to its extreme. Thus, if in the Renaissance, a
disembodied eye was the condition for the objectification of the world as image
accomplished by the perspectival paradigm, the condition for the current objectification of
the world as pure data is a fragmented and disembodied body. As a result of this
suspended body we experience the world less and less with the fullness of our senses, and
increasingly via all kinds of digital and reduced representations.

In other words, we are amplifying our reach of the world at the expense of our bodily
engagement in the very experience of reaching the world, losing an intuitive sense of
complicity with the world we inhabit. In principle it is not a problem in itself, but the matter
turns into a paradoxical question: on the one hand we want to recover the complicity with
the world, as shown by all discourses on the body; on the other hand, we are not willing to
sacrifice the newly acquired ability to experience the world in an expanded fashion, as
shown by the prompt adoption of the various techniques and gadgets brought into the
market. This paradox, in a way, is a crucial crossroads for contemporary culture, digital
design included.

2. Transgression and the advantages of disembodiment

For certain, the fragmentation and disembodiment of experience we are presently
witnessing has historical roots. As Hakim Bey reminds us, the search for the immaterial
and the disembodiment of the body is not a new phenomenon that emerged with digital
technology. He argues that civilization has always been characterized by the idea of shifting
all value from body to spirit.2

In fact, disembodying the body seems to have many advantages and it becomes clear if we
look at the dawn of Western culture, back in Greece, when theatre became more prominent
than ritual. Ritual and theatre are two primeval activities that by means of diametrically
opposed strategies, allow us to charge the world with cultural meaning. While ritual
reaffirms one fixed picture of the world, usually confirming the validity of a specific
cosmogony, the theatrical performance plays with change and transformation,
experimenting with new world visions. Ritual has a conservative quality aiming at the
preservation of values, while theatre, on the other hand, brings about, within the stage
realm, the possibility of innovation by means of transgressing and replacing established
models.3 Therefore, it would not be far-fetched to say that transgression in theatre is a
rehearsal for the Western adoption of linearity and progress.

The way theatre attains such a privileged status is by disengaging the bodies of the majority
of its participants, who are transformed into an audience by having a restricted bodily
participation, as they are detached from what happens on stage. As Pérez-Gómez and
Pelletier4 put it, the architectural separation between stage and audience points to the
establishment of a critical distance that would mark Western culture from then on, allowing
for the rise of author and actors, and therefore creating the possibility of a constant
intellectual improvement of theatre.

Contrary to the circularity of ritual, the transgressive action projects itself against its own
limits and boundaries, opening space for rupture and experimentation towards the unknown
and the uncertain. That is probably the reason Western culture has favored transgression:
as a privileged way to experiment with new world visions, as a way to deal with the
uncertainty necessary for the emergence of novelty. Thus, if a bodily disengagement has
played this important role in the development of Western culture, and if by means of
transgression it goes hand in hand with the emergence of novelty, it should come as no
surprise that the extreme and unprecedented innovative period we are living in, comes
associated with a likewise radical and unparalleled bodily disengagement.

3. Interactivity, automation and Architecture organism, structure or machine?


The current willingness to accept the fragmentation and the disembodiment of experience
has a direct association with two concepts that permeate most contemporary discourses,
especially in the design field: interaction and automation. The present use of both concepts
is grounded in the early discussions on communication and control carried out by
Cybernetics in the 1940s. Even so, they are charged with more ancient and deeper
meanings, dealing with non-determinism and tackling the uncertainty of our being in the
world. Interactivity deals with the question of otherness and the possibility of dialogue, in
opposition to discourse. The original drive behind it is our absence of knowledge about the
other. Automation, on the other hand, deals with the question of knowledge of the world
and the possibility of human invention, in opposition to God's creativity. The original drive
behind it is our lack of knowledge about the origin of life (and, by extension, the meaning of
life).

The richness and subtlety of these meanings will be kept as the basis for a digital interplay
of interactivity and automation. Moreover, the computer's potential for keeping the formal
and logical coherence of any given system, allowing space for informality to emerge without
destroying the system's integrity, will inform key aspects of current design trends,
provoking specific displacements in the architectural practice. At least four displacements
can be identified:

(i) Displacing architectural representation - from descriptive geometry to experiential model.

The possibilities of having a new kind of representation highly automated and with an
openness to interactivity has promoted this shift from fixed description of geometries to
abstract models open to be experimented and tested, and as Vilm Flusser puts it, waiting
to be stuffed with matter.5 The readiness to accept manipulation via parametric settings
makes architectural representation more similar to machines than conventional drawings.
However, the majority of practitioners are using these new capabilities to pursue the
culmination of perspectival tradition, collapsing the space between drawings and
construction. In fact, the parametric packages, coupled with recent developments in
CAD/CAM and the wide adoption of file-to-factory strategies, seem to point to a linear
cause-and-effect process of producing architecture, as dreamt of by architects since
Brunelleschi. In spite of this, the real innovation brought about by the conjunction of
automation and interactivity in the field of representation is the anti-Newtonian possibility of
having a viable indetermination between drawings and buildings. It is possible now to have
a formalized process of production that does not have a prediction of its final result.

(ii) Displacing architects' creativity - from designing objects to designing processes. This
shift from the shape of objects to the form-generating process results directly from
developments in computational procedures based on iteration and feedback applied to
architectural representation. In other words, we can now include time in representation,
which opens up a new field where processes can be modeled and visualized. As Kwinter puts
it, this may "unlock the door on the universal laws that govern the appearance and
destruction of form, and in so doing free us from the multiple tyranny of determinism and
from the poverty of a linear, numerical world."6

As a consequence, we have designers working with automated interaction, seeking an
architecture that presents a high degree of openness and indetermination, responding to the
social appeal of inclusiveness and participation. The designer's role, in this case, is totally
redefined, and his or her responsibility is limited to programming the degree and fashion of the
sought openness. On the other hand, interaction-based automation brings to the design
milieu concepts such as generative and evolutionary architecture, where a high degree of
mechanical articulation (be it a hardware or software mechanism) transforms an input of
low significance into a complex and varied output. The social and mythical appeal of
automated design is liberation from the workload by handing it over to a non-human
entity. The designer's responsibility, in this case, is limited to programming the sequence of
algorithmic operations and its iterative process, defining the way it loops and where
feedback will occur.
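The kind of automated, iterative process described above can be sketched, purely as an illustration, with a stochastic rewriting system: the designer's only contribution is the rule table and the loop, while the algorithm turns a minimal seed into a varied, patterned output. The rules and names here are hypothetical and are not taken from any of the projects the paper discusses.

```python
import random

# Hypothetical rewriting rules: the designer's whole "program".
# Each symbol may rewrite to one of several alternatives, chosen by chance.
RULES = {"A": ["AB", "BA"], "B": ["A"]}

def generate(axiom="A", iterations=6, seed=None):
    rng = random.Random(seed)
    state = axiom                                  # input of low significance
    for _ in range(iterations):                    # the iterative loop
        state = "".join(rng.choice(RULES.get(ch, [ch])) for ch in state)
    return state                                   # complex, varied output

# Two runs with the same rules but different seeds differ in detail.
print(generate(seed=1))
print(generate(seed=2))
```

Because both alternatives for "A" have the same length, every run grows at the same Fibonacci rate (21 symbols after six iterations) while the internal pattern varies from run to run: a small analogue of a fixed process with an open outcome.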

(iii) Displacing the architectural object - from building as resistance and opacity to building
as plasticity and communication. The combination of interactivity and automation opens the
architectural object to a new level of responsiveness and interactivity. The architectural
place, which was traditionally the locus for fixing meaning and prescribing behaviors, can now
be characterized as a relational void, which may support social interaction in its vast
pragmatic and symbolic extensions without falling into the traps of environmental determinism.
In short, architecture may recover its historical role as an ethical instrument that articulates
people amongst themselves, while articulating them to the physical world.

(iv) Displacing the dweller's identity - from being as user to being as architectural subject. An
interactive and automated architecture, where plasticity and communication are fundamental
to the building, ends up shifting the dweller's identity. With this shift, to dwell is no longer
linked to the individual but to the subject, an inhabitant who makes and unmakes him or
herself in consonance with the architecture as interface. To dwell becomes more associated
with desire than with a predetermined functional structure. Thus, if we want to take up Vilém
Flusser's proposal, we should consider the dweller not only as subject but as a proper
project, meaning someone who ceases to be subject to conditioning and turns into
someone who defines and projects his or her own future and persona.7

These four displacements are imbued with two opposite tendencies, roughly associated
with interactivity and automation respectively. On the one hand, we are witnessing an
architecture that takes advantage of indetermination, which is now a feasible option in the
practice of architecture. It is thus possible to accommodate the presence of people's
informality and creativity, which have been historically restrained and put aside, not only for
ideological motivations but also for operational reasons. In this case, the four displacements
end up rendering architecture at once an organism (that evolves), a structure (that
accommodates informality) and a machine (that interfaces us with the physical world).

However, on the other hand, it seems that the majority of architects are taking these shifts
not as displacements but just as the replacement of an old model; in short, as continuation
and optimization of transgressive strategies. Representation can now be accurate to an
extreme level; creativity may be automated to reproduce worn out typologies; interactive
architecture may lead to frivolity and eye-catching ambiences; and the dweller may just
consume architectural tendencies as fashion information. As a result, we are witnessing a
revival of the objectifying aspects of architecture, which leads to an unparalleled level of
spectacularity that can be seen as the culmination of the Renaissance perspectival
paradigm. In short, the continuity of the perpetual transgressive design strategy,
particularly when accelerated by computers, may lead only to the exhaustion of resources
and to the anesthesia of perception.

4. Play and games: framework for uncertainty

This contemporary prominence given to the objective and deterministic qualities of
architecture, even when we have every means to overcome it, is clear evidence of how
entangled we are in transgressive strategies. The main difficulty in accepting and
encompassing indetermination in activities that work under the idea of planning, like the
production of architecture, is that it is hard to envisage a structure that accommodates
both determinism and indeterminacy. However, we already have a cultural structure,
pervasive and available, that combines a predefined deterministic structure with the presence
of informality and chance: games.

Games and play are among the most ubiquitous and fundamental activities of human beings
and are present in all known civilizations, as J. Huizinga has shown in his seminal book
Homo Ludens.8 The French sociologist Roger Caillois provides us with six features he
considers essential for distinguishing games from other activities: games are free,
separate, uncertain, unproductive, governed by rules, and make-believe.9 Furthermore,
Caillois proposes a comprehensive and enlightening classification of games: games of
chance, vertigo, simulation and competition.

The quality of being uncertain despite being governed by rules seems to point precisely to
the framework needed for developing feasible strategies for the employment of
computation in the production of architecture. Despite being based on rules, which are
arbitrary and not subject to indeterminacy, the outcome of games cannot be known
beforehand, unless a player is cheating. In this sense, all games are activities that allow the
coexistence of determinism and non-determinism: games of chance by definition include the
idea of indeterminacy; games of vertigo deal with indeterminacy by inducing an
uncontrollable confusion of the senses; games of competition include indeterminacy in the
form of the unpredictable abilities of the competitors; and games of simulation present a
particular type of interplay between rule and indeterminacy, where the gap between the
scripts/rules and the interpretation/gaming provides the sense of uniqueness each time it is
performed/played.10 Thus, if we consider games as formal structures open to uncertainty,
and if we ally them with the computer's capability of intertwining interactivity and
automation, games turn out to be a propitious scenario for dealing with the informal and
unspoken aspects present in the complex interrelationship of beings and their built
environment.
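As a toy illustration (my own, not from the paper), the coexistence of fixed rules and an unknowable outcome can be shown in a few lines of code: the rules below never change, yet the winner of any single match cannot be deduced in advance.

```python
import random

# A minimal rule-governed game of chance: first player to reach 20 wins,
# but rolling a 1 wipes that player's score. The rules are fixed and
# arbitrary; the outcome of any one match is uncertain.
def play(rng):
    scores = [0, 0]
    turn = 0
    while max(scores) < 20:
        roll = rng.randint(1, 6)
        scores[turn] = 0 if roll == 1 else scores[turn] + roll
        turn = 1 - turn                      # alternate players
    return scores.index(max(scores))         # index of the winner

rng = random.Random()
winners = [play(rng) for _ in range(1000)]
print(winners.count(0), winners.count(1))    # varies from run to run
```

The determinism lives entirely in the rules; indeterminacy enters only through the dice, yet that is enough to make every match unique.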

5. Conclusion - A future beyond transgression.

Even though we are witnessing significant shifts in the design practice, it seems that
Renaissance tendencies are still holding architecture within the perspectival paradigm, as an
objectifying and prescriptive discipline. It is the argument of this paper that one of the
reasons for such tendency is the fear that by overcoming architectural determinism we may
lose access to the path of innovation. This is justifiable only if we take transgression as the
only way to project ourselves towards the unknown. If we consider play and games as a
structured framework for the interplay of determinism and indeterminism, this fear may be dispelled.

In fact, if on the one hand digital technology, with its endless waves of innovation, has
brought transgression to an apparent dead end, it has also brought about a renewed
interest in games and play. What is really relevant is that games are paradoxical: they are
ritual activities (meaning they don't rely on transgression), they have fixed rules, and yet
their outcome is necessarily unpredictable. Moreover, we are already living in a context in
which play has achieved a more prominent role than in any previous stage of Western
culture, due to the potential of coupling it with interactivity and automation by means
of digital technologies. Thus, if we make a far-reaching adoption of play into digital design,
taking games as a framework for the coexistence of determinism and non-determinism, it may
be possible to overcome transgression in the design of our environment without losing hope
in the necessary uncertainty of the future.

References
1. Thackara, J., "The Design Challenge of Pervasive Computing," available at
2. Bey, H., "The Information War," Mediamatic, vol. 8, no. 4, pp. 56-69.
3. Rostas, S., "The Dance of Architecture: from ritualisation to performativity and ... back
again?" Architectural Design - Architecture and Anthropology, vol. 66, no. 11/12, Nov./Dec., pp.
4. Pérez-Gómez, A. and Pelletier, L., Architectural Representation and the Perspective Hinge,
Cambridge: MIT Press, 1997.
5. Flusser, V., The Shape of Things: A Philosophy of Design, London: Reaktion Books, 1999.
6. Kwinter, S., "The Cruelty of Numbers," ANY, no. 10 (Mech in Tecture), pp. 60-62.
7. Flusser, V., A Dúvida, Rio de Janeiro: Relume Dumará, 1999.
8. Huizinga, J., Homo Ludens, New York: Roy Publishers, 1950.
9. Caillois, R., Man, Play, and Games, New York: The Free Press, 1961, p. 9.
10. Cabral Filho, J., "Flip Horizontal: Gaming as Redemption," M/C: A Journal of Media and
Culture 3, no. 5 (2000), <http://www.api-network.com/mc/0010/flip.php> (accessed 29/03/2009).

Making meaning of technology through Design

Mirja Leinss
Vodafone Group Services, User Experience, Germany


Technology inspires design. Technology requires the designer to turn it into something
relevant to people, to design functions and interfaces and to make use of its properties in a
specific context. Technology per se mostly brings no added value; it is the meaningful use
of technology that can actually turn it into an innovative product or environment users will
like. In a world where technology is ubiquitous and affordable, it is a key factor in the
evolution and life cycle of products, environments and services.
Against this background, the discipline of design has become more and more a fundamental driver of
product development, product differentiation and business strategy over the last few years.
Despite this, there is no single definition or common understanding of design, but there
now seems to be an additional emerging task for designers: to bring the "human
factor" to an engineering product, process or service, to make it valuable to people and
successful for businesses.
This human role is a fantastic opportunity and, considering the sheer amount of factors, a
challenge for the design discipline. And it raises requirements for a designer's role,
education, experience and responsibility.

1. The evolution of the design process

Looking at the design process from a "form follows function" perspective, where tools and
technology were not exclusively the driver of the design process, the designer today has the
novel task of successful reverse engineering. Essentially, a "functions need meaning" process
is now a very common task for designers. In this reversed process, technology is the
starting point for designing new products and services using the functions and features of the
technology. Technology can open up new possibilities to the designer and inspire new
products, services and environments. Although this comes with the risk of failure, it also comes
with the chance of a breakthrough.

Different cultures have shaped their definition of design and its evolution over time, based on
their economic, artistic and historic conditions.
Traditionally, design practice and its definition and perception in society have been very close
either to the arts, as in France with the Art Deco movement of the 1930s, or to
engineering, depending on the respective cultural environment. Italian design historically
dealt for the most part with decorative "styling," German and Swiss design has been closer
to functional engineering, and the Bauhaus already aimed at infusing industrial artefacts
with the aesthetics of modern art. Today, design is not the styling component at the end of
a process, nor is it only about form giving; it is about giving meaning: meaning to technology,
utilizing and transforming it into something that is meaningful to a user and to society,
something that creates loyalty and identity and therefore revenue for businesses.
In a global marketplace, defined by long human networks, permanent information
exchange, global competition, economic dependencies, ecological challenges and changes in
people's work and lifestyle, designers face a tremendous challenge in keeping up with
this environment in order to get the product right.
Right for the target group, for social circumstances, for cost, for timelines, for sustainability,
for scalability, taking into consideration the short-term and long-term consequences and
their impact.
How can one create products for this complex world? Advanced design tools, such as those used for
animation and effects, CAD/CAM programs, algorithmic architecture or other generative
programs, help to generate an output and cope with complex input. They require specific
skills to use, but they are just tools for the holistic thinker, as much as pen and paper are
tools - even if easier to use. There is a need for highly skilled specialists among the
generalists to translate complex ideas through computational processes into products.
Overarching all this, however, there is a need for a human mind that understands the logic of the
process and can define the right input to get the desired output.
Quantitative and qualitative methods to gain user insight and experience in the field and
skills to translate those into design principles and frameworks belong to the standard
repertoire of a designer. Additionally a designer needs to be able to create an output that is
contextually relevant, sustainable and compliant with core values and conventions of
society. He needs to understand technology and know its core properties.
Designers need requirements; this is part of their task, and the challenge lies in identifying
them and coming up with the best possible, suitable, convincing solution within the given
field. Technology by nature comes with clear restrictions and possibilities. It has become a major
task for designers to develop usable and engaging products, environments and services that
make use of new technologies. New technologies come with novel properties and
opportunities for the designer to explore and utilize.
The tasks and skills of a designer have become more complex and have shifted from a
craftsmanship-oriented practice to a human-centric problem-solving discipline.

2. The competitive market and innovation through design

The problem is that there are often already many solutions before there is an actual
problem; new technology can be a solution, but someone needs to identify the problem
to which the solution can be applied. Designers are good at identifying such needs, based on
user research, experience and the capability of divergent thinking.
20th-century thinking was successful at getting efficient, affordable products onto the
market, quickly becoming a valuable commodity for the customer. So there is no competitive
advantage to be achieved here by bringing out another functional product at a reasonable
price to meet a short-term demand. Long-term demand, though, requires businesses to
take a more fundamental approach and look carefully at the world's big problems and
phenomena, such as overpopulation, hunger, scarcity of natural resources, pollution, global
warming, globalization, fragile economic systems, omniscient customers, and others.
Customers expect products to be meaningful and businesses to be ethical, with a human
management style.
To deliver on these expectations and gain long-term competitive advantage, businesses need
to combine existing or newly developed assets, such as technology, with open-minded
thinking in order to innovate. They need to develop more sensibility towards the world and
more flexibility in their processes to come up with authentic products people will like.
Creativity and flexibility allow change, change opens up great opportunities to innovate, and
innovation adds value to the business.
In the case of electronic consumer products, for example, the topic of usability became
especially important in the 1980s and 1990s and is still a core consideration for products.
Usability methods - inquiries, testing, personas etc. - known from the area of
Human-Computer Interaction (HCI), help the designer to define the best interfaces. The goal is to
manage the functions and features the technology offers so that the user can interact and fulfill
the use case. Good usability defines the quality of a product and is still a differentiator. But it
is not the only one, and it needs to go hand in hand with something that creates delight for the
user. The combination of relevance, usability and delight creates an innovative
product that holds value for the customer.
Companies with innovation potential often start as technology- and engineering-driven
companies. They invent and develop a core technology that they would like to sell. Marty
Neumeier writes in his book The Designful Company that for long-term profit the starting
point should be design, not technology.i However, technology can actually inspire design and
incentivize an interesting and successful innovation process in which design plays a
fundamental driving role. It is a good thing that technology research and development is
done simply with the goal of exploring what is possible; it can open up many new
possibilities for designing novel and meaningful products.
Designing for and with a technology can challenge and improve the technology. The iterative
design process can help engineers and business managers to overcome standardized
operations and drive innovation. It can raise specific requirements to optimize and get the
most out of the technological invention. Design fosters innovation, innovation creates
brand, brand creates customer loyalty, and this generates long-term revenue.

Figure 1. Process flow, adapted from Marty Neumeier

3. The value of the designer in technology driven businesses

The design process is flexible and iterative; it takes context into account and generates change
by making use of thinking and tools that help to overcome the boundaries between the
technology at the bottom and customer loyalty at the top.
It is tempting to include all the functions and features that technologies offer and to forget about
the relevance, usability and consequences of the outcome. This is where you need the
holistic thinker, the designer, who creates a product before the solution gets implemented.
The designer has the skill set to envision or strategically develop use cases for a
technology, knowing the needs of people and society, knowing established products and taking
advantage of the specific parameters the technology has to offer.
As in business strategy, designers look into processes, frameworks and capabilities.
Design tools help to make abstract ideas more real, communicable or testable, which is very
powerful. Visualisations, models, prototypes or digital simulations of the product help to
make it more tangible, verify its functionality and initiate an iterative optimization process.
The perspective a design thinker brings to the table is out-of-the-box thinking, a human
perspective. This makes the difference between design strategy and classical business
strategy work.
In addition, as a key factor for a sustainable product with long-term impact, the
designer needs to create an emotional connection with people: to achieve loyalty from the
customer or, in the case of architecture and environments, from the people who live in the
space. The designed experience needs to fulfill more than mere functionality; it needs to
create joy of use. The well-known example of the iPod did not start with the goal of inventing
a new music player; rather, Apple created a convergent music experience with iTunes for
your PC and a portable device to take your music with you, and later in the process the
experience was enhanced through the wirelessly accessible music and application store.
The iPod itself is only one artefact in the overall experience.ii
In short, without the designer there would be a business concept and a technical solution -
or technology and a business plan - without both sides' requirements having been
successfully translated into the novel product.

4. Requirements for designers in practice

"Who is the designer today?" is a valid question that still needs an answer. It might be in the
flexible nature of the discipline that even its function is constantly redefined, almost as a
product of its own process, where design responds to the context. If the context changes, the
discipline evolves with it. It is a matter of fact, though, that design is shifting strongly from a sole
craftsmanship function to a strategic function.
What does this mean in terms of requirements? Firstly, the designer needs to understand the
ecosystem he works in, across all its layers: how is the experience going to be perceived,
what are the relevance, logic and intuition of the experience, what are the ideals and the
values that the experience incorporates, and what does a user feel when interacting with the
product? Secondly, the designer has to understand the vision and strategy of the company
he works with; he has to know the successful products on the market; he has to understand
brand behaviour and communications as well as the company culture in order to understand
and integrate with people and processes. Thirdly, he has to take the context and his
experience into account, and fourthly, he has to master the appropriate design tools and be able
to interact and communicate the design work within the company or towards the client.
Logical thinking is needed for grounding; intuitive thinking is needed to see the whole
picture and to go beyond the standard.
Designers tend to be idealistic, empathetic, imaginative and intuitive. To actually deliver
products for the real world, grounding through logical thinking is necessary, as well as the
ability to plan processes and deliver. A fundamental interest in people and interpersonal
skills are crucial to understanding the audience they design for. Ned Herrmann's four-quadrant
brain model of thinking styles (logical, organized, imaginative and interpersonal) describes
these characteristics, whereby usually one of the four areas tends to be dominant. Designers
often have a predominance of the intuitive, upper-right quadrant, but need to show thinking
in all four areas.iii

Figure 2. The Whole Brain Model; Herrmann, N.

Talking about technology inspiring design, an interest in technology and an understanding of
the context (political, economic, societal) are part of the basic requirements. The designer
also needs to be a fast learner, responsible and user-oriented; only then can he logically
develop scenarios for products or environments that make use of new technology, evaluate
their impact on the user and society, and conclude and design a meaningful product. This
involves all four quadrants and requires flexibility and the prioritisation of one thinking
process over another, depending on the phase of the design.

5. Conclusion

Design as a discipline and function is experiencing a true hype, but it is not the solution to
every problem, and it is no guarantee of successful innovation, projects and products.
With the shift in the nature of design described in the first section of this paper, the designer
takes cradle-to-grave responsibility for the experiences he designs: "Our conscience will
demand it, our environment will require it, and, can you believe it, our clients will insist on
it."iv In a business world that focused for too long on short-term profit, a change in
focus towards long-term achievements is definitely more desirable. From a customer
perspective, loyalty can be created through human-centred core values, such as trust,
ethical and environmental responsibility, or authenticity. An intuitive thinker who has an
understanding of people and an interest in society will inspire and contribute to this change.
However, he cannot be held entirely accountable for the success or failure of the result.
Design depends on all the other functions, from technology to business.
The culture of companies defines the roles for each function, and the balance and
collaboration between them.
In terms of identity and ownership of a design, mastery of advanced design tools is not the
crucial factor, since the scope of design goes beyond craftsmanship. But the compelling
creative part of the design process involves skilled craftsmen who support and contribute
to the result.
Designers need inspiration and restrictions, from business, society or technology, to create
something valuable; that is their job. New technologies open up new opportunities, and the
designer, with his imaginative idealism, is the ideal partner to put them into the right context
and make them meaningful to people.

i. See Neumeier, M., "Design, Design, Where Art Thou?" in The Designful Company, New
Riders, 2009, pp. 12-15.
ii. See Brunner, R. and Emery, S., "How to Matter" in Do You Matter? How Great Design Will
Make People Love Your Company, FT Press, 2009, pp. 40-51.
iii. See Herrmann, N., The Whole Brain Business Book, McGraw-Hill, 1996.
iv. See Heller, S. and Vienne, V., "In a Continuous State of Becoming" in Citizen Designer:
Perspectives on Design Responsibility, Allworth Press, 2003.

A Non-deterministic and Interactive Computational Design System

Han Feng
Hyperbody, Architecture Faculty, TU Delft, the Netherlands

The evolution of computational design techniques, from the mere substitution of hand drawing to
customized algorithms exhibiting a certain degree of intelligence, has brought up not only
great design variety, but also the demand for a critical study of the relationship between
human designers and their customized design algorithms. Most current customized
architectural design algorithms adopt a deterministic paradigm for raising design questions;
that is to say, given explicit rules and parameters, only one solution is allowed at each
discrete computation step. This design philosophy may work perfectly for strictly defined
design problem solving and optimization, but due to its deterministic nature, an efficient and
progressive communication between design algorithm and designer is hard to achieve, as
there is no need for the designer to step into the running generative process. This lack of
communication, and the inefficiency of translating perceptual judgment into computer
language, directly results in the unconscious rejection of non-parameterizable design
factors like intuition, aesthetics and associational reasoning that are essential to any design
activity. It is on this ground that this paper introduces the quantum design paradigm as a
substitute for constructing an interactive design system. An algorithm prototype, the
probability field, will be introduced to illustrate the logic and possible application of the
proposed quantum design paradigm.

1. Interactive design system

The evolution of computational design techniques, from the mere substitution of hand drawing to
customized design algorithms exhibiting a certain degree of intelligence, has brought up not
only great design variety, but also the demand for a critical study of the relationship
between human designers and their customized design algorithms. As shown in figure 1, this
relationship can be mapped as a one-way command-and-execution relationship, which
relies heavily on the automation of the design algorithm, while limiting the input of the human
designer to mere programming. This relationship can, however, be set up as an interactive
process, in which the human designer not only programs the design algorithm but also actively
participates in an interactive computation process to bring in a set of non-parameterizable
design inputs, such as intuition, aesthetics and associational reasoning1 that are essential to
any design activity.

Figure 1. Designer and design algorithm relationship.

This paper will discuss one possible way to construct such an interactive design system by
introducing the probability field algorithm prototype.2 A brief examination of the limitations
of deterministic algorithm structures will be provided first, to reveal the necessity of
introducing a non-deterministic and interactive design paradigm, namely the quantum design
paradigm.

2. The limitation of deterministic algorithm structure

In mathematics and computing, an algorithm is defined as a finite sequence of
instructions. It is formally a type of effective method in which a list of well-defined
instructions for completing a task will, when given an initial state, proceed through a
well-defined series of successive states, eventually terminating in an end-state.3

In light of this definition, most current customized architectural design algorithms are
constructed as a series of computation instructions organized by a predefined structure,
while the transitions between instructions are rigidly regulated by if-then logic. For such
algorithms, given explicit rules and parameters, only one computation solution is allowed at
each discrete computation step; in other words, the computation question is formulated to
expect only one rational answer. This algorithm structure may work independently for
strictly defined design problem solving and optimization, but due to its deterministic nature,
an efficient and progressive communication between design algorithm and designer is very
hard to achieve: there is no need for the designer to step into the running computation
process, and no reason for the design algorithm to provide feedback to the designer except
for an error message. This lack of communication, and the inefficiency of translating
perceptual judgment into computer language, directly results in the unconscious rejection of
non-parameterizable design factors and reduces the customized algorithm to a design
problem-solving technique instead of a design tool realized to its full potential.
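The contrast drawn here between a deterministic transition and one left open to external choice can be sketched in a few lines of Python. This is an illustrative reading of the argument, not code from the paper; the state names and rule tables are invented:

```python
import random

def deterministic_step(state, rules):
    """Classic if-then transition: exactly one successor per state."""
    return rules[state]  # always the same, predetermined answer

def probabilistic_step(state, options, rng):
    """Non-deterministic transition: several rational successors stay open,
    and an external choice (random here; the designer in the paper's
    proposal) resolves them."""
    return rng.choice(options[state])

rules = {"empty": "wall"}                       # hypothetical if-then rule
options = {"empty": ["wall", "window", "void"]}  # hypothetical open options

assert deterministic_step("empty", rules) == "wall"
assert probabilistic_step("empty", options, random.Random(0)) in options["empty"]
```

The deterministic version leaves the designer nothing to do once the rules are written; the second version is where an interactive system can insert a human decision.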

Thus, in order to bring in personalized design input from the human designer, it becomes
necessary to search for an alternative computational design paradigm and to develop a novel
computation structure that can challenge the binary nature of the current algorithm structure.

3. Quantum design paradigm

Opposed to an independent algorithm setup based on deterministic transitions
between instructions, the quantum design paradigm advocates an interactive design system
setup that includes both the design algorithm and the human designer as indispensable
parties within the same structure. In order to better discuss the quantum design paradigm, a
brief introduction to the quantum paradigm4 in general will be necessary.

3.1. A brief introduction on quantum paradigm

Quantum theory, as the most successful explanation of our physical world, has triggered not
only tremendous technical improvement but also an emerging scientific worldview: the
quantum paradigm. The quantum paradigm is constructed from a set of correlated
philosophical reflections on the nature of the universe and our relation to it. As stated by
wave-particle duality5 and the Heisenberg uncertainty principle, observed reality is a
statistical representation of a relative reality whose uncertainty derives from the irreducible
fuzziness of its basic building blocks, particles. According to the quantum paradigm, nothing
can be said for certain about a physical system other than a probability wave function that
can only be described with statistics, while any attempt to probe the configuration of a
quantum system will collapse its wave function. This is to say that the very act of
observation is actually an interaction between the observer and the observed system that
not only yields a reading of the current configuration but also reconfigures the system itself.
Thus the objective belief in an absolute and deterministic reality is put into question, and
replaced with a relative reality that potentially includes consciousness as a part of it.6

Besides the concept of absolute reality, the principle of locality has also been violated by
quantum theory. Derived from Einstein's relativity theory, the principle of locality states
that physical processes occurring at one place should have no immediate effect on the
elements of reality at another location. As proven by a series of experiments, the opposite
of this principle is well demonstrated. The violation of the principle of locality promotes new
ideas of understanding the universe with more than three dimensions, which were further
developed into the many-worlds interpretation and superstring theory, which, among many
others, are both important explanations of quantum reality.

3.2. Key concepts from quantum design paradigm

Besides many other philosophical impacts, such as on the validity of causality, the notion of
time, and the nature of order, the quantum paradigm reveals an interactive relationship
between the observer and the observed reality, which can be condensed into an attractive
quantum design paradigm for setting up a novel computation system involving bi-directional
interaction between designer and design algorithm. The study of the quantum design
paradigm yields a set of interrelated concepts for setting up such an interactive computation
system. This paper will discuss two key concepts that have been tested with the probability
field algorithm prototype.

3.2.1. Probabilistic description method

In quantum physics, probability is a definable non-deterministic function based on
assumptions about the system's elements. Due to the internal fuzziness and correlated
event potentiality of any quantum system, it becomes necessary to introduce a probabilistic
description method to depict the state of tendency of a quantum system with incomplete
information. An illustration of the probabilistic description method can be found in figure 2,
a sequence in increasing time after the launch of two wave packets in a weak random
potential landscape outlined by dark high potential barriers. In this artwork,7 the color
temperature of the 2D space represents the probability of the electrons being at that
particular spot at a particular point in time, instead of a definite indication of their positions.
Figure 2. Resonator Triptych by Eric J. Heller.


By introducing the concept of computational probability and the probabilistic description
method into the computation system setup, we have the tool to construct and communicate
with an indeterministic system built with multiple possible configurations. Thus, on top of
the classical system description vocabulary composed of position, velocity, and other
affirmative object properties, we can add one more dimension to describe the system's
potentiality. This naturally paves the way for proposing open-ended computational design
questions.
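The difference between the classical vocabulary and the probabilistic description can be made concrete with a small sketch. This is an illustration of the idea, not the paper's implementation; the property names and state labels are invented:

```python
# Classical description: one affirmative value per property.
classical_unit = {"position": (3, 4), "state": "solid"}

# Probabilistic description: a distribution over all states the unit
# could still take, given incomplete information.
probabilistic_unit = {"position": (3, 4),
                      "state": {"solid": 0.5, "porous": 0.3, "void": 0.2}}

def most_likely(unit):
    """Collapse the probabilistic description to its most probable state."""
    dist = unit["state"]
    return max(dist, key=dist.get)

# The distribution is a proper probability (sums to 1) and can be
# collapsed to a single state only when a decision is finally made.
assert abs(sum(probabilistic_unit["state"].values()) - 1.0) < 1e-9
assert most_likely(probabilistic_unit) == "solid"
```

The extra "dimension" mentioned above is exactly this distribution: the unit carries its potentiality with it until an interaction resolves it.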

3.2.2. Non-local information sharing

In quantum physics, the violation of locality has been accepted as a given fact of particle
systems. For the computational design system setup, the concept of non-local information
sharing indicates a condition in which information within a computation system can be
communicated and shared remotely. This leads to a new attitude towards the construction
of bottom-up computation systems, for which local connectivity won't be the obstacle for
direct communication of certain kinds of information that need to be grasped by the whole
community of actors. A direct follow-up of this attitude is the construction of a global
information layer that records the summation of a certain design parameter. With it, a
control mechanism can also be introduced to regulate the system's evolution in a way that
keeps the system summation constant, so as to fulfill particular design considerations that
require a certain overall system summation to remain constant.
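One way such a constant-summation control mechanism could work is to rescale the rest of the field whenever one unit changes. This is a minimal sketch of that idea, under my own assumptions about the mechanism, not the paper's code:

```python
def redistribute(field, index, new_value):
    """Set one unit's value while keeping the global summation constant,
    by scaling the remaining units proportionally (the 'global
    information layer' knows the total and enforces it)."""
    total = sum(field)
    others = total - field[index]
    remaining = total - new_value
    scale = remaining / others
    return [new_value if i == index else v * scale
            for i, v in enumerate(field)]

field = [10.0, 10.0, 10.0, 10.0]       # global summation = 40
field = redistribute(field, 0, 22.0)   # a designer raises one unit
assert abs(sum(field) - 40.0) < 1e-9   # overall summation preserved
```

Because every unit is rescaled at once, a local edit has an immediate remote effect on all other units, which is the non-local behavior the section describes.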

4. Probability field algorithm prototype

The probability field algorithm prototype is based on the synthesis of non-local information
sharing concept and probabilistic description method. Its algorithmic principles can be
summarized as follows:
- The probability field is constructed with homogeneous, non-subdivisible units, which
are identifiable by unique states.
- Units of different states have different impacts on their neighbor units, measured by
the intensity of their influence fields.
- Unless sufficient local and global information is obtained, a single unit is always in a
quantum state, which is a collection of all possible states. For these elements, a
probabilistic ranking of all possible states can be calculated.
- The order of the probabilistic ranking can be altered by the designer to introduce
personal design considerations.
- The global criteria must always be fulfilled by means of an automatic system
examination and update mechanism.
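The principles above can be condensed into a toy unit whose ranking comes from its neighbors and can be overridden by the designer. The state names and the counting rule are my own illustrative assumptions, not the prototype's actual evaluation criteria:

```python
from collections import Counter

ALL_STATES = ("solid", "porous", "void")  # hypothetical unit states

def ranking(neighbor_states):
    """Probabilistic ranking of a unit's possible states: here, simply the
    states ordered by how often they appear among determined neighbors."""
    counts = Counter(neighbor_states)
    return sorted(ALL_STATES, key=lambda s: -counts[s])

def choose(neighbor_states, preference=None):
    """The algorithm suggests the top-ranked state; the designer may
    reorder the ranking by stating a personal preference."""
    ranked = ranking(neighbor_states)
    return preference if preference in ranked else ranked[0]

assert choose(["solid", "solid", "void"]) == "solid"         # algorithm's pick
assert choose(["solid", "solid", "void"], "void") == "void"  # designer override
```

Until `choose` runs, the unit effectively holds all of `ALL_STATES` at once, which is the "quantum state" the list describes.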

4.1. Probability field algorithm structure

The probability field algorithm structure is constructed by grafting a reasoning tree
structure on top of a bottom-up computation system, as shown in figure 3. A global constant
parameter is also introduced as an overall computation criterion to regulate the evolution of
the system's solution space.

Figure 3. Reasoning tree structure and bottom up system.

The reasoning tree structure allows multiple computational solutions to be simultaneously
rational at each decision-making node. Similar to the traditional branching design decision-
making process, the reasoning tree structure here also allows the interactive design system
to jump back to a previous decision-making node to choose another path while keeping the
reasoning trajectory before that node untouched. A bottom-up computation system, on the
other hand, provides a distributed computation network to handle massive design
information. By assigning a unique reasoning tree structure to each distributed computation
unit of the bottom-up system, the designer is invited to participate in the computational
decision-making process with personalized design input at the level of the basic
computation unit.
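The backtracking behavior of the reasoning tree can be sketched as a small class that records a decision trajectory and can reopen an earlier node without disturbing what came before it. The node and option names are invented for illustration:

```python
class ReasoningTree:
    """Decision trajectory that can jump back to an earlier node while
    keeping the path before that node untouched."""

    def __init__(self):
        self.trajectory = []  # ordered list of (node, chosen option)

    def decide(self, node, option):
        """Record one decision at a node."""
        self.trajectory.append((node, option))

    def backtrack(self, node):
        """Return to a previous decision node to try another path; all
        decisions from that node onward are discarded."""
        index = [n for n, _ in self.trajectory].index(node)
        self.trajectory = self.trajectory[:index]

tree = ReasoningTree()
tree.decide("A", "extrude")
tree.decide("B", "rotate")
tree.decide("C", "mirror")
tree.backtrack("B")                           # reopen node B
assert tree.trajectory == [("A", "extrude")]  # path before B preserved
```

In the prototype described above, each distributed computation unit would carry its own such trajectory.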

As shown in figure 4, the evolution of the probability field algorithm prototype is triggered
by the interaction between the human designer and the design algorithm. At each
computation step, the design algorithm examines the general system condition and
suggests a list of rational states that the designer can assign to the particular computation
unit of concern for this step. All of these rational states are further evaluated for their
different levels of recommendation according to the built-in evaluation criteria, and the
result of this evaluation is presented to the designer as visual feedback, as indicated by the
gradient color in this diagram.

Figure 4. Computation system evolution.

The human designer, on the other hand, has the freedom to choose suggested rational
states that are not the best recommended by the algorithm, in order to manifest particular
design considerations that outweigh the automatically generated ranking. Thus the
evolution of this computational design process is no longer confined to predefined
mathematical formulas, but open to personalized aesthetic and intuitive judgment from the
human designer. A general evaluation is made automatically after each decision-making
step: for any unit, if there is a conflict8 between its current state and its local condition, the
system will automatically fix it and update the entire system accordingly.

4.2. Minesweeper design game

The probability field algorithm structure discussed above has been implemented as an
interactive design system, the minesweeper design game (figure 5). This design game
reverses the mine-discovering process of a normal minesweeper game into a design process
that requires the designer to define the locations of mine cells by assigning local
connectivity figures to the undetermined cells within a finite 2D matrix. As the cells in the
2D matrix can only have 3, 5, or 8 neighbors, the local connectivity figures are provided as
integers from 0 to 8. By assigning an integer N to an undetermined cell, say one with 8
neighbors, the designer is telling the algorithm that there will be N mine cells located beside
this particular cell; thus the local probability for each adjacent cell to be a mine cell is
increased by N/8 with reference to this newly assigned figure. There is also a global
requirement on how many mine cells in total should be embedded within the matrix, which
introduces a global probability for each undetermined cell. The sum of the local and global
probabilities is constantly updated after each new design decision and visualized by
gradient color to inform the designer about the current system condition.
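A minimal sketch of this probability bookkeeping, assuming the reading above (each hint declaring N mines among k neighbors contributes N/k to an adjacent cell, plus a uniform global term spreading the remaining mines over the undetermined cells):

```python
def cell_probability(local_hints, total_mines, undetermined):
    """Combined chance that an undetermined cell is a mine.

    local_hints: (assigned_figure, neighbor_count) pairs for determined
    neighbors that reference this cell; each contributes N/k.
    The global term spreads the remaining mines over all undetermined cells.
    """
    local = sum(n / k for n, k in local_hints)
    global_term = total_mines / undetermined
    return local + global_term

# A cell referenced by one neighbor that declared 4 mines among its 8
# neighbors, on a board with 10 mines left and 40 undetermined cells:
p = cell_probability([(4, 8)], total_mines=10, undetermined=40)
assert abs(p - (4 / 8 + 10 / 40)) < 1e-9  # 0.5 + 0.25 = 0.75
```

In the actual game this sum is what the gradient color visualizes after every move; the exact weighting between the local and global terms is not specified in the paper, so equal weighting is assumed here.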

Figure 5. Screen shots from Minesweeper design game.

To play this design game, the designer first picks an undetermined cell in the matrix; the
minesweeper design algorithm then suggests a list of possible figures that the designer
could assign to this cell, with the ranking of the different figures communicated via the
number of green bars above each option. The action of assigning a figure to the picked cell
triggers an examination process throughout the whole system, which can locate any
inconsistency within the system and fix it automatically. Thus the playing of this design
game also displays a certain degree of unpredictability, which comes from the interactive
nature of the computational system setup.

4.3. Application sketch

The output of the probability field algorithm prototype is a set of data that documents the
evolution of the initial bottom-up system. This set of data can be mapped onto any point
cloud that shares the same geometrical topology to promote local component generation
and differentiation.

Multiple sets of data can be combined for complex design tasks. In the urban development
game sketch (figure 6), data table A specifies the amount of extrusion of the initial grid
surface units, while data table B distorts the initial grid surface. Calculations concerned with
different design considerations can be carried out with separate probability algorithms to
reduce scripting complexity, and the results from these probability algorithms can be added
up later to direct the final automated 3D transformation process.
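The combination step described above amounts to a weighted sum over per-point data tables. A small sketch of that operation, with invented table values and weights:

```python
def combine(layers, weights):
    """Weighted sum of several probability-field outputs, one value per
    point of a shared point cloud."""
    return [sum(w * layer[i] for layer, w in zip(layers, weights))
            for i in range(len(layers[0]))]

table_a = [0.2, 0.8, 0.5]  # e.g. drives extrusion per grid surface unit
table_b = [0.4, 0.1, 0.5]  # e.g. drives distortion of the grid surface
combined = combine([table_a, table_b], [0.7, 0.3])
assert [round(v, 2) for v in combined] == [0.26, 0.59, 0.5]
```

Each design consideration keeps its own simple probability algorithm, and only the final combined table drives the 3D transformation, which is the scripting-complexity reduction the paragraph mentions.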

Figure 6. Urban development game.


Another application sketch is the surface population tool (figure 7). This tool keeps the sum
of the surface openings and the overall construction cost as global constant constraints and
operates with multiple-layered heterogeneous probability fields that manifest the
differentiated local conditions of the given surface. The outputs from the operations on the
different probability field layers are weighted and combined to direct the local differentiation
of the given point cloud surface.

Figure 7. Surface population tool.

5. Conclusion

With the increased expectation of incorporating personalized design considerations into
computational design algorithms, the need for constructing an interactive computational
design system is becoming increasingly evident. The quantum design paradigm, as
discussed in this paper, provides novel computational concepts and methods for setting up
such an interactive design system. However, along with the exciting promise exhibited by
the probability field algorithm prototype, the general research on interactive design systems
also exposes a great need for support from parallel branches of research. Some of the
expected research may come from a psychological point of view, on how human designers
adapt their design habits to the interactive design environment; some from a computer
science point of view, on how to make algorithms smarter in order to understand,
remember, and even predict the design preferences of different designers; and some from
an interface design point of view, on how to balance the level of representational complexity
on both sides to facilitate more efficient communication between human designer and
design algorithm. All of these interdependent research topics could well be grouped under
the general research direction of digital design interactivity, which could be a challenging
and promising research frontier for the current exploration of digital architecture.

6. Endnotes

1. Opposed to causal reasoning, which is commonly deployed in computer programming,
associational reasoning refers to the faculty of the human mind to bridge one idea with
another in consciousness when they are associated by principles such as similarity,
contiguity, and contrast.
2. An algorithm prototype refers to an abstract machine for testing a particular scripting
logic. It mainly focuses on the typological computation system setup and is open to further
implementation for specific design tasks. Examples of such algorithm prototypes can be
found in computer-generated swarm systems and cellular automata.
4. A detailed discussion of the quantum paradigm can be found in Quantum City, by Ayssar
Arida, Architectural Press, Oxford, 2002.
5. Wave-particle duality is the concept that all matter and energy exhibit both wave-like
and particle-like properties. A central concept of quantum mechanics, duality addresses the
inadequacy of classical concepts like "particle" and "wave" in fully describing the behaviour
of small-scale objects. http://en.wikipedia.org/wiki/Wave-particle_duality
6. For an approachable illustration of quantum theory, see Kenneth W. Ford, The Quantum
World: Quantum Physics for Everyone, Cambridge, Massachusetts and London, England:
Harvard University Press, 2005.
8. The reason why choosing a rational suggestion may result in a future system conflict is
that the suggestion evaluation algorithm only looks at the current global and local condition
to provide suggestions; it is not designed to probe into the dimension of time to predict
future human decisions. Without concrete information about future human decisions, it is
impossible to make a rational suggestion that works with every possible scenario of the
interactive system. A contrary example, in which the algorithm does try to predict human
decisions, can be found in popular chess-playing machines, for which human decision-
making strategy is modeled by the machine based solely on the calculation of game benefit.
This effort is less interesting for this research, as it intentionally reduces human intelligence
to mere calculation and excludes the intuitive and aesthetic faculties of the human player
for the benefit of an efficient game model.

Digital Dwellings for the Experience Economy

David Celento
Penn State University, USA

In this paper I will expand upon arguments presented by authors Makimoto and Manners in
their book Digital Nomad.1 Written over a decade ago, the book asserts that continued
adoption of mobile technologies will create large-scale societal changes, many of which have
already come to pass in recent years. With most high-profile designers pursuing large
architectural projects (armed with increasingly obstreperous displays of digital fireworks),
one wonders whether new domestic desires resulting from digital advances may be
percolating but going unaddressed.

Emerging trends influenced by digital technologies are examined in order to construct
arguments for a new type of living environment for urban dwellers. This solution relies on
open-source standards to encourage the creation of diverse free-market components that
are prefabricated and easily combined in different ways by consumers. Such a system will
permit the creation of mass-customized dwellings that are more flexible, adaptable,
affordable, sustainable, recyclable, and mobile. Several projects by third-year architecture
students at Penn State University will be shown, exploring the possibilities of branding in
architecture as outlined by Anna Klingmann in Brandscapes: Architecture in the Experience
Economy.2

Keywords: Prefabrication, Mobility, Mass Customization, Micro-Architecture, Web-Based
Configurator, Open Source, Branding, Sustainability.


Figure 1. BET (Black Entertainment Television) Urban Recording Studio, by M. Hoffman.
Figure 2. BET deployed studio, by M. Hoffman.

David Celento, Penn State University Harvard GSD, 2009: Who Cares(?)

1. Cultural Context

One of the consequences of mobile computing and cellular technology is that people are
becoming less reliant upon land-lines for telecommunication and data services, enabling 43%
to perform part (or all) of their work remotely.3 Additionally, people in the U.S. are now
moving, on average, once every five years.4 Mobility is on the rise, and so, too, is population
growth, projected to go from 6.6 to 8 billion by 2025.5 In this scenario, urban areas will grow
eighteen times more rapidly than rural ones, with sixty percent of the population projected to
live in urban areas by 2030.6 Yet city dwelling has already become unaffordable for many.
There is tremendous demand for more affordable urban dwellings, and more flexible
solutions will increasingly be sought.

Today, with almost one third of the adult U.S. population renting their dwellings,7 increasing
home ownership for low-income people is an important concern, especially considering that
the average price of a new home in the U.S. is $290,600,8 with urban housing being even
more expensive. With the average wage in the U.S. being $58,029,9 new house payments
are approximately 1.5 times more than most can afford.

Figure 3. Apple Dwelling, Controller, by C. Brown. Figure 4. Apple Dwelling, Interior, by C. Brown.
Figure 5. Apple Dwelling, Deployed, by C. Brown. Figure 6. Apple Dwelling, Perspective, by C. Brown.


2. Prefabrication and Mobility

The list of notable design personalities exploring prefabrication is noteworthy: Le Corbusier
writes Mass Production Houses in 1919; Walter Gropius and Adolf Meyer develop Building
Blocks in 1923; Buckminster Fuller introduces the Dymaxion House at Chicago's Marshall
Field's department store in 1929; Frank Lloyd Wright introduces the Usonian House in 1936;
industrial designer Henry Dreyfuss and architect Edward Larrabee Barnes collaborate on the
design of a prefab house for the Vultee Aircraft Company in 1947; Jean Prouvé is
commissioned by the French government to create twenty-five mass-produced housing units
in Meudon, France in 1950; Richard Rogers proposes his Zip-Up Enclosures in 1968; and, of
course, there are the numerous imaginative works by Archigram in the 1960s.

If prefabrication continues to be the next frontier, as suggested by MoMA's recent exhibition
Home Delivery: Fabricating the Modern Dwelling, the increasing popularity of the Recreational
Vehicle (RV) suggests that mobility may be the next frontier for prefabrication. While
prefabrication inherently requires some degree of mobility (since products are manufactured
off-site), it does not necessarily encourage it, with 97% of prefabricated structures moving just
once, from factory to installation.10 On the other end of the spectrum, the RV is an often
overlooked form of prefabrication, perhaps simply because it is designed to move. And by any
definition, including tax codes that qualify RVs as second homes, it is a living environment.
Due to the ease of mobility and the capability for people to remain connected electronically,
increasing numbers are making RVs their full-time homes, as seen on websites like
www.escapees.com and www.fulltimerver.com. Despite impressions, this is a lifestyle embraced
by many who are far from retirement. Mobile lifestyles are becoming so popular that the US
postal service announced Premium Mail Forwarding in May 2005,11 a service that continually
forwards mail no matter where one goes or how often one moves.

While various forms of mobile dwellings have been in existence since the beginning of the 20th
century, both culture and technology may have evolved to a point where a more mobile solution
for urban environments may no longer be a technicolor proposition from the late 1960s but
one that is technically feasible, desirable, and perhaps even inevitable.

3. Challenges and Opportunities

To better enable mobile urban prefabrication and navigate the stylistic divide between existing
RVs and the Jump Box typology, three primary aspects must be addressed: A) improving the
desirability of prefabrication through branding; B) the development of uniform standards for
new structures to host these dwellings; and C) enabling personalization and customization
through interchangeable components.


3.1 Branding and Desirability:

Despite numerous well-designed examples, prefabrication as a whole in the US has struggled
with perceptual challenges for decades. Initial objections, formed during WWII when mobile
homes and travel trailers served as barracks for soldiers, have only deepened due to
perceptions of shoddy workmanship, Byzantine tax codes, class segregation, and more.
Recently, elevated toxicity in FEMA trailers deployed after Hurricane Katrina has only
reinforced these negative perceptions.12 However, prefabrication is not a pre-determined
product, but rather a method. This method has potential for increased quality, integration of
sustainable materials and technologies, and diminished waste compared to site-built housing,
to name but a few advantages.13 Curiously, not all forms of prefabrication are viewed
suspiciously. With one in twelve vehicle-owning Americans also owning an RV, it is possible
that this form of mobile dwelling is the most widely accepted and desirable form of
prefabricated dwelling in existence.14

According to an article in The Journal of Consumer Behavior by business professor Banwari
Mittal, our culture relies heavily upon brand-name products for self-identity.15 Membership in
today's consumer collective is gained through the purchase of celebrated popular products.

Figure 7. Leatherman Emergency Relief Unit, by A. Longenbach. Figure 8. Leatherman Deploy
1, by A. Longenbach. Figure 9. Leatherman Deploy 2, by A. Longenbach. Figure 10.
Leatherman Deploy 3, by A. Longenbach.

Oxymoronically, people assert their individuality through brands and accessories that
customize these purchases; think Harley Davidson. This desire for personalization is one at
which the Jump Box would excel, since branded components would be easily interchangeable.
Thus it is essential to create positive brand identity, an aspect that even site-built homes are
beginning to pursue, with recent co-branding efforts by Martha Stewart with KB Home16 and
Philippe Starck with Shaya Boymelgreen to create Downtown in Manhattan.17

As Michael Sorkin suggested in his Harvard Design Magazine article "Brand Aid," to create the
success of any commercial multiple, "the brand is critical. . . . And, of course, celebrity is the
main measure of authority in Brandworld."18 Thus, architects and designers may gain access to
wider markets by branding their efforts for the Jump Box. Instead of trying to launch a brand
from a position of obscurity, architects might associate with already recognized and highly
desirable brand names such as Under Armour or Burberry.

3.2 Uniform Standards:

As Witold Rybczynski convincingly argues in One Good Turn: A Natural History of the
Screwdriver and the Screw,19 the best solution is not always the one most widely adopted.
Regarding screws, square drives are functionally better than slots, but slots prevailed because
the screws were easier to make and people could use any type of blade to drive them.
However, the mere fact that a standard becomes widely accepted (like the slotted screw)
enables the adoption and proliferation of any given technology. That said, many systems for
current prefabricated dwellings are proprietary, incompatible, and/or require sophisticated
tooling. Accordingly, today's offerings are not very different from many previous efforts at
prefabrication (from Buckminster Fuller's Dymaxion House to Jean Prouvé's Maison Tropicale),
which ultimately failed. Unique standards limit suppliers, require sole-source solutions, limit
the development of non-OEM (Original Equipment Manufacturer) options, and prevent greater
market penetration than uniform standards allow.

Figure 11. Burberry Exterior, by T. Garlewicz.

With enough shipping containers now in existence to wrap around the equator, two high,
inventive dwellings made from these modules (by Wes Jones, Jennifer Siegal, Hybrid Design,
LOT-EK, etc.) make some sense from a purely economic point of view but lack broad aesthetic
appeal, no matter how much they are customized.20

Figure 12. Burberry Opened, by T. Garlewicz.
However, these shipping container designs do offer a valuable lesson for prefabrication: that of
a standardized chassis using existing global transportation techniques. Mobile products based
on such a standardized chassis could be radically customized from the ground up through
online configurators that would allow multiple designers and producers to create unique,
environmentally responsible, and technologically advanced products, easily permitting
mass-customization in the way predicted sixteen years ago by Joseph Pine.21

The success of the Jump Box concept relies on the development of rigorous open-source
standards available to all via web distribution. Akin to the bus model of manufacturing in the
computer industry (where various components may be swapped in and out of uniform
connectors), the chassis is the core component of the Jump Box, while all other components
would be configurable. The chassis would serve much like the 2002 GM Hy-wire automotive
frame, which was intended to carry several body types. The Jump Box frame is intended to be
compatible with shipping container standards for ships, trains, and trucks. Thus, it will perform
as a rolling RV chassis, be able to be carried by a variety of global transportation techniques,
be capable of being housed within vertical city structures, and potentially even serve as a
floating deck for a houseboat-like solution.

The second aspect of open-source development work is geared toward the creation of
dimensional standards above the chassis that will permit universal connectivity for interior and
exterior systems. This will permit interchangeability of diverse components. While the Jump
Box may expand in a variety of fashions to increase its size while in dwelling mode, when
shipped it must fit through the highway keyhole of the interstate system, and may not exceed
a maximum of 13'-6" in total shipping height and 8' in width, with a length not to exceed 48'
to be compatible with all U.S. state limits for the trucking industry.

Figure 13. Hy-Wire Chassis by GM (photo cardesignnews.com).
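The "highway keyhole" constraint is a simple dimensional check that any configurator would need to enforce. A sketch of that check, using the limits cited in the text (the function name is an invention for illustration):

```python
# U.S. interstate "highway keyhole" limits cited in the text, in feet.
MAX_HEIGHT = 13.5   # 13'-6" total shipping height
MAX_WIDTH = 8.0
MAX_LENGTH = 48.0

def fits_keyhole(height, width, length):
    """True if a Jump Box module stays within all U.S. state trucking limits."""
    return (height <= MAX_HEIGHT and
            width <= MAX_WIDTH and
            length <= MAX_LENGTH)

assert fits_keyhole(13.5, 8.0, 48.0)        # exactly at the limits: shippable
assert not fits_keyhole(14.0, 8.0, 40.0)    # too tall for the keyhole
```

Note these are shipping-mode dimensions only; the module may expand beyond them once in dwelling mode, as described above.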

Mobile products based on such a chassis would allow multiple designers to create
parametrically varied products that could easily fit together to permit mass-customization.
Like the prefabricated living suites by Piikkio Works for the cruise ship industry,22 these
creations need not look anything like shipping containers. Such a standardized chassis would
permit tremendous stylistic diversity, permitting easy upgrades over time as fashions,
finances, and technology evolve.

Figure 14. Hy-Wire Chassis by GM (photo cardesignnews.com).


3.3 Personalization & Customization

Consumer products, manufactured en masse, typically lack unique characteristics. While this is
suitable for most products (such as televisions, electric toothbrushes, and flashlights), a few
products are capable of extending one's personality. From keychains to decorative carriers for
iPods and cell phones, many people enthusiastically extend their personalities. Very ambitious
people will take recognized brands and even modify them to suit their desires. This is seen in
radical transformations of Harley Davidson motorcycles, VW Bugs, and most recently, Honda
CRXs. Two aspects are worth noting in these latter examples. The first is that no matter how
radical the transformation, rarely is the original product unrecognizable; after all, one still
wishes to be a member of the brand, even while also wishing to be unique. The second is that
without uniform standards, many of these modifications would not be possible, since they are
often sourced from non-OEM suppliers.

Historically, the consumer product industry (and in particular the automotive industry) has
attempted to mitigate the uniformity of its goods by offering options. Not only has this
enhanced consumer choice, it also offers avenues for companies to increase their revenues.
Worth noting is the recent trend toward mass customization, wherein companies like Puma,
Nike, Freitag, and others allow consumers to create and purchase a one-of-a-kind product via
interactive websites.

Figure 15. Puma Soccer Dwelling, by G. Colan. Figure 16. Puma Interior Gym, by G. Colan.

4. Summary

Fixed foundation homes offer many benefits, but have at least four limitations that will be
increasingly felt by some, especially by those who are (or wish, or need to be) digital nomads.

First, the absence of substantive feedback loops (evident in product design but mostly absent
in architecture) prohibits in-depth analysis, adaptation, and evolution of the home. Second,
the lack of mass-production techniques prevents greater innovation and integration of new
domestic technologies, reduced prices, recyclability, and higher quality. Third, consumers'
desire for brand identity and status is unfulfilled by most site-built dwellings. And fourth,
increased mobility occurring among the populace is neither accommodated nor enabled by
fixed dwellings, which are expensive to acquire and renovate, located further and further from
urban centers, as well as time-consuming and expensive to move into and out of.

Frei Otto expresses concern for the current architectural climate, writing, "Today's architecture is at a turning point. The big trends of the last decade are outlived and only a few buildings in the world manifest architectural perfection while paving new ways into the future."23 It is time for domestic architecture to harness emerging technologies and tap more deeply into consumer desires. Mass production efforts will inevitably give consumers greater choice in how they configure their dwellings and permit improved technological integration. For some, the creation of a product like the Jump Box would permit increasing numbers of highly mobile people to live in a far more enabling fashion than they do now. For others who desire (or require) a more settled existence, it would permit a fixed home to serve as a hospitable base camp for explorations: what Makimoto and Manners suggest as "cerebral nomadism," or what we commonly call

Certainly there are a number of challenges to this proposition. Today, numerous governing institutions continue to reinforce settlement patterns founded upon agricultural conditions that no longer exist. Among these are voting boundaries, land ownership laws, tax structures, zoning laws, and land-based utility infrastructure. However, in light of current technological considerations, the cost and popularity of urban dwelling, predicted environmental changes, and occupational fluidity, fixed dwellings may at some point in the near future become less desirable than options that more easily enable mobility and technological integration.

If these institutional resistances can evolve (or be overcome), mobile urban solutions could proliferate with benefits to consumers, designers, and manufacturers. With urban structures serving as temporal docks for dwellings, greater technological innovation will also be stimulated as compact domestic environments gain in number. In addition, the Jump Box concept offers numerous design and manufacturing opportunities to unlimited parties due to the open-source standards ensuring dimensional uniformity and a web-based ordering mechanism. Such an approach is more aligned with the processes for design, production, and purchasing of many sophisticated consumer products, all of which have evolved because of digital techniques.

Figure 17. Under Armour Flex Dwelling, by J. Harner

Figure 18. Under Armour Deployed, by J. Harner

5. Endnotes

1. Makimoto, T., and Manners, D. Digital Nomad, New York: Wiley, 1997.
2. Klingmann, A., Brandscapes: Architecture in the Experience Economy, Cambridge, MA: MIT Press, 2007.
3. The International Telework Association and Council, http://www.workingfromanywhere.org/
4. U.S. Census Bureau, 2005. Table 35, "Movers by Type of Move and Reason for Moving," http://www.census.gov/prod/2006pubs/07statab/pop.pdf
5. Population Reference Bureau, 2006. World Population Data Sheet.
6. U.N.-HABITAT, 2004/05. State of the World's Cities.
7. U.S. Department of Housing and Urban Development, May 2001. 1999 American Housing Brief, AHB/01-2.
8. U.S. Census Bureau, Aug. 9, 2006. Facts for Features. [Online]
9. Johnston, D.C. "Average U.S. Income Showed First Rise Over 2000," New York Times, August 25, 2008.
10. Blum, A. "Plug+Play Construction," Wired, Issue 15.01, January 2007.
11. United States Postal Service, May 11, 2005.
12. Brunker, M. "Are FEMA Trailers Toxic Tin Cans?" [Online], http://www.msnbc.msn.com/id/14011193/, July 25, 2006.
13. Building Research Establishment Ltd, 2003. DTI Construction Industry Directorate Project Report: Current Practice and Potential Uses of Prefabrication, Department of Trade and Industry Report No. 203032. Scotland, 9, 14.
14. Curtin, R., PhD, Director, Surveys of Consumers, University of Michigan, "The RV Consumer: A Demographic Profile, 2005 Survey," Recreation Vehicle Industry Association, 2005, <http://rvia.hbp.com/itemdisplay.cfm?pid=47>.
15. Mittal, B., 2006. "I, Me, and Mine: How Products Become Consumers' Extended Selves," The Journal of Consumer Behavior, Vol. 5, Issue 6, 550-562.
16. KB Home, 2005. [Online], http://www.kbhome.com/martha.
17. Dunlap, D., May 11, 2004. "Condos, Not Roll-Tops, on Finance's Holiest Corner," New York Times. Also [Online], http://www.downtownbystarck.com/.
18. Sorkin, M., "Brand Aid; Or, The Lexus and the Guggenheim (Further Tales of the Notorious B.I.G.ness)," Harvard Design Magazine 17, Fall 2002/Winter.
19. Rybczynski, W., One Good Turn: A Natural History of the Screwdriver and the Screw, New York: Simon and Schuster, 2001.
20. Taggart, S., "The 20-Ton Packet," Wired, October 1999.
21. Pine, J., Mass Customization: The New Frontier in Business Competition, Cambridge, MA: Harvard Business School Press, 1992.
22. Schodek, D., Bechthold, M., Griggs, K.J., Kao, K., and Steinberg, M. Digital Design and Manufacturing: CAD/CAM Technologies in Architecture, New Jersey: John Wiley & Sons, 2005.
23. Otto, F. Foreword to [McQuaid, M.], Shigeru Ban, New York: Phaidon Press, 2006.


The Fall and Rise of the Amateur

Roel Klaassen
Premsela, Dutch Platform for Design and Fashion

Bart Heerdink
Premsela, Dutch Platform for Design and Fashion

In 1971, Victor Papanek argued that design should not remain separate from the real
world. If it did, it should cease to exist. The authors argue that design as we knew it has
ceased to exist. Technology, meanwhile, has empowered the masses to create. Designers
must rethink their position and join the amateurs once more. Twentieth-century Dutch
design was mainly functionalist until populist movements challenged modernist ideology.
Design then went on to become a commercial, elite institution. Now the masses are
designing, thanks to technological developments like Web 2.0, and professionals have lost
status. Amateurs and craftspeople were historically closer to the wider society than the
professional class that developed under industrialism. Today, many people engage in
amateur creation. Recent prominent manifestations have included hacker culture and punk
DIY, and the Internet has enabled true mass creativity. In 2008, Philippe Starck said design
was dead; the authors agree: design as we knew it is a thing of the past. In the new age of
mass creativity, design can recover amateurism's good qualities: enthusiasm and
openness. Designers should become amateurs.

"All men are designers. All that we do, almost all of the time, is design, for design is basic to all human activity." (Victor Papanek)

In 1971, the American designer Victor Papanek wrote Design for the Real World. In this
classic book, Papanek argues that design cannot be separated from everyday life, or what
he calls "the real world." Making design a thing-by-itself will eventually lower its value. In
an age of mass production and top-down planning, design is a powerful instrument for
shaping tools and environments, and finally society and humanity. Papanek calls the
profession of the industrial designer harmful because it merely brings forth useless and
badly designed gadgets. Instead, design should be used to create objects for the real world,
for real people with real problems. This requires a closer understanding of ordinary people
by designers, and much more insight into the design process on the part of the public.
Design must become an innovative, highly creative, cross-disciplinary instrument aimed at
the true needs of human beings. Otherwise, design as we have come to know it should
cease to exist.1 Although we may prefer to see his message from a broader, more
contemporary perspective, Papanek could not have been more prophetic.

We would like to argue that design as we have come to know it has ceased to exist. As
digital technology has matured and found its way into design practice, so has the public.
People have begun resisting being mere bystanders and consumers of marketing messages
and pre-chewed products. Consumers have become involved in the design process and
become so-called prosumers. And they have become designers: nowadays, people actively
intervene in the development and improvement of the products they use. Thanks to
evolving digital technology and the Internet, tools and methods that were once the
exclusive property of professional designers are now in the hands of millions of people. In
taking a share of design practice, amateurs are challenging the position of design
professionals. As the effects of the financial crisis unfold, it is becoming clearer that
professionals at large have lost credibility and authority, and this includes designers. More
than anything else, the importance of design depends on its relevance to daily life.
Professionals must rethink their position in the midst of the current situation. At this
juncture, the relationship between professional designers and amateurs is definitely
changing. The practices of the two have always been connected, even when the relationship
has been one of repudiation.2 In our view, looking at amateurism can help us to critique the
role of the professional. As amateurs become more and more professional, perhaps it is
time for professional designers to become amateurs again.

A taste of modernism

At the beginning of the twentieth century, most designers in the Netherlands were engaged
in traditional craftwork. Soon thereafter, design became closely linked to modernist
ideology. In an age of industrialization and mass production, optimism and belief in the
blessings of science and technology were strong. Modernist designers were inspired to take
a functional approach and use a methodology derived from science. Design became
equivalent to problem-solving. The appearance of products also became part of this
functional methodology. The modernist motto "form follows function," taken literally, implies
that beauty is, or should be, functional.

Designers in the Netherlands were convinced that beautiful products would contribute to
human happiness and the improvement of society. Good design would eventually create a
better world. Well-designed objects and surroundings would educate and civilize the people.
So far, so good. But all too often, beauty and ugliness became synonymous with elitist (or
middle-class) conceptions of good and bad taste. Beginning in the 1920s, Dutch design
culture was characterized by sociopolitical idealism. For the good of civilization, designers
saw it as their job to propagate beauty. Design was seen as an instrument for integrating
art into daily life.3 But it was at best naive to suppose the masses would ever have the same
tastes as an elite, not to say a normative and moralistic group of aesthetic specialists.
For this reason, Dutch modernism is also called "moralistic modernism."4 For all their beauty,
modern designs met with resistance for appearing heartless and obsessive.5

Of course, not all designers imposed their ideals in this way. One example is Aldo van Eyck,
who designed playgrounds to occupy destroyed, rubble-strewn sites in Amsterdam shortly
after the Second World War. What is interesting is that he designed the conditions of use for
these places in a way that met the needs of ordinary people and related to their behavior.
Dictating little in the way of form and function, his designs gave the public freedom of use
and room for exploration.6 His particular approach gained Van Eyck respect as an architect
in Europe. Still, it remained uncommon for designers to imagine the real needs and
preferences of the people.

In the sixties, various movements and subcultures began to appear, showing that modernist
design could no longer control the crowds. Its ideal of beauty did not attract ordinary
people, nor was its patronizing nature appreciated. The rise of the masses in the
Netherlands, as in most other western cultures, coincided with progressive and anti-establishment movements. For instance, Provo (short for "provocation") was a nonviolent
anarchist group connected with street youth subcultures like the nozems as well as with young
artists.7 One of the Provos' most notable ideas was the white bicycle plan. By providing
white bikes around Amsterdam for free public use, Provo sought to contribute to a healthy,
unregulated and unmaterialistic environment. Though such ideas may have been naive and
perhaps normative in their own way, at least they arose from the bottom up.

The noble professional

From the fifties onward, design developed into a profession with growing economic and
social value. Designers became culturally and economically influential. Industrial
productivity in the Netherlands reached its highest peak in the sixties. Although it had been
ideologically motivated in the beginning, design had by this time become an integral part of
economic strategy and commercial motives. While still struggling with popular culture,
designers created a corporate mainstream of their own.

In the meantime, industrial designers were becoming well organized, and their profession
was becoming an established one. In the process of defining themselves, they
systematically excluded amateurs and professionals from other domains. In doing so, they
created an independent cultural field, or champ, to use sociologist Pierre Bourdieu's term.
This field consists of museums, galleries, events, designers and the public, and functions
like a market in which capital and privilege are at stake. Bourdieu's theory of distinction
shows that the superiority of the cultural elite implies the inferiority of low or mainstream
culture.8 This is why the design profession is often thought of as an avant-garde
metropolitan bourgeoisie with very specific and restrictive ideas about what good design is.
Consciously and deliberately or not, cultural production is predisposed to legitimizing social differences.

Now things are clearly changing: design and fashion are highly visible parts of popular
culture. Werner Sewing argues that the revaluing of vernacular objects is in a way the
return of the repressed, coming as they do from the lower ranks of society.9 As a
phenomenon, populism carries a particular message: it acknowledges that the people in the
mainstream are perfectly normal. Being normal is not a matter of low and high culture but
of the ordinary and the extraordinary. Culturally oppressed by the elite for decades, these
days the public is raising its voice. Now that Web 2.0 technology enables many more people
to speak out, the cultural field is opening up. Many barriers that were deliberately raised by
professionals are disappearing. Is this a case of noblesse oblige? From the professionals'
perspective, this was neither foreseen nor intended. In The Cult of the Amateur: How
Today's Internet is Killing our Culture, Andrew Keen writes, "Say goodbye to today's experts
and cultural gatekeepers - our reporters, news anchors, editors, music companies, and
Hollywood movie studios."10 In his view, instead of a "dictatorship of experts" we will have a
"dictatorship of idiots."

The skilled amateur

Idiots or not, amateurs can flourish only when specialization, narrow professional training
and the notion of the individual artist as genius are challenged.11 Under the influence of
fast-developing technology, this is precisely what seems to be happening. Until computer
technology enabled more people to design things in the 1980s, the history of design was
usually perceived in terms of the personal genius of individual designers and the objects
they produced. This was caused by both the elite status of design and the predictability with
which particular designers and their works were foregrounded over others. Furniture design
and highly authorial graphic design that fit into wider art- and architecture-historical
patterns dominated over more ordinary kinds of design.12

Since the late Middle Ages, the use of the term "profession" had referred to upper-class
fields of work like medicine and law. So strictly speaking, craftsmen weren't professionals.
Particular skills were acquired by small communities, which mostly organized themselves
into guilds and were well connected with the wider society. After years of training and
practice, a craftsman could become an expert, a master, but still not a professional. With
the rise of the Industrial Age, these skilled amateurs gradually lost ground. Specialized
industrial knowledge became of higher value than the amateur's skill, curiosity and

But now we can see that modern professionals have lost their connection to society, and
even to their peers.13 In the process of professionalization, creative, nonlinear, trial-and-error ways of finding solutions to problems were lost. The price was naïve passion,
commitment, curiosity and playfulness. To a certain extent, design as a professional activity
became characterized by a mandatory distance of the designer from the users, the
production process, the work of colleagues, and the past.14 Gradually, the professional
became "a person whose living, beating heart has been removed and replaced with some
sort of wind-up engine."15 What went missing was a close involvement with society.

At the same time, the mainstream is starting to design enthusiastically in response.
"Amateur hour has arrived, and the audience is running the show," as Andrew Keen puts
it.16 The amateur, anything but a professional, never lost this involvement, and he or she is
thus capable of reintroducing commitment, sensitivity and a focus on real needs into a
professional environment that has disengaged from its surroundings.17

The rise of the amateur

People share, in more or less equal measure, the capacity to create. In the Netherlands, one
in six people frequently works on the interior of his or her home; one in ten makes clothes;
one in twelve is involved in graphic design; and one in twenty makes furniture.18 Everyone
seems to be a designer or rather an amateur. An amateur, in the original meaning of the
word, is an enthusiast who does something purely for the love of it. In that sense,
motivation, aspiration and a do-it-yourself attitude are the amateurs most interesting

Do-It-Yourself arose as a byproduct of the punk movement, both born as alternatives to
corporate mainstream culture in the 1970s. The DIY approach, especially in the American
punk scene, resulted in huge numbers of homemade fanzines and provided an outlet for
talented artists and designers as well as photographers and journalists.19 Some were
professionally trained, but most weren't. At its most productive when working on the fringes
of the mainstream, DIY advocated instant action, independence and individuality.

DIY has been closely related to hacker culture. For example, a synergy between the two
groups resulted in the development of the Apple computer. In essence, a hacker is
fundamentally an amateur. Whatever he does, he does for reasons of enjoyment and
curiosity, with intrinsic commitment and passion. Originally, hacking referred to finding
smart and sometimes ludicrous solutions to problems. Hacking is not about problem-solving
as a means to a specific end; it is "[...] for the active, the hungry, the culturally curious; it's
the catalyst that allows people to take charge of their worlds and transforms them from
being passive consumers into active creators."20

Now the rise of the Internet has allowed amateurs and hackers around the world to connect
to each other. In the 1970s, a new phenomenon appeared in public space. In an empty lot
in Manhattan, Liz Christy started a communal garden. This act eventually led to a worldwide
network of guerrilla gardeners. These urban pioneers made up an activist movement that
followed the trend of the anti-establishment groups of the sixties. They took over any
available space they found in the city for temporary use. In this way, they reclaimed public
space as their domain not to live and work there but to explore, experiment and engage.
The Internet has allowed these local enthusiasts to connect to each other, forming a
worldwide network. While acting locally in the real world, their community grew and
organized in the virtual world.

The early 1990s made it clear that all real innovation happened outside the professional
sphere. As Charles Leadbeater put it in his bestseller We-Think, "We are moving from an era
in which innovation and creativity was for an elite of special people, working in special
places, to an era in which innovation and creativity are becoming mass activities."21 The
Internet caused DIY activity to explode. The massive production of information on the Web
creates such an unlimited source of knowledge that its use has acquired a name:
crowdsourcing. Increasingly, ideas emerge out of interaction between users and producers,
amateurs and professionals. Flickr is a well-known and interesting example. Originally
developed by game designers so they could share screen dumps of level designs, the
application became an immensely popular photo-sharing platform for professionals and
amateurs alike. This example shows that professionals are not always the only ones in
charge, especially in the digital world. Amateurs nowadays want to be developers, designers
and publishers.

Design is dead, long live (Dutch) design

In early 2008, the French design star Philippe Starck boldly declared that design was dead.
He said that everything he had created was "absolutely unnecessary." "[P]eople with more
intelligence than me would have gotten to this point much earlier," he said.22 Papanek was
one who did, as we saw earlier. So has design ceased to exist? In a way, design as we
knew it forty years ago has indeed!

Design based on modernist ideology was unable to convince ordinary people of its good
intentions. Or its good taste, for that matter. By staying aloof from the mainstream, design
separated itself from the real world. Ultimately, designers became so self-engaged that
they failed to address the real problems of real people. Because of its moralistic and
normative nature, design's modernist ideology died in beauty. Nevertheless, design as a
profession took on considerable economic value. Some would say that, as happens in the
arts, its commercial success heralded its end. In losing its sacred role, design died. Turned
into a commodity, design became mainstream itself.

Within the Dutch context, this has only been partly true. The Netherlands is a small country
with a relatively large number of designers but limited industrial production. In general,
professional designers are neither highly specialized nor embedded in industry. So they do
not feel limited to their own spheres of work but free to cross over to other domains. They
often have their own workshops and local production facilities, simply because they have to.
The means to the end are not fixed; their ways of working are conceptual and personally

Second International Conference on Critical Digital: Who Cares (?) 208

motivated. In addition, a large number of designers working professionally are not trained
as such. So there is some truth in saying that Dutch design shows characteristics of
amateurism: nave passion, commitment and playfulness.

This is especially visible in some of the pieces in the Droog collection, such as Tejo Remy's
Rag chair and Peter van der Jagt's Bottoms Up doorbell. But there is an even more
interesting example. Designer Ester van de Wiel started the Sunday Adventure Club, a
temporary club for urban pioneers.23 At certain spots in Amsterdam, local citizens and
enthusiasts got together to temporarily redesign public spaces. The designer became a
curator and coach, turning amateurs into urban designers and the public into prosumers.

Just as the Industrial Age saw the rise of the professional designer, the digital era will see
the rise of the professional amateur. You can't beat them (there are too many), so you
might as well join them. Taking part in mainstream culture is a professional's first step
toward getting in touch with everyday life, the real world. Get real and get enthusiastic
about becoming an amateur again. With millions of people out there creating things, design
is more alive than ever.


1. Papanek, V. Design for the Real World, London: Thames and Hudson Ltd., 1972.
2. Atkinson, P., and Beegan, G. "Professionalism, Amateurism and the Boundaries of Design," Journal of Design History, vol. 21, 2008, no. 4, pp. 305-315.
3. For an excellent overview of designing in the Netherlands, see Simon Thomas, M. Goed in Vorm: Honderd Jaar Ontwerpen in Nederland, Rotterdam: 010, 2008.
4. Stiphout, W. van. "Stories from behind the Scenes of Dutch Moral Modernism," in Trousers: Stories from Behind the Scenes of Dutch Moral Modernism, Stam, M. Rotterdam: 1999.
5. For example, architects of the Amsterdam School designed the facades of their buildings in such a way that workers' families could not be seen from the outside. Tall, brick-walled balconies and high, small windows made the apartments uncomfortable to live in.
6. See Lefaivre, L.M. "Space, Place, and Play," in Aldo van Eyck: The Playgrounds and the City, Lefaivre, L.M., and Roode, I. (eds.), Rotterdam: NAi, 2002.
7. Nozems were part of what sociologists would later call youth culture. In the USA such youngsters were called beatniks; in the UK, teddy boys; in France, blousons noirs.
8. For Bourdieu's original theory of distinction, see Bourdieu, P. Distinction: A Social Critique of the Judgement of Taste, Oxon: Routledge, 2007.
9. Mentioned by Werner Sewing in his Premsela lecture. Sewing, W. Retro Design or Populism, Amsterdam: Premsela, 2005.
10. Keen, A. The Cult of the Amateur: How Today's Internet is Killing our Culture, New York: Doubleday, 2007.
11. Pointed out by Pat Kirkham, though in a different context. Kirkham, P. "Women and the Inter-war Handicrafts Revival," in A View from the Interior: Women and Design, Attfield, J., and Kirkham, P. (eds.), London: Women's Press, 1989, pp. 174-83.
12. See Julier, G. The Culture of Design, London: Sage Publications Ltd., 2008.
13. For an extensive work on the contemporary craftsman, see Sennett, R. The Craftsman, New Haven: Yale University Press, 2008.
14. This particular narrow but illustrative definition of design is pointed out by Sybrand Zijlstra. Zijlstra, S. "Reality Check," Morf, Tijdschrift voor Vormgeving, no. 9, Amsterdam: Premsela, 1999.
15. Benois, A. Aleksandr Benua Razmyshliaet, Moscow: Sovietskii Khudozhnik, 1968, p. 568.
16. Boldly stated on the back of Andrew Keen's recent bestseller. Keen, A. The Cult of the Amateur: How Today's Internet is Killing our Culture, New York: Doubleday, 2007.
17. Mentioned by Gert Staal in an expert meeting on amateurism. Amsterdam: Premsela.
18. Goedhart, S. et al. Een brede kijk op de belangstelling voor kunst en cultuur: een eerste verkenning, Amsterdam: Motivaction Research and Strategy, 2007.
19. For an unusual but interesting reflection on DIY, see Turner, C. Planet Simpson, London: Ebury Press, 2007.
20. Danyelle, J. "Peter Rojas en Jill Fehrenbacher talk hack," interview on:
21. Leadbeater, C. We-Think, London: Profile Books Ltd., 2008.
22. Starck, P. "Ich schäme mich dafür," Die Zeit, vol. 14, 2008.
23. Sunday Adventure Club is a project of Ester van de Wiel. Amsterdam: ExperimentaDesign Amsterdam in coproduction with Premsela, 2008, www.sundayadventureclub.nl.


The Handbook for Avoiding Computational Design Fallacies, Vol. 1

Onur Yüce Gün
Computational Design Specialist, Kohn Pedersen Fox NY


So I would like to advocate less algorithms, more responsiveness, less technological drunkenness and more
direction. Less silicon chippery, more brain. But I can't quite do it.
I do actually think that a massive technological orgy will happen anyway and might get us further, faster even if it
has the directional stability of a bucking bronco.1

Today we portray designs with analytical systems, and systems with the emerging
terminology of computational design. Generative, intelligent, digital, parametric, associative,
biomimetic designs (only) sound valuable, whereas their integrity remains questionable.

The tool, which enables the designer to play with forms, patterns, and models, neither grants
him the knowledge nor teaches him the appropriate technique. So we rode that bronco
downhill, but the dust that came off the ground was thicker than expected.

Are we really able to digest and master all the information we are subject to, so that it can
be used in our designs? Or do we have much to learn from the investigations of the
Renaissance men to reach a level of proficiency?

1. Knowledge and licit ways of getting it

1.1. An ocean of images on www

A young architect, full of enthusiasm, opens the script editor of a 3D modeling program to
become part of the ongoing digital design conversation. After the first baby steps, he creates
some points in virtual 3D space. Writing and manipulating scripts, he later develops lines,
curves, surfaces, and then colors, reflections, and parts. Once satisfied with the level of
generated complexity (be it a CAD model or a digital rendering) the young man posts his
creation on the internet, to share it with the rest of the world.
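The scripted "baby steps" described above can be sketched in plain Python, independent of any particular modeler's scripting API (the helix and the function names here are my own hypothetical illustration): generate points parametrically, then treat the sequence as a curve.

```python
import math

def helix_points(n: int, radius: float = 1.0, pitch: float = 0.2):
    """Generate n points along a helix in virtual 3D space."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        pts.append((radius * math.cos(t), radius * math.sin(t), pitch * i))
    return pts

def polyline_length(pts):
    """Total length of the polyline connecting consecutive points."""
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

pts = helix_points(100)
print(len(pts))                         # 100
print(round(polyline_length(pts), 2))   # length of the resulting curve
```

The point of the sketch is how quickly complexity accumulates: a few lines of script already yield a parametric curve, before any of the knowledge or technique the text asks about has been acquired.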

Each day we wake up to see a new post on the design blog2 we follow. A collection of
imagery stands before our eyes, tagged with fancy labels related to computation, nature, or
something else. In the virtually infinite playground of digital design, abstract designs try to
stand bold with the support of the abused terminology of the computational design era. But
there is not necessarily a strong foundation on which to discuss the value of the proposed
design conception and imagery.

Exploration is certainly a must while moving into the territory of the new3 in design. But
how are we going to question the value of the explored? While searching for richness, we're
stuck in the mist of digital noise. We're about to mix up abstraction with ambiguity4 and
variety with richness5. Every new day has added another word to the design terminology,
yet made two of the universal ones diminish: adequate knowledge, and the procedure of its
proper application, technique. Are we being stubborn in looking at the expression
(representation) while trying to understand both the intention and the reason? Would a
stronger body of knowledge, together with the excitement we have, show us the way to
better design?

1.2. George Stubbs and a couple of words on expertise

Each period of art and architectural history features enthusiastic characters willing to explore
and, if possible, to build the new. Yet history grants only a few of them lasting recognition. George
Stubbs (1724-1806), the British painter, is known for his horse paintings. He
resembles Leonardo6 in the way he studied the anatomical features of mammals, mainly
horses. Stubbs had a crane with which he was able to examine several carcasses, once a tiger
and once a pregnant woman, along with a horde of horses.

Along with his paintings, Stubbs left technical drawings depicting horses, still or moving,
portrayed in various layers of skin and flesh, working from the fur down to the bone (Figure 1).
The Royal Academy of Arts Collection classifies these drawings as working drawings, measured
working drawings, and finished studies. These drawings are proof of the time Stubbs
invested in understanding and discovering the proper way to express and depict a horse's
posture, movement, or even feeling and its reflection on the body. Stubbs built a body of
knowledge through thorough investigation and used this knowledge in his paintings with
ultimate precision.

It's also recorded that Stubbs had a chance to dissect a tiger and used the outcome of his
investigation in paintings like A Horse Affrighted at a Lion and A Lion Attacking a
Horse (Figure 2). Note that these paintings exist in various versions, seventeen for
the latter, in which Stubbs tried to outdo himself and enhance his technique.

Figure 1. Two of the anatomical tables by Stubbs

Figure 2. A Lion Attacking a Horse (1765)

Stubbs' paintings have a certain level of precision in depiction. Although arguable, his
paintings could be called beautiful and lifelike. Which architectural artifacts will evoke
similar feelings in the future? Those of the novice-in-precision designer who keeps
faking it?7 The algorithmically generated ones, justified not by what they are but by what lies
beneath? The ones that appealed to market-driven motivations and thus got built?
Or the ones that were rapidly modeled and rendered because the know-how was shared
via a wiki site?8

Expertise definitely plays an important role in the proper execution of design ideas. And it is a
quality built over time, as one repeatedly works on a specific subject. In practice,
the field of computational design is mostly driven by young designer-researchers
(sometimes technicians). These young professionals have adequate technical
skills in the use of the emergent design tools (most of the time they were trained during their
architectural education), yet their conceptual design strength and expertise are open to
discussion. To illustrate the idea, consider the specialized groups forming in
large architectural offices9. These offices have groups that focus on
subjects such as advanced and computational geometry, geometric optimization and
rationalization, parametric modeling, and software and project-specific tool-making. The
sizes of these groups vary with the size of the office (the group is usually around or
less than 1% of the office's population). But the group mainly consists of young
practitioners, and their expertise cannot be discussed in terms of decades (the way we can
for Stubbs).

The argument we generally face is that information is so easy to access in our time that we
spend much less time gaining knowledge than in the past. Although this argument
may be true, it is hard to judge how much time one needs to learn, digest, and apply a
body of knowledge.

1.3. Terminology and (mis)interpretation: Voronoi

The 1854 Soho cholera outbreak showcased an interesting scientific investigation by the British
physician John Snow. Snow was one of the first scientists to use a Voronoi diagram,
illustrating the comparative distances of the pumps around the neighborhood to the Broad
Street pump, which was suspected to be the main source of the disease.

The unpredictable spread pattern of the disease among the dwellings and households in
the neighborhood made Snow examine the households' daily use of water resources.
Snow found out that different dwellings and households preferred different pumps
within the area, due either to the taste or to the purity of the water. The Voronoi diagram was
a tool to prove that the neighborhood would still have access to sufficient water if
the Broad Street pump were sealed. Snow simply outlined the imaginary periphery crossing
at the half distance between the Broad Street pump and the others (Figure 3).

Figure 3. Voronoi boundary around the Broad Street pump

Today the word Voronoi is well known in the design community due to its wide range of
(proper and improper) uses. When "voronoi design" is searched online, Google returns 44
pages of images with an additional note: "In order to show you the most relevant results,
we have omitted some entries very similar to the 876 already displayed."10 Only a small
portion of the imagery is vivid and inspirational; most of it is open to interpretation, and a
vast amount is hard to relate to the idea of the Voronoi algorithm. This is understandable, as
some 3D modeling programs come with built-in Voronoi plug-ins, which make it
possible to access and play with the tool.

One common use of the Voronoi diagram is procedural texture mapping in computer graphics.
The results are claimed to be organic looking. Unfortunately, in architectural design
exploration too, most Voronoi applications cannot go beyond
being some kind of organic-looking [3D] texture11. Then which Voronoi is the good
Voronoi? Could we learn anything from Snow's analytical approach and push the
application of algorithms beyond texture mapping, deeper than the skin?
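Snow's construction can be made concrete in a few lines. In the sketch below the pump and household coordinates are invented for illustration (the historical map data is not reproduced here); each household is simply assigned to its nearest pump, and that nearest-neighbour partition is exactly the Voronoi diagram in discrete form.

```python
import math

# Hypothetical coordinates; only the nearest-pump logic mirrors Snow's method.
pumps = {"Broad St": (0.0, 0.0), "Rupert St": (3.0, 1.0), "Marlborough": (-2.0, 2.5)}
households = [(0.5, 0.2), (2.6, 1.1), (-1.8, 2.0), (1.4, 0.4)]

def nearest_pump(point):
    """Return the pump closest to a point, i.e. the Voronoi cell it falls in."""
    return min(pumps, key=lambda name: math.dist(point, pumps[name]))

# The dictionary below is the discrete Voronoi partition of the households.
cells = {house: nearest_pump(house) for house in households}
```

Sealing the Broad Street pump would then amount to deleting its entry from `pumps` and recomputing `cells`, which is precisely the "sufficient water would remain" argument in computational form.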

2. Techniques in making
2.1 Simulation vs. Reality

In the last couple of years, solar analysis tools have become easily accessible. It then didn't
take much time for design studies incorporating solar simulations to become
mainstream. While the basic intuitive results are useful representational devices, advanced
studies require more sophisticated understanding and approaches in design. However, with
market-driven motivations, these simulation results are mostly handled in a superficial way.
The derived data is used either to create simple color charts or to create apertures or shading
devices without further investigation.

Two very important points are overlooked. The first is the accuracy of the simulation: the
potential bias between simulation and reality is neglected. Although these simulations can
enhance the performance of a design, they cannot be taken as its only driver.
We are debating designs using climate data collected within the
last ten years and taking decisions (or selling designs) based on it (Figure 4). Most
of the time, a building on the table today will be designed over
the next year or so and built in the following couple. The proposed environmental
performance, which already depends on expired12 climate data, will expire further by the time
the building starts its life cycle. We need to question the reliability of the data we produce
and use it with an awareness of possible performance biases. Second, solar analysis
studies are mostly handled in one go, although iterative simulations are needed for the
proper optimization of designs. In successive loops, the design or its components
should be tested and updated based on the latest design generation.
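The contrast between a one-off analysis and the successive loops argued for here can be sketched schematically. In the toy code below, `simulate` is an invented stand-in for a real solar run (it pretends a hypothetical 0.6 m shade depth is optimal), and a shading depth is nudged towards better performance over repeated simulate-and-update cycles.

```python
def simulate(depth):
    # Stand-in for a real solar simulation: pretend annual overheating is
    # minimal at a (hypothetical) 0.6 m shade depth.
    return abs(depth - 0.6)

def iterate_design(depth, step=0.05, loops=20):
    """Successive simulate-and-update cycles instead of a one-off analysis."""
    for _ in range(loops):
        if simulate(depth + step) < simulate(depth):      # try a deeper shade
            depth += step
        elif simulate(depth - step) < simulate(depth):    # try a shallower one
            depth -= step
    return depth

final_depth = iterate_design(0.1)
```

A one-go study corresponds to calling `simulate` once on the initial design; the loop is the "successive generations" workflow the text asks for.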

Figure 4. Average schedules for design-construction of a large scale building


Here my aim is not to devalue the ongoing studies on solar investigation in the design
process, but rather to propose an inquiry into the value attributed to these studies.

KPF's Eco-City study incorporated genetic algorithms together with solar analyses to explore
optimized organizations at the urban scale. Local codes require each apartment to receive
no less than two hours of sunlight per day. The building and podium blocks are placed, aligned,
and evaluated according to the shadowing conditions during the day. Limiting factors such as
roads and green areas act as further drivers (Figure 5). The scripts we ran overnight
enabled the design team to review various design iterations without the hassle of
trial and error.
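The mechanics of such an overnight search can be illustrated with a deliberately minimal genetic algorithm. This is not KPF's actual tool: the fitness function below is an invented stand-in for the real shadowing evaluation, rewarding the minimum gap between blocks on a one-dimensional site (wider gaps standing in for fewer overshadowed apartments).

```python
import random

random.seed(0)
N_BLOCKS, SITE = 5, 100.0  # hypothetical: 5 blocks on a 100 m strip

def fitness(layout):
    # Stand-in for a solar/shadowing check: reward the minimum gap
    # between neighbouring blocks.
    xs = sorted(layout)
    return min(b - a for a, b in zip(xs, xs[1:]))

def mutate(layout):
    # Nudge one randomly chosen block, clamped to the site.
    child = list(layout)
    i = random.randrange(N_BLOCKS)
    child[i] = min(SITE, max(0.0, child[i] + random.gauss(0, 5)))
    return child

# Evolve: keep the fittest layouts, refill the population with mutants.
pop = [[random.uniform(0, SITE) for _ in range(N_BLOCKS)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                                   # selection
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(pop, key=fitness)
```

In the real study the fitness would be the two-hour insolation requirement and the genome the block positions and alignments; the loop structure is the same.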

Figure 5. Genetic algorithm diagrams for placement of building blocks

This sophisticated study was more interesting to us than a straightforward
surface insolation study. The configurations we generated with the in-house design
tools proved to be right, and the results held up when checked with conventional solar
shadowing tools. The project was taken further into design development by the team.
In the end, however, the scheme was not appreciated by the client, who was not settled on
the location of the building masses. This highlights the question of the valid as opposed to
the discretionary.

2.2 Discontinuity in Processes

Rule-based generative systems not only enable users to explore iterative variations of
designs through parameters but also do so in an automated way. However, owing to the
levels of complexity at different stages of design, no design is resolved purely with
computational methods. Design is still a process run with hybrid techniques, so a perfectly
structured analytical system does not necessarily generate the best possible solution.
Some manual operations inserted between the automated ones may enrich the design
process. Besides, while genetic algorithms succeed in creating generations of designs that
satisfy the fitness criteria13, not all criteria can be defined numerically. Thus some
conventional design techniques are still valid.

The façade pattern studies we did at Kohn Pedersen Fox Associates for several high-rise
buildings illustrate the claims above. The design studies for the Songdo buildings incorporate
user input within the automated façade-making process. After the first façade generation is
created with the rule sets embedded in the script, the program pauses for user input, and
the user can update several floor patterns according to individual design ideas. This was an
intentional choice, since the visual qualities of the façade were not evaluated
computationally. So the computational design process was fused with a conventional one to
achieve a better result.
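A minimal sketch of such a pause-for-input workflow follows. The rule set, panel names, and the shape of the user edits are invented for illustration, not taken from the Songdo tool; where the real program pauses interactively, the sketch takes the designer's overrides as a plain dictionary.

```python
def generate_facade(floors, bays, rule):
    """Automated stage: fill the panel grid from a rule (floor, bay) -> panel."""
    return [[rule(f, b) for b in range(bays)] for f in range(floors)]

def stripe_rule(floor, bay):
    # Hypothetical rule set: alternate glazed / solid panels in a checker.
    return "glazed" if (floor + bay) % 2 == 0 else "solid"

def apply_user_edits(facade, edits):
    """The 'pause': the designer overrides chosen panels before the run resumes."""
    for (floor, bay), panel in edits.items():
        facade[floor][bay] = panel
    return facade

facade = generate_facade(floors=4, bays=6, rule=stripe_rule)
facade = apply_user_edits(facade, {(2, 0): "feature", (2, 1): "feature"})
```

The point of the split is exactly the paper's: the rule handles what can be formalized, and the manual edits carry the visual judgment that was not evaluated computationally.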

The tool developed for the Shenyang towers generates patterns with specific rule sets and
panel types (Figure 6). But because of the discrepancies between the overall tower form and
the panel sizes, the corner intersections of the curtain wall had to be resolved with an
additional function in the script. This secondary part is not a design tool as such, but rather
a tool to fix the unwanted conditions emerging from the mismatch between the pattern and the
overall form.

Figure 6. Panel encoding and final outcome

3. Epilogue: Expression through precision

This paper is a product of discomfort. While stepping into the new realm of design, we're
grasping the computational and the digital, and using both to dress our designs in the
fashion of our decade. This paper intends to highlight the importance of knowing and of
proficiency. The Renaissance gave art and architectural history its rebirth because it was
a time of learning. Today the architect has an even greater responsibility to become
equipped with broader technical and conceptual skills, so that he can design with precision,
since that is the only way to go deeper than the skin.

and obviously some do care about all this. Some take on the struggle of thinking through and
figuring out the appropriate way to do it with the new techniques. However, the design-
construction market moves with very different motivations and invests accordingly. The ones
who really care have quite a bit to tell, but do not necessarily have the power to

In retrospect to the last two titles of the Critical Digital conference, "what matters?" and "who
cares?": some do care, but does it really matter?

End Notes:

1. See Hensel M., Menges A. and Weinstock M., Techniques and Technologies in
Morphogenetic Design, Great Britain: Wiley-Academy, 2006, p. 54. The quote is taken from Prof.
Wise's concluding paragraph. Prof. Chris Wise is the Civil Engineering Design Chair, Imperial
College, London.
2. The World Wide Web hosts a vast number of design blogs, consisting of collective or personal
work, which can be accessed and used free of charge. Core.form-ula is a well-known
collective academic discussion platform. Students of Pratt Institute, the University of
Pennsylvania, and Columbia use the website frequently. [http://www.core.form-ula.com/]
3. The primary meaning of the word "new" is "having recently come into existence" or "having been
seen, used or known for a short time". However, architectural design progresses with
accumulated knowledge; thus the new is that which is "different from one of the same
category that has existed previously" or "of dissimilar origin and usually of superior quality".
4. M.C. Escher's Prentententoonstelling is constructed on a distorted grid generated
by Escher himself. The grid is acknowledged as a mathematical masterpiece.
5. See Gun, O.Y., "Anti UV: Progressive Component Design in Cross Platforms", 2008.
6. The Renaissance man, considered a mathematician, engineer, inventor, sculptor, painter,
architect, musician, botanist, and writer.
7. In his book Faking It, William Ian Miller talks about the dishonest behavior of people trying
to fit the social roles they feel attached to. The shift in the realms of digital design
is affecting the portrayed social roles.
8. Wikis are collaboratively generated and edited web pages devoted to specific
technical subjects. As an example, the RhinoScript Wiki presents an immense number of scripts to
be used in the NURBS modeling platform Rhino. [http://en.wiki.mcneel.com/default.aspx/
9. KPF: Computational Geometry Group; Foster and Partners: Advanced Geometry Group;
SOM Chicago: Black Box; etc.
10. Likewise, "voronoi architecture" returns 42 pages with an additional note of "In order to
show you the most relevant results, we have omitted some entries very similar to the 826
already displayed."
12. Explanation found on [http://squ1.org/wiki/WeatherTool_Updates]: "Please be aware that
weather data information is provided in good faith, and that Square One Research and its
affiliates cannot be held responsible for any inaccuracies or discrepancies contained in
weather data files. It is strongly recommended that you independently verify the accuracy
of such information prior to use."
13. Genetic algorithms (GA) are used in design to generate an exact or approximated result
for an optimized condition.

< Firmitas - Utilitas - Venustas .Digital-as (?) >

Paolo Fiamma
Civil Engineering Department - University of Pisa, Italy

Does Digital concern the substance of Architecture, or is it only a passing fashion?
In Architecture, it is one thing to admit that Digital changes the specificities of the
program and the modality of production (i.e. the digital translation from the design
of the product to the design of the process); it is a completely different thing to
accept that Digital could imprint itself on the nature of the Architectural Fact (i.e.
the Vitruvian Triad).
Can Digital actualize the Tradition, or does it go against it?
- like a deus ex machina that dematerializes the consolidated architectural typologies
(some buildings with digital shapes seem like decorations, just at a different scale);
- like a modernization (i.e. Collaborative Design, B.I.M.) of the means of design as
verified conception.
Are students, definitively, software-filter-mediated in their normal way of thinking
about architectural design, or not?
A didactic responsibility: students often still endure the digital, but they don't metabolize
it. Case studies: two experiences with 260 students, repeated every year for three years.
Could digital allow humans to understand real phenomena in their intrinsic value, merit,
and validity, or is it just a game?
Case study: visitors in a museum spontaneously understand that digital resources can
augment their natural capacities.

1. Introduction (to the Test)

Digital, like every revolution of strong impact and full of implications, has imposed itself as
a datum, a progress of knowledge, and is in itself neither positive nor negative.
It is rather its methodology that has attracted either positive or negative criticism.
Digital, in fact, has made room for powerful new opportunities to express what designers
already considered important.
Really, digital has only accelerated already existing processes. Very often the usage,
propagation, and spread of the digital dimension have met with diverse behaviors and responsibilities.
Some people have found in Digital the possibility to reinforce their personal positions and
thus appreciate this dimension. Others have been hostile to it, considering it a destabilizing
element. (Critical reactions have varied from one part of the world to another, probably due
to different cultural and technological backgrounds.)
In any case, one can say that, generally, the acceptance or rejection of the digital in
architecture has defined, everywhere, two distinct positions: one more traditional and the
other innovative. Using Digital in Architecture implies a responsibility, because digital is an
amplifier of positions. Therefore some people take care about it, but everyone should
have taken care of it, because it is a potentiality which in any case had to be managed.
Basically, instead, the digital phenomenon seems to have aroused extreme reactions,
enthusiastic or condemning, more than moderate or simply scientific attitudes.

2. Architectural theory

<Does Digital concern the substance of Architecture (A), or is it only a passing fashion (B)?>

In Architecture, it is one thing to admit that Digital changes the specificities of the program
and the modality of production (i.e. the digital translation from the design of the product to
the design of the process); it is a completely different thing to accept that
Digital could imprint itself on the nature of the Architectural Fact.
In the normal way of thinking about the core of Architecture (at least in Western culture), we
know how important the attributes of the Vitruvian Triad (25 B.C.) are: Firmitas, Utilitas and
Venustas [Frampton K. 1993; Alberti L.B.].
Thus, to think of a new attribute, Digital(-as), cannot be an orthodox concept.
Extreme behaviors care about this theoretical architectural conflict, because digital, in an
architecturally incorrect way, transforms two fundamentals of Architecture.
These two, matter (unreal and untouchable) and space (delocalized and ubiquitous), are no
longer concrete, no longer assimilable to the nature of the architectural <fabrica> (the <res
aedificata>), measurable and identifiable [Baborsky M.S. 2001; Anders P. 1998; Woods L.].
Therefore, some critical positions assert that Architecture cannot be the output of some
games in software. In addition, as we know, there is just one step from digital to real:
would some designers revive, perhaps, the myth of the Architect-Demiurge? Attention,
please: more freedom implies more responsibility.
Other critical positions assert that the digital dimension offers designers new opportunities
to discover new mental space in themselves; the conquest of a new cognitive space
thus leads to the conception of a new architectural space [Johansen J.M. 2002; Saggio A. 2002].

3. Digital between innovation and tradition

<Can Digital actualize the Tradition (A), or does it go against it (B)?>

At this point the focal issue is how architects think of and view Architecture.
Today, designers are using Digital as a powerful activator of their existing attitudes.
1) Digital as a deus ex machina that dematerializes the consolidated architectural
typologies (note that some buildings with digital shapes seem like decorations, just at a
different scale). That is, Digital against Tradition.
But could Architecture really be digital-filter-mediated?
Could Digital take the place of the Human in architectural design?
A substantial content of the debate on the digital dimension is the intentionality of
Architecture. Consequently, the problem is probably not whether Digital absorbs
Architecture (and is subsequently rejected), or whether Architecture absorbs Digital (and is
subsequently accepted).
In fact, Digital is on another plane. It is not intentional like architecture; it does not replace
the intentionality of the human being in Architecture. For this reason, the debate does not
warrant extremely opposite positions.
Is Digital pushing current Architecture towards an excess of representation? Or are
there designers who, conceiving building shapes just to surprise, use Digital to
obtain this effect? [Lynn G. 1998; Eloueni H. 2001].
2) Digital as affirmation of the traditional meaning of design as verified conception,
thinking of digital approaches like Collaborative Design or B.I.M. [Eastman
C. 1975 and 2006; Kalay 2001 and 2000].
In the past the verification of design was empirical (after construction); nowadays it
happens (before construction) through the digital modeling of the design's buildable characteristics.
Digital provides a verification of the design conception through modelling: an iterative
method, a sort of to-and-fro between the conception and its verification.
A possible risk: could the power of digital modeling suffocate human creativity? [Gucci N.
1999; Zevi B. 2004; Piano R. 2005].
And, vice versa, does Digital impose a current conformism in architecture? Today, are we
heading towards a new international-(digital)-style?

4. Didactic field

<Are students, definitively, software-filter-mediated in their normal way of thinking about
architectural design (A), or not (B)?>
Case studies: two test-experiences, repeated each year for the last three years, with 260
students of my Technical Architecture course (in the second year).
Note that these students had already attended courses in design and CAD.
The results of both tests were the same each year.

Test A - at the beginning of the Course. Architecture: from Real to Digital.

Exercise: draw freehand the façade of a church shown in a photograph.
The results of the test contained quite important errors.
1) Most of the lines were crooked and some proportions were wrong.
This is not strange, because students are no longer accustomed to drawing freehand.
But the most important point was another one; the most widespread error was a different one.
2) The drawings gave back an architecture flattened from the perspective point of view.
The photo was in perspective, and even though perspective restitution was not asked for, nearly
all of the students used indistinct graphic signs: the pressure of the hand was the same in the
different parts of the drawing.
We know this is an error from at least two points of view:
- If there is no diversity of signs, it does not only mean that the parts of the drawing are
indistinct, but that the construction is indistinct too. The fact that there are no obvious
hierarchies between the signs means that, for these students, there are no constructive
hierarchies: the molding of a capital should not have the same sign as a volumetric block,
because, constructively, the two do not have the same hierarchy.
- The varying sign also represents the various planes of distance from the observer and,
therefore, of light exposure and depth. The bell tower in the background is on a "second plane"
with respect to the façade, and therefore cannot have the same sign.
Nevertheless, the students had already used the computer in previous courses to realize
volumes in perspective, space, and forms on different planes; so why these results?
According to the students' answers, the point is that they were not accustomed to thinking
about the different planes of representation, because normally the software did it directly,
using shadow effects, when they were modeling 3D architectures.
They were accustomed to seeing the models that way, but they were not in the least concerned
with producing the same effects themselves. The conclusion is not only that they saw the effects
without knowing the causes (i.e. the theory of shadows), but that they worked without
understanding: they saw without looking.
In short, they had unconsciously linked the shadowing to the software and not to the
building, not to the built architecture, as if constructed architecture did not have a
fundamental requirement such as the play of light in space, as if the software had accustomed
them to some special effect.
The shadowing, always present in front of them on the monitor, had not transferred to their
knowledge as an attribute of real, constructed architecture. (Substantially, it
would be like re-proposing a song while omitting some important factors: i.e. repeating the
words and music but without timbre, all in a flat way; i.e. saying "You are my great love"
in the same tone as "I am wearing my shoes".)

Test B - at the end of the Course. Architecture: from Digital to Real (the opposite test).
Exercise: the same questions about the students' annual design work.
The object of this annual work is the same architectural typology, but developed during the
year by the students using only one of several software resources (CAD, modeling,
Building Information Modeling).
The models of buildings realized with the various software solutions were printed.
In addition to the usual appraisals, the students were asked to answer thinking not
of the design but of the construction, as if it had been made.
To the questions about the reasons for the plan, the style, the shape composition, and the
choice of materials, the students answered immediately, and it seemed almost senseless to
think of the real construction.
But when asked constructive and executive questions, the students showed an obvious
embarrassment. Thinking about the real construction hindered their answers or, better still,
made them understand that in many situations they had not thought about the building
from a constructive point of view (i.e. a cotto façade without posing the problem of how
the façade was fixed, or a wide room without understanding what it was best to use
in order to support it). In short, they had not thought constructively; but the rendering of
the building was beautiful!
Another group of students, who used Building Information Modeling software, obtained
technical solutions that were constructively much more correct. On the other hand, those
solutions were less evolved from a shape point of view, because it was very difficult to design
non-standard technical component solutions: in fact, the students could only use a limited
choice of constructive parametric components.
Is there a relationship between the software they used and the meaning of what they
learned? Is it possible to assert that the design was an object of representation more
for the first group of students than it was for the second group?
All the students concluded that the type of software used had had a strong influence
on them. On the contrary, just a few students said they should have known
how to go about it.

Conclusion. A statistical evaluation of (impressive) effects concerning:

- the relationship between what students do in software and how well they understand the
corresponding construction (is it really buildable?). Often the digital look of a building
prevails over the correctness of its technical solutions;
- the relationship between the type of software used (and the students' knowledge of its
mechanisms) and the type and limits of the buildings designed.
A didactic responsibility: students often still endure the digital, but they don't metabolize it.
It is always necessary to give students critical criteria to understand and to sift, deeply, the
new. The mission is to educate them in the advantages of the Digital in Architecture, but to
prevent its didactic risks [Schmitt G. 1993; Mitchell W.J. 1997; Donath D. 2002].

5. Knowledge resource

<Could digital allow humans to understand real phenomena in their intrinsic value, merit,
and validity (A), or is it just a game (B)?>
Case study: the design of a new mixed space (architectural and immersive) requested from
us by the Director of the Leonardo da Vinci Museum (after his visit to our
Lab). He wanted not just to present to visitors the recent studies of Leonardo's
cosmological theory (the intuition of the heliocentric model of the solar system before Galileo), but
also to let current visitors feel how this discovery was a totally absolute innovation for
his contemporaries.
The VR system (screen, frame, computers, polarized back-projection system), located in a
limited part of an ancient room, consisted of a usable space of 3x4x2.5 m and allowed the
audience to enjoy an interactive navigation through the two different models of the Universe:
the Ptolemaic one and Leonardo's.
The starting point of view was based on the position of an imaginary viewer located
approximately on the surface of the Earth (Ptolemaic model) and inside the Solar System
(Leonardo's model). During the test, in this kind of digital interaction with the object of
knowledge, the visitors changed their own cognitive method: from the symbolic-reenacting
method to the perceptive-motor one. The visitors felt how digital could be "natural", not
just in understanding the knowledge content deeply, but in perceiving themselves as part of this
content [Forte F. 1997; Niera C. et al. 2002; Meehan M. et al. 2002].
It clearly showed that this is something more than a calculation model: namely, the
transformation of information into a perceptible form. The idea of representation,
experienced just as onlookers, by now belongs to the past. The visitors' comments and
reactions were very positive and enthusiastic: digital can give added value to human
understanding. In fact, the goal of the installation (the mixed environment) was to let the
visitors directly experience the differences between the two cosmological models.
Shortly: visitors spontaneously understand that digital resources can augment their
natural capacities [Grigorovici D. 2003; Schnabel M.A., Kvan T. 2004; Ryu J. et al.].

6. Test conclusion

A > B : One who cares for

A < B : One who doesn't care
4A : Fan of Digital
4B : Fan of Vitruvius
A=B : Work in progress

For every topic:

if (B): one could as well not care at all; but if (A): each and everyone should care!

7. Endnotes

AA.VV. Official Proceedings of the 29th International Conference on Computer Graphics and
Interactive Techniques, ACM SIGGRAPH, San Antonio, Texas, 2002.
Bharat Dave, Gerhard Schmitt. Designing in a Networked Society. 8th International Conference on
Systems Research, Informatics and Cybernetics, Baden-Baden, Germany, 1995.
Barney Dalgarno, John Hedberg, Barry Harper. The Contribution of 3D Environments to
Conceptual Understanding. In: Proceedings of ASCILITE 2002, New Zealand, 2002.
Dirk Donath. A Building Information System Based on a Planning-Relevant Surveying System:
A Module in a Comprehensive Computer-Aided Project Planning. In: Proceedings of the
9th International Conference on Computing in Civil and Building Engineering, Taipei,
Taiwan, 2002.
Kenneth Frampton. Storia dell'architettura moderna. Zanichelli, Bologna, 1993.
John S. Gero. Computational Models of Creative Designing Based on Situated Cognition. In:
Proceedings of the 2002 Conference on Creativity and Cognition, 2002.
Peter Anders. Cybrids: Integrating Cognitive and Physical Space in Architecture. Convergence:
The International Journal of Research into New Media Technologies, 4/1, 1998.
Charles M. Eastman, Paul Teicholz, Rafael Sacks and Kathleen Liston. BIM Handbook: A
Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and
Contractors. John Wiley & Sons, New Jersey, 2008.
Charles M. Eastman and Rafael Sacks. Relative Productivity in the AEC Industries in the
United States for On-Site and Off-Site Activities. Journal of Construction Engineering and
Management, Vol. 134/7, 2008.
Gerhard Schmitt. Instruments for Bridging Traditional and Virtual Design Environments.
Proceedings of the 15th IABSE Congress, Copenhagen, 1996.
Lebbeus Woods, John M. Johansen. Nanoarchitecture: A New Species of Architecture.
Princeton Architectural Press, 2002.

Academia Should Care:

Moral and Ethical Obligations of Architecture Schools to Society and Future Architects
Shohreh Rashtian, Ph.D., AIA Associate, AWA


This paper provides an overview of US professional degree programs in architecture and
their curricula. It offers a perspective on the major needs of our society and the role of
architects in satisfying those needs. It identifies the challenges of architecture
education in preparing students to practice architecture in the digital age. The author then
proposes ideas on how to use new media toward fulfilling the moral and ethical obligations
of architecture schools to society and the environment.

1. Introduction

During the past two decades, digital media has had a major impact on the processes of
architectural design, building construction and building operation. The capacity, accessibility,
flexibility, diversity, manipulability, interactivity and speed of digital media extend
architects' design capabilities into new realms and foster collaboration between architects,
engineers and builders. The advent of new media demanded the addition of a variety of digital
media training to architecture curricula. Due to the limited duration of study and the
number of courses that can be covered during this time, design studios have focused more
on the production of irregular, biomorphic architectural form than on equipping students with the
necessary understanding of their obligations to society and the world around them. Architecture
magazines and their critics have paid more attention to sophisticated building images and
complex building forms than to how people live and function in those buildings.
Employers have tried to hire new graduates with the most current skills in digital media.
Consequently, architecture students have been more interested in using new media for
creating 3D images and exploring 3D models than in paying attention to how their designs
serve the needs of building users. Due to the dominance of visual presentations and
the fascination of observers, more attention has been given to the use of digital media as a
device for the production of 3D models and sophisticated images than as a tool for offering
architectural solutions that serve the major needs of our population. With a quick connection to
the internet, we can gather information about the major challenges that our society faces in
relation to its habitat. It would be great if we could use digital media and digital
technologies in overcoming those challenges too.
Architecture schools have a major role in shaping the future of architecture and in fulfilling
architects' responsibility toward society and the environment. Emerging technology is rapidly
changing. There is no doubt that architecture schools should actively participate in inventing
and using new media. However, their focus needs to shift to how to use new media in
architecture for serving a wide variety of human needs and protecting the global ecosystem.

2. US professional degree programs in architecture and their curricula

The National Architectural Accrediting Board (NAAB) is the sole organization that accredits
US professional degree programs in architecture. This agency recognizes three types of
degrees: the Bachelor of Architecture, the Master of Architecture, and the Doctor of
Architecture. The PhD in Architecture is not recognized as a professional degree by the
NAAB and is not accredited by this agency. There are currently 151 NAAB-accredited
professional programs in architecture provided by 117 schools: one Doctor of
Architecture program (the University of Hawaii at Manoa is the only institution that offers a
NAAB-accredited D.Arch program), 94 Master of Architecture programs and 56 Bachelor of
Architecture programs.

The Doctor of Architecture degree requires a minimum of 120 undergraduate and a minimum of
90 graduate-level semester credit hours, or the quarter-hour equivalent. The Master of Architecture
degree must require a minimum of 168 semester credit hours, or the quarter-hour
equivalent, of which 30 semester credit hours must be at the graduate level. The Bachelor of
Architecture degree must require a minimum of 150 semester credit hours, or the
quarter-hour equivalent [i]. The curriculum of a NAAB-accredited program includes general studies,
professional studies, and electives. All programs require 45 credit hours of general studies
(such as arts, humanities, and sciences) or related courses outside of architectural content.

3. Challenges for academia and schools of architecture in digital age

The National Architectural Accrediting Board (NAAB) requires an accredited program to train
graduates who: are competent in a range of intellectual, spatial, technical, and
interpersonal skills; understand the historical, socio-cultural, and environmental context of
architecture; are able to solve architectural design problems, including the integration of
technical systems and health and safety requirements; and comprehend architects' roles
and responsibilities in societyii. While the author agrees with the competencies that are
necessary for a graduate with a degree in architecture, she questions the amount of credit
hours that are needed for reaching this goal in digital age. In order to answer this question
the author reviewed NAAB Conditions for Accreditation 2004 editioniii and NAAB Procedures
for Accreditation 2008 and 2009 editioniv. In all of those documents NAAB requires thirty
four student performance criteria to be fulfilled by architecture curriculum to reach this goal.
Table1 presents the relationship among NAAB student performance criteria, current
Architect Registration Examination (ARE.4.0) divisionsvand the use of digital media in
architectural practice. Table 1 shows that not only architecture schools should cover all
necessary courses and studios for NAAB requirements but also they need to provide hours
of training for a variety of current and emerging digital medias. Reviewing this table raises
following questions: Are 150 semester credit hours sufficient for teaching all of the required
subjects and training of a variety of related software? Do extra 18 semester credit hours for
Master degree cover the needs for innovation and research in architecture? How
architecture schools can cover all of above necessary trainings for practicing architecture in
just 3 years? How academia can make a balance among teaching digital media, furnishing
students with a comprehensive understanding of social and environmental problems,
improving design skills and facilitate innovation? Is there enough budget and resources to
cover all of those areas? Is there enough collaboration between architecture schools and
community? Is universal design strategies integrated into architecture curriculum? Table 1
shows there are no software to measure relationship between environment and human
behavior in architecture profession. How architects can use the digital media to evaluate
users psychological reactions to architectural creations? How many schools work on
generating new technologies to fulfill the needs of people with sensory and cognitive
impairments? In order to answer all of those questions, a comprehensive study of all
programs need to be done. However, just by simple calculation of necessary hours for
training of each subject, we can realize the challenges that academia face to cover all areas
of trainings in digital age.
Table 1. Architecture curriculum and digital media

NAAB Student Performance Criteria [i] | ARE 4.0 [ii] | Use of digital media in architectural practice
Speaking and Writing Skills | | Communication programs
Critical Thinking Skills | |
Program Preparation | 1 | Architectural programming software (data entry, data analysis and ...)
Research Skills | | Computer-based information systems, statistical software, a variety of digital media related to the type of the research
Graphic Skills | | Computer-aided design, 3D modeling (surface & solid modeling), vector graphics, digital editing of images, video editing, motion graphics, rendering, animation
Formal Ordering Skills | |
Fundamental Skills | 1 |
Site Conditions | 2 | Computer-based information systems and a variety of programs for site analysis
Collaborative Skills | |
Use of Precedents | 1 | Building code software; building energy code software; building, planning & zoning departments' document management and archives
Accessibility | 2 & 4 | United States Access Board online publications and trainings
Comprehensive Design | 2 & 4 |
Knowledge of Western Traditions | 1, 2, 4 | Computer-based information systems, virtual modeling, real-time simulation, and virtual reality
Knowledge of Non-Western Traditions | 1 | Computer-based information systems, interactive computer models of historic environments, virtual modeling, real-time simulation
Knowledge of National and Regional Traditions | 1, 2, 4 | Computer-based information systems, virtual modeling, real-time simulation, and virtual reality
Human Behavior | |
Human Diversity | | Information technology
Client Role in Architecture | 7 |
Sustainable Design | 1 | A variety of digital technologies and interactive tools for LEED
Structural Systems | 5 | Building Information Modeling (BIM)
Environmental Systems | 6 | A variety of environmental management systems; electrical/mechanical/HVAC construction software; lighting calculation and visualization software
Life-Safety | 1 | Building code software; building energy code software; building, planning & zoning departments' document management and archives
Building Envelope Systems | 4 | Computer-aided design and computer-aided fabrication (CAD/CAM) programs, digital manufacturing programs
Building Materials and Assemblies | 4 | Computer-aided design and computer-aided fabrication (CAD/CAM) programs
Building Service Systems | 6 | Electrical/mechanical/HVAC construction software, Building Information Modeling (BIM)
Building Systems Integration | 6 | Building Information Modeling (BIM), Specs/Design/Schematics
Construction Cost Control | 1 | Construction cost control software, construction accounting software
Technical Documentation | | Construction specifications; LEED specifications and integrated BIM specification programs; hospital IT Specs/Design/Schematics
Architects' Administrative Roles | 7 | Construction management, construction accounting, project management programs
Architectural Practice | 7 | AIA Contract Documents software
Professional Development | 7 | Up-to-date knowledge about all related digital media
Leadership | 7 | AIA Contract Documents software
Legal Responsibilities | 7 | AIA Contract Documents software
Ethics and Professional Judgment | 7 | AIA Contract Documents software
4. Challenges for new architecture graduates

In reality, the amount of knowledge that a architectural students need to have for practicing
architecture cannot be fulfilled in current credit hours of education and current IDP program
cannot completely assist them to complete their knowledge because finding an IDP
supervisor is a big challenge and there is no warranty to find an appropriate IDP supervisor
and mentor. On the other hand we cannot ask an architecture student spend more years at
school considering the fact that after years of hard works at school and amount of skills that
are necessary for practicing architecture, expected salary for architects is much lower than
other profession in field of engineering or medicine : An IDP intern only earn 30,000 dollars
a year, the median expected salary for a typical Architect in the United States is $56,637
and the median expected salary for a typical Asst. Professor in Architecture is $58,167.
While the median expected salary for a typical Asst. Professor in Civil Engineering is
$71,310 and the median expected salary for a typical Pharmacist is $105,689.

5. Architectural Education and society

Architecture intends to create optimum environments that inspire, nurture and satisfy
human needs and desires. According to theories of human motivation, all human beings
share a basic set of common needs. They need to have an adequate standard of living and
sufficient food, water, air, light and sanitation. They have a right to a safe environment
appropriate for human comfort and wellbeing. The built environment should provide them the
opportunity to learn, explore and acquire knowledge. They have a need for structure,
predictability, stability, and freedom from fear and anxiety. They have a right to be
accepted by others and to have a sense of identity, cultural security and personal ties with
their family and friends. We are living in a world with 1.4 billion people who live on an income
of $1.25 a day or less [vi]. On a given day, an estimated 754,147 homeless persons live in the
United States [vii]. Over one million people lost their homes to foreclosure in 2008 [viii]. All of
us need a decent and safe place to live.

The United States population aged 65 and over is expected to double in size within the
next 25 years. By 2030, almost 1 out of every 5 Americans (some 72 million people) will be
65 years or older [ix]. The realization of the causes and consequences of population aging
shows that disability is part of the life span and is a condition that nearly everyone will
experience. Architectural spaces should accommodate those life-span changes.
Approximately 45 million people are blind worldwide. Blindness affects more than 1 million
Americans aged 40 and older [x], and still we do not have appropriate wayfinding and spatial
learning materials for them. The needs of people with cognitive impairment and people with
dual sensory impairment are unknown to the majority of architects.

Architecture and urban spaces play important roles among the factors that shape humans
and their wellbeing. People's mood, performance and problem-solving ability are affected
by elements of the built environment. The organization of a building influences the thinking,
emotions, social interaction and health of its users. Confusion in learning the layout of a
building causes fear and anxiety.

With a comprehensive knowledge of the needs and characteristics of our diverse society,
architects can design environments that maximize positive health outcomes and facilitate
positive interactions within the community. An architectural creation should not only provide an
environment that satisfies human needs but also give people a sense of belonging and
affiliation with others. It should maximize their potential. It should facilitate their full
participation in society. It should enhance their dignity. It should provide a platform for
self-actualization and, finally, give a sense of fulfillment.

6. Conclusion and suggestions

The author recommends the following steps toward fulfilling our obligations to society and
future architects:

Increasing awareness about the role of good architects in enhancing health and
happiness (physicians cure health problems; good design can prevent health problems).
Collaborating with urban designers, environmental psychologists, gerontologists,
healthcare professionals, social workers and computer scientists.
Promoting the teaching of specialized fields in architecture as post-professional degrees.
Advocating collaborations between architecture schools and architectural firms for
providing strong internship programs.
Supporting the use of genetic algorithms as a search technique in architectural
programming and pre-design.
Integrating the universal design approach and assistive technologies into architecture curricula.
Increasing students' awareness by including more frequent contact with diverse
communities and by placing more emphasis on "built environment and human
interactions."
Encouraging the use of digital technologies that are innovative and benefit the ...
Promoting doctoral research on the innovation of new technologies.
Including post-occupancy studies in the architecture curriculum.
Using web cameras for conducting research on human-environment interactions and
post-occupancy studies.
Using e-surveys and web blogs for users' feedback in post-occupancy studies.
Using eye-tracking media to learn how users observe their surroundings.
Using building automation for environmental adaptations.
Motion capture, cameras and sensors can be used in navigation and wayfinding
studies and, at the same time, as tools for facilitating wayfinding and
spatial learning.
Building automation, nanotechnology and interactive materials can be our major
resources for adapting buildings and products to users' needs.
Sharing our experience and our views with the NAAB:
On February 21, 2009, the Board of Directors of the National Architectural
Accrediting Board approved the first reading of the 2009 Conditions for Accreditation.
The document is now ready for review and comment by the general public. The
deadline for comments is June 1, 2009 [xi].
On 2/19/09, a U.S. Department of Housing and Urban Development (HUD) news
release announced that the Obama administration awarded nearly $1.6 billion in
homeless grants to thousands of local housing and service programs nationwide and
that the Recovery Plan provides $1.5 billion in additional funding for homelessness
prevention. It is a great opportunity for architecture schools to participate in the efforts
to prevent homelessness. Generating affordable construction materials, creating
temporary and permanent shelters and increasing collaboration with the U.S.
Department of Housing and Urban Development [xii] can be first steps in this effort.
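One recommendation above is the use of genetic algorithms as a search technique in architectural programming and pre-design. As an illustrative sketch only, the rooms, adjacency constraints, and GA parameters below are invented for this example and are not drawn from the paper; a minimal GA for a pre-design layout problem might look like this:

```python
import random

# Toy pre-design problem: order six rooms along a corridor so that desired
# adjacencies (pairs of rooms that should sit next to each other) are met.
# Room names and constraints are invented for illustration.
ROOMS = ["lobby", "office", "meeting", "kitchen", "storage", "wc"]
DESIRED = {("lobby", "office"), ("office", "meeting"), ("kitchen", "wc")}

def fitness(layout):
    """Count how many desired adjacencies a linear layout satisfies."""
    adjacent = {tuple(sorted(pair)) for pair in zip(layout, layout[1:])}
    return sum(1 for pair in DESIRED if tuple(sorted(pair)) in adjacent)

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    kept = a[i:j]
    rest = [room for room in b if room not in kept]
    return rest[:i] + kept + rest[i:]

def mutate(layout, rate=0.2):
    """Occasionally swap two rooms to keep the search from stagnating."""
    layout = layout[:]
    if random.random() < rate:
        i, j = random.sample(range(len(layout)), 2)
        layout[i], layout[j] = layout[j], layout[i]
    return layout

def evolve(generations=200, pop_size=30):
    """Elitist GA: keep the best half, breed the other half from it."""
    pop = [random.sample(ROOMS, len(ROOMS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)
```

Running `evolve()` returns a room ordering that satisfies most or all of the desired adjacencies; the same scheme extends to richer pre-design constraints (areas, daylight, circulation) by changing the fitness function.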
i. The National Architectural Accrediting Board (NAAB), http://www.naab.org
ii. The National Architectural Accrediting Board (NAAB) website.
iii. NAAB Conditions for Accreditation for Professional Degree Programs in Architecture, 2004 Edition, pp. 12-16, NAAB.
iv. NAAB Procedures for Accreditation for Professional Degree Programs in Architecture, 2009 Edition, NAAB.
v. ARE 4.0 Guidelines, NCARB. ARE 4.0 divisions: Division 1: Programming, Planning & Practice; Division 2: Site Planning & Design; Division 3: Schematic Design; Division 4: Building Design & Construction Systems; Division 5: Structural Systems; Division 6: Building Systems; Division 7: Construction Documents & Services.
vi. World Bank poverty estimates, August 2008.
vii. HUD 2007 Annual Homeless Assessment Report to Congress, www.huduser.org/Publications/pdf/ahar.pdf
viii. BusinessWeek, January 14, 2009.
ix. 65+ in the United States: 2005, report prepared for the NIA, a component of the National Institutes of Health (NIH) at the U.S. Department of Health and Human Services, http://www.nih.gov/news/pr/mar2006/nia-09.htm
x. Shohreh Rashtian, Architecture & Spatial Cognition Without Sight, Universal Design and Visitability: From Accessibility to Zoning Symposium, 2006.
xi. The National Architectural Accrediting Board (NAAB) website.
xii. HUD news release, 2/19/2009, HUD.

Digital Design Tools that Talk

Rachelle Villalon
in collaboration with Henry Lieberman, Software Agents Group
Massachusetts Institute of Technology, USA

Machines break down. Computers tend to crash. Cellular phones ring in the middle of a class
lecture; shouldn't devices know better? The machine ought to quietly fix itself. The computer
ought to recognize and save valuable data before it decides to crash, and the cellular phone
should know not to ring during class. People tend to adapt themselves to the devices they
use instead of the device adapting itself to the user. Design applications and other tools will
become more intelligent by learning from the user's physical design moves, such as the
process of building an architectural model, drawing on paper, or drawing on the computer
with design software, and a digital fabrication machine will learn from and observe the design
and assembly of an artifact so that it too can participate constructively. If design
applications had common sense, or rather, shared knowledge amongst a group of
designers, then designing with machines would cut down on production time and costs, increase
efficiency, and enable a different dialogue with machines. This paper proposes a system
where the machine is no longer the routine servant of commands, but can also play a
critical and constructive role in the design process. Architectural designs are not just
collections of 3D objects.

1. Introduction
Most people think computers will never be able to think. That is, really think. Not now or
ever. To be sure, most people also agree that computers can do many things that a person
would have to be thinking to do. Then how could a machine seem to think but not actually
think?
- Marvin Minsky, AI Magazine, 1982

When computing is part of the designer's practice, there are opportunities for creative
articulation in digital space, yet the digital presence poses challenges in understanding the facts
and details that designers care about. With the advent of digital design tools like Building
Information Modeling (BIM), errors in design coordination between architects and their
consultants are now usually easy to identify. The 3D virtual parametric environment allows
each player in the Architecture, Engineering, and Construction (AEC) field to resolve design
and construction conflicts before actual construction of the design proposal occurs (Figure
1). Even with BIM, however, a 3D modeling software environment with embedded object-
oriented intelligence is missing a lot of relevant knowledge that designers possess. This
paper proposes a system for machines to understand the context of a user's task, from high-
level architectural reasoning to low-level construction knowledge. Computers need to be
participants in human-computer creative systems, as mediators between people.
Figure 1. BIM for project coordination at CO Architects, Los Angeles.

2. Motivation
A principal problem with design tools occurs when the user has to troubleshoot the errors
that arise in the creation of a digital-to-physical artifact [4]. Assume a user is digitally
creating a series of parametric exterior walls attached from floor to ceiling, but later decides
to reconfigure the aesthetic formalism of the geometry. Doing so in any particular design
software environment could result in lengthy warnings and object errors. The user can heed
the warnings and proceed to amend the object, or recreate the entire geometry from the
beginning. A lot of work is involved in diagnosing the problem, making the user concentrate
on the tool instead of the content. How, then, can computers play a constructive role in the
design process?

3. Open Mind Common Sense and ConceptNet: Teaching Computers about the
Architect's World
The MIT Media Lab has been collecting common sense facts from the public since 2000
through the Open Mind Common Sense project (OMCS). The project is a web-based
interface that allows contributors over the web to input common sense, or shared
knowledge amongst individuals, into a database, to teach machines what people would
normally accept as common knowledge (Figure 2). To date, over 13,000
people have contributed to OMCS, generating numerous common sense concepts since
September 2000 [1].

OMCS currently does not contain specialized common sense knowledge such as the information
that architects may know. The integration of architectural knowledge in ADEON begins by
gathering a corpus of architectural common sense that a computer would need to know
about making architectural artifacts. In order to do this, a collection of input from
standard architectural texts and feedback from a selected number of architects was
necessary to create the beginning of an architecture knowledge database. Efforts to create a
specialized common sense knowledge database useful for a professional field like
architecture would mark a new avenue of professional resources using OMCS and
ConceptNet, the natural language processing toolkit [2, 5] that encompasses the spatial,
physical, social, temporal, and psychological aspects of everyday life through its knowledge
base, a semantic network that contains 1.6 million assertions [3].
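To make the notion of a semantic network of assertions concrete, each assertion can be thought of as a (concept, relation, concept) triple. The sketch below is illustrative only: the relation names mimic ConceptNet's style (IsA, UsedFor, PartOf), but the particular architectural facts and the lookup function are invented for this example, not drawn from OMCS or the actual ConceptNet toolkit.

```python
# A miniature ConceptNet-style semantic network. Assertions are
# (concept, relation, concept) triples; the facts below are invented
# architectural examples, not real OMCS data.
ASSERTIONS = [
    ("brick", "IsA", "masonry unit"),
    ("brick", "UsedFor", "building a wall"),
    ("mortar", "UsedFor", "bonding bricks"),
    ("wall", "PartOf", "building"),
    ("stretcher bond", "IsA", "brick wall pattern"),
]

def related(concept):
    """Return every assertion that mentions the concept, as a design tool
    might when retrieving knowledge relevant to the object being drawn."""
    return [(s, r, o) for (s, r, o) in ASSERTIONS if concept in (s, o)]
```

For example, `related("brick")` retrieves the IsA and UsedFor assertions above, the kind of contextual knowledge a tool like ADEON could surface while the user sketches a wall.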

Figure 2.

3. Introducing ADEON: Using OMCS for Architectural Applications

An initial architectural OMCS corpus has been created for the ADEON project. ADEON is an
architectural design application/software environment that interfaces with a digital fabrication
device: a "pick and place" articulating robot arm for constructing architectural models, the
IRB-140 (Figure 3).

Figure 3. IRB-140 robot stacking a block wall.

The global objective of ADEON is to bridge the gap between design, construction, and
prototype fabrication knowledge using "common sense" facts with which architects and
fabricators are familiar.

The goals of ADEON include:

a) A simple-to-use graphical editor for sketching architectural ideas.
b) Retrieval and display of design, construction, and prototype fabrication
knowledge relevant to the design being drawn.
c) Cost estimates and material usage estimates for the design drawn in the editor.
d) Translation of user-drawn information in the design editor into machine-readable code
for the prototype fabrication robot.
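Goal (d) can be illustrated with a toy translation step. Everything in this sketch is hypothetical: the block dimensions, the MOVE/PLACE command names, and the coordinate convention are invented for illustration, since the paper does not specify ADEON's actual machine code for the IRB-140.

```python
# Hypothetical translation of a wall drawn in the editor into pick-and-place
# instructions for a robot arm. Block size, command names, and coordinate
# convention are all invented here for illustration.
BLOCK_LEN, BLOCK_HT = 0.2, 0.1  # block length and height, in metres

def wall_to_commands(x0, y0, length, courses):
    """Emit one MOVE/PLACE pair per block for a straight wall, staggering
    alternate courses by half a block (a simplified running bond)."""
    cmds = []
    blocks_per_course = round(length / BLOCK_LEN)
    for course in range(courses):
        offset = BLOCK_LEN / 2 if course % 2 else 0.0  # running-bond stagger
        z = course * BLOCK_HT
        for i in range(blocks_per_course):
            x = x0 + offset + i * BLOCK_LEN
            cmds.append(f"MOVE {x:.2f} {y0:.2f} {z:.2f}")
            cmds.append("PLACE")
    return cmds
```

A call such as `wall_to_commands(0.0, 0.0, 1.0, courses=2)` yields twenty commands (five blocks per course, two courses); a driver layer would then convert such commands into the robot controller's native format.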

Conventional software design tools do not exhibit knowledge of the design situation at hand
[6]. The proposed system assists the user with live textual and graphical feedback
about the feasibility of the construction and machine fabrication technique (Figure 4).

Figure 4. (a) Drawing walls in the draw editor. (b) Resulting knowledge feedback form.

ADEON is a prototype for a methodology of connecting knowledge across the different levels.
It is not, at present, a general-purpose architectural design system. The goal was to show
that we can provide an end-to-end system for capturing knowledge at all levels, from vague
sketches where the concerns are mainly aesthetic, all the way to machine code for
programming a robot constructing architectural models (and perhaps, in the far future,
robots actually performing final construction).

The focus is on a particular set of design scenarios: constructing different types of brick
walls. Though brick walls are conceptually simple, there are many examples of aesthetically
innovative and historically important brick wall designs (some of which we explain in
detail). We also chose this domain to simplify the prototype fabrication step (though it still
leaves plenty of room for problems to happen!). Expansion to other architectural domains
and other fabrication techniques is possible with additional efforts in knowledge collection.

More generally, systems like ADEON can show the way towards systems that
integrate both high-level and low-level concerns in a variety of fields beyond architecture.
Designers of consumer electronics devices such as phones or music players, for example,
must think about industrial design aesthetics, functionality, manufacturing, and
the cost of devices. We believe that one of the essential elements of creativity is to be able
to play with high-level and low-level concerns simultaneously, and flexibly go back and forth
between them.
Figure 5. A semantic network of Design, Construction, and Prototype knowledge.

4. Conclusion: Integrating Intelligent Feedback in Digital Design Models
The emergence of digital techniques in architecture has produced opportunities and
challenges for the practitioner's design process (Figure 6). One of the challenges occurs when
a machine misinterprets a designer's intention in a digital design environment. The current
means of remedying the situation is for the user to do any of the following: resort to
technical manuals and/or online support forums, summon technical support, or even hire a
consultant. Machines still lack the capability to integrate and adapt themselves to
understanding common design practices, and to collaborate as active agents in various design
processes. ADEON is one example that tries to make that possible, offering a contribution on
how designers can relate to machines by extending an initial formation for a collaborative
approach. Extending the use of collaborative artificial intelligence techniques and
technologies, as built into the Open Mind Common Sense project, towards digital design
models used by practitioners will attract speculation about the designer's role and
relationship to machines, yet the interaction offers opportunities toward a seamless design
process amid the current limits and fragmentation of disparate digital tools in practice.

Figure 6. Digital techniques for representing and building a project type.


[1] MIT Common Sense Computing Group. Open Mind Commons. 2008.

[2] MIT Common Sense Computing Group. ConceptNet.

[3] Liu, H. and Singh, P. ConceptNet: A Practical Commonsense Reasoning Toolkit. BT
Technology Journal 22(4), 2004. Kluwer Academic Publishers.

[4] Minsky, Marvin. AI Magazine, vol. 3, no. 4, Fall 1982. Reprinted in Technology Review,
Nov/Dec 1983.
[5] Mueller, Erik T. Commonsense Reasoning. Morgan Kaufmann, 2006.
[6] Villalon, R. and Lobel, J. Materializing Design: Contemporary Issues in the Use
of CAD/CAM Technology in the Architectural Design and Fabrication Process. ASCAAD 2007.

Time, Context and Mixed Reality

Jules Moloney
CRIDA, University of Melbourne, Australia

Bharat Dave
CRIDA, University of Melbourne, Australia

The use of digital technologies in design continues the trajectory of 20th century modernism,
where architecture is conceived as an object devoid of context - physical and cultural.
Moreover, there are minimal examples where context is considered over time. We outline
the potential of mixed reality to enhance decision making at the early stages of design,
grounded in two principles: temporal context (the point of view of a moving surveyor within
a dynamic environment) and concurrent evaluation (the simultaneous presentation of
qualitative evaluation and functional performance). The discussion is structured around two
rhetorical questions: who cares about temporal context, and why bother with mixed reality?

1. Overview

In order to approach these questions, this presentation starts with a cryptic history of design
visualization and the contemporary preoccupation with novel form generated in neutralizing
software. From this, two ideas are introduced - temporal context and concurrent
evaluation - as the conceptual underpinning to the implementation of mixed reality. In a
concluding section, approaches to using mixed reality are considered, and from this we
outline a project that explores these ideas. The research is predicated on the belief that new
media and technologies ought not only to facilitate existing modes of practice, but also to
engender new modes of designing. In our view, while innovation lies in expanding mixed
reality technology for appropriation by design practices, the wider significance lies in
extending intellectual traditions in representations and their power to encode, explain and
transform our world.

2. Impaired vision

The history of design visualization for the bulk of the twentieth century can be directly
related to the canons of modernism. From its publication in 1932 until the mid 1970s, The
International Style was the mantra for a period of impaired vision in which architecture
was conceived as a thing in itself, "as if it were the only building in the world" [1].
Commensurate with this ethos, architectural drawings were abstract plans and sections
delineating internal function and structure, supplemented with axonometric projections to
show three-dimensional relationships as if they were describing engine parts. The socio-
cultural reasons for the failure of functional modernism are understandably complex and
dependent on local circumstances, but one outcome was that contextual relationships became
of major concern. The hot house for design theory and the interplay between ideas and
drawing in the 1980s was the London Architectural Association School, particularly in the
design studios of Bernard Tschumi and his student protégés. A key strategy was
experimentation with alternate forms of architectural visualization, and design classes at the
Architectural Association were among the first to inform architectural drawing with a sense
of occupation through time. The term narrative drawing was coined to describe this radical

approach to design visualisation, in which drawings were developed as filmic storyboards,
mixing standard architectural conventions with photographs, and music or dance notation.
The promise of narrative drawing was overshadowed in the late 1980s by the widespread
take-up of computer aided design (CAD). For contemporary designers, CAD has evolved into
Building Information Models (BIM), algorithmic design and rapid prototyping - all of which
are visualized in three dimensions. However, the ease with which complex geometry
and surface qualities can be manipulated has meant that 3D visualization has become, to
some extent, a victim of its own success. In the hands of many there would appear to be a
predilection for abstract form making undertaken against neutral backdrops. Typically,
minimal time is spent experimenting with a range of realistic viewpoints, and lighting
conditions are usually based on one idealized moment in time. Arguably, the wholesale take-
up of 3D digital modelling within contemporary architectural practice has seen a regression
in terms of context evaluation, and a reinforcement of the paradigm by which architecture is
conceptually - and in some cases literally - prototyped as a discrete object.

Who cares? Well, if the designer is preoccupied with visual appearance, we would argue that
a thorough understanding of how form and space are read as a temporal sequence, or the
manner in which surface deformation occurs in differing light and moisture conditions, will be
a concern. Similarly, designers concerned with environmental issues may consider performance
over multiple temporal scales and in relation to microclimates, to enable a more thorough
evaluation of the sustainability of a project. These issues - qualitative visualisation and
performance evaluation over time - are developed below as two interrelated ideas: temporal
context and concurrent evaluation.

3. Temporal Context

The foregrounding of time in relation to context in design is not a new idea, but one that
has struggled against ingrained traditions that privilege the architectural object over context,
and against the lack of efficient representation techniques. It is self evident that architecture
must perform within a dynamic context, but in practice context is considered at best in relation
to a snapshot in time. The idea of temporal context is developed here as two interrelated
aspects: (1) that architecture is experienced by the body in motion as a spatial sequence
over time; (2) that architecture must perform, functionally and in socio-cultural terms, in
relation to a dynamic environment. The first aspect of temporal context - spatial sequence -
has a long tradition in terms of how architecture is experienced. Bois and Shepley's "A
Picturesque Stroll around Clara-Clara" traces a genealogy of the peripatetic view, from the
Greek revival theories of Leroy, the multiple perspectives of Piranesi, and Boullée's
understanding of the effect of movement, to the Villa Savoye, where architecture is best
appreciated, according to Le Corbusier, "on the move" [2]. Generations of 20th century
architects have relied on plan and section, supplemented by cryptic perspective sketches, to
organize movement in time and space. Robin Evans eloquently describes this art, the
translation from projective drawing to constructing the experience of architecture through time:

"Design is action at a distance. Projection fills the gaps; but to arrange the emanations first
from drawing to building, then from buildings to the experience of the perceiving and
moving subject, in such a way as to create in these unstable voids what cannot be displayed
in designs - that was where the art lay." [3]

Evans' thesis is an insightful account, but it excludes physical models, the advent of cinema
and the uptake of the computer. As an investigation of the influence of geometry via media,
it is a history that stops with the advent of photography. What changes, if anything, by
projecting the digital into Evans' diagram? When this question was discussed in the mid-
nineties, it was apparent that there were three developments transforming embedded
practice: parametric design, immersive editing and computer aided construction [4].
Parametric design and computer aided construction are now ubiquitous and arguably
reasonably well understood. What was meant by the phrase immersive editing? At the
time the hype was of immersive virtual worlds where, potentially, the designer could design
from the point of view of occupation. Thanks to the suggestion of an insightful student, we
embarked on a series of design studios that used video game technology to provide a low
cost virtual environment. We deliberately used technology that allowed simultaneous
occupation by multiple users and which enabled the visualization of large terrains and
cityscapes with dynamic lighting. The focus was on designing both from within the
architecture and exploring the impact of the design on the virtual site. Such visualization,
based on spatial sequence and the ability to freely change camera position and orientation,
represents one aspect of temporal context.

The second aspect is that perception of the built environment and the performance
requirements of buildings vary dramatically over time. A recent design studio focused on the
potential of architecture to exploit this perceptual change by using time-lapse video taken
over daily and weekly cycles. Students manipulated geometry, reflectivity and opacity, and
evaluated these temporal shifts against the high fidelity video context. In an attempt to
evoke the constant shifts in activity during daily and seasonal cycles, a second series of
studios based on immersive editing was undertaken, which explored the potential of sound
to convey the atmosphere of occupation over multiple time scales. The best of these
brought the 3D worlds to life [5]. As one traversed the architecture, snippets of
conversation, passing traffic and footsteps evoked changing activities, reinforcing a sense of
materiality and the cultural life of the city.

Figure 1: Design studios - multiplayer game environment (left), city soundscape (centre), timelapse video (right).

4. Concurrent Evaluation

The simultaneous evaluation of the qualitative and the quantitative is captured by the
phrase concurrent evaluation. Typically these are separate conversations, with designs often
evaluated in terms of environmental performance or constructional efficiency after key
conceptual ideas are locked in. While it is now acknowledged that architectural design must
perform in an expanded field, in which performance in socio-cultural and aesthetic terms
should be considered alongside functional performance, there is a lack of methodologies and
fundamental research that supports concurrent evaluation. From a scientific perspective this
is perhaps understandable, as functional performance can be measured, charted and
graphed, but qualitative assessment is not readily amenable to calculation or proof.
Primarily it involves subjective decision making - designers, clients and stakeholders
discussing the formal merits of one design over another. If objective measure is improbable,
mixed reality systems offer the potential for the simulation of designs in a multi-sensorial
and temporal context that enhances and aids subjective evaluation of designs prior to
construction. In this context, mixed reality technology offers the potential for the concurrent
evaluation of the quantitative and qualitative attributes of design options, in relation to an
interactive real time parametric model, or what has become known as a digital prototype.
Figure 2 illustrates how the two ideas - temporal context and concurrent evaluation - can
potentially extend the current use of parametric digital models to provide a holistic
prototyping environment. The diagram extends the typical understanding of a digital
prototype as a discrete building model evaluated in the contextual vacuum of the
engineering design interface, to one where the qualitative can be evaluated alongside the
quantitative, both in terms of a temporal context - where evaluation is based on moving
through typical spatial sequences - and the simulation of a dynamic environment.

Figure 2: Enhanced digital prototype.

5. Mixed Reality

The definition of mixed reality most frequently encountered is based on a taxonomy of
Milgram and Kishino, in which they classify visual displays that allow the combination of
virtual and real space. Rather than articulate hard boundaries between fully synthetic
digital space and the video display of real environments, they develop what they term a
virtuality continuum. The location of display environments along this continuum is
determined by the extent of knowledge present within the computer about the world being
presented [6]. At one end of the continuum is virtual reality, where all the information
required to produce the contents of the display environment is known (geometry, location,
surface, etc.), while at the other is video projection of a physical environment, where all that
is known (by the computer) is that it is a video file. What mix of technologies will enable
design ideas to be evaluated in relation to a dynamic context, from multiple motion paths,
and at the same time allow the superimposition of functional performance data? Perhaps
more importantly for design practice, how might these technologies be implemented in a
studio design environment - why bother with mixed reality?

Design review and decision making is reliant on conversation and negotiation, usually
involving a range of representations - performance data, drawings, physical and computer
models. Despite advances in distance based collaboration technology, crucial design
decisions are still undertaken in face-to-face review and discussion, "trying out ideas and
getting reactions" [7]. This observation enables a clarification of the potential role of mixed
reality technology at the early stages of design. We should be clear on the distinction
between the individual act of designing - the formation of early ideas on paper, and with
physical and digital models - and the process of design review, in which these sketch
designs are evaluated. In the concept design stages, there is a strong argument that
working with abstractions rather than a fully rendered context is more productive [8]. There
is of course no normative design method that captures the range of approaches individuals
use to generate ideas. But as a general theme, whether developing ideas with paint,
charcoal, graphite, Photoshop, SketchUp, Maya or other 3D modelling software, there is a
tendency to deal with abstractions rather than with detail. Typically this is a solitary,
reflective activity, the individual designer developing ideas as an internalized conversation
with media - the mark, interpret, mark cycle identified by Herbert in relation to drawing
practice [9]. Moreover, individuals have favorite tools, hence having to learn new computer
interfaces may be counterproductive at this conceptual stage. The design review, by contrast,
is a group activity in which design options, often from a number of designers, are compared
and discussed in relation to a range of issues. Arguably, it is at this point of the early design
stage that concurrent evaluation within a temporal context is most productively introduced.
The individual ideas can be evaluated side by side, against the context in which they have to
perform, allowing discussion of qualitative and quantitative performance within a range of
time scales. What mix of technology might meet these requirements?

We are currently developing two prototype systems that explore mixed reality at the early
stages of design: a desktop system that combines site video, virtual design models and
environmental performance databases; and a mobile augmented reality (AR) system for on-
site design review. A feature of the development is the involvement of several local
practices in formulating the approach. At present the prototypes are rudimentary, and are
presented here for comment. The desktop system consists of three integrated design work
spaces: pre-recorded camera paths and time lapse video with an embedded high resolution
design model; a low resolution real time virtual environment; and the graphic display of key
environmental performance data. Users can evaluate alternate design models within the
three work spaces simultaneously, with options able to be swapped on the fly. Design
notes and mark-up can be added in relation to particular viewpoints or time lapse
simulations, to provide a design 'history tree'. This ability to locate significant key frames
and temporal thresholds emerged as a significant issue in conversation with practitioners.
While architecture may be experienced in a similar manner to film, when designing there is
a tendency to focus on particular frames. Another factor that recurred in conversations
with practitioners was the need for a way of tracking ideas as they develop. Comments,
together with screen grabs of the design, are synchronized with a discussion forum, to
enable input from those away from a workstation, or in different time zones. The design
history functionality, and the ability for design directors to access and comment on design
changes while on the road, were identified as significant issues. The second system, the
mobile AR approach, consists of a mobile touch screen that can be wheeled around a site.
Proposals can be viewed as dynamically rendered models superimposed on streaming video.
Yet to be fully implemented, the idea is that the design review will happen on site, allowing
discussion to occur in relation to a computer simulation, while experiencing the nuance of
place and time - the mix of aural, visual and scale cues that are typically absent when
discussing architectural ideas.

[1] Berman, M. "The Experience of Modernity," in Thackara, J. (ed.) Design After Modernism.
London: Thames and Hudson, 1988, p. 26.
[2] Bois, Y.-A. and Shepley, J. "A Picturesque Stroll around Clara-Clara," October 29, 1984, pp.
[3] Evans, R. The Projective Cast: Architecture and Its Three Geometries. Cambridge, Mass.:
MIT Press, 1995, p. 363.
[4] Moloney, J. "Collapsing the Tetrahedron: Architecture with(in) Digital Machines," 1999.
Available: http://www.chart.ac.uk/chart1999/papers/noframes/moloney.html.
[5] Harvey, L. and Moloney, J. "Resounding Cities: Acoustic Ecology and Games Technologies,"
in Phillips, C. (ed.) Environmental and Global Citizenship. Rotterdam: Inter-Disciplinary
Press, 2005, pp. 21-28.
[6] Milgram, P. and Kishino, F. "A Taxonomy of Mixed Reality Visual Displays," IEICE
Transactions on Information Systems, E77-D(12), 1994.
[7] Mitchell, W. "Keynote Address: Virtual Design Studios," CAAD Futures '95, 1995.
[8] Do, E. Y.-L. and Gross, M. D. "Thinking with Diagrams in Architectural Drawing." Norwell,
Mass.: Kluwer Academic, 2001.
[9] Herbert, D. Architectural Study Drawings. John Wiley and Sons, San Francisco, 1993.