AR[t]
Magazine about Augmented Reality, art and technology
MAY 2013
HOW & NOSM MURAL AUGMENT, THE HEAVY PROJECTS, ARTICLE ON PAGE 8
Colophon
ISSN 2213-2481
CONTACT
The Augmented Reality Lab (AR Lab) Royal Academy of Art, The Hague (Koninklijke Academie van Beeldende Kunsten) Prinsessegracht 4 2514 AN The Hague The Netherlands +31 (0)70 3154795 www.arlab.nl info@arlab.nl
EDITORIAL TEAM
Hanna Schraffenberger, Mariana Kniveton, Yolande Kolstee, Jouke Verlinden
CONTRIBUTORS
AR LAB & PARTNERS: Wim van Eck, Edwin van der Heide, Pieter Jonker, Maarten Lamers, Maaike Roozenburg GUEST CONTRIBUTORS: Alejandro Veliz Reyes, Antal Ruhl, Lotte de Reus, Matt Ramirez, Oliver Percivall, Robin de Lange, BC Heavy Biermann
GRAPHIC DESIGN
Esm Vahrmeijer
PRINTING
Klomp Reproka, Amersfoort
COVER
Our AR[t]y cover is a work by Royal Academy of Art student Donna van West who participated in the Smart Replicas project, see: www.donnavanwest.nl
www.arlab.nl
Table of contents
WELCOME
to AR[t]
WHO OWNS THE SPACE 2
Yolande Kolstee
THE MISADVENTURES IN AR
Oliver Percivall
AUGMENTED PEDAGOGIES
Alejandro Veliz Reyes
A STUDY IN SCARLET
Matt Ramirez
SUBJECT: INTERVIEW
From: Hanna Schraffenberger To: Lev Manovich
PRE-DIGITAL AR
Maarten H. Lamers
AUGMENTED EDUCATION
Robin de Lange
HOW DID WE DO IT
Wim van Eck
BELIEVABILITY
Edwin van der Heide
TWO GRAPHIC DESIGN STUDENTS (DAVE POPPING AND GABOR KEREKES) MADE A MOIRÉ PATTERN AS A DECODING KEY
WELCOME... to the third edition of AR[t], the magazine about Augmented Reality, art and technology!
In this issue we present articles by contributors from all over the world who are involved in stretching the borders of augmented reality, on the edge of art and technology. We feature both articles with a philosophical perspective and articles with a more technical point of view. In Re+Public's article we read about blurring private property boundaries in an attempt to leverage AR to allow artists to make incursions into public spaces in ways they were previously physically unable to do. We are very pleased to introduce to you the improved version of Marty: the video see-through AR head-up display. The drawings and 3D model will be downloadable from our website, to print out at your local 3D print facility! Hanna Schraffenberger sets about interviewing Lev Manovich, well known to many since the publication of his book The Language of New Media in 2001. Maarten Lamers takes us back to pre-digital AR with his story on Pepper's Ghost, and at the other end of the spectrum Antal Ruhl explores the potential of using vestibular stimulation to create new AR experiences. Crucial to AR experiences is the concept of believability, explored by Edwin van der Heide. Lotte de Reus discusses a spatial audio intervention to enhance museum exhibits. A recurring topic is set out by Yolande Kolstee: the legal ramifications of AR, initiated in AR[t] #2. Furthermore, we feature a short science fiction sequel by Oliver Percivall. Yolande Kolstee, Head of AR Lab
In the second part of this issue, special attention is given to education: experiences of bringing AR into the classroom are presented in separate articles by Alejandro Veliz Reyes, Matt Ramirez and Yolande Kolstee. The result is a wide variety of implementations and reflections in different creative contexts. Robin de Lange considers the long-term ramifications of extending both the mind and cognition itself with AR. Wim van Eck continues his AR tutorials in the series How did we do it, describing some AR software programmes which are widely available. In our new section AR[t] Pick, we share artworks that caught our eye. In this issue, we have chosen 'Immaterials', the result of a collaboration between the onformative design studio and Christopher Warnow. We invite you to visit our new website, via the same URL www.arlab.nl, which will provide you with all the information on our artistic and technical research, that of fellow researchers, and information about our experiments in the cultural domain. On this website, in the News Picks section, we post short news items on AR artists, scientists, events and developments. Please feel free to contact us to tell us what caught your eye, which might lead to an item there. We are confident you will enjoy this issue. Should you like to contribute to issue #4, look out for our call for contributions, which will be posted on our website soon!
OUTDOOR ADVERTISING: AR | AD TAKEOVER (NYC, 2011), RON ENGLISH, photo by WILL SHERMAN
RE+PUBLIC
A creative collaboration between The Heavy Projects (Los Angeles) and the Public Ad Campaign (New York City), Re+Public is dedicated to using emerging media technology, and augmented reality (AR) in particular, to alter current expectations of our public media environment generally dictated by property ownership, the ability to pay for its usage, or a willingness to break the law. Blurring private property boundaries, Re+Public seeks to leverage AR in an effort to allow artists to make incursions into public spaces in ways they were previously physically unable to do. With this goal in mind, Re+Public has developed an experimental mobile device application that digitally resurfaces three specific areas of public space: outdoor advertising, murals, and buildings. As such, this article focuses on these domains and demonstrates how Re+Public has used AR to transcend current private property boundaries, which lies at the heart of our endeavor to re+imagine public space as a more open visual commons.
public space, Re+Public investigated how civic authorities allow certain private parties to profit while preventing or discouraging other forms of public media production. To this end, we envision AR as the first step in the evolution of better tools of expression that can democratize public media production. The AR | AD Takeover used street-level ads and billboards to trigger a curated digital art installation that displayed on mobile devices. In New York City, we augmented ads in Times Square with artistic content. Our digital infiltration into public space and takeover of commercial ads created a place of dialogic interaction rather than a monologic consumptive message. Specifically, users could trigger web-based information related to the showcased artists whose work has historically addressed commercial advertising in public space, such as Ron English, Dr. D, John Fekner, PosterBoy, and OX. We foresee AR mobile device technology as a first step in the transformation of public space into an arena shaped by user-created content. In other words, AR is an incremental step towards showing the public an alternate view of their landscape, which commercial ads do not necessarily have to dominate. Once individuals experience this AR version of reality, they might start demanding a better version of public messaging than the billboard default.
OUTDOOR MURALS: BOWERY WALL (NYC, 2012) AND WYNWOOD WALLS (MIAMI, 2012)
In contradistinction to the use of AR to problematize the consumptive monologue of outdoor advertising, at both the Bowery and Wynwood Walls sites, we used AR to rupture public space with a new kind of artistic interactivity. The Bowery Mural Wall is an outdoor mural exhibition space in Manhattan. Owned by Goldman Properties since 1984, real estate developer and arts supporter Tony Goldman started the Bowery Mural with Jeffrey Deitch and Deitch Projects. In 2008, the mural series commenced with a recreation of Keith Haring's famous 1982 mural, followed by work by such recognized artists as Os Gemeos, Faile, Barry McGee, Aiko, Kenny Scharf, Retna, and Shepard Fairey. In 2012, Re+Public used AR to resurrect murals that once existed on the wall. In other words, by pointing a mobile device at the present mural, users are able to see the former murals, in situ, that the current artist has painted over. While users could certainly view the previous murals online, AR permits users to see these murals as if they were actually back on the wall, in the space, as they move in perspectival relation to the viewer. It is precisely this kind of spatial aura that distinguishes AR from other types of emerging media technology. AR connects the digital with the physical in an intimately present way, whereas other technologies tend to disconnect the viewer from their immediate physical surroundings. Viewing previous murals online, for example, removes the viewer from the space by placing them squarely in the absent, digital world.
In 2009, Goldman Properties and Tony Goldman, who was looking to transform the industrial warehouse district of Miami, also conceived the Wynwood Walls. Beginning with the 25th and 26th Street complex of six separate buildings, Goldman endeavored to create a center that developed the area's pedestrian potential. Drawing from an international pool of talent, artists who have contributed to the Wynwood Walls include Os Gemeos, Invader, Nunca, Saner & Sego, and Swoon among many others. During Art Basel 2012, Re+Public was commissioned by the Wynwood Walls to create an AR experience. In addition to resurrecting a Shepard Fairey mural that he had recently painted over with a new mural in tribute to the recently deceased Tony Goldman, Re+Public created 3D, interactive environments for four other murals by How and Nosm, Ryan McGinness, Aiko, and Retna. Additionally, we worked directly with MOMO and collaborated on an AR version of his indoor mural at the Nicelook Gallery on site.
In considering our deployment at the Wynwood Walls in particular, we wrestled with this new type of work and wondered if it constituted a new mode of art. Without pretending to discern any immediate resolution, it is arguable that the AR assets represent original works in that they contain a sufficiently new visual expression of ideas, over and above those embodied in the 2D mural. In the case of each traditional mural, the AR overlay used the 2D paintings as feature tracking markers and source material to produce an original expression of creative content. In addition to an immersive garden with a bridge, stream, and flowing waterfall, we created a 3D Kabuki theater that allowed users to walk into a digitized version of the mural where all of the elements were separated in Z-space. With the How and Nosm, we created another immersive, rather abstract 3D environment and permitted users to both pull apart and reconstruct the mural elements. With the Retna, we built the mural shapes in 3D and animated them to extend out of the wall, placing them both on the ground and above the wall. Finally, with the McGinness, we made it appear as if the paint colors were draining out of the mural.
AUGMENTED ARCHITECTURE: PEARL PAINT, WILLIAMSBURG ART & DESIGN BUILDINGS (NYC, 2012) BRADBURY BUILDING (LA, 2012)
In addition to using both outdoor ads and murals as the markers that trigger our AR deployments, we have also experimented with using buildings and their unique architecture in much the same way. To this end, we digitally resurfaced, or skinned, physical buildings in urban centers by overlaying 3D content onto the physical environment. Ultimately, in our attempt to use AR to re+imagine public space, we see the city as a canvas that allows for a multiplicity of voices to enter into our visual landscape, rather than the current commercial hegemony. Working with muralist MOMO, we chose three buildings that had a particular cultural significance and, using AR, made it possible for MOMO to put his art on buildings that he could not have accessed in his traditional 2D format. Converting MOMO's 2D designs into digital 3D models optimized for mobile, we placed his art on both the Pearl Paint and the Williamsburg Art & Historical Center buildings in New York City. Both structures have a long art history in the city, and AR allowed us to blur the lines between private and public space. In Los Angeles, we converted the Bradbury Building, site of many interior shots in the film Blade Runner (1982), into a futuristic version of itself. In this way, instead of placing converted art on the structure, we created the first example of what we refer to as city visions. Specifically, we used AR to provide an artistic rendering of the re+imagined building by projecting it into a Blade Runner style future. This city vision type of deployment potentially provides more practical architectural and urban planning uses, and maintains our notion that the AR experience should be spatially relevant in order to maintain the physical aura that may have drawn the viewer to the building in the first place. In other words, the AR assets should maintain some logical connection to the building or space upon which we have attached them.
FINAL REMARKS
Whether it is outdoor ads, murals, or entire buildings, Re+Public seeks to continue to deploy AR in an effort to democratize access to our shared visual environment and alter the current expectations of urban media in accomplishing our core mission of re+imagining public space. It is vital to the health of any city that its inhabitants are able to participate, in some meaningful way, in the visual urban messaging systems that surround them. With the coming advancements in wearable computing, the digital overlay will become a much more seamless and natural part of our daily existence. It is our hope that these early entrants will help create experiences that consider art and design as an important part of the way the public adopts this technology.
WILLIAMSBURG ART & HISTORICAL CENTER BUILDING AUGMENTED WITH MOMO URBAN ART
BC Heavy Biermann
Deriving his pseudonym from his penchant for philosophical discussion, BC Heavy Biermann possesses an interdisciplinary background that comprises technology, academia, and the arts. With a PhD in Humanities [Intermedia Analysis] from the Universiteit van Amsterdam, BC has worked as both a university professor and a tech developer in Anaheim, Prague, and Saint Louis. Since 2007, BC has presented his academic work internationally, exploring augmented reality, art and semiotics in public space. As a kind of synthesis between scholarly inquiry and emerging media, BC founded The Heavy Projects [and its collaborative spin-off Re+Public] to investigate how the fusion of creativity and technology can uncover new modes of relaying ideas. Building upon existing technological and theoretical frameworks, Heavy creates innovative interfaces between digital design and physical worlds in ways that provoke the imagination and challenge existing styles of art, design, and interaction. After finally giving up his painfully amateur skateboarding career due to a bum right knee, BC plans to use his extra time continuing to examine meaningful ways to fuse tech + creativity.
OUTDOOR MURALS: WYNWOOD WALLS (MIAMI, 2012), RESURRECTED SHEPARD FAIREY, photo by JORDAN SEILER
In AR[t] 2 we announced the new MARTY video see-through headset, a design based on the Sony HMZ T1. It was designed by Niels Mulder from studio RAAR on assignment of TU Delft, partner of the AR Lab, with the aim to do research on co-operative AR. One of the aims of the AR Lab is to stimulate AR in the world by putting our developments in the public domain. When the new AR Lab website goes live, at about the same time as this AR[t] 3 magazine is published, we will post the design files as well as a photo series on how to assemble the Marty, so anyone with access to rapid prototyping facilities can reproduce Marty for his or her own scientific or artistic research. Keep an eye on our website, as we will also link to the software to do 3D pose tracking based on natural feature tracking.
We are well aware that the Sony HMZ T2 is on the market, which has another face mask. We are aiming to adapt Marty to a version 2.0 based on the HMZ T2. Due to a lack of manpower and funds this might, however, take a while. In the meantime, be our guest: contribute to AR and come up with your own solutions. If they are valuable, we will publish about them in our next AR[t] magazine. We are also aware of Google Glass and other equipment, and we welcome those developments! However, many companies still do not understand that you need two cameras to see virtual objects in 3D, and two cameras to track (salient) keypoints in 3D in order to track your head pose while walking around with your headset in an unknown environment. AR-Toolkit-like markers can do with one camera, as the size of the marker is known and hence there is extra knowledge on scale. In markerless systems, natural features like dominant corners or lines in the scene are tracked; no assumptions on scale can be made, and two cameras that see the same feature are necessary. And finally: yes, we were inspired decades ago by fighter pilots with head-up displays and, of course, Steve Mann, the geek who walked around with an AR display. But also by lower back pain, cramped shoulders from long computer days, and the sigh: why can't the ceiling or the white wall, the air, or anything at any time be my display while I lie in bed or in an armchair?

DISCLAIMER

The industrial design of Marty is meant for experiments by researchers, artists and designers in the area of AR, and to stimulate industry to come up with affordable AR equipment. It is, however, subject to copyright, meaning that no large-scale industrial production can be based on this design without the consent of the copyright owner. For more information: info@arlab.nl.
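The point about marker size and scale can be made concrete with the pinhole camera model: a marker of known physical width gives metric depth from a single camera, while a feature of unknown size needs a second view. This is only an illustrative sketch; the focal length, function names and numbers are assumptions, not part of the AR Lab software.

```python
# Pinhole model: an object of real width W at distance Z projects to
# w pixels, with w = f * W / Z. If W is known (a fiducial marker),
# one camera recovers Z; if W is unknown (a natural feature), it cannot.

def marker_distance(focal_px: float, marker_width_m: float, width_px: float) -> float:
    """Distance to a marker of known physical width, from its pixel width."""
    return focal_px * marker_width_m / width_px

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """For unknown-size features, a stereo pair restores scale: Z = f * b / d."""
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    f = 800.0  # assumed focal length in pixels
    print(marker_distance(f, 0.10, 80.0))  # 10 cm marker seen as 80 px -> 1.0 m
    print(stereo_depth(f, 0.06, 48.0))     # 6 cm baseline, 48 px disparity -> 1.0 m
```

Either route yields metric depth; the stereo version is why Marty carries two cameras for markerless head-pose tracking.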
REFERENCES

Chatzimichali, A. P., Gijselaers, W., Segers, M. S. R., Van den Bossche, P., van Emmerik, H., Smulders, F. E., Jonker, P. P., & Verlinden, J. C. (2011). Bridging the multiple reality gap: Application of Augmented Reality in new product development. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Anchorage, Alaska, USA, October 9-12, 2011.
ON OUR NEW WEBSITE (WWW.ARLAB.NL) WE WILL SOON POST A PHOTO SERIES ON HOW TO ASSEMBLE THE MARTY HEADSET.
---------- FORWARDED MESSAGE ----------
FROM: HANNA SCHRAFFENBERGER <HANNA@ARLAB.NL>
DATE: 2013/4/3 15:53
SUBJECT: INTERVIEW AR[t] MAGAZINE
TO: LEV MANOVICH
Maybe you remember me from Facebook. I work at the Augmented Reality Lab in The Hague and I am one of the editors of the AR[t] magazine. When I read your article The Poetics of Augmented Space, I realized that I would like to interview you about Augmented Reality for the AR[t] magazine. A short time ago, I finally also read The Language of New Media. As a consequence, I'd like to interview you even more. So I hope you'll agree to an interview for the magazine? Best regards, Hanna P.S. After my last few interviews, my supervisor (Edwin van der Heide) told me that I could/should be more critical towards my interview partners. So I'll challenge myself to challenge you. P.P.S. Maybe we can print my questions in issue 3 and your answers in issue 4?
MANOVICH, LEV. THE LANGUAGE OF NEW MEDIA. THE MIT PRESS, 2001. IMAGE SOURCE: WWW.JEWISHPHILOSOPHYPLACE.WORDPRESS.COM
Dear Lev,
Augmented Reality
What is Augmented Reality?
To begin with, I would like to ask you what you consider Augmented Reality (AR) to be. In The Poetics of Augmented Space you describe AR as the laying of dynamic and context-specific information over the visual field of a user. It would be great if you'd address the topic once more. Firstly, because our readers might not have read your article. And secondly, because I think that this point of view unnecessarily limits AR to the visual sense. In The Poetics of Augmented Space, you mention Janet Cardiff's audio walks as great examples of laying information over physical space. These walks are designed for specific walking routes. While navigating the environment, one gets to listen to a mix of edited sounds that blend in with the sounds of the surroundings, as well as spoken narrative elements and instructions such as where to go and what to look at. In contrast to typical visual AR, the user is presented with auditory information that relates to the immediate surrounding space. Personally, I would call this Augmented Reality. Wouldn't you?
Augmented Space
What is special about AR compared to other forms of Augmented Space?
In your article The Poetics of Augmented Space you discuss the concept of Augmented Space. Augmented Space refers to all those physical spaces that are overlaid with dynamic information, such as shopping malls and entertainment centers that are filled with electronic screens, and all those places where one can access information wirelessly on phones, tablets or laptops. Besides AR, you mention several other technological developments in the context of Augmented Space, among which, for example, monitoring, ubiquitous computing, tangible interfaces and smart objects. Is AR just one of many related recent phenomena that play a role in overlaying the physical space with information? What's special about AR compared to other forms of Augmented Space?
that in AR, something virtual augments something real. More specifically, the virtual augments that to which it relates. In our view, space is one of the possibilities, but likewise, we have considered things like augmented objects, augmented humans, augmented perception, augmented content and augmented activities. What is augmented depends on what the additional content relates to. I am curious whether you'd agree. Do you think that all forms of augmentation bring along an augmentation of space or influence our experience of the immediate surrounding space?
I think the same is true for Augmented Space. Often, information and space might be related, even when they don't add up to one phenomenological gestalt. So some questions I'd like you to answer with respect to Augmented Space are: When information and space are perceived independently from each other, would you still call these occurrences Augmented Space? When are information and space perceived as separate but related layers? And when and why do they add up to one single gestalt?
equally to Renaissance paintings and to modern computer displays. When we imagine a typical AR scenario in which virtual objects are integrated into a real scene (e.g. a virtual bird sitting on a real tree), there is no second space. It's the same physical space, which contains both virtual and real elements. Is this a fundamental change in visual culture?
New Media
One of the main questions I want to ask you is: What makes Augmented Reality special? I have posed that question with respect to other forms of augmented space. I'd like to ask it again with respect to the history of new media. Personally, I don't think of AR as a recent phenomenon. Of course, there are more and more so-called AR applications, AR technologies and new media works that work with AR. However, when we consider the concept of AR, we find examples that date back centuries. An example of ancient AR is the Pepper's Ghost trick (which is discussed by Maarten Lamers on page 24). It uses a second room, glass and special lighting in order to let objects seem to appear, disappear or morph into each other in an otherwise real, physical environment. But even if the concept isn't new, current manifestations of AR might still bring something new and special to the table. If we look at contemporary AR and compare it with other forms of new media, what's special about it and what isn't?
is something that has always bored me. You note that new technological developments illustrate how unrealistic the previously existing images were. At the same time they remind us that current images will also be superseded. I was wondering: How does AR fit in the widespread aspiration towards realism? On the one hand, visual AR could be considered a huge step back. The 3D models that are usually integrated in real space don't come close to the level of photorealism we know from cinema. On the other hand, the virtual leaves the realm of virtual space and enters our real physical environments; with respect to that, the images might be experienced as more realistic than ever. Will AR take the quest for realism to a new level? I can imagine that, when striving for realism, the virtual things that appear to exist in our physical space should not only look like real things; ideally they also feel like them, smell like them, taste like them and behave like them. Will photorealism be traded in for a form of realism that encompasses all senses? Do you think new media will develop towards a more multimodal form?
AR & cinema
In The Language of New Media, you relate different forms of new media (e.g. Virtual Reality, websites and CD-ROMs) to cinema. How about the relation between AR and cinema?
I'm certainly not a cinema expert, but I guess most of what we see in visual AR has been present in cinema for a long time. For example, AR research is very concerned with registering virtual objects in real space. As far as I understand it, this can be seen as an analogy to compositing in films: an attempt to blend the virtual and the real into a seamless whole, an augmented reality. Do you agree? You oppose compositing to montage: while compositing aims to blend different elements into a single gestalt, montage aims to create visual, stylistic, semantic, and emotional dissonance between them. Do we have montage in AR as well? (You give the example of montage within a shot, where an image of a dream appears over a man's sleeping head. The same could easily be done in AR.) Does visual AR use similar concepts as cinema? Does cinema use other techniques to create fictional realities that are not (yet) used in AR? Does AR use techniques that might be adapted by cinema in the future?
AR as spatialized databases

One of the main claims in The Language of New Media is that, at their basis, all new media works are databases. You argue that what artists or designers do, when creating a new media work, is constructing an interface to such a database. Let's apply this database theory to a typical AR scenario in which virtual objects (seem to) appear in a real environment. We can see this as a database filled with virtual objects. The database might hold a virtual chair, a virtual pen and a virtual painting. These virtual objects are displayed as part of a real room when a user views the augmented environment with a smartphone. (Technically speaking, we could say the real world serves as a database index for those virtual elements.) What is the interface to access the database? Is it my phone? What does the artist create? I think it is usually the virtual content and its relationship to something real. Could we say that when working with AR, artists and designers create a database for an existing interface?

I have one more question about databases. In The Language of New Media you write about the elements of a database: if the elements exist in one dimension (time of a film, list on a page), they will be inevitably ordered. So the only way to create a pure database is to spatialise it, distributing the elements in space. In AR, virtual elements are distributed in real space. Can we understand this as a pure database? What are the consequences of working with spatialized elements? What are the inherent limitations and possibilities when working with this form? (I can imagine it has consequences, e.g. for storytelling? As you point out, we cannot assume that elements will form a narrative when they are accessed in an arbitrary order...)

With The Language of New Media, you did not only provide a theory of new media; you also pointed your readers towards aspects of new media that were still relatively unexplored at that time, and you suggested directions for practical experimentation. Are there certain aspects of Augmented Reality you consider especially interesting for future experiments and explorations?

References

Manovich, L. (2001). The Language of New Media. The MIT Press.
Manovich, L. (2006). The poetics of augmented space. Visual Communication, 5(2), 219-240.
[...] the only way to create a pure database is to spatialise it, distributing the elements in space.
Lev Manovich, The Language of New Media
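The "spatialized database" reading of AR sketched above can be made tangible: virtual assets keyed by the real-world features that index them, with the viewing device as the interface that queries the database using whatever the camera currently sees. All names and entries below are illustrative assumptions, not from any real AR system.

```python
# A minimal "AR as database" sketch: the real world is the index,
# the device is the interface, the artist authors the entries.

AR_DATABASE = {
    "corner_of_room": "virtual chair",
    "desk_surface": "virtual pen",
    "north_wall": "virtual painting",
}

def render(visible_features):
    """Return the assets the interface would overlay for the features in view."""
    return [AR_DATABASE[f] for f in visible_features if f in AR_DATABASE]

print(render(["desk_surface", "window"]))  # ['virtual pen']
```

Because the user, not the author, decides which features come into view, the access order is arbitrary, which is exactly why a spatialized database resists imposing a fixed narrative.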
IMAGE: MAARTEN VALENTIJN, DISNEYLAND, 1978
Sitting beside my brother, a mechanical funhouse car drove us through Disney's Haunted Mansion ride. Scary stuff, of which I can remember only one thing: the car stopped, rotated 90 degrees to the right, facing a large mirror, and on the seat between us appeared the scariest moving ghost ever! Instantly our heads turned, facing each other. Thank God, the ghost wasn't really between us. The mirror showed us some weird illusion, most realistically. Naturally, I asked my dad how the illusion worked. He explained something about glass, darkness and reflecting light. In effect, he explained what is known as Pepper's Ghost: a simple but clever technique that creates holographic scenes in 2 or 3 dimensions. Combining this with a large mirror, Disney augmented the reality that we hold our mirrored image to be: pre-digital augmented reality. The Pepper's Ghost technique was first described in the 16th century and later refined by John Pepper around 1860. It is still used in amusement parks and museums today, but also as part of modern optical see-through AR technology. In Microsoft Research's HoloDesk project (see research.microsoft.com/projects/holodesk/), its use is apparent. However, in current head-up see-through displays Pepper's Ghost technology is less apparent but nonetheless used in the same fashion. To me it is interesting that we still rely on John Pepper's idea to add digital content to our optical reality. Digital technology lets us define, render and interact with virtual content, but good old-fashioned Pepper's Ghost projection is what augments our reality with that content. Who would have guessed that such basic illusionary tricks are crucial to what we now consider cutting-edge technology? In fact, if you know other pre-digital augmented reality techniques, send a short description to lamers@liacs.nl, and help me put AR in perspective.
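The optics of Pepper's Ghost can be thought of as analog alpha-blending: a pane of glass reflects a fraction of the light from a hidden, brightly lit "ghost" scene and transmits a fraction of the real scene behind it. The reflectance and luminance numbers below are illustrative assumptions (plain glass reflects roughly 8% per surface), not measurements from any actual installation.

```python
# Pepper's Ghost as optical blending: where the reflected ghost and the
# transmitted real scene overlap, the eye sees a weighted sum of the two.

def peppers_ghost(real: float, ghost: float, r: float = 0.08, t: float = 0.92) -> float:
    """Luminance the viewer sees: t * (real scene) + r * (hidden ghost scene)."""
    return t * real + r * ghost

# A dim real scene (10 cd/m^2) against a very bright hidden ghost (500 cd/m^2):
# the ghost's faint reflection still rivals the real scene, so it looks present.
print(peppers_ghost(10.0, 500.0))
```

This is why the trick depends on darkness and strong lighting: only when the hidden scene vastly outshines the visible one does an 8% reflection read as a solid apparition.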
Setup
To do so, I mounted an accelerometer on top of my head-mounted GVS device in order to measure its orientation. This provided me with the setup shown in Figure 1. The electrodes are incorporated in a pair of headphones to make sure they are pressed against the mastoids properly. The headphones are in fact merely rings around the ears; the middle part is left uncovered to retain the full use of the user's auditive orientation, and to make sure this wouldn't influence the results. The data from the accelerometer is sent to the control unit. The control unit calculates the appropriate intensity and direction of the current in real time and provides feedback to the electrodes.

Conditions

The first step of my research project was measuring the effect of an altered balance, based on my own orientation. There were two possible modi for the control unit. Modus A could counteract my balance, while modus B could amplify it. In other words, if I was tilted to the left, modus A would push me back to the centre, while modus B would push me further to the left. Testing this simple setup immediately revealed some potential AR applications of vestibular stimulation. While testing, I realized that, when put in modus A, it felt like I was moving through a liquid or a thick, syrup-like medium. The GVS device counteracted all my movements, so it took more effort to move around. In modus B, it had the opposite effect; it felt like my resistance was really low, since the device backed up every movement I made. Without even using external data to add an extra layer to reality, I created an altered world based on data from the physical world. Is this a form of augmented reality, or is it an alternate reality? Be that as it may, it does fall within the definition of mediated reality: the broader field of manipulating users' sensory perception through a wearable device and letting them experience their surroundings in a modified fashion. Using only these two simple orientation-based modi, we can already create an enriched environment, but for what use? The examples point towards simulations of the physical world, which could be used to train divers or astronauts who work in other environments. But you can also think of fighter pilots who fly in simulators that don't alter the pilot's balance when flying upside down. This system could enhance a simulated environment by distorting the balance organ based on the simulator's virtual orientation.

The tests that I have done focused on enhanced performance in everyday-life situations. For example, I was wondering whether I could diminish or amplify motion sickness, improve my balancing skills, and so on. During the testing phase, I used this device for over two weeks to see if there was any progress. Tests involved bus rides, walking to targets on the street, walking over a balance beam, playing Wii Balance Board games and many other activities. Although we have to be very careful about drawing conclusions from these results (given the self-experimentation and single-subject constraints), I found that in certain circumstances my balance actually improved while using this device. This might be of use for people with an impaired balance system, or in situations where accurate balance is crucial.
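The two modi amount to flipping the sign of a tilt-proportional stimulation current. A minimal sketch of such a control law, with entirely made-up gain and safety limits (the article gives no numbers):

```python
def gvs_current(tilt_deg, modus, gain=0.05, max_ma=1.5):
    """Map measured head tilt (degrees) to a bounded bilateral GVS current (mA).
    Modus 'A' opposes the tilt (pushes back to center); modus 'B' amplifies it.
    The gain and current limit are illustrative assumptions, not the author's values."""
    sign = -1.0 if modus == 'A' else 1.0
    current = sign * gain * tilt_deg
    # clamp to a safety limit
    return max(-max_ma, min(max_ma, current))
```

With these assumed values, a 10-degree tilt yields -0.5 mA in modus A (pushing back to center) and +0.5 mA in modus B (pushing further over).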
Another remarkable result was the adaptation effect. You might remember that study from the 1960s where a test subject was wearing reversal goggles. After a few days the brain adapted to the newly displayed environment and reversed the image back to normal. After removing the goggles, the subject saw the world upside down, because the body had adapted to the new situation. The same effect occurred with my balance organ, but at a much quicker pace. While my body needed some time to get used to my new and improved balance, it did so quite convincingly, and in two ways. First, when I used a constant pulse on the electrodes (a constant current in the same direction) the effect diminished quite soon. But when I used an alternating pulse, and only changed the pulse width to affect my balance, the effect was constant. Apparently I was much more sensitive to a change in stimulation than to the stimulation itself. Secondly, when I had been wearing the device for about fifteen minutes and then removed it, my body had to adapt to the new, non-stimulated situation. When I wore the device in modus A, it gave me the experience of moving through a liquid; after removing the GVS device I experienced the exact opposite. So when I turned it off, it felt as if I had switched it to modus B, giving the impression of moving through a low-friction environment. Your body is extremely good at adapting to new situations and environments, which is a very important design consideration if we want to use a GVS device as an interface. While using GVS for augmented reality purposes is still at a research stage, the possibilities are endless if you use your imagination. This article is based on research presented at CHI Sparks 2011 (CHI Nederland 15th conference), June 23, 2011, Arnhem, The Netherlands.
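The alternating-pulse strategy described above keeps the polarity switching at a fixed rate and encodes the balance cue in the pulse width rather than the amplitude, so the vestibular system keeps receiving a changing stimulus. One hypothetical way to express that duty-cycle shift (all parameters are assumptions for illustration):

```python
def alternating_pulse(bias, period_ms=100.0, swing=0.25):
    """Split one polarity-alternating cycle into (positive_ms, negative_ms).
    bias in [-1, 1] shifts the duty cycle away from 50/50 while the current
    amplitude stays fixed, mirroring the pulse-width approach in the text."""
    positive = period_ms * (0.5 + swing * bias)
    return positive, period_ms - positive
```

`alternating_pulse(0.0)` gives a symmetric `(50.0, 50.0)` cycle, while `alternating_pulse(1.0)` skews it to `(75.0, 25.0)`.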
The presented paper, "Experiments with Galvanic Vestibular Stimulation in Daily Activities: An Informal Look into its Suitability for HCI", can be found at: www.antalruhl.com/media_tech/paper.pdf
Antal Ruhl
Antal Ruhl is a media artist with a background in design, science and art. Antal studied Industrial Design at The Hague University of Applied Sciences, Media Technology at Leiden University and Design and Media Arts at UCLA, Los Angeles. Antal creates objects that let us rethink our environment. These objects vary from kinetic sculptures to interactive installations. Sometimes they are purely conceptual or formal and sometimes they serve a more commercial purpose, but they always share the goal of intriguing people. Using the technological possibilities at hand, we can enrich our work and create engaging objects. Interactive and playful objects are much better at holding someone's attention while they convey their message. His work can be described as visually and technically attractive, with a focus on natural and physical phenomena. Antal has worked at distinguished design companies in Amsterdam and Barcelona, has had several national and international performances and exhibitions, published a scientific paper, travelled the world and currently works as a freelance artist/designer. Antal has also started a company that develops interactive installations for brands, festivals and events. Creative in Motion is a creative brand activation studio: www.creativeinmotion.nl. For an overview of his work or to contact Antal, please visit: www.antalruhl.com
BELIEVABILITY
Edwin van der Heide
Believability is something we deal with on a continuous basis. When we receive an e-mail that claims to be from our bank, with the request to enter our account information on their website in order to upgrade its security, we enter a situation in which the content appears to be believable. We've learned to get suspicious and need to verify the credibility of the story. The e-mail is believable and appears to be authentic; it pretends to be real, but is in fact fake. When we read a novel, the story can be entirely fictional and it
nevertheless draws us in without our questioning its realness or truthfulness. The story is believable, while not real, and we enjoy it. An important factor for a story to be believable is that we can relate to it (or the story relates to us). The examples of the bank's e-mail and the novel show us that believability is actually independent of something being real or fake; or, in other words, independent of something being real or virtual. What we also learn from
Spatial Sounds (100dB at 100km/h) at Wood Street Galleries, Pittsburgh, USA, 2009. Image courtesy Studio Edwin van der Heide
the example of the novel is that the things that happen in the story don't have to be possible in reality. In a novel we can meet creatures from Mars, time-travel, or never go to bed. We can simply imagine these things and believe them. When I was writing my paper about the interactive art installation Spatial Sounds (100dB at 100km/h) for the Third International Conference on Human-Robot Personal Relationships (HRPR), I was introduced to a topic addressed by Kerstin Dautenhahn
(2007) that fascinated me: the believability of a robot. The reason it fascinated me is that it made me not only think about believability in the context of robots, but also triggered me to think about the believability of an artwork and, more specifically, the believability of the behavior of an interactive artwork (as opposed to the believability of a robot's behavior). I realized that the believability (of the behavior) of an artwork was not (yet) seen as a fundamental topic and might deserve its own study and experimentation.
Kerstin Dautenhahn has an interest in socially intelligent robots: a robot companion in a home environment needs to do the right things, i.e. it has to be useful and perform tasks around the house, but it also has to do the things right, i.e. in a manner that is believable and acceptable to humans. I tried to imagine an example and came up with the idea of a robot that makes and brings you a cup of coffee. Soon after I had that idea, I had to think of a fully automatic coffee machine that grinds the beans and steams fresh milk for each individual cup of coffee it makes. How do the two differ from each other? With the robot, we can imagine that it read our mind and therefore made a coffee for us. We imagine that the robot has a certain amount of intelligence and perhaps even feels affection for us. In the case of the coffee machine, we don't imagine any intelligence and simply think of it as a machine without affection for us. If, however, we imagine that the coffee machine made that coffee especially for us, things start to change. It's by imagining that we can change what we believe, and thereby turn a machine into a believable, affective robot. This might be the
reason that in Japan a lot of machines talk or include animations to, for example, welcome and/or thank the user for using them. It is interesting to question whether these machines indeed make us believe they show affection, and if so, whether it lasts or wears off. There is another possible difference between the coffee machine and the robot making coffee. Robots are often made to look and behave like humans (the humanoid), and the coffee machine isn't. Does this mean that such representation is a requirement for believability? No, a robot (or interactive installation) doesn't have to represent something (else). It can be an abstract work that is believable on its own. In the HRPR paper about Spatial Sounds (100dB at 100km/h) I put it like this: The installation can be seen as a non-verbal abstract robot and does not imitate an animal or human-like look or behavior. It is a machine-like object but does not resemble existing machines. Nevertheless, it allows us to somehow identify ourselves with it. Spatial Sounds (100dB at 100km/h) is an example of a believable robot in the sense that the visitors believe they
understand the behavior of the installation and find it worthwhile to interact with it. The aspect of believability is so strong that people accept the installation as a real being and want to interact with it over and over (van der Heide, 2011). Interesting to read in this context is Alwin de Rooij's (2010) graduation research project for the Media Technology Master's program on Abstract Affective Robotics. I'm curious how the above-mentioned thoughts apply to augmented realities. What aspects make combinations of, and interactions between, the real and the virtual in augmented reality believable? We have learned that something doesn't have to be true to be believable. We have also learned that something doesn't have to be possible in order to be believable. Besides that, we've learned that something doesn't have to represent something else; it can be abstract and nevertheless be believable. All this makes me believe that believability forms an interesting perspective from which to think about what we can imagine in augmented realities!
There are certain things we believe in that, once we discover they are fake, completely lose their believability, while we keep on believing other things that we know are fake.
References
Dautenhahn, K. (2007). Socially intelligent robots: dimensions of human-robot interaction. Philosophical Transactions of the Royal Society B: Biological Sciences, 362(1480), 679-704.
van der Heide, E. (2011). Spatial Sounds (100dB at 100km/h) in the Context of Human-Robot Personal Relationships. In: Lamers, M. H., Verbeek, F. J. (eds.), Human-Robot Personal Relationships (HRPR 2010), LNICST Vol. 59, 27-33.
de Rooij, A. (2010). Abstract Affective Robotics. http://mediatechnology.leiden.edu/research/theses/abstract-affective-robotics
Spatial Sounds (100dB at 100km/h) at DAF Tokyo, Japan, 2006. Image courtesy Studio Edwin van der Heide
Introduction
In museum exhibitions, historical objects are usually shown by visual display, in a showcase with extra textual information added to it. Museum visitors can never touch the objects, let alone use them. As a result, visitors scan the displayed objects from a distance, something that needs reconsideration in our present time, where the experience is essential. To provide a way around this situation, the so-called smart replica was proposed in the previous issue of AR[t] (Roozenburg, 2012): a new kind of reproduction, in the shape of a 3D print, that stretches the boundaries of the replica concept as an autonomous object based on a historical artefact. New methods of access and new digitization strategies, based on the study of the relationship between bits and atoms, are being developed. We hypothesize that by using these 3D imaging techniques the value of our cultural heritage can be increased. In other words, the goal is not to make the most realistic copy of the original, but to analyse, communicate and enhance those qualities of the historical artefact that are the most meaningful to us, now. Here we present the design project of Lotte de Reus in connection with this paradigm shift. Completed as a graduation project, it presents an auditory environment that augments the artefact in an unobtrusive and non-linear way. The objects central to this project are seven teacups and saucers that are part of the collection of Museum Boijmans Van Beuningen. These are currently on display in a new exhibition on design and pre-industrial design. Depicted in Figure 1, each of these teacup-and-saucer sets represents a milestone in the Dutch history of porcelain: starting with the first import of porcelain from China in the seventeenth century by the Dutch East India Company, followed by the invention of Delft blue as an attempt to copy Chinese porcelain, and ending with the small-scale production of porcelain in the Netherlands.
In the spectrum of recreational activities, the exhibitions of Boijmans Van Beuningen can be characterised as enriching experiences: encounters in which the visitor is conscious of the artefact and the (hi)story it represents. This means that the museum requires the visitor to reach out for information, while the information passively waits for the visitor to act upon it. The engagement of visitors is limited, as they are not experts on the particular subject of the exhibition. Because of the passive character of the objects and their corresponding information, it takes effort on the part of the visitors to maintain concentration. In the project discussed here, a focus group that visited the museum proved exactly this point. Visitors' overall interaction with Boijmans' exhibitions can mainly be described as scanning. The participants walked into a room, started at a random showcase, looked at an object briefly, and went on to the next, leaving the exhibition behind, full of untold stories. Our hypothesis is that more active means will lead to a more comprehensive museum experience, thereby increasing the opportunity to reflect and learn even after the visit.

An important part of the collection are the non-linear narratives: networks of information associated with the object, consisting of stories, locations, materials, rituals, the collection's past and the like. Augmented matter (the mixture between bits and atoms) allows novel interaction techniques to embody these networks of information. For this design project our aim was to convey the following qualities of interaction: intrigue, understanding, satisfaction, and integrity. These qualities feed the resulting research questions: How can museums anticipate and facilitate the active assemblage of old and new stories, and how do these stories refer back to the replica's original? How can digital databases be employed in linking smart replicas to their collections? And, on a philosophical level: does the original still attract interest? In the case of pre-industrial utensils such as the teacups and saucers, this question is very relevant.
The concept follows the ECR phases, each with its own interaction characteristics:

Engagement: draws the visitor's attention; the moment when a visitor has some immediate sensory, emotional or intellectual response to the artefact.
Context: situates the artefact in its history, relating the object to its time and place.
Reference: gives the visitor the opportunity to draw conclusions and connect to related resources. It provides more detailed and interpretive information about the work.

Interaction characteristics: spatialised audio, depending on user location / viewing angle; triggered by proximity to the object.

d. Narration 2: information on specifics that can directly be related to the object (audio).
e. Background information (text and images).
In the first phase, visitors are attracted to participate in the experience by other visitors who are listening to the 3D audio clips. These audio clips give the impression that those visitors are experiencing more: it is as if their auditory attention has doubled their visual attention (Erens, 2012). Together with an app that is made available, the visitor is intrigued and triggered to participate in the exhibition. When the visitor is in the proximity of an object, a 3D ambient soundscape that fits in with the history of the specific teacup and saucer will appear. Once the first soundscape has been heard, it attracts the visitor to go to other objects accompanied by a spatial soundscape. Arriving in close proximity of the teacup and saucer, the spatial audio of a narrator starts. The narrator tells about the role the specific objects played (audio story). When the visitor picks up a replica, the narrator focuses on the features of the teacup or saucer (audio specifics). In this stage the visitor is encouraged to turn and explore the object, connecting the narrative with what is visualised on the object. The visitor can put down an object and pick up another, while retaining the soundscape. The app contains more background information on porcelain and the objects. When the visitor returns home, she can consult the app at her convenience, for example to browse the additional information.
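The staging described above (attract, ambient soundscape, narrator story, object specifics) can be read as a small state machine keyed on visitor distance and on whether a replica is in hand. A sketch with invented threshold distances (the project itself specifies no radii):

```python
def audio_layer(distance_m, holding_replica, ambient_r=4.0, story_r=1.0):
    """Choose the audio layer for one showcase. Radii are illustrative guesses."""
    if holding_replica:
        return "narration-specifics"   # replica in hand: features of the object
    if distance_m <= story_r:
        return "narration-story"       # close by: the narrator's story
    if distance_m <= ambient_r:
        return "ambient-soundscape"    # nearby: historical ambience
    return "attractor"                 # elsewhere: 3D clips that draw visitors in
```

Putting an object down while staying close would fall back from specifics to the story layer, matching the "retaining the soundscape" behaviour in the text.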
The soundscapes respond directly to the handling of an object. Auditory feedback enriches the visitor's experience actively, without visual clutter, and conserves traditional values of how art should be experienced. Furthermore, audio clips allow temporal cues; audio works more associatively and speaks directly to the visitor (Erens, 2012). The soundscape is the auditory equivalent of an ambient image: non-visual immersive content, typically with so-called earcons, which represent specific objects or events. The technology to spatialise audio was developed two decades ago; see Burgess and Verlinden (1993) for example. The character of 3D audio relates to the idea that the sounds seem to come from sound sources placed anywhere in a space (a surround-sound effect). When listening through headphones, the brain places the sources of the audio clip in your head. An example of a high-quality 3D audio clip is In your head by Big Orange. We used AudioStage, with its visual interface, to produce our audio clips, cf. Figure 2. To connect this to the exhibition visitor, position and orientation tracking of the human head, as well as of the objects at hand, is required. Here optical or magnetic tracking principles make sense, as they are fit for indoor use. With the use of tangible replicas and 3D audio clips, scanning behaviour can be transformed into an immersive encounter.
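Spatialising a source over headphones ultimately comes down to rendering interaural cues from the listener-source geometry. As a simplified, hypothetical illustration of that geometry (not the algorithm AudioStage uses), Woodworth's spherical-head formula gives the interaural time difference for a distant source:

```python
import math

def itd_seconds(listener_xy, heading_deg, source_xy,
                head_radius=0.0875, speed_of_sound=343.0):
    """Interaural time difference (s) via Woodworth's far-field model:
    itd = a/c * (sin(az) + az), with az the source azimuth relative to the
    listener's facing direction. Positive = sound reaches the right ear first.
    Head radius and speed of sound are standard textbook values."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = math.atan2(dx, dy) - math.radians(heading_deg)  # clockwise from 'ahead'
    azimuth = (azimuth + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
    return (head_radius / speed_of_sound) * (math.sin(azimuth) + azimuth)
```

A source straight ahead yields no time difference, while one directly to the right reaches the right ear roughly 0.66 ms earlier; feeding such delays (plus level differences) into the two headphone channels is what makes the source appear to sit in space.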
Preliminary evaluation
The core of the concept was tested during the Object design fair (Rotterdam, 7-10 February 2013). Information about the teacups and saucers was presented via text or audio clips, and ambient soundscapes were on or off, yielding four permutations. 140 people interacted with a selection of the configurations, presented in pairs; they were asked to choose between the two displays and support their reasoning. The essential observations include:
- Participants seemed to enjoy the ambient soundscape: it triggers the imagination and the recognition that the objects used to be utensils and not art objects, as they are now.
- Presenting the information via responsive audio prompted visitors to consciously turn the object to find the image the narrator referred to.
- Presenting the information via audio while the visitor is holding the object does not clutter the visual sense.
- Participants appeared to be pleased with the responsive auditory system, even those visitors who preferred text to audio clips.
Conclusion
Once, the teacups and saucers were objects of daily life, and their form, weight, substance, texture, colour and decoration make sense primarily in the context of their functions and their relations to other objects, as well as to the people who used them. Combining ambient soundscapes, tangible interaction with physical replicas and the connection between information and corresponding visuals triggers an immersive encounter in which this sensibility is restored: the passive, one-sided encounter with the objects becomes an active, two-sided encounter. The proposed system is by no means the first auditory guide for exhibits; rather, it extends the existing strengths with emerging technologies such as indoor tracking and spatialised soundscapes. Furthermore, it is more or less compatible with existing gear already employed by many museums. In essence this proposal presents a new type of relationship between visitor and object, with interaction qualities comparable to a human conversation. Firstly, it poses an intriguing quality that pulls the visitor in by using 3D audio clips and ambient soundscapes, creating curiosity and making visitors want to engage with the artefacts. Because the visitor has a direct intellectual and sensory dialogue with the object, this will lead to a more meaningful experience. Secondly, understanding is nurtured because the encounter is intuitive and the information presented by the artefact responds to the visitor's body language. Thirdly, a satisfying quality is propelled by the layered structure of the narratives, which can be browsed in a non-linear mode by the visitor. Through the app, the experience is saved and can be connected to various forms of social networking websites and location-based services.

Lastly, the experience fits in with the integrity values of the Museum Boijmans Van Beuningen: the 3D scanned and printed, moulded physical replicas afford what Dutch historian Johan Huizinga called a historical sensation, the feeling as though you are somehow in touch with the past (Ankersmit, 2005).

Future work includes experimenting with and researching the effect of the design in the environment of a museum:
1. Experimentation with augmenting untouchable artwork with ambient soundscapes.
2. Implementation of responsive audio tours in the current exhibition context.
3. Prototyping indoor tracking and interaction sensing possibilities, with special attention to smartphone infrastructure.
4. Creating guidelines on how the concept could be implemented to suit different kinds of objects in the museum (or maybe even utensils in our everyday context).
More information
- Video presentation of this project (headphones required): http://www.youtube.com/watch?v=enR1Ggbuf_8
- Software to render binaural output by visually placing audio clips in 3D: http://www.longcat.fr/web/en/prods/audiostage
- Interactive installations regarding pre-industrial utensils: http://new.pentagram.com/2008/05/new-work-detroit-institute-of-1
- In your head, a high-quality 3D audio example: http://www.big-orange.nl
- Official Smart Replica blog: http://smartreplicas.blogspot.nl
Acknowledgements
We would like to thank Alexandra van Dongen, curator at the Museum Boijmans Van Beuningen, for her constructive collaboration. Furthermore, we would like to acknowledge the valuable advice and support of Cilia Erens, Professor Joris Dik, Dr. Wolf Song and Ir. Wim Verwaal. Many thanks to the DOEN Foundation for funding the Smart Replicas project (including the graduation project of Lotte de Reus), and many thanks to Mareco Prototyping for their contribution in the form of 3D micro prints.
References
Ankersmit, F.R. (2005). Sublime Historical Experience. Stanford University Press.
Burgess, D. A., Verlinden, J. C. (1993). An architecture for spatial audio servers. Proceedings of the Virtual Reality Systems Conference (Fall '93).
Erens, C. (2012). Interview on soundscapes. Interviewed by Lotte de Reus, Amsterdam, 14 December 2012.
Halbertsma, M. (1995). Themaparken, dierentuinen en musea. Rotterdam: Erasmus Centrum voor Kunst- en Cultuurwetenschappen.
Ministerie van Onderwijs, Cultuur en Wetenschap (2005). Bewaren om teweeg te brengen. Museale strategie. Woerden: Drukkerij Zuidam.
The Futures Channel (1999). Conversation with Curtis Wong. [online] Available at: http://www.thefutureschannel.com/conversations_archive/wong_conversation.php
Van Dongen, A. (2012). Interview on the museum context. Interviewed by Lotte de Reus, Rotterdam, 21 November 2012.
Lotte de Reus
Lotte de Reus recently received her Master's degree in Design for Interaction at the Faculty of Industrial Design Engineering at Delft University of Technology. In July 2012 she started her graduation project within the Smart Replicas project, driven by a fascination for porcelain and the wish to create effective storytelling experiences. During the project she was pleasantly surprised by the world of audio, something she had not yet encountered in her studies. New technologies such as augmented audio, immersive soundscapes and 3D audio now hold a new, special interest for her. In the future, Lotte would again like to work in domains that combine art and technology. Her portfolio can be found at www.lottedereus.nl.
to actually see Biggar is to download this app via Layar. This virtual sculpture has true omnipresence; it floats above holy places in Rome or in the Himalayas, as well as above industrial places, above warzones and above peaceful places. Brian Wassom, blogger and specialist in AR law, predicted five issues related to AR law for 2012, ranging from public resistance, to adult AR (AR porn, that is), via licensing and negligence, to AR patents.

What is at stake is what happens when this information about our eye movements is stored in databases, analysed and even sold to marketing companies.
WHAT IS RECORDED?
With the announced introduction of Google Glass, we see a concern regarding the possibility to record without the public knowing it. On March 11, 2013, Seattle's 5 Point Cafe became the first known establishment to publicly ban Google Glass, the highly anticipated augmented reality device set to be released later this year. The No Glass logo that the cafe published on its website http://the5pointcafe.com was developed and released (under a Creative Commons license) by a new London-based group called Stop the Cyborgs. The group is composed of three young Londoners who decided to make a public case against Google Glass and other similar devices. On the Stop the Cyborgs site, the group raises a significant concern: namely, that there's no obvious way to know when the device is on or what it's actually doing (recording or not).
EYE-TRACKING
Another type of warning from Mr Wassom is related to the information derived from tracking our eye movements to detect where we are looking. Truly immersive AR depends on knowing exactly what our eyes are looking at; this information is used to position the virtual content where our eyes are looking. Natural feature tracking systems are in rapid development: we can enter a room, let our camera survey the space, detect the salient points, and make a virtual grid on which we position our virtual objects or scenes. Using the eye movements of the person wearing AR glasses gives extra accuracy.
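Anchoring virtual content where our eyes are looking can be modelled as intersecting the gaze ray with one of the detected planar surfaces. A self-contained geometric sketch (pure illustration; real natural-feature trackers are far more involved):

```python
def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gaze_anchor(eye, gaze_dir, plane_pt, plane_n, eps=1e-9):
    """Intersect the gaze ray eye + t*gaze_dir (t >= 0) with the plane through
    plane_pt with normal plane_n; return the 3D anchor point, or None if the
    gaze misses the surface."""
    denom = _dot(gaze_dir, plane_n)
    if abs(denom) < eps:
        return None  # gaze is parallel to the surface
    t = _dot(tuple(p - e for p, e in zip(plane_pt, eye)), plane_n) / denom
    if t < 0:
        return None  # surface is behind the viewer
    return tuple(e + t * d for e, d in zip(eye, gaze_dir))
```

For a viewer at the origin gazing down the negative z-axis at a wall two metres away, `gaze_anchor((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), (0.0, 0.0, -2.0), (0.0, 0.0, 1.0))` returns `(0.0, 0.0, -2.0)`, the point where virtual content would be pinned.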
PREVALENCE OF AR
John Moe (the host of Marketplace Tech Report, who also handles web content for the program) wrote in 2011 that Augmented Reality has been the Next Big Thing for a while now, although it never manages to become the Actual Current Big Thing.
This quote gives us a direction for discussing the above-mentioned AR-related legal issues. The intertwinement of AR with our day-to-day lives is actually quite slow, especially compared to the revolutionary predictions. We are getting used to it at a comfortable pace. As an example, I can take our AR Lab, which has been working in the augmented reality field since 2006, and still, for some people, even closely related people within our Academy, the concept of AR is completely new. But if Google Glass becomes prevalent, as one of the people from Stop the Cyborgs argues in a conversation with a journalist of arstechnica.com, and suddenly everyone is wearing it and this becomes as prevalent as smartphones, you can see it becomes very intrusive very quickly. It's not about the tech, it's about the social culture around it.
We might come to the preliminary insight that, for now, our current privacy laws are sufficient to deal with privacy concerns in augmented reality. An incorrect accusation or incorrect information remains incorrect, regardless of whether it is posted on Twitter, on Facebook or in a space around us via an AR app. Adding correct or incorrect virtual information to a virtual object or a space is as simple as assigning correct or incorrect information to any subject, object or space in our physical world, and it will be considered as such. For the time being, we don't need special AR criminal legislation; our current laws might be sufficient.
FURTHER READING
- Critical blog on law and social media: http://wassom.com
- Critical site on wearable technology: http://stopthecyborgs.org
Oliver Percivall, The Augmented Star, book cover. Illustrator: Paul Hadcock
I FIRST LEARNED ABOUT AUGMENTED REALITY AROUND TWO YEARS AGO. I DISCOVERED ITS MAIN USES WERE AS A NAVIGATION TOOL IN CITY CENTERS, POINTING PEOPLE, FOR EXAMPLE, TO PLACES OF INTEREST OR THE NEAREST TUBE STATIONS. THE THOUGHT OF CONCEALED TEXTUAL MARKERS SPARKED A STORY IDEA IN MY MIND: WHAT IF THERE WAS SOME AUGMENTED REALITY TEXT MOST OF US WERE NOT SUPPOSED TO SEE, AND YOU HAD A DEVICE THAT WAS ABLE TO READ IT? WOULD YOU FOLLOW ITS INSTRUCTIONS? THIS IDEA FASCINATED ME INTO WRITING A SERIES OF STORIES TITLED THE MISADVENTURES IN AUGMENTED REALITY. PART ONE FOLLOWS OUR ACCIDENTAL EXPLORER LYNDEN AS HE POSTS HIS ADVENTURES TO AN ONLINE FORUM.
Part One
My uncle was one clever soul. He could probably best be described as a recluse, the shy inventor type. But nearly nine years ago now he disappeared from his hometown and has never been seen since. I don't think he's lost; not wanting to be found is my guess, and I think I know why. Using his ingenious engineering skills he somehow created a VAARR (Very Advanced Augmented Reality Reader). It works just the same as other regular Augmented Reality apps, I am led to believe, but with one unique difference: it has a setting that, once activated, shows unseen Augmented Reality text, none of which is detected by any other device. He created two, actually. One was entrusted to me. And because of this I have a story to tell. A few years ago I wasn't exactly a technology fanatic, although admittedly I had more than a passing interest in new gadgets. But that was nearly three years ago now, before I set foot within the Augmented Star. Now there is probably nothing I don't know about the integrated software algorithms used in Augmented Reality.
Images courtesy of GoldRun, see www.goldrunner2013.com and www.snapsapp.com
Life in this newly discovered domain is a far cry from home. There's no one thing that makes it uniquely different. It's everything. The smell, the temperature, the noise levels, the beings: everything is entirely unfamiliar when you first arrive. But once you start exploring all the hidden signs and directions, it's impossible to stop. You're immediately plunged into its rich, diverse landscapes, and to find your way around or locate anything at all you must be in possession of the special device: the VAARR, or Very Advanced Augmented Reality Reader. But be warned: Augmented Reality text as we know it was not developed within The Augmented Star, which makes it pretty hard to find your way out. It's not easy to write long paragraphs on this VAARR. My uncle made it possible to post directly online from the device, so my blog entries below are concise.
Blog Entry #1
Lynden
Apr 12 2011
It's not about gazing into the future; it's about seeing the present through a unique lens. That's the first message you see when you arrive in The Augmented Star. Every time. And you never arrive at the same place twice. I've come here with a strict agenda: to find my uncle.
Blog Entry #2
Lynden
Jul 12 2011
OK, it took me a couple of hours to get acquainted here this time. There's just so much to get distracted by. From learning a new food recipe to breathtaking advertising campaigns, the Augmented Reality here really is something else. On the downside, crossing the street can be something of a perilous activity. It puts things in perspective when vehicles move from 0 to 600 MPH in less time than it takes to chalk a pool cue.
Blog Entry #3
Lynden
Oct 12 2011
Just noticed a glitch with this online forum that I would like to point out to the moderators. The first post I made earlier was date-stamped Apr 12 2011, but the post I made just a couple of hours after that is stamped Jul 12 2011. Three months later? I think not!
Blog Entry #4
Lynden
Jan 12 2012
It's funny, because since I've been wandering the streets here these last few hours I've regretted never going to Cambridge University. Although I feel like I have been. It appears there is nothing you can't learn here by waving the VAARR around. The whole place is an encyclopedia of knowledge. They promote learning: a science degree in Artificial Intelligence can be completed in three months here. Knowledge is fluidly brought to astonishing life everywhere by Augmented Reality. I just learned that a thimbleful of a neutron star would weigh over 100 million tons. Must be something in the air here, as my clothes are becoming dirty and ragged really quickly. I feel I'm ready for that science degree now!
O. L. Percivall
I have always had an interest in technology and gadgets. After leaving school I took a basic Computer Studies diploma, which eventually led to a career in various IT support roles leading up to project management. The mysterious side of technology especially intrigued me, including the possibilities of where it could take us in "what if" scenarios. So, equipped with a reasonable understanding of Augmented Reality and enjoying a challenge, I set myself the complex task of plotting my science fiction novel and created an alternative fictional world that became The Augmented Star. The Augmented Star is now available on the Amazon Kindle Store (http://tiny.cc/augmented-star), just follow the QR code on the left!
Blog Entry #5
Lynden
Apr 12 2012
The risk of being struck by a falling meteorite for a human is one occurrence every 9,300 years. Wow, that's a curious fact. I wonder what the odds are for walking into an Augmented Universe like this one. I saw an advert on a billboard for an automobile just now. Using my VAARR triggered a full Augmented Reality breakdown of its features, from performance figures to finance options, and then it invited you to take a virtual test drive in a car simulation game. The AR campaigns here really resonate with customers in a way that most other ad platforms miserably fail to match.
Blog Entry #6
Lynden
Jul 12 2012
Turns out my VAARR is a pretty valuable commodity here. It can read over 150 types of Augmented Reality text, and other types too that alternative devices can't. After an exhausting and bloody battle earlier today, I have learned some creatures here would even kill to own one.
Blog Entry #7
Lynden
Oct 12 2012
Just received some news that was pretty hard to comprehend. I've met a lot of people here within The Augmented Star and most of them know my uncle, but apparently he's on the run from some bad people. Someone is coming. I've gotta move quickly. It feels like I'm on the run myself. I've come to realize now that I may never be able to leave the Augmented Star.
AUGMENTED PEDAGOGIES
Alejandro Veliz Reyes
University of Salford
...augmented reality now gives us the chance to build hybrid, augmented models.
Indeed, this section takes its title from the topic of the latest conference of the Association of Education and Research on Computer Aided Architectural Design in Europe (eCAADe), held at the Czech Technical University in Prague (September 2012):
TRAINING SESSION ON AUGMENTED REALITY AUTHORING AND AUGMENTED MODELLING, SCHOOL OF THE BUILT ENVIRONMENT, UNIVERSITY OF SALFORD.
suggests some enhancement. As a result, this approach to the augmentation of reality fits with the major aim of educational research, which is to enhance and improve educational processes and methods; hence the name of this work: Augmented Pedagogies.
diagrams and sketches. This scenario will not be unknown to any architecture student, since the studio teaching scheme has been widely acknowledged as the core practice-based module in which both design skills (composition, planning, representing) and higher-order cognitive skills (critical thinking, analysis, synthesis) are mainly developed in architectural education programs around the world. Actually, the architecture studio and its interactions as a subject matter present quite a complex challenge. As stated by Allen Cunningham in 2005, after a century of adaptation and evolution, project-based education around architecture employing the studio system is the most advanced method of teaching complex problem solving that exists.
The usefulness of models within the design studio is clear. Beyond the fact that their construction itself entails the development of technical skills, models also embed design information and knowledge, affecting organizational dynamics (the design critique, or peer-to-peer collaborative work), the creation of students' toolkits and the final presentation of design solutions, among other benefits. In this research being conducted at the University of Salford (UK), the extent of the impact of augmented models on this complex studio system is yet to be described. Augmented models will be used, therefore, as a way to understand how new technologies impact design education and how we can describe that impact from a scientific research perspective, that is, following the guiding principles of generalizability, communicability and transferability of the resulting knowledge. The deep impact of new digital tools on design pedagogy has been explored recently by design theorists such as Dr Rivka Oxman. The particularity of the design studio as a research setting is spiced by theoretical underpinnings that can potentially point the way to describing this impact. For example, it has been stated that studio teaching is usually an unstructured process, in which perceptions and interpretations of information and models play a major role in the students'
DRAFT VERSION OF AN AUGMENTED MODEL FOR INTERIOR DESIGN. FURNITURE CAN BE RE-ARRANGED IN A PHYSICAL MODEL OF A HOUSE ROOM (STUDENTS WORK).
progression in the courses, mostly based on design dialogues between students, and between students and instructors. Also, digital tools have the potential not only to re-shape the toolkits being used for design, but also to mediate the way design methods are structured, offer new digital materials to work with, or change the very nature of the design problems to be faced in different courses. It is not clear, however, how this impact occurs.

The interactions within the studio that make use of representations and models to design are well-established rituals, such as peer-to-peer collaborative activities or the design critique, but the nature of each studio differs from one to the next. Variables such as the experience of the instructors, the background of the students, the nature of the design problems to be faced or the institutional standpoint turn the studio into a highly context-dependent module. As a result those variables are usually highly controlled, and the study of the impact of different technologies is commonly constrained to the description of technical challenges to be solved, the development of new systems/software or metrics of student satisfaction, rather than the provision of a theoretical account of their impact on this complex teaching/learning process. The lack of a theory that describes how technology re-shapes the studio results in very limited knowledge re-usability and, in turn, in very caged and localized pedagogical frameworks that do not allow cross-institutional or cross-disciplinary collaboration, evaluation of the constant infusion and evolution of new digital tools for educational purposes, or re-use of a pedagogical approach and its associated knowledge.
IMAGES OF THE TRAINING SESSIONS ON AUGMENTED REALITY AUTHORING AND AUGMENTED MODELLING. STUDENTS OF THE PROGRAMS MSC DIGITAL ARCHITECTURAL DESIGN AND MSC BUILDING INFORMATION MODELLING AND INTEGRATED DESIGN, SCHOOL OF THE BUILT ENVIRONMENT, UNIVERSITY OF SALFORD.
standpoint in terms of validity and fitness to the research problem and the subject matter. As Wanda Orlikowski and Suzanne Iacono (2001) state in their work on information systems theory research, this corresponds to the fact that the use of technologies depends on the context and hence, there is no single, one-size-fits-all conceptualization of technology that will work for all studies. As a result, IS researchers need to develop the theoretical apparatus that is appropriate for their particular types of investigations, given their questions, focus, methodology, and units of analysis. In order to overcome this challenge, this ongoing research proposes a theoretical approach to depict the impact of augmented models in design education. Following a grounded theory methodology, observations and recordings are being collected in diverse settings in an attempt to describe the resulting studio dynamics when augmented models are used. Several training sessions on augmented reality and augmented modelling have been held at the University of Salford (MSc Digital Architectural Design, MSc Building Information Modelling and Integrated Design), and two more experimental settings are now being arranged in different European countries. These multiple settings are not only intended to provide a wide view of the subject being studied, but also fit with current recommendations for theory construction methodologies, since the manipulation and observation of data in many divergent ways and the juxtaposition of different conflicting realities and sources counteract the tendency to reach false or incomplete results, or the information-processing biases of the investigator. This work is expected to be finished by the end of 2014.
VIEW OF THE STUDENTS TOOLKIT DURING THE FIRST AR WORKSHOP AT THE UNIVERSITY OF SALFORD.
A STUDY IN SCARLET
MATT RAMIREZ
Laura Skilton, Learning and Teaching Co-ordinator, Mimas, The University of Manchester
Rose Lock, Special Collections Supervisor, University of Sussex
Jean Vacher, Curator, Crafts Study Centre, University for the Creative Arts
Marie-Therese Gramstadt, Educational Technologist, Crafts Study Centre / Research Associate, Visual Arts Data Service, University for the Creative Arts
Introduction
Augmented Reality (AR) was identified in the 2011 Horizon Report1 as a key technology trend with potential impact on education. The report provided the catalyst for the SCARLET (Special Collections using Augmented Reality to Enhance Learning and Teaching) project, as a way of combining innovative technology with pedagogical processes. Whenever the words "technical innovation" are spoken in education circles, educators are understandably cautious, electing to concentrate, perhaps rightly, on deep-rooted pedagogical benefits rather than short-term gimmickry. Many observers have already buried AR as a fleeting fad in education due to its lack of use cases and documented impact studies. After all, in education technology should be transparent and not an overpowering driver, particularly where the emphasis is on the teaching material. In addition, users do not want to spend time adapting to a new way of learning; new technology should integrate seamlessly into established learning methods and styles. If the focus for the student is the technology, the learning experience can be diluted, and this can inevitably lead to dissatisfaction and resistance.

Students can view and touch real manuscripts/editions in conjunction with guided support from trusted sources, supporting independent learning. The benefits to student learning should always be central to the introduction of any new technology, and AR is no different. It is always useful, when dealing with new methods of delivery, to be armed with a long list of tangible gains for adopting the technology. Some of the most persuasive arguments are described below:
- Layering AR on texts/images can encourage interaction (e.g. augmented 3D models that overlay the physical image and require user touch gestures to proceed) and spark enthusiasm, preparing students for solo research.
- AR promotes active teaching, maximizing the opportunity for interaction and encouraging critical response and the adoption of new perspectives and positions. This is in opposition to traditional didactic methods that are predominantly teacher-led. Users retain a very small amount of the information that is delivered to them, and a slightly larger percentage of what is shown to them, but when we become actively involved in an experience, learners will remember and retain the majority of the information presented to them.
- AR can harness both asynchronous (emailing tutor questions) and synchronous (discussion with peers) e-learning methods.
- Abstract concepts or ideas that might otherwise be difficult for students to comprehend can be presented through an enhanced learning environment offering access to source historical artefacts and online research in situ.
- The learning curve for new users engaging with mobile AR through browsers is relatively shallow, enabling the learning/pedagogy to be the driver, not the technology.
rare books within the controlled conditions of reading rooms, isolated from much of the secondary supporting material and a growing mass of related digital assets. Students are used to having access to electronic information on demand, so this experience can be foreign and a barrier to their use of special collections. The SCARLET project, while embracing the potential of AR, concentrated on delivering the benefits to student learning without being a flag bearer for the technology. SCARLET was led by the Learning and Teaching team at Mimas2, a national centre of expertise at The University of Manchester. A mixed team dedicated to enhancing the student experience through the application of technology was pulled together, including librarians, academics, learning technologists and students. Sources for primary content were ten key editions of The Divine Comedy by Dante (printed between 1472 and 1555), particularly important in terms of publishing and intellectual history, and the world-renowned oldest fragment of the Gospel of John. At the start of the project in 2011, Junaio was the only AR browser to harness optical tracking functionality, linking 3D models, videos and information to images in the form of GLUE-based channels. This, coupled with an open API and compatibility with Android, iOS and Nokia devices, would prove decisive in the decision to use Junaio. By implementing an object-based AR experience, students could simultaneously experience
the magic of original primary materials, such as an early printed book in the library, whilst enhancing the learning experience by surrounding the book with digitised content; 3D models, images, translations, and information on related objects.
Evaluation
A dominant theme that became evident in the evaluation was that the two academics found differing responses depending on the student user group. Students who had little prior subject knowledge found the app most useful, as it provided a foundation for further investigation and research. The learning experience was most enhanced by AR when the information delivered was
FRAGMENT OF THE GOSPEL OF JOHN AR CONTENT MIMAS, UNIVERSITY OF MANCHESTER.
contextual and less generic. Simply adding existing web assets to an object is insufficient; making them unique and packaging them in digestible chunks produces more positive feedback and value. In addition, student feedback noted that AR experiences are best used as part of a learning activity (either independent or group-based), acting as an enabler for achieving a key course objective (e.g. planning for an essay or presentation). AR was most successful layered over the printed marker, rather than signposting to other web-based resources already accessible through traditional teaching scaffolds (e.g. a CMS). Throughout the project lifespan, student evaluation was critical.
The team went on to win second prize in the ALT Learning Technologist of the Year team award3 and won the Innovation in HE award at Education Innovation 20134. Emphasising the need to align technology with teaching and learning objectives was paramount from the outset to maximise student benefit and impact. Further funding was made available through the SCARLET+ project, whose primary focus was to apply the process and framework to other institutions' special collections (the University of Sussex and the Crafts Study Centre at the University for the Creative Arts), embedding the methodology using a toolkit5.
The partner universities involved were the Special Collections at the University of Sussex6 and the Crafts Study Centre at the University for the Creative Arts7, focusing on content from 1980s Mass Observation material and 20th-century crafts. This project has both developed Mimas' understanding of implementing AR in education and embedded best practice and methodology at other institutions. Crucial to the success was ensuring that, as with SCARLET, a multi-disciplinary team approach was adopted. This ensured that the content developed made an impact on learning and teaching as well as enabling AR skills to spread across the institutions.

An AR project with the Manchester Medical School8 is also in development, which will allow students to access surrounding resources, layering anatomical information and reinforcing learning with instructional demonstrations (e.g. cannula application). In the area of geo-spatial mapping, Mimas has collaborated with colleagues from the Landmap9 service to create an AR experience around the UKMap dataset. This provides a wealth of rich, multi-layered information, accurately locating building types, building heights and ground usage, to name a few. The challenge was to incorporate this tabular data in a visual 3D model that a handheld device could render efficiently. It demonstrates a visual representation of raw datasets that are often extremely large in size and difficult to comprehend.
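The rendering challenge can be made concrete with a toy sketch. The schema below (x, y, footprint width/depth, height per building) is invented for illustration and is far simpler than the real UKMap data; the point is only that tabular records can be turned into very cheap extruded-box geometry:

```python
# Toy sketch: turn tabular building records into extruded boxes, the kind
# of low-polygon geometry a handheld device can render efficiently.
# The (x, y, width, depth, height) schema is invented for illustration;
# the real UKMap dataset is far richer.

def building_to_box(x, y, width, depth, height):
    """Return the 8 corner vertices of a building extruded from its footprint."""
    corners = []
    for z in (0.0, height):
        for dx in (0.0, width):
            for dy in (0.0, depth):
                corners.append((x + dx, y + dy, z))
    return corners

# One row per building, as it might come out of a tabular dataset.
table = [
    (0.0, 0.0, 10.0, 8.0, 25.0),   # a tall office block
    (20.0, 5.0, 6.0, 6.0, 12.0),   # a smaller building
]

mesh = [building_to_box(*row) for row in table]
print(len(mesh), "buildings,", len(mesh[0]), "vertices each")
```

Each building costs only eight vertices (twelve triangles), which is why this kind of simplification lets a phone display thousands of buildings at once.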
Conclusion
To conclude, the two projects have presented important findings on the impact of AR in education, delivering a suite of rich materials, especially given the small amount of funding that was available. Both succeeded in providing a showcase for the Special Collections held at their respective institutions using AR, bringing static objects to life. Sharing was a key element at the heart
of SCARLET and SCARLET+; this, coupled with a strong team ethic, enabled stakeholders to buy into the long-term vision. It is hoped that the legacy of these small projects will be to inspire others to undertake similar work and display the student-led benefits AR can offer. AR opens up huge possibilities for creating immersive learning activities. It is particularly effective in explaining abstract concepts visually, allowing active learners to better absorb the transfer of knowledge. While it may not be suitable for all students and situations, when employed well it can capture the attention like few other technological media. Further information: http://teamscarlet.wordpress.com
Matt Ramirez
Matt Ramirez is currently working on the technical development and support of the JISC-funded SCARLET+ Augmented Reality project; this follows on from his involvement in the award-winning SCARLET project. He has over 15 years' experience in web design and e-learning content development for a variety of subject areas including medicine, IT, science, special collections and business. These have used multimedia content authoring tools such as Flex, Flash, Blender and Unity, to name a few. Matt's role is also concerned with the research and development of new technologies (e.g. iBook/mobile/multimedia development projects and haptics) with the Manchester Medical and Dental Schools. This aims to improve the student experience by embracing innovative learning methods and providing cutting-edge support materials. Read more about Matt's work at http://teamscarlet.wordpress.com and @team_scarlet.
References
1. 2011 Horizon Report, section on augmented reality: http://wp.nmc.org/horizon2011/sections/augmentedreality
2. Mimas: http://mimas.ac.uk
3. SCARLET team are joint second in learning and teaching awards, 13th September 2012: http://teamscarlet.wordpress.com/2012/09/13/scarlet-team-are-joint-runner-up-in-learning-technology-awards
4. Education Innovation 2013: http://educationinnovation.co.uk/cms
5. SCARLET Toolkit: http://scarlet.mimas.ac.uk/mediawiki/index.php/Main_Page
6. Special Collections at the University of Sussex: http://www.sussex.ac.uk/library/specialcollections
7. Crafts Study Centre at the University for the Creative Arts: http://www.csc.ucreative.ac.uk
8. Manchester Medical School: http://www.mms.manchester.ac.uk
9. Landmap: http://landmap.ac.uk
Graphic Design students learn how to work with the latest Adobe suite, Photoshop and website development tools. Most important, however, are the courses in which concept development forms the main part. Whether a concept is devised with colour pencils, felt pens or a collage of other material is not the most important issue at that moment. What is important for the Academy, and the department involved, is that they want the students to become acquainted with creative or artistic research. However, the majority of mainstream teachers place analogue and digital techniques at the same level when developing concepts, while, in effect, they often have an (underlying) preference for pen-and-ink-based handwork. When working with Augmented Reality (a very suitable medium for graphic designers, by the way), new elements should be considered in the concept and design phase. With Augmented Reality new, very new, aspects are added to the world of graphics regarding the Internet and social media, while also taking co-authorship and performance into account.
We, as the AR Lab, are thoroughly convinced that it is important to let first-year students, freshmen and -women, experience augmented reality, because it will change aspects of their discipline as well as their metier. At various other art academies we see similar projects, one of which is the SCARLET (Special Collections using Augmented Reality to Enhance Learning and Teaching) project in the UK, at the University for the Creative Arts. At the beginning of the Royal Academy of Art's current study year, we made a head start with the introduction of AR via projects differing in length and with different student groups. The idea was to work towards a special Pop-Up Gallery, meaning a short, one-day exhibition of the results, in cooperation with the Studium Generale programme, a programme covering lectures throughout the year. Two Graphic Design teachers, both young and digitally savvy, started to work with their classes of 27 students each (!), together with some third-year students as well. The search to get to know what AR is and how it could be a
meaningful technique for graphic designers comprised two steps. The first was a very active YouTube/Google search, and the second step was to learn to play with the idea of adding extra information, analogue and/or digital: a concept-development trajectory of sorts, so to say. The end results would be displayed during our first Pop-Up Gallery on AR. In the words of the students' teachers: Spectacles, headsets, phones and tablets. These are some tools one associates with Augmented Reality (AR). This is obvious, because the assumption in AR is that reality must be supplemented with digital information. Or it would be just reality. The rationale underlying the research project therefore lies especially in the use of digital techniques to make this extra layer onto reality visible. Without digital technology, no addition to reality and certainly no interaction would be possible. It is exactly this assumption that we want to investigate. Is Augmented Reality only achievable by the use of digital means? Can hidden information be augmented without the aid of digital techniques and devices? Following that, we invited freshmen and -women from the Graphic Design department to get started with Augmented Reality. We asked them to think about the kind of information they wanted to disclose in specific situations. What type of problem would they want to solve using AR? With what classification should it work: analogue and/or digital? With this exercise we hoped to widen the concept, while also seeking to come up with new applications and reasons for the use of Augmented Reality. In completing the first step of the two-step project, the students gave us a huge input of examples they encountered: they found all sorts of material via social media or as search results, which they consequently discussed. The concept-developing part, step two, turned out to be very exciting, for the students themselves as well as for us.
We provided technical help when students were really eager to get their ideas
to work as a fully functioning app. We showed them our AR projects and introduced them to e.g. Aurasma, a simple and elegant tool for beginners in the AR app world. We were lucky to have two lecturers who gave their talk on AR as part of the Studium Generale programme preceding the Pop-Up Gallery: AR artist Sander Veenhof and data-visualisation expert Niels Schrader. They were part of the jury that awarded the two best projects at the end of the first Pop-Up Gallery. Not one but two winners were awarded: Jente Hoogeveen and Jorick de Quaatsteniet. Jente won with her project 'Had je dat gedicht?' (roughly translated as 'How does that rhyme with you?'). Error messages we are confronted with on a daily basis are transformed into poems in order to take some of the sting out of stressful situations. Jorick won with his project 'Get your shit together', in which animated QR codes formed text. The variety of final projects was staggering and impressive. Finally, a publication of all the projects was made, with a spread for each student in which the description and intention of their project were incorporated. A special recommendation was given for the dyslexia project of Janne van Hooff, who visualised how people suffering from dyslexia see letters and numbers. Words emerged by moving strips of black paper over the seemingly incoherent rows of letters. These types of projects show the broad range of ways in which an extra layer on top of the physical world poses questions about what we see. What all the students' projects have in common, regardless of whether they are analogue or digital, is that they appear to invite spectators to look at the world from another perspective. The same goes for those students who, at first, were not completely convinced about using AR and how they could incorporate this technique in their projects. Ultimately, the idea was to provide an enriching experience for the students and to give them the tools with which they could incorporate AR in their work.
AUGMENTED EDUCATION
HOW AR TECHNOLOGIES EXTEND OUR MINDS
ROBIN DE LANGE
In the field of education, just as in many other fields, researchers and developers are experimenting with potential applications of Augmented Reality technologies. Many of these experiments draw on the possibilities of exploring virtual information spatially. An interesting example is the 4D Anatomy project by Daqri (2012), where you can explore the physiology of a human being by moving the display device along a piece of paper with markers. On the screen you see a 3D model of the human body. With sliders and buttons you can set the transparency of the skin, or switch on the layer showing the nervous system, for example. Another interesting project in development is the Sesame Street app Big Bird's Words (Qualcomm, 2013), which uses the latest text-recognition algorithms. The (young) users of this upcoming app are asked to look for certain words in their home and aim their device at them. When the device recognizes the word, it gives points to the user. This way the user is asked to involve their environment in the process of learning words. These examples show some of the new forms of interaction and of presenting information, with which developers are trying to create new, interesting and memorable learning experiences.

IMAGE BY MARCELO GERPE, WWW.CREATIVASTOCK.COM
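The marker-based interaction that apps like 4D Anatomy rely on can be illustrated with a deliberately simplified sketch. This is not how Daqri's app is implemented; real AR browsers use robust fiducial detectors (such as ArUco markers or image targets) plus full pose estimation. The toy below only shows the core idea: find the printed marker in the camera frame, and use its location to anchor the virtual overlay.

```python
import numpy as np

# Toy marker 'detector': the marker is just a dark square on white paper,
# and its bounding box is where a real AR app would draw its 3D overlay.

def make_frame(marker_top_left=(40, 60), size=30, shape=(200, 200)):
    """Synthetic grayscale camera frame: white page with one black marker."""
    frame = np.full(shape, 255, dtype=np.uint8)
    r, c = marker_top_left
    frame[r:r + size, c:c + size] = 0
    return frame

def find_marker(frame, threshold=128):
    """Bounding box (top, left, bottom, right) of the dark marker, or None."""
    rows, cols = np.where(frame < threshold)
    if rows.size == 0:
        return None  # no marker in view: draw no overlay
    return (int(rows.min()), int(cols.min()), int(rows.max()), int(cols.max()))

print(find_marker(make_frame()))  # (40, 60, 69, 89)
```

As the user moves the paper, the detected box moves with it, which is exactly what makes the overlaid anatomy model appear fixed to the page.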
In this article I will argue that the developments in AR technologies will make digital information sources much more transparently available to us. In certain cases, this information may even be seen as part of our cognitive process. Because of this change of perspective regarding external information sources, AR technologies could not only lead to new learning methods, but could, and in my opinion should, also trigger debates about the very goals of education itself. To back up this claim, I will first introduce the concept of the Extended Mind.
physical rotation in (b) is actually much faster than the mental rotation. Furthermore, players were not only physically rotating the shapes to fit the slot, but were also rotating them to determine whether a shape would fit the slot, thereby simplifying the task (Kirsh and Maglio, 1994). It is this example of the human capacity to manipulate the environment to solve problems which Clark and Chalmers employ to introduce the Parity Principle: if a part of the world functions as a process
which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process (Clark and Chalmers, 1998). According to the Parity Principle, the human mind is not bound by the borders of skin and skull. To make this claim plausible, Clark and Chalmers present a thought experiment involving the fictional characters Otto and Inga, who are remembering how to get to the museum. Otto has Alzheimer's disease and uses a notebook to serve the function of his memory, while Inga's biological memory is functioning properly. Inga is thought to have a belief about the location of the museum before she recalls this from her internal memory. In the same manner, Clark and Chalmers argue, Otto can be said to have a belief about the location of the museum before he actually consults his notebook. Thereby, under the right circumstances, the notebook can be seen as an extension of Otto's memory. By showing how beliefs are not bound by the borders of the body, Clark and Chalmers show that true mental events can extend into the environment as well.

All the examples of cognitive extension that Clark1 gives in his books and papers are not the typical futuristic technologies that come to mind when thinking about humans merging with technology. Although the possibilities of Brain-Machine Interfaces and neural implants such as in case (c) offer very exciting new ways of communicating with technology, this direct interaction with brains is by no means necessary to become part of the cognitive process (nor is it sufficient for cognitive extension: communicating with technology through a Brain-Machine Interface usually still takes too much cognitive effort). Our brains incorporate the world, and some of the technologies therein, in their cognitive processes in such an intimate way that Clark considers us to be natural born cyborgs (Clark, 2003). In fact, the technologies which Clark considers cognitive extensions of our cyborg minds are hardly identified as technology anymore. One example he mentions is the use of pen and paper when doing long multiplications. To calculate the product of two numbers, we use an algorithm that divides the process of multiplying arbitrarily large numbers into very simple steps. By writing down figures in certain locations, we use the pen to manipulate the external memory source, the paper. The writing utensils play a crucial role in this cognitive process and are therefore, according to the Parity Principle, actually part of this process.

Another example shows that it has become common to talk about the information that is in some of our technologies as if it were part of our own knowledge. When somebody asks us on the street whether we know what time it is, and we are wearing our watch, we often answer yes. Subsequently we raise our arm, look at our watch and see what time it is. Now, according to Clark this is not simply loosely formulated informal language. You actually do know what time it is; "you" is only the hybrid biotechnological system that now includes the wristwatch as a proper part (Clark, 2003). (This proven transparency of the wrist watch is what makes the development of smart watches interesting.)
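Clark's pen-and-paper example translates directly into code. The sketch below implements the familiar grade-school algorithm, with a list of digit cells standing in for the sheet of paper: each step is a single-digit multiplication or addition, and all intermediate state lives in the external "paper", not in the head (the function name and digit-string interface are mine, for illustration).

```python
# The grade-school long-multiplication algorithm: the 'paper' list is the
# external memory Clark describes; the loop body is the small set of
# single-digit operations the head actually performs.

def long_multiply(a: str, b: str) -> str:
    """Multiply two non-negative integers given as decimal digit strings."""
    paper = [0] * (len(a) + len(b))         # one cell per written column
    for i, da in enumerate(reversed(a)):
        carry = 0
        for j, db in enumerate(reversed(b)):
            cell = paper[i + j] + int(da) * int(db) + carry
            paper[i + j] = cell % 10        # write the digit in its column
            carry = cell // 10              # carry to the next column
        paper[i + len(b)] += carry          # final carry of this row
    result = "".join(map(str, reversed(paper))).lstrip("0")
    return result or "0"

print(long_multiply("1234", "5678"))  # 7006652
```

The brain never holds more than a one- or two-digit intermediate value; everything else is offloaded to the list, which is exactly the division of labour between head and paper that the Parity Principle points at.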
Now, wristwatches have been around for many decades, and writing utensils even for centuries. During this time these technologies have become ubiquitous. They have become socially accepted and have actually shaped culture itself. An interesting question is whether more modern external information sources could obtain the same status as these age-old technologies and play a similar, active role in cognitive processes. Could digital information sources, for example parts of the Web, actually become parts of our minds?
and scan through the text to find the information he needs. In the current widespread way of interacting, the information access costs of retrieving information from the Web are way too high for it to be considered part of the cognitive process.
The main goal of education should be to train the technologically extended cognitive system.

tion knows when you're busy driving, for example, and doesn't bother you then. Now, when a friend (who is not really into new technology and would rather ask a friend to help him) asks you whether you know the meaning of a certain word that is not in your biological memory, and a short, clear description of the word pops up immediately in the corner of your field of view, would you say that you know the meaning of this word? I can imagine that you, after you have become more and more used to the device and have experienced this situation a few times already, might say yes, similar to the situation with the wristwatch [2]. More so, in a very real sense, I think you might start to feel like you really do know it.

But what is it about AR technologies that could lower information access costs so significantly? Of course Head-Up Displays (HUDs) play a great part in this, by eliminating the physical effort of getting your smartphone out of your pocket and having to hold it within your view. When it does indeed become ordinary to wear HUDs, information can be presented to the user at all times, at the exact moment when it is needed. Another important aspect of AR is the use of information from different sensors and smart algorithms doing image and speech recognition. By combining these, possibilities are created to present information in context-sensitive ways, responding both to the environment and to the user. Furthermore, digital information can be placed over the world, which is of course the main idea of AR [3]. By doing so, you can interact with digital information in similar ways to how you interact with the physical world, creating a very natural, intuitive interface. These are the characteristics of AR that create the potential of making digital information much more transparently available to us. I suggest that under certain conditions, well-designed, personalized information sources are able to compete with mental resources in terms of costs of information access. According to the Parity Principle,
these digital information sources could then be seen as proper parts of our hybrid minds.
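As a toy illustration of this cost comparison, one could model a hybrid memory that answers a query from whichever store, biological or external, holds the fact at the lowest access cost. All store names, contents and cost figures below are hypothetical constructs of my own, not taken from Clark's work or this article.

```python
# Toy model of a hybrid (biological + external) memory system.
STORES = [
    # (name, contents, information access cost)
    ("biological", {"museum": "on 53rd Street"}, 1),            # Inga-style recall
    ("hud",        {"osmosis": "movement through a membrane"}, 2),
    ("notebook",   {"museum": "on 53rd Street",
                    "dentist": "Main Street 7"}, 4),            # Otto's notebook
    ("web_search", {"dentist": "Main Street 7"}, 20),           # too effortful
]

COST_THRESHOLD = 5  # above this, retrieval takes too much effort to
                    # count as part of the cognitive process

def recall(query):
    """Answer from the cheapest store holding the fact, provided its
    access cost stays below the threshold -- the Parity intuition:
    such a store functions just like biological memory."""
    for name, contents, cost in sorted(STORES, key=lambda s: s[2]):
        if cost <= COST_THRESHOLD and query in contents:
            return name, contents[query]
    return None
```

Calling `recall("dentist")` answers from the notebook, mirroring Otto, while the `web_search` store is never consulted: its access cost puts it outside the cognitive process, which is exactly what a well-designed HUD interface would change.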
this task of storing information could be off-loaded to an external source which is constantly available to us at low information access costs.
The general view on the use of technology in education is quite different from the view expressed in this article, though. For the most part of their education, students still only get to use some basic technologies: a pen, a piece of paper and maybe a dictionary [4] or an outdated (graphical) calculator [5]. This critical attitude towards the use of technology is very understandable. Digital technology is developing very rapidly, and careful decisions have to be made about how to use it in education. To come to these decisions, a lot of research on the use of technology in the learning process is needed. Furthermore, there should be an active discussion on the goals of education and what technologies students can use to reach these goals. An extended view of the mind, in which external resources have an active role in the cognitive process, can offer a valuable perspective in this discussion.

REFERENCES

Blomberg, O. (2009). Do socio-technical systems cognise? 2nd Symposium on Computing and Philosophy.

Clark, A. (2003). Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence. Oxford: Oxford University Press.

Clark, A. (2008). Supersizing the Mind: Embodiment, Action and Cognitive Extension. Oxford: Oxford University Press.

Clark, A., & Chalmers, D. (1998). The Extended Mind. Analysis, 10-23.

de Lange, R. (2011). Het Semantic Web en netwerktechnologische cognitieve uitbreidingen. Retrieved from http://www.robindelange.com/wikisofie

Dennett, D. (1996). Kinds of Minds: Toward an Understanding of Consciousness. New York, USA: Basic Books.

Dror, I. E., & Harnad, S. (2008). Offloading Cognition onto Cognitive Technology. In I. E. Dror & S. Harnad (Eds.), Cognition Distributed: How Cognitive Technology Extends Our Minds (pp. 1-23).

ESRC/EPSRC Technology Enhanced Learning Programme. (2012). System Upgrade: Realising the Vision for UK Education. London: tel.ac.uk.

Kirsh, D., & Maglio, P. (1994). On distinguishing epistemic from pragmatic action. Cognitive Science, 513-549.

Rorty, R. (1999). Education as Socialization and as Individualization. In R. Rorty, Philosophy and Social Hope (pp. 114-126). London: Penguin Books.

Smart, P. (2010, in press). Cognition and the Web. Network-Enabled Cognition: The Contribution of Social and Technological Networks to Human Cognition.

Smart, P., Engelbrecht, P., Braines, D., Hendler, J., & Shadbolt, N. (2008). The Extended Mind and Network-Enabled Cognition. Retrieved June 24, 2011, from University of Southampton - School of Electronics and Computer Science: http://eprints.ecs.soton.ac.uk/16649/1/Network-Enabled_Cognitionv17.pdf

Wheeler, M. (2011, November). Thinking Beyond the Brain: Educating and Building, from the Standpoint of Extended Cognition. Computational Culture.
ENDNOTES
1. The initial paper "The Extended Mind" was written by Andy Clark and David Chalmers. Because Clark has written many other papers and books on this subject, I will refer to Clark from here on.
2. If the word were jargon from a field you are not familiar with, for example, you would probably not understand the meaning directly and would need to look up more information, thereby increasing the costs of information access.
3. This characteristic of AR of overlaying the physical world with virtual objects is not really present in this scenario. For this reason, one might argue that the example does not really show AR. However, it does use certain AR technologies intensively to provide context-sensitive information to the user who interacts with the world.
4. The information access costs of looking up a word in the dictionary go through the roof.
5. Moore's law seems to be failing here. The hardware in these devices stays roughly the same, and even the price remains the same!
Robin de Lange
Robin de Lange has a bachelor's degree in Physics and Philosophy and has followed courses on Artificial Intelligence. He is now a student in the Media Technology MSc program at Leiden University and is particularly interested in technologically extended cognition. For his graduation research project he is developing an Augmented Reality application that supports the graphical solving of mathematical equations, thereby exploring educational challenges and possibilities. Besides his studies, Robin has taken part in several entrepreneurial projects. Most notably, he was the co-owner of a company that specialized in homework guidance and tutoring. He is a freelance video producer and science communicator. At the moment, he is looking for funding to do a PhD within his field of interest. For more information: www.robindelange.com
HOW DID WE DO IT: WHICH AUGMENTED REALITY SOFTWARE DOES THE AR LAB USE?
Wim van Eck
THIS APPLICATION, COMMISSIONED BY THE RCE (CULTURAL HERITAGE AGENCY OF THE NETHERLANDS), WAS DEVELOPED BY THE AR LAB USING VUFORIA AND UNITY 3D. THE PAINTING 'ISAAC BLESSING JACOB' BY THE DUTCH PAINTER GOVERT FLINCK (1615-1660) HAS BEEN AUGMENTED TO GIVE INSIGHT INTO THE PAINTER'S PRACTICE AND OFFERS SCIENTIFIC DATA IN A PLAYFUL WAY. THE APPLICATION WILL PREMIERE AT MUSEUM CATHARIJNECONVENT (UTRECHT, NETHERLANDS) ON MAY 17TH.
Often we are asked "What is the best augmented reality software?", which is a difficult question to answer. First of all, there are so many programs available that it is difficult to have actually tried them all, and updates and new programs pop up all the time. Secondly, choosing which software to use really depends on what you want to achieve in the end. At the AR Lab we use a variety of augmented reality programs to realize our projects. I will give an overview of the software we use and why we use it. This doesn't mean we claim that this is the best software and that there are no alternatives; we merely aim to give an insight into our daily workflow. For this overview we created four categories: easy-to-use for mobile devices, easy-to-use for desktop augmented reality, augmented reality software for interactions, and head-mounted augmented reality.
The app provides clear information for every step you have to take, removing any technical barriers (Figure 1). Markers can be generated on the spot by taking pictures with the device, and photographs and videos stored on the device can be used to augment the chosen scene. When more possibilities are needed there is also a free online application (www.aurasma.com/partners); you only need to register to be able to access it. This well documented application makes it easier to precisely position your virtual objects, and also to import images and videos which are not on your mobile device, as well as 3D models. It is a pity that Aurasma only seems to give good support for 3ds Max and Maya; it is more difficult to import 3D models from, for example, Cinema 4D. It is very easy though to use video with transparency, a not very common option which can give great results. The tracking quality of Aurasma is not the best around, and fast camera movements will result in loss of tracking, but it is definitely good enough for most projects.
The Trial Download is the OS X version and only features marker tracking; the demo limitations are the same as in the Windows version. Lastly, there is the BuildAR Free version Download (2008 version), which is only available for Windows and is free of charge. The only limitation is a HITLabNZ logo, but many of our students don't mind this logo much. BuildAR is easy to use; the new version can augment your scene with images, video, audio and 3D models.
it very easy to import 3D models, including animations, from almost all 3D packages. Vuforia's image-based tracking is extremely stable; the tracking quality stays good even if the tracking image is partly occluded or when there is little light available. Besides image markers there are also frame markers available, which use a pattern of black and white squares positioned around the image. Semiconductor company Qualcomm offers Vuforia for free, and Unity has a free version as well. However, you will need to buy the Unity iOS or Android add-on to be able to export to a mobile device.
Fig. 3
AR[t] PICK
IMMATERIALS

BY DESIGN STUDIO ONFORMATIVE AND CHRISTOPHER WARNOW, 2011
Immaterials is the result of a collaboration between the design studio onformative (www.onformative.com) and the computational designer Christopher Warnow (christopherwarnow.com). The project explores how information can be integrated into physical space. Using the light painting technique, data is placed in a room. The resulting forms depict possible data sets and examine the design possibilities between technoid holograms and personal notes.

Further information:
http://christopherwarnow.com/portfolio/?p=244
http://www.onformative.com/work/immaterials/
CONTRIBUTORS
WIM VAN ECK
Royal Academy of Art (KABK) w.vaneck@kabk.nl
MARIANA KNIVETON
Royal Academy of Art (KABK) m.kniveton@kabk.nl
Wim van Eck is the 3D animation specialist of the AR Lab. His main tasks are developing Augmented Reality projects, supporting and supervising students and creating 3D content. His interests are, among others, real-time 3D animation, game design and creative research.
Mariana Kniveton is currently a master student at Utrecht University, studying New Media and Digital Culture. Since September 2012 she has worked as an intern at the Research Department IVT and the AR Lab. After a brief stint as a cover model for AR[t] #2, Mariana took up editing duties for this current issue.
MAARTEN LAMERS

Leiden University
lamers@liacs.nl

Maarten Lamers is assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and board member of the Media Technology MSc program. Specializations include social robotics, bio-hybrid computer games, scientific creativity, and models for perceptualization.

EDWIN VAN DER HEIDE

Edwin van der Heide is an artist and researcher in the field of sound, space and interaction. Besides running his own studio he is a part-time assistant professor at Leiden University (LIACS / Media Technology MSc programme) and heads the Spatial Interaction Lab at the ArtScience Interfaculty of the Royal Conservatoire and Arts Academy in The Hague.
PIETER JONKER

Delft University of Technology
P.P.Jonker@tudelft.nl

Pieter Jonker is Professor at Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering (3mE). His main interests and fields of research are: real-time embedded image processing, parallel image processing architectures, robot vision, robot learning and Augmented Reality.

HANNA SCHRAFFENBERGER

Leiden University
hkschraf@liacs.nl

Hanna Schraffenberger works as a researcher and PhD student at the Leiden Institute of Advanced Computer Science (LIACS) and at the AR Lab in The Hague. Her research interests include interaction in interactive art and (non-visual) Augmented Reality.
YOLANDE KOLSTEE

Royal Academy of Art (KABK)
Y.Kolstee@kabk.nl

Yolande Kolstee has been head of the AR Lab since 2006. She holds the post of Lector (Dutch for researcher in professional universities) in the field of Innovative Visualisation Techniques in Higher Art Education at the Royal Academy of Art, The Hague.

ESMÉ VAHRMEIJER

Royal Academy of Art (KABK)
e.vahrmeijer@kabk.nl

Esmé Vahrmeijer is the graphic designer and webmaster of the AR Lab. Besides her work at the AR Lab, she is a part-time student at the Royal Academy of Art (KABK) and runs her own graphic design studio Ooxo. Her interests are in graphic design, typography, web design, photography and education.
JOUKE VERLINDEN

Delft University of Technology
j.c.verlinden@tudelft.nl

Jouke Verlinden is assistant professor at the section of computer aided design engineering at the Faculty of Industrial Design Engineering. With a background in virtual reality and interaction design, he leads the Augmented Matter in Context lab, which focuses on the blend between bits and atoms for design and creativity.

MATT RAMIREZ

teamscarlet.wordpress.com

Matt Ramirez has over 15 years' experience in web design and e-learning content development for a variety of subject areas including medicine, IT, science, special collections and business.
LOTTE DE REUS

www.lottedereus.nl

BC "HEAVY" BIERMANN

BC "Heavy" Biermann is founder of The Heavy Projects (and its collaborative spin-off Re+Public). Heavy creates innovative interfaces between digital design and physical worlds in ways that provoke the imagination and challenge existing styles of art, design, and interaction.
ROBIN DE LANGE

www.robindelange.com

Robin de Lange is a student in the Media Technology MSc program at Leiden University and a part-time entrepreneur. He is looking for funding to do a PhD on technologically extended cognition.

ALEJANDRO VELIZ REYES

Alejandro Veliz Reyes is a teaching assistant and PhD student in digital architectural design at the University of Salford (United Kingdom). His research interests are design pedagogy, augmented reality, and collaborative technologies for design.
ANTAL RUHL
www.antalruhl.com
Antal Ruhl is a media artist with a background in design, science and art. He creates objects that let us rethink our environment. These objects vary from kinetic sculptures to interactive installations.

OLIVER PERCIVALL

Oliver Percivall works in IT project management. He is interested in where technology could take us. This interest has resulted in the science fiction novel "The Augmented Star".
NEXT ISSUE
The next issue of AR[t] will be out in the fourth quarter of 2013.