
2006 International Conference on Information Technology: Research and Education

ARGAMAN: Rapid Deployment Virtual Reality System for PTSD Rehabilitation
Ehud Dayan
Sonarion-Hadassah Academic Virtual Reality Center,
Hadassah College
Jerusalem, Israel
udi@sonarion.com

Abstract: The Argaman Virtual Reality Software Suite is an innovative solution providing an immersive virtual reality therapeutic system for the treatment of PTSD (Post-Traumatic Stress Disorder) in people who witnessed or suffered terror attacks or other traumatic experiences. The flexible design of the tool enables custom reconstruction of the scenery, as well as of specific events, for each patient. The complete treatment system projects the therapist into the virtual scene, greatly improving the overall impact of the treatment.

The system editor rapidly simulates in 3D the audio-visual environment where the trauma occurred and provides life-like avatars that imitate the people in that vicinity. When using the system, the patient can gradually process the experience with the help of a professional therapist, construct new memories of alternative scenarios, and deal with previously suppressed emotions and cognitions.

A central and unique feature of the system is its ability to insert the therapist or other players into the virtual scene in real time, by digitally processing a live video stream according to the blue-screen technique. This ensures that the concept of personal care in therapy is extended into the virtual world.

Index Terms: Virtual Reality, Post-Traumatic Stress Disorder, PTSD, Clinical Interface, PolyClip

I. INTRODUCTION

Modern medicine offers a wide array of tools and medical devices to deal with physical traumas. However, mental traumas, including PTSD, are still treated mainly with an elementary technological toolkit. The introduction of VR treatment added an important tool for treating phobias and PTSD, yet the need to provide fast immersion into tailored, specific scenery, sometimes for hundreds of people injured at the same time in different settings, calls for a technical solution different from those that currently exist. The Argaman system is an attempt to meet this need by innovating on the current design concepts of VR treatment.

Today's VR systems largely rely on the simplification of reality. Examples include the BusWorld software, a VR environment for the treatment of PTSD injuries associated mainly with terror acts such as bus bombings [2]; Virtually Better Inc.'s Vietnam VR scenario (1997); and the Desert Storm PTSD simulation system released in 2006 [3]. Polygon-based avatars succeed in creating a degree of immersion, yet their impact still falls far short of seeing real people in action.

The need for realistic animation in the treatment of public-speaking fears inspired some developers to invest considerable effort in emulating facial expressions with polygonal avatars (D.-P. Pertaub, M. Slater and C. Barker) [1]. Attempts to reproduce accurate body movement and facial expressions promoted further solutions: one of these is the Virtual Reality Medical Center's VR system used to treat fear of public speaking. This system does not rely on the polygonal technique to display the gestures and expressions of human-like avatars; instead, it uses video footage projected onto the virtual environment. The footage is inserted into the virtual world and can be controlled to display various audience behaviors accurately.

The need for a new type of avatar representation format stems from the fact that the immersion effect is essential to the success of treatment and is the major justification for using VR.

Current therapeutic Virtual Reality systems require the patient to be audio-visually isolated from the real world by means of a head-mounted display and earphones; the patient therefore cannot draw on the support of the therapist during immersion. We attempt to solve this by projecting the psychologist's image in real time directly into the VR world. The obvious benefit is that the immersion effect, namely audio-visual isolation of the patient from the real world, is still achieved, while a supportive therapist accompanies the patient as he or she faces and processes challenging scenery from his or her own traumatic past.

The Argaman system's development guideline was to produce quickly deployable, easily configurable software that supports the creation of VR scenes of specific terror attacks or other traumatic events within several hours. The system's editors enable the user to create the static scene of a street, a passenger bus, a coffee shop or any other surrounding. Once the static environment is ready, the user can populate it with clips of life-like avatars according to their proximity to the site of the incident. The completed scene can then be used in the therapist's office with minimal VR equipment and serve as a therapeutic platform where treatment with the active participation of both the patient and the therapist can occur. Thus, the concept of personal care in therapy is extended into the virtual world.

II. SYSTEM OVERVIEW
The Argaman system comprises several subsystems and software structures that work together to create the material of the VR scene offline. These subsystems (step 1 in Figure 1) include the following components:

- PolyClip Editor: used to create animated actors, which are then inserted into the 3D environment of the virtual reality scene;
- Virtual World Editor: produces an XML file that is later used to describe the layout of the virtual world (a sketch of such a file follows below).
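For illustration only, here is a minimal sketch of what such a layout file might look like. All element and attribute names are assumptions made for this example; the paper does not document the actual schema.

```xml
<!-- Hypothetical Argaman scene layout; every name here is illustrative only -->
<scene name="street_corner">
  <wall x1="0" y1="0" x2="12" y2="0" height="4" texture="shopfront.jpg"/>
  <wall x1="12" y1="0" x2="12" y2="8" height="4" texture="bus_side.jpg"/>
  <ground texture="asphalt.jpg"/>
  <sky texture="overcast.jpg"/>
  <viewpoint x="6" y="4" heading="90"/> <!-- initial point of view -->
  <sound file="crowd_ambience.wav" x="6" y="2" rolloff="1.0" doppler="true"/>
  <polyclip file="pedestrian01.pvc" x="3" y="5"/>
</scene>
```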

Materials prepared offline are later inserted into the VR software, which uses them to build the virtual world, where still images, 3D objects, sounds, events and PolyClips come together to reconstruct the real scene.
The online processes (steps 2, 3 and 4 in Figure 1) are outlined as follows:
The first stage (step 2 in Figure 1) is acquiring the video stream from the camera. Each frame is analyzed and the background is removed, either by the chroma-key technique [6][7] or by plain background subtraction. This is performed at 30 frames per second: each frame is processed so that every background pixel is set to transparent and only the therapist's figure remains visible. Once a frame has been processed, the image-processing unit sends the frame data over a communication channel to the virtual reality display machine. Splitting the system across two machines is dictated by the heavy computation and data-transfer rates involved in its overall operation; this design lets the two computers perform the computation in tandem.
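For illustration, a minimal sketch of the per-pixel keying step, assuming 8-bit RGBA frames and a simple color-distance test; the system's actual algorithm is not documented.

```cpp
#include <cstdint>
#include <cstdlib>

// Minimal chroma-key sketch: any pixel close enough to the reference
// background color has its alpha set to 0 (fully transparent), so only
// the foreground figure survives compositing into the VR scene.
void chromaKeyFrame(uint8_t* rgba, int width, int height,
                    uint8_t keyR, uint8_t keyG, uint8_t keyB,
                    int threshold /* e.g. 60, tuned per studio setup */)
{
    for (int i = 0; i < width * height; ++i) {
        uint8_t* px = rgba + 4 * i;
        int dist = abs(px[0] - keyR) + abs(px[1] - keyG) + abs(px[2] - keyB);
        px[3] = (dist < threshold) ? 0 : 255;   // transparent vs. opaque
    }
}
```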

Figure 1 displays the offline process (1) and the online processes (2), (3) and (4).
III. SYSTEM CONFIGURATION
The system runs on the following hardware and software configuration.
Hardware
Two Dell Precision 360 workstations, each with 1 GB of memory, a single-core 2.4 GHz CPU and an 80 GB SATA hard drive. The two computers were connected via gigabit LAN cards.
Immersion in the VR scene was greatly enhanced by an InterTrax2 head tracker, with which the user changes the viewpoint in three degrees of freedom (pitch, roll and yaw) by head movement. To move around the scene, the user can use a Microsoft joystick or the standard keyboard.
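For illustration, a minimal sketch of how the three tracker angles and the joystick-driven position might drive the OpenGL view each frame; the axis conventions and the function name are assumptions, not the system's actual code.

```cpp
#include <GL/gl.h>

// Apply head-tracker orientation (pitch, roll, yaw in degrees) and the
// joystick-driven position to the OpenGL modelview matrix each frame.
// A sketch only: axis conventions depend on the tracker's coordinate frame.
void applyCamera(float pitch, float yaw, float roll,
                 float posX, float posY, float posZ)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(-roll,  0.0f, 0.0f, 1.0f);
    glRotatef(-pitch, 1.0f, 0.0f, 0.0f);
    glRotatef(-yaw,   0.0f, 1.0f, 0.0f);
    glTranslatef(-posX, -posY, -posZ);   // inverse camera transform
}
```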
A standard webcam, the Logitech 4000 Pro, was used for capturing images, both for the PolyClip Editor and for the real-time insertion of the therapist's image into the virtual scene.
The VR HMD is an i-glasses model from IO-Display Systems, capable of 800x600 (SVGA) resolution (http://www.i-glassesstore.com/iglasses-pc-hr.html).
The chroma-key setup was checked both in a professional blue-screen studio and in a temporary setup under standard office lighting conditions with varying blue backgrounds behind the user; in both cases the results were satisfactory.

Figure 3: The simulated world with an animated PolyClip.

Software
The system was built using OpenGL to display the VR scene and DirectX 8.0 to control the surround sound and to handle two-way communication with the Microsoft SideWinder 2 Force Feedback joystick. The environment also contains 3DS models and PolyClips.
The VR Editor, the PolyClip Editor, the real-time video processing and the actual simulation were built using MFC with Microsoft Visual C++ 6.0. The real-time video-processing software also used Microsoft's Vision SDK (VisSDK) library.

IV. VIRTUAL WORLD EDITOR

The Virtual World Editor (Figure 2) enables the user to lay out the visual and audio elements of the virtual scene. The VR scene is constructed by first taking pictures of the real scene and organizing them according to its geometry. The user draws walls in the editor and attaches the matching picture names to them; the ground and the sky are formed in the same fashion. The icon that represents the initial point of view, as well as other 3D objects, is also added to the scene. The schematic final view of the scene is then supplemented with sound files attached to specific locations in the scene. The sound files are symbolized by icons and can be set to correspond with 3D sound effects such as the Doppler effect, direction and ambiance.

Figure 2: The Virtual World Editor.

V. ANIMATED FIGURES
The animated figures are intended to add a life-like quality to the simulated static environment (Figure 3). To achieve a stronger resemblance to reality, each figure a PolyClip creates has its own 3D sound attached, including the following effects (a sketch of such a listening model follows the list):
- Muffling: sound coming from behind the listener is muffled in comparison with sound coming from the front, reflecting the orientation of the ears; the same effect applies to sound arriving from the right relative to the left ear, and vice versa.
- Interaural intensity difference: sound from a source positioned to the listener's right sounds louder in the right earphone than in the left one.
- Interaural time difference: sound from a source on the right reaches the right earphone slightly earlier than the left one.
- Directional sound (roll-off): the loudness of a source varies with the distance to it.
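As an illustration of these effects, here is a minimal plain-C++ sketch of a toy listening model. The constants and the roll-off law are assumptions for illustration; the actual system drives spatial sound through DirectX 8.0 rather than code like this, and muffling is not modeled here.

```cpp
#include <cmath>

// Toy interaural model: given source azimuth (radians, 0 = straight ahead,
// positive = right) and distance (meters), compute per-ear gains and the
// interaural time difference. Constants are illustrative, not calibrated.
struct EarMix { float leftGain, rightGain, itdSeconds; };

EarMix mix3dSource(float azimuth, float distance)
{
    const float headRadius = 0.09f;          // ~9 cm, a typical head
    const float speedOfSound = 343.0f;       // m/s
    EarMix m;
    float pan = std::sin(azimuth);           // -1 (left) .. +1 (right)
    m.rightGain = 0.5f * (1.0f + pan);       // interaural intensity difference
    m.leftGain  = 0.5f * (1.0f - pan);
    float rolloff = 1.0f / (1.0f + distance);    // distance attenuation
    m.leftGain  *= rolloff;
    m.rightGain *= rolloff;
    // Interaural time difference (Woodworth approximation)
    m.itdSeconds = (headRadius / speedOfSound) * (azimuth + std::sin(azimuth));
    return m;
}
```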


The PolyClips use a standard billboard technique in which a bitmap graphical element is attached to a polygon situated in 3D space. The polygon aligns automatically with the OpenGL camera, keeping a constant angle to the viewer. The PolyClips present video clips of figures on the billboard, so the patient can see single figures or groups of people acting according to pre-recorded footage. One of the key considerations behind the PolyClip format was to give the therapist the option of constructing and displaying life-like figures without expensive software or hardware such as embedded chroma-key processors.
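A minimal sketch of the standard OpenGL billboard alignment, assuming the legacy fixed-function pipeline suggested by the system's OpenGL/Visual C++ 6.0 stack; this shows the general technique, not necessarily the system's exact code.

```cpp
#include <GL/gl.h>

// Draw a camera-facing textured quad ("billboard") at (x, y, z).
// Resetting the upper-left 3x3 of the modelview matrix to identity removes
// the camera rotation, so the quad always faces the viewer.
void drawBillboard(float x, float y, float z, float w, float h, GLuint tex)
{
    glPushMatrix();
    glTranslatef(x, y, z);

    float m[16];
    glGetFloatv(GL_MODELVIEW_MATRIX, m);
    for (int i = 0; i < 3; ++i)          // wipe rotation, keep translation
        for (int j = 0; j < 3; ++j)
            m[i * 4 + j] = (i == j) ? 1.0f : 0.0f;
    glLoadMatrixf(m);

    glBindTexture(GL_TEXTURE_2D, tex);   // current PolyClip video frame
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(-w / 2, 0, 0);
    glTexCoord2f(1, 0); glVertex3f( w / 2, 0, 0);
    glTexCoord2f(1, 1); glVertex3f( w / 2, h, 0);
    glTexCoord2f(0, 1); glVertex3f(-w / 2, h, 0);
    glEnd();
    glPopMatrix();
}
```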
Apart from creating life-like figures to serve as a background crowd, the therapist can easily reconstruct figures that are dressed, and behave, like those that appeared in the real traumatic scene. The ability to construct the terrorist holding the gun, or the paramedic who comes to assist the wounded, can help the therapist rewrite parts of the story and provide alternative endings cheaply and easily.



Today one can construct polygonal figures using hard-to-learn programs such as Maya or 3D Studio; however, such figures were ruled out as too coarse and cartoon-like for true immersion. A better alternative is to use Adobe or Avid software, or similar tools, to isolate figures from a blue screen so that only the figure remains visible against a transparent background. These packages, though, are expensive and not tailored to processing video clips shot by a webcam in field conditions for display in a VR environment. The Argaman system therefore provides a dedicated, simple editing tool built to process noisy video clips made with a standard webcam. Each clip is cleaned by sampling the background and removing it iteratively. The editor accepts video clips taken with imperfect lighting and a simple monochromatic background, as opposed to footage from a professional chroma-key studio.

The system is provided with a bank of PolyClips that can be situated in the scene to produce various types of crowded environments.

Figure 4: The PolyClip Editor is used for substituting the background with transparent pixels.

VI. CONCEPT OF EMBEDDING THE PSYCHOLOGIST IN REAL TIME

Immersing the patient involves audio-visual isolation from the real world. Today's VR systems enable the psychologist to view on screen what the patient sees; the patient, however, cannot view the therapist, so an important aspect of personal presence and care is eliminated from the treatment. It is desirable, then, to enable the therapist to become part of the VR scene in a way that extends his or her audio-visual presence, gestures and attention into the VR world.

Embedding a figure in real time into the virtual scene is performed in several stages. The process begins with calibrating the image-processing module against the therapist's background. It is recommended that the background be of a monochrome color distinct from the therapist's color scheme. Once calibration is finished, the therapist can step into the camera's field of view and have his or her image projected into the VR scene. The image processing runs at 30 frames per second. For each frame, shadows are first cleaned from the image; each background pixel is then replaced with a transparent pixel, yielding the figure on top of a transparent background. The processed frame is sent via a TCP/IP channel to the VR software (sketched below) and laid out on a billboard. The entire process occurs 30 times per second, producing a smooth, flicker-free video stream of the projected figure.
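For illustration, a minimal Winsock sketch of the frame hand-off between the two machines, assuming a simple length-prefixed wire format; the actual protocol and error handling are not documented in the paper.

```cpp
#include <winsock2.h>   // the paper's machines run Windows; Winsock assumed
#include <cstdint>
#pragma comment(lib, "ws2_32.lib")

// Send one processed RGBA frame, length-prefixed, to the VR display machine.
// A sketch only: the real wire format used by the system is not documented.
bool sendFrame(SOCKET sock, const uint8_t* rgba, uint32_t byteCount)
{
    uint32_t header = htonl(byteCount);   // length prefix in network order
    if (send(sock, reinterpret_cast<const char*>(&header), 4, 0) != 4)
        return false;
    uint32_t sent = 0;
    while (sent < byteCount) {            // send() may write partially
        int n = send(sock, reinterpret_cast<const char*>(rgba) + sent,
                     static_cast<int>(byteCount - sent), 0);
        if (n <= 0) return false;
        sent += static_cast<uint32_t>(n);
    }
    return true;
}
```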


VII. TESTING
Tests are currently carried out using a reconstruction of Zion Square, situated in downtown Jerusalem. Several animated PolyClip figures are placed in the scene, together with recorded ambient sounds. Fifteen healthy volunteers were asked to rate the system informally on ease of navigation, accuracy and sense of immersion. Their input is now being used to fine-tune the Argaman system before proceeding to clinical trials.


VIII. FUTURE WORK

The system supports basic scenery scripting. The crowd-animation design concepts introduced in [5] are appealing, especially for controlling the automatic behavior of the background crowd, that is, the PolyClip elements that have no direct, complex interaction with the patient's representation in the VR scenery. For more complex and delicate behavior, possibly involving automated learning and decision making, it might be beneficial to draw on the design concepts of R. Arkin [4] (MissionLab). Incorporating these capabilities will be the focus of future work.
IX. CONCLUSION
Future clinical trials will be required to prove the efficacy of
the Argaman system. However, we hope that the introduction
of the new system elements and techniques will enhance the
effect of treatment and enable therapists to provide significant
relief to victims of PTSD.

Acknowledgments

We wish to thank Natalie Friedman and Shalom Kremer for helping to develop the PolyClips, and Dana and Daniel Morris for helping to create the Virtual Editor and the virtual simulation.


REFERENCES
[1] D.-P. Pertaub, M. Slater and C. Barker, "An Experiment on Fear of Public Speaking in Virtual Reality," in Medicine Meets Virtual Reality 2001, pp. 372-378.
[2] N. Josman, E. Somer, A. Reisberg, A. Garcia-Palacios, H. Hoffman and P. L. Weiss, "Virtual reality: innovative technology for the treatment of victims of terrorist bus bombing with posttraumatic stress disorder," paper presented at the 10th Annual Cybertherapy Conference, Basel, Switzerland, June 13-17, 2005.
[3] A. Rizzo, J. Pair, K. Graap, P. McNerney, B. Wiederhold, M. Wiederhold and J. Spira, "A Virtual Reality Exposure Therapy Application for Iraq War Military Personnel with PTSD," in Novel Approaches to the Diagnosis and Treatment of Posttraumatic Stress Disorder, Washington DC, March 2006, pp. 235-250.
[4] R. Arkin, T. Collins and Y. Endo, "Tactical Mobile Robot Mission Specification and Execution," Mobile Robots XIV, 1999.
[5] S. R. Musse, C. Babski, T. Capin and D. Thalmann, "Crowd Modeling in Collaborative Virtual Environments," ACM VRST '98, Taiwan.
[6] C. Breiteneder, S. Gibbs and C. Arapis, "TELEPORT: an augmented reality teleconferencing environment," in Proc. 3rd Eurographics Workshop on Virtual Environments: Coexistence & Collaboration, Monte Carlo, Monaco, February 1996.
[7] C. E. Hughes, C. B. Stapleton, P. Micikevicius, D. E. Hughes, S. Malo and M. O'Connor, "Mixed Fantasy: An Integrated System for Delivering MR Experiences," VR Usability Workshop: Designing and Evaluating VR Systems, Nottingham, England, January 22-23, 2004.
