Mission Statement

The Tower is an interdisciplinary research journal for undergraduate students at the Georgia Institute of Technology.
The goals of our publication are to:
■ showcase undergraduate achievements in research;
■ inspire academic inquiry;
■ and promote Georgia Tech's commitment to undergraduate research endeavors.
The Editorial Board
Chuyong Yi
Editor-in-Chief 2008-2010
editor@gttower.org
Michael Chen
Editor-in-Chief 2010-2011
editor@gttower.org
Emily Weigel
Associate Editor for Submission & Review
review@gttower.org
Angela Valenti
Associate Editor for Production
production@gttower.org
Erin Keller
Business Manager
business@gttower.org
Dr. Karen Harwell
Faculty Advisor
advisor@gttower.org
Copyright Information
© 2010 The Tower at Georgia Tech. Office of Student Media, 353 Ferst Drive, Atlanta, GA 30332-0290
Welcome
Staff
Undergraduate Reviewers
Michael Chen
Helen Xu
Andrew Ellet
Shereka Banton
Zane Blanton
Alexander Caulk
Azeem Bande-Ali
Katy Hammersmith
Ian Guthridge
Graduate Reviewers
Yogish Gopala
Michelle Schlea
Dongwook Lim
Rachel Horak
Lisa Vaughn
Rick McKeon
Shriradha Sengupta
Hilary Smith
Nikhilesh Natraj
Partha Chakraborty
David Miller
Production
J.B. Sloan
Grace Bayona
Matthew Postema
Webmaster
Andrew Ash
Business
Jen Duke
Acknowledgements
Special Thanks
The Tower would not have been possible without the assistance of the following people:
Dr. Karen Harwell – Faculty Advisor, Undergraduate Research Opportunities Program (UROP)
Dr. Tyler Walter – Library
Dr. Lisa Yasek – Literature, Communication & Culture
Dr. Richard Meyer – Library
Dr. Kenneth Knoespel – Literature, Communication & Culture
Dr. Steven Girardot – Success Programs
Dr. Rebecca Burnett – Literature, Communication & Culture
Dr. Thomas Orlando – Chemistry & Biochemistry
Dr. Anderson Smith – Office of the Provost
Dr. Dana Hartley – Undergraduate Studies
Mr. John Toon – Enterprise Innovation Institute
Dr. Milena Mihail – Computing Science & Systems
Dr. Pete Ludovice – Chemical & Biomolecular Engineering
Dr. Han Zhang – College of Management
Mr. Michael Nees – Psychology
Mr. Jon Bodnar – Library
Ms. Marlit Hayslett – Georgia Tech Research Institute (GTRI)
Mr. Mac Pitts – Student Publications
Faculty Reviewers
Dr. Suzy Beckham – Chemistry & Biochemistry
Dr. Tibor Besedes – Economics
Dr. Wayne Book – Mechanical Engineering
Dr. Monica Halka – Honors Program & Physics
Dr. Melissa Kemp – Biomedical Engineering, GT/Emory
Dr. Narayanan Komerath – Aerospace Engineering
Mr. Michael Nees – Psychology
Dr. Manu Platt – Biomedical Engineering, GT/Emory
Dr. Lakshmi Sankar – Aerospace Engineering
Dr. Franca Trubiano – College of Architecture
Review Advisory Board
Dr. Ron Broglio – Literature, Communication & Culture
Dr. Amy D'Unger – History, Technology & Society
Dr. Pete Ludovice – Chemical & Biomolecular Engineering
Mr. Michael Laughter – Electrical & Computer Engineering
Dr. Lakshmi Sankar – Aerospace Engineering
Dr. Jeff Davis – Electrical & Computer Engineering
Mr. Michael Nees – Psychology
Dr. Han Zhang – College of Management
Dr. Franca Trubiano – College of Architecture
Dr. Milena Mihail – Computing Science & Systems
Dr. Rosa Arriaga – Interactive Computing
Mr. Jon Bodnar – Library
Ms. Marlit Hayslett – Georgia Tech Research Institute (GTRI)
Dr. Monica Halka – Honors Program & Physics
The Tower would also like to thank the Georgia Tech Board of Student Publications, the Undergraduate Research Opportunities Program, the Georgia Tech Library and Information Center, and the Student Government Association for their support and contributions.
Letters from the Editors
Seneca once said, "Every new beginning comes from some other beginning's end," and as I pen this last letter as the Editor-in-Chief of The Tower, the quote seems truer by the minute. Reflecting on my work with The Tower over the last three years, I am flooded with fond memories.
I remember the weekly Friday meetings where Henry (former Associate Editor for Production who had a knack for design), Joseph (former Webmaster who seemed able to translate any idea into code), and I worked until midnight in our corner office laying out articles, with an occasional Taco Bell break. Then there were the occasional lunch breaks where Martha, Henry, Erin, and I, despite the name of the meeting, skipped our lunch to hold a board meeting. I remember, of course, the second editorial board and our couch meetings, where we shared our personal moments and grew closer than we ever would have been able to by simply working.
This, I believe, is why The Tower saw such success in its first three years. Everyone involved in the journal had a personal connection either to the journal or to those working on it. Such an intimate working environment allowed each member's passion and motivation for the journal to propagate freely when the team needed it most. We performed not for the positions we each filled but for each other.
The Tower and I had a rather symbiotic relationship: as it grew, I grew. I sincerely hope that Michael, the new Editor-in-Chief, and his new editorial board will enjoy learning from the journal as much as I did. I have full faith that, upon my departure, Michael and his team will bring forth a new beginning in which they take the journal to a higher level of success.
I would like to thank each and every individual reviewer (faculty and student), production team member, and business team member who made this journal possible. I would especially like to thank our faculty advisor, Dr. Karen Harwell, for her indispensable wisdom that helped me grow into the editor that I needed to be, and our Director of Student Media, Mr. Mac Pitts, and his assistants, Ms. Nyla Hughes and Ms. Marlene Beard-Smith, for their support and guidance. Lastly, I would like to thank the Georgia Tech Board of Student Publications for its immense support, the Georgia Tech Library for providing us with an online platform that is integral to the workings of the journal, SGA for providing generous funding for our print journals, and the Omega Phi Alpha service sorority for providing us manpower when we needed it the most.
Thank you to all who were involved in The Tower, and I wish much luck to Michael and his new editorial board! I believe in you guys!
Best,
Chuyong Yi
Editor-in-Chief, 2008-2010
The first time I met Chuyong Yi, our outgoing Editor-in-Chief, she was interviewing me at the Barnes & Noble Starbucks. Chu was full of words and highly energetic, so I naturally thought it was the caffeine talking. After our next couple of encounters, I realized that this was actually her personality. During my time at The Tower, I saw that the journal was a reflection of our Editor-in-Chief's personality: bursting with enthusiasm.
Chu presided over the publication of our first two print journals. I would like to congratulate her for leading us through the greatest expansion our journal, The Tower, has ever seen. Under Chu, The Tower stayed true to its mission of showcasing undergraduate research and providing a learning tool to aspiring scientists. Our journal, initially started by a small group of undergraduates with a tremendous passion for academic research, now counts nearly 40 staff members. Without her hard work and the hard work of each of our staff, our goal of providing an outlet to undergraduate researchers would not be possible.
Sadly, another member of our family is leaving. Known for her tremendous enthusiasm for undergraduate research, our faculty advisor, Dr. Karen Harwell, is leaving Georgia Tech to pursue other career aspirations. A mentor to all of us here at The Tower, she will be sorely missed.
I hope to carry on the work of Chu, Dr. Harwell, and our founding members by increasing the publication frequency of our print journal and broadening the material that we publish. I plan to increase our collaboration with student publications dedicated to college-specific research news. I will work to increase our internet presence so that our journal is accessible anywhere, at any time. With the hard work of our top-notch staff and the advice of the Student Publications Board, I hope to expand The Tower to greater heights.
Special thanks to our staff reviewers, who endured reading submission after submission without rest; our production team, who spent countless hours designing the journal; and our business team members. Thanks to the editorial board for their repeated commitment to The Tower, the Library for providing us an online journal system, and the Student Government Association for making The Tower's print journal possible. Finally, I especially thank Mr. Mac Pitts, the Director of Student Media, for advising us in tough times, and his assistants, Marlene Beard-Smith and Nyla Hughes.
Keep on the lookout for new material from The Tower, whether online or in print. I encourage you to visit our website, gttower.org, for more information about what we do and how you can get involved.
Cheers,

Michael Chen
Editor-in-Chief, 2010-2011
Getting Involved
Call for Papers
The Tower is seeking submissions for our next issue. Papers may be submitted in the following categories:
Article – the culmination of an undergraduate research project; the author addresses a clearly defined research problem
Dispatch – reports recent progress on a research challenge; narrower in scope
Perspective – provides personal viewpoints and invites further discussion through literature synthesis and/or logical analysis
If you have questions, please e-mail review@gttower.org. For more information, including detailed submission guidelines and samples, visit gttower.org.
Cover Design Contest
This year's cover was designed by Esther Chung.
The Tower is looking for its next cover design. The submission deadline is February 5, 2011. The top design will win $50 and a t-shirt. Get creative! The template is available at gttower.org. Please avoid copyrighted images. Final designs should be submitted to production@gttower.org.
Staff
Want to be involved with The Tower behind the scenes? Become a member of the staff! The Tower is always accepting applications for new staff members. Positions in business, production, review, and web development are available. Visit gttower.org or email editor@gttower.org for more information on staff position availability.
Table of Contents

Perspectives
9  Value Sensitive Programming Language Design – Nicholas Marquez
17 Synthetic Biology: Approaches and Applications of Engineered Biology – Robert Louis Fee

Articles
25 Network Forensics Analysis Using Piecewise Polynomials – Sean Marcus Sanders
37 Characterization of the Biomechanics of the GPIbα-vWF Tether Bond Using von Willebrand Disease Causing Mutations R687E and WT vWF A1A2A3 – Venkata Sitarama Damaraju
47 Moral Hazard and the Soft Budget Constraint: A Game-Theoretic Look at the Primal Cause of the Sub-Prime Mortgage Crisis – Akshay Kotak
57 Compact Car Regenerative Drive Systems: Electrical or Hydraulic – Quinn Lai
71 Switchable Solvents: A Combination of Reaction and Separations – Georgina W. Schaefer
A programming language is a user interface. In designing a system's user interface, it is not controversial to assert that a thoughtful consideration of the system's users is paramount; indeed, consideration of users has been a primary focus of Human-Computer Interaction (HCI) research. General-purpose programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? In this paper we claim that the application of a particular design methodology from HCI – Value Sensitive Design – will be valuable in designing languages for non-programmers.
Value Sensitive Programming Language Design
Nicholas Marquez
School of Computer Science, Georgia Institute of Technology
Advisor: Charles L. Isbell
School of Computer Science, Georgia Institute of Technology
Introduction
A programming language is a user interface. In designing a system's user interface, it is not controversial to assert that a thoughtful consideration of the system's users is paramount. Though there is a large body of research from the Human-Computer Interaction (HCI) community studying just how best to consider a system's users in the design of its interface, there is little history of applying any of these methodologies to the design of general-purpose programming languages. Ken Arnold has argued that, since programmers are human, programming language design should employ techniques from HCI (Arnold 2005). While there has been some work in applying HCI to the design of languages for non-programmers, for example, for children's programming environments (Pane et al. 2002), general-purpose programming languages have not suffered much from a lack of HCI methodology in their design because programming languages have been designed by programmers, for programmers. In other words, programming language design has not had much need for disciplined HCI methodology because programming languages have been designed by programming language users themselves. But what happens when programmers design languages for non-programmers? How does the language designer know which design decisions to take? We claim that these questions can and should be answered with the help of a disciplined application of design methodologies developed in the HCI community.
We are designing a language for non-programmers who use computational models in the conduct of their non-programming work, in particular social scientists and game developers who write intelligent agent-based programs. Agent-based programming has, as one of its primary abstractions, "agents" that interact with each other and their environment asynchronously, maintain their own state, and are generally analogous to individual beings within an environment. In designing this language, we believe that working closely with our intended users is crucial to the development of tools that will meet their needs and be adopted. To guide our design interactions with our users we are applying the Value Sensitive Design (VSD) methodology from HCI (Friedman et al. 2006; Le Dantec et al. 2009). In this paper we give a short description of VSD and discuss how it may be applied to the design of our programming language. This work is currently at an early stage, and our understanding and application of VSD is evolving. Nevertheless, we believe that the application of HCI methodologies in general, and VSD in particular, will be extremely valuable in the development of languages and software tools intended for non-programmers, that is, professionals for whom programming is an important activity but not the primary focus of their work.

In the next section we provide a brief description of Value Sensitive Design (VSD); we then propose a way of applying VSD to programming language design and conclude with a discussion of how we are applying it in our own language design project.
Value Sensitive Design
In this section we briefly describe VSD as detailed in Friedman et al. (2006). We begin with their definition of VSD:

Value Sensitive Design is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner.

In this context, a value is something that a person considers important in life. Values cover a broad spectrum from the lofty to the mundane, encompassing things like accountability, awareness, privacy, and aesthetics –
anything a user considers important. While VSD uses a broader meaning of value than that used in economics, it is important to rank values so that conflicts can be resolved when competing values suggest different design choices.
VSD employs an iterative, interleaved, tripartite methodology that includes conceptual, empirical, and technical investigations. In the following sections we describe each of these types of investigation.
Conceptual Investigations
We think of conceptual investigations as analogous to domain modeling. In conceptual investigations we specify the components of particular values so that they can be analyzed precisely. We specify what a value means in terms useful to a programming language designer. Conceptual investigations may be done before significant interaction with the target audience takes place. As is characteristic of VSD, however, conceptualizations are revisited and augmented as the design process proceeds in an iterative and integrative fashion.

An important additional part of conceptual investigation is stakeholder identification. Direct stakeholders are straightforward – they are the people who will be writing code in your language using the tools you provide. However, it is important to consider indirect stakeholders as well. For example, working programs may need to be delivered by your direct stakeholders to third parties – these third parties constitute indirect stakeholders. The characteristics of indirect stakeholders will implicate values that must be supported in the design of your language. If the indirect stakeholders are technically unsophisticated, for example, then the language must support the delivery of code that is easy to install and run.
Empirical Investigations
Empirical investigations include direct observations of the human users in the context in which the technology to be developed will be situated. In keeping with the iterative and integrative nature of VSD, empirical investigations refine and add to the conceptualizations specified during conceptual investigations.

Because empirical investigations involve the observation and analysis of human activity, a broad range of techniques from the social sciences may be applied. Of all the aspects of VSD, empirical investigation is perhaps the most foreign to the typical technically focused programming language designer. However, as computational tools and methods reach deeper into realms not previously considered, we believe empirical investigations are crucial to making these new applications successful.
Technical Investigations
Technical investigations interleave with conceptual and empirical investigations in two important ways. First, technical investigations discover the ways in which users' existing technology supports or hinders their values. While these investigations are similar to empirical investigations, they are focused on technological artifacts rather than humans. The second important mode of technical investigation is proactive in nature: determining how systems may be designed to support the values identified in conceptual investigations.
Applying VSD to Programming Language Design
In this section we discuss the ways in which we are applying VSD to the design of a programming language. First we discuss the language itself and its target audience.
AFABL: A Friendly Adaptive Behavior Language
AFABL (an evolution of the Adaptive Behavior Language) integrates reinforcement learning into the programming language itself to enable a paradigm that we call partial programming (Simpkins et al. 2008). In partial programming, part of the behavior of an agent is left unspecified, to be adapted at run time. Reinforcement learning is an area of machine learning in which an agent learns to perform the actions in its environment that optimize (usually maximize) some notion of reward. Using the reinforcement learning model, the programmer defines the elements of behaviors – states, actions, and rewards – and leaves the language's runtime system to handle the details of how particular combinations of these elements determine the agent's behavior in a given state. AFABL allows an agent programmer to think at a higher level of abstraction, ignoring details that are not relevant to defining an agent's behavior. When writing an agent in AFABL, the primary tasks of the programmer are to define the actions that an agent can take, define whatever conditions are known to invoke certain behaviors, and define other behaviors as "adaptive," that is, to be learned by the AFABL runtime system.
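To make this division of labor concrete, the following sketch shows partial programming in miniature. It is ordinary Python rather than AFABL syntax (which is still evolving), and every name in it is illustrative: the programmer supplies the states, actions, and reward signal, while a generic tabular Q-learning loop stands in for the AFABL runtime and learns the behavior left unspecified.

    import random
    from collections import defaultdict

    # The programmer's part: declare states, actions, and a reward signal.
    STATES = range(5)      # agent positions along a line
    ACTIONS = [-1, +1]     # step left or step right
    GOAL = 4

    def reward(state):
        return 1.0 if state == GOAL else 0.0

    def step(state, action):
        return min(max(state + action, 0), GOAL)

    # The runtime's part: a generic tabular Q-learning loop learns the
    # behavior the programmer left unspecified.
    Q = defaultdict(float)
    alpha, gamma, epsilon = 0.5, 0.9, 0.2
    for episode in range(200):
        s = 0
        while s != GOAL:
            if random.random() < epsilon:                    # explore
                a = random.choice(ACTIONS)
            else:                                            # exploit current estimates
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = step(s, a)
            # Standard Q-learning update toward reward plus discounted future value.
            best_next = max(Q[(s2, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (reward(s2) + gamma * best_next - Q[(s, a)])
            s = s2

    # The learned policy steps right toward the goal from every state.
    print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])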
We are designing AFABL for social scientists and other agent modelers who are not programmers per se but who employ programming as a basic part of their methodological toolkit. We also hope to encourage greater use of agent modeling and simulation among practitioners who currently do not use agent modeling, and among agent modelers who would write more complex models if given the tools to do so more easily. Since these kinds of users have very different backgrounds from programmers, it is important to understand their needs and values in designing tools intended for their use. We believe that VSD will be one methodological tool among perhaps many that will help us understand our target audience and truly incorporate their input into the design process. In the next section we discuss how this process is taking place in the design of our language.
Using VSD in the Design of AFABL
We are working with two different groups who are currently using agent modeling in their work. The first group, the OrgSim group (Bodner and Rouse 2009), is a team of industrial engineers who are studying organizational decision-making using agent-based simulations as well as other, more traditional forms of simulation. The OrgSim group wants to model the human elements of organizations in order to create richer, more realistic models that can account for human biases, personality, and other factors that can be simulated only coarsely, if at all, using traditional simulation techniques. The second group is a team of computer game researchers creating autonomous intelligent agents that are characters in interactive narratives (Riedl et al. 2008; Riedl and Stern 2006).

Both of these groups use the most advanced agent modeling language available to them: ABL (A Behavior Language) (Mateas and Stern 2004). ABL was created by a games researcher in the course of his Computer Science Ph.D. to meet his needs in creating a first-of-its-kind game, an interactive drama. ABL was not designed with the help of, or with the goal of assisting, expert agent modelers who are not programmers. Naturally, both groups have met with difficulty in using ABL. By using VSD in working with these groups we hope to meet their needs with AFABL.
Conceptual Investigations of Agent Modelers
As described earlier, conceptual investigations yield working definitions of values that can be used in the design of technological artifacts – in our case the AFABL programming language. In our conceptual investigations thus far we have identified several values, whose conceptualizations as we currently understand them are listed below.
• Simplicity. Here simplicity has two essential features. First, AFABL must be consistent in its design, both internally and in the extent to which it exploits the users' current knowledge of programming. Internal consistency means that when users encounter a new language construct for the first time, they should be able to apply their knowledge of analogous constructs they already know. External consistency means that AFABL should use programming constructs that users already know from other languages and require users to learn as few new language constructs as possible.
• Power. A language is sufficiently powerful if it allows its programmers to reasonably and easily write all the programs they want to write in the language. If a language makes it hard to write certain types of programs, then those programs will usually not be written, thus limiting the scope of use of the language. Naturally, power trades off against simplicity, but simplicity at the expense of essential power is unacceptable to our target audience. In the design implications section below we discuss strategies for dealing with the power-versus-simplicity issue.
• Participation. Our user communities are eager to contribute to the design of AFABL and to its documentation and the development of best practices. We welcome this participation and believe that it will positively impact adoption, both among the users with whom we are already working and among new users who will be influenced by our early adopters. VSD directly supports and encourages this participation in the design process.
• Growth. The language we develop and the theoretical models of intelligent and believable agents that we employ today may not be the last words. It is important that AFABL be able to accommodate new models and applications.
• Modeling Support. A modeling tool imposes a structure on the way an agent modeler thinks about agents. AFABL should impose structure in a helpful way, if possible, and certainly should not hinder particular ways of thinking about agents.
Empirical Investigations of Agent Modelers
Solving a problem requires an understanding of the problem. The problem in our case is the experience of agent modelers in using the computational tools at their disposal. To understand the problems agent modelers face and their desiderata for computational tools, we are joining their teams and using their existing tools alongside them. In doing so we hope to gain an appreciation for the goals of their work, the expertise they bring to the task, and the difficulties they have in using existing tools to accomplish their goals. We hope to gain a level of empathy that will help us develop a language and tools that will meet their needs very well.
Technical Investigations of Agent Modelers
What do they already use? How do their existing tools support or hinder their values? What technology choices do we have at our disposal to support their values? These are the kinds of questions we address in technical investigations. In our case, there is a rich tapestry of software tools already in use by our users. These tools include virtual worlds (simulation platforms and game engines) and editing tools for the programs they currently write. Some of these tools are essential to their work and some may be replaced with tools we develop. One overriding value that stems from our users' existing tool base is interoperability. Any language or tool we develop must support interoperability with their essential tools.
Implications of Values on Programming Language Design
We are already familiar with many of the values supported by the general-purpose programming languages we use. C supports values like efficiency and low-level access. Lisp supports values like extensibility and expressiveness. Python supports simplicity. In this section we discuss how some of the values we identified above may impact the design of our language.
Interoperability. It is essential that our language and tools support interoperability with the virtual worlds currently in use. In our technical investigations we have found that the simulation platforms and game engines in use support Java-based interfaces, and many of them run on the Java Virtual Machine (JVM). Since these projects also use ABL, they have existing code bases that use the ABL run-time system and bridge libraries that enable communication between ABL agents and virtual worlds. These technical investigations lead us to the following design decisions for AFABL:
• AFABL will run on the JVM. Currently, we plan to implement AFABL as an embedded domain-specific language (EDSL) written in the Scala programming language (Odersky et al. 2008). This will allow us to interoperate well with Java programs and ABL while providing advanced language features in the design and implementation of our language.
• AFABL will use the ABL run-time system. While we have decided to depart from the syntax and language implementation of ABL, the agent model and run-time system of ABL represent a tremendous amount of valuable work that we wish to build on, not reinvent. Additionally, using the ABL run-time system will allow us to make use of the existing bridges between ABL agents and virtual worlds.
Simplicity and Power. Simplicity and power often oppose each other when making design decisions, so we discuss these values together. We hope to maximize both power and simplicity with the following language features:
• Provide a simple, consistent set of primitives and syntax while providing expressive power through first-class functions, closures, objects, and modules. Languages like Ruby and Python can be used by programming novices as well as expert programmers who use advanced expressive features such as iterators, comprehensions, closures, and metaprogramming. We intend to employ the same strategies in the design of AFABL (a brief illustration follows this list).
• Feel free to make presumptions / optimize for the common case. The great majority of the time, modelers will be using similar methods and approaches. There should be as little friction as possible between the modeler's thoughts and the compiler's input. Being able to make sound prejudgments about the programming language's users and the patterns of programming they exhibit is key to opening up a whole class of optimizations and simplifications that can help both the user and the compiler. In the context of AFABL, this means that we very much need to evaluate our design at every step with our target user base, employing, for example, VSD.
• Do not assume anything / keep uncommon and unforeseen cases possible. Close off parts of the language only where leaving them open would create great disparity among future implementations, or where necessary optimizations demand it. In the latter case (should the common case be in use), the alternate, optimized, but less extensible implementation can be used. One should not outright assume anything about the user (because this would restrict future ways in which the language could be used), and should take care to properly document and account for any presumptions. We must be careful not to focus AFABL too narrowly on our VSD-driven presumptions, lest we unintentionally restrict the ease of use of the language for other types of modelers and users.
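As a brief illustration of the first strategy above, consider how a language with a small, consistent core can serve both novices and experts. The Python sketch below (purely illustrative, not AFABL code) expresses the same computation twice: once with an explicit loop, and once with a closure and a comprehension, using no additional syntax.

    readings = [3, 9, 14, 2, 27, 8]

    # Novice style: an explicit loop, nothing beyond basic syntax.
    high = []
    for r in readings:
        if r > 8:
            high.append(r)

    # Expert style: a closure captures the threshold; a comprehension applies it.
    def above(threshold):
        return lambda value: value > threshold

    high_again = [r for r in readings if above(8)(r)]

    assert high == high_again == [9, 14, 27]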
Participation. Our users have expressed strong interest in contributing to the design, documentation, and practices of AFABL. To accommodate our users' desire for participation, we anticipate the following features of AFABL:
• Iterative language development. By designing AFABL around a small set of primitives, we hope to get the language into the hands of users early in its development. That way users can experiment with the language and provide feedback throughout its development. Put another way, AFABL will be developed with agile software development practices.
• User-accessible documentation system. Many languages already provide programmers with the means to automatically generate documentation from source code. Many language communities also provide user-accessible documentation systems, such as wikis and web forums, whereby users can share their knowledge and contribute directly to the documentation base of the language. We will employ similar mechanisms for AFABL.
Growth. New theories of agent modeling and new virtual worlds will be created in the future. To accommodate these changes, we will design AFABL for growth in two ways:

• Support new run-time systems. The ABL run-time system represents a particular way of modeling behavioral agents. It may be possible to support new agent theories by connecting AFABL with new run-time systems.
• Support the full range of operating system and JVM intercommunication. By providing a full set of intercommunication mechanisms, such as pipes, sockets, file system access, and JVM interoperability, AFABL should be able to accommodate new virtual world environments.
Conclusion
In this paper we have taken the position that design methodologies from the HCI research community can be of great benefit in the development of programming languages. Among the design processes we are employing, we have singled out Value Sensitive Design and described how it can be used in the design of programming languages and tools for a non-traditional population of programmers, in our case agent modelers such as social scientists and game designers.
Acknowledgements
I wish to thank David Roberts for suggesting the use of Value Sensitive Design, and Doug Bodner and Mark Riedl for allowing us to participate in their projects and for their help in designing AFABL.
References
Arnold, K. Programmers are people, too. Queue, 3(5):54-59, 2005.

Bodner, D.A. and Rouse, W.B. Handbook of Systems Engineering and Management, chapter Organizational Simulation. Wiley, 2009.

Friedman, B., Kahn, P.H., Jr., and Borning, A. Human-Computer Interaction in Management Information Systems: Foundations, chapter 16. M.E. Sharpe, Inc., NY, 2006.

Le Dantec, C.A., Poole, E.S., and Wyche, S.P. Values as lived experience: Evolving value sensitive design in support of value discovery. In Proceedings of the ACM Conference on Human Factors in Computing Systems (CHI 2009), Boston, MA, USA, April 2009.

Mateas, M. and Stern, A. Life-like Characters: Tools, Affective Functions and Applications, chapter A Behavior Language: Joint Action and Behavioral Idioms. Springer, 2004.

Odersky, M., Spoon, L., and Venners, B. Programming in Scala. Artima, 1st edition, 2008.

Pane, J.F., Myers, B.A., and Miller, L.B. Using HCI techniques to design a more usable programming system. In Symposium on Empirical Studies of Programmers (ESP02), Proceedings of the 2002 IEEE Symposia on Human Centric Computing Languages and Environments (HCC 2002), Arlington, VA, September 2002.

Riedl, M.O. and Stern, A. Believable agents and intelligent scenario direction for social and cultural leadership training. In Proceedings of the 15th Conference on Behavior Representation in Modeling and Simulation, Baltimore, Maryland, 2006.

Riedl, M.O., Stern, A., Dini, D., and Alderman, J. Dynamic experience management in virtual worlds for entertainment, education, and training. International Transactions on Systems Science and Applications, Special Issue on Agent Based Systems for Human Learning, 4(2), 2008.

Simpkins, C., Bhat, S., and Isbell, C.L., Jr. Towards adaptive programming: Integrating reinforcement learning into a programming language. In OOPSLA '08: ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, Onward! Track, Nashville, TN, USA, October 2008.
Synthetic biology is expected to change how we understand and engineer biological systems. Lying at the intersection of molecular biology, physics, and engineering, the applications of this exploding field will both draw from and add to many existing disciplines. In this perspective, recent advances in synthetic biology towards the design of complex, artificial biological systems are discussed.
Synthetic Biology: Approaches and Applications of Engineered Biology
Robert Louis Fee
School of Chemistry and Biochemistry, Georgia Institute of Technology
Advisor: Friedrich C. Simmel
School of Physics, Technical University Munich
The Rise of Synthetic Biology
Several remarkable hurdles in the life sciences were cleared during the last half of the 20th century, from the discovery of the structure of DNA in 1953, to the deciphering of the genetic code, the development of recombinant DNA techniques, and the mapping of the human genome. Scientists have routinely tinkered with genes for the last 30 years, even inserting a cold-water fish gene into wheat to improve weather resistance; thus, synthetic biology is by no means a new science. Synthetic biology is a means to harness the biosynthetic machinery of organisms on the level of an entire genome, to make organisms do things in ways nature has never done before.
Synthetic biology, despite its long history, is still in the early stages of development. The first international conference devoted to the field was held at M.I.T. in June 2004. Its leaders sought to bring together "researchers who are working to design and build biological parts, devices, and integrated biological systems; develop technologies that enable such work; and place this scientific and engineering research within its current and future social context" (Synthetic Biology 1.0, 2004). The field is growing quickly, as evidenced by the rapidly increasing number of genetic discoveries, the exploding number of research teams exploring the field, and the funding from government and industrial sources.
Akin to the descriptive-to-synthetic transformation of chemistry in the 1900s, biological synthesis forces scientists to pursue a "man-on-the-moon" goal that demands they discard erroneous theories and compels them to solve problems not encountered in observation. Data contradicting a theory can sometimes be excluded for the sake of argument, but doing the same while building a lunar shuttle would be disastrous. Synthetic biology comes at an important time; by creating analogous "man-on-the-moon" engineering goals in the form of synthetic bioorganisms, it is similarly driving scientists towards a deeper understanding of biology.
Applications of Engineered Organisms
Advances in synthetic biology are expected to yield applications too diverse and numerous to imagine. Applications of bioengineered microorganisms include detecting toxins in air and water, breaking down pollutants and dangerous chemicals, producing pharmaceuticals, repairing defective genes, targeting tumors, and more. In 2009, genomics pioneer Dr. Craig Venter secured a $600 million agreement with ExxonMobil to develop hydrocarbon-producing microorganisms as an alternative to crude oil (Borrell, 2009).
Scientists are engineering microbes to perform complex multi-step syntheses of natural products. Jay Keasling, a professor at the University of California, Berkeley, recently demonstrated genetically engineered yeast cells (Saccharomyces cerevisiae) that manufacture the immediate precursor of artemisinin, an antimalarial drug widely used in developing countries (Ro et al., 2006). Previously, this compound was chemically extracted from the sweet wormwood herb. Because the extraction is expensive and the wormwood herb is prone to drought, the availability of the drug is reduced in poorer countries. Once the engineered yeast cells were fine-tuned to produce high amounts of the artemisinin precursor, the compound could be made quickly and cheaply. The same method could be applied to the mass production of other drugs currently limited by natural sources, such as the anti-HIV drug prostratin and the anti-cancer drug taxol (Tucker & Zilinskas, 2006).
The most far-sighted effort in synthetic biology is the drive towards standardized biological parts and circuits. Just as other engineering disciplines rely on parts that are well described and universally used (like transistors and resistors), biology needs a toolbox of standardized genetic parts with characterized performance. The Registry of Standard Biological Parts comprises many short pieces of DNA that encode functional genetic elements called "BioBricks" (Registry of Standard Biological Parts). In 2008, the Registry contained over 2,000 basic parts, including sensors, input/output devices, regulatory operators, and composite parts of varying complexity (Greenwald, 2005). The M.I.T. group made the Registry free and public (http://parts.mit.edu/) and has invited researchers to contribute to the growing library.
Genetic parts include promoters, which initiate the transcription of DNA into mRNA; repressors, which encode proteins that block the transcription of another gene; reporter genes, which encode a readout signal; terminator sequences, which halt RNA transcription; and ribosome binding sites, which initiate protein synthesis. The goal is to develop a discipline-wide standard and source for creating, testing, and combining BioBricks into increasingly complicated functions while reducing unintended interactions.
To date, BioBricks have been assembled into a few simple genetic circuits (McMillen & Collins, 2004). One creates a film of bacteria that is sensitive to light so that it can capture images (Levskaya et al., 2005). Another operates as a type of battery, producing a weak electric current. BioBricks have also been combined into logic gate devices that execute Boolean operations such as AND, NOT, OR, NAND, and NOR. An AND operator creates an output signal when it receives a biochemical signal from both inputs; an OR operator generates an output if it receives a signal from either input; and a NOT operator inverts its input, turning a weak signal into a strong one and vice versa. This would allow cells to act as small programmable machines whose operations can be controlled through light or various chemical signals (Atkinson et al., 2003).
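The Boolean behavior of these devices can be pictured with a toy software model. In the Python sketch below, each chemical input is a concentration that either exceeds a detection threshold or does not, so the gates reduce to ordinary Boolean functions. This is an illustrative abstraction only, not a model of any specific BioBrick, whose real behavior is a continuous transfer curve.

    THRESHOLD = 0.5  # arbitrary detection threshold, purely illustrative

    def present(concentration):
        """Treat a chemical concentration as a Boolean signal."""
        return concentration > THRESHOLD

    def and_gate(a, b):
        # Output only when both biochemical inputs are present.
        return present(a) and present(b)

    def or_gate(a, b):
        # Output when either input is present.
        return present(a) or present(b)

    def not_gate(a):
        # Inverter: a weak input yields a strong output, and vice versa.
        return not present(a)

    def nand_gate(a, b):
        # Composed from the primitives above, as BioBricks compose into devices.
        return not_gate(1.0 if and_gate(a, b) else 0.0)

    print(and_gate(0.9, 0.7), or_gate(0.1, 0.8), not_gate(0.2), nand_gate(0.9, 0.9))
    # -> True True True False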
Despite the enormous progress of the last five years and some highly publicized and heavily funded feats, the systematic and widespread design of biological systems remains a formidable task.
Current Challenges
Standardization
Standards underlie engineering disciplines: measurements, gasoline formulation, machined parts, and so on. Certain biotechnology standards have taken hold in areas such as protein crystallography and enzyme nomenclature, but engineered biology lacks a universal standard for most classes of functions and for system characterization. One research group's genetic toggle switch may work in a certain strain of Escherichia coli in a certain type of broth, while another's oscillatory function may work in a different strain when cells are grown in supplemented minimal media (Endy, 2005). It is unclear whether the two biological functions can be combined, given the different operating parameters. The Registry of Standard Biological Parts and new Biofab facilities have recently emerged to begin addressing this issue, and a consensus is emerging on the best way to reliably build and describe the function of new genetic components.

Abstraction
Drawing again from other engineering disciplines, and specifically from the semiconductor industry, synthetic biology must manage the enormous complexity of natural biological systems through abstraction hierarchies. After all, writing "code" with DNA letters is comparable to creating operating systems by inputting 1's and 0's. Levels could be defined as DNA (genetic material), Parts (basic functions, such as a terminating sequence for an action), Devices (combinations of parts), and Systems (combinations of devices). Scientists should be able to work independently at each hierarchy level, so that
device-level workers would not need to know anything about phosphoramidite chemistry, genetic oscillators, and so on (Canton, 2005).

Figure 1. The Registry of Standard Biological Parts. This registry offers free access to basic biological functions that are used to create new biological systems. Pictured is a standard data sheet on a gene regulating transcription, with normal performance and compatibility measurements, plus an extra biological concern: system performance during evolution and cell reproduction. The registry is part of a conscious effort to standardize genetic parts in the hope of creating interchangeable components with well-characterized functions when implanted in cells. The project is open source; anybody can freely use and add information to the Registry.

Engineered Simplicity and Evolution
The rapid progress made by mechanical engineering in the last century was made possible by creating easily understandable machines. Engineered simplicity is helpful not only for repairs but for future upgrades and redesigns. While a modern automobile may seem complex, its complexity pales in comparison to that of a living cell, which has far more interconnected pathways and interactions. Cells evolved in response to a multitude of evolutionary pressures, and their mechanisms developed to be efficient, not necessarily easy to understand (Alon, 2003). A related problem is that other engineered systems do not evolve. Organisms such as E. coli reproduce and acquire genetic mutations within hours. While this offers possibilities to the biological engineer (for instance, human-directed evolution for fine-tuning organism behavior), it also increases the complexity of designing and predicting the function of these new genetic systems (Haseltine, 2007).

Risks Associated with Biological Engineering

Accidental Release
Researchers first raised concerns at the Asilomar Conference in California during the summer of 1975 and concluded that the genetic experiments of the time carried minimal risk. The past 30 years of experience with genetically manipulated crops have demonstrated that engineered organisms are less fit than their wild counterparts, and that they either die or eject their new genes without constant assistance from humans. However, researchers concluded that the abilities to replicate and evolve required special precautions. It was recommended that all researchers work with bacterial strains that are specially designed to be metabolically deficient, so that they cannot survive in the wild. Still, some have suggested that an incomplete understanding, and emergent properties arising from unforeseen interactions between new genes, could be problematic. Such dangers have given rise to fears of a dystopian takeover by super-rugged plants that overwhelm local ecosystems.

Bioterrorism
Research in synthetic biology may generate "dual-use" findings that could enable bioterrorists to develop new biological warfare tools that are easier to obtain and far more lethal than today's military bioweapons. The most commonly cited examples are the resurrection synthesis of the 1918 pandemic influenza strain by CDC researchers (Tumpey et al., 2005) and the possibility of recreating smallpox from easily ordered DNA (Venter, 2005). There is a growing consensus that not all sequences should be made publicly available, but the fact remains that such powerful recombinant DNA technologies could be used for harm.

Attempts to limit access to DNA synthesis technology would be counterproductive; a sensible approach might include some selective regulation while allowing research to continue. As SARS, avian influenza, and other infectious diseases emerge, these recombinant DNA techniques enhance our ability to manage such threats far beyond what was possible just 30 years ago. The revolution in synthetic biology is nothing less than a push on all fronts of biology, whether that impacts environmental cleanup, chemical synthesis using bacteria, or human health.

Conclusion
At present, synthetic biology's myriad implications can be glimpsed only dimly. The field clearly has the potential to bring about epochal changes in medicine, agriculture, industry, and politics. Some critics consider the idea of creating artificial organisms in the laboratory an example of scientific hubris, evocative of Faust or Dr. Frankenstein. However, the move from understanding biology to designing it for our requirements has always been part of the biological enterprise, and it has long been used to produce chemicals and biopharmaceuticals. Synthetic biology represents an ambitious new paradigm for building biosystems of rapidly increasing complexity, versatility, and application. The tools for engineering biology are being developed and distributed, and a societal framework is needed not only to help create a global community that celebrates biology but also to guide the enormously constructive invention of biological technologies.
Figure 2. Abstraction Hierarchy. Abstraction levels are important for managing complexity and are used extensively in engineering disciplines. As biological parts and functions become increasingly complex, writing 'code' with individual nucleotides is rapidly becoming more difficult. Currently, researchers spend considerable time learning the intricacies of every step of the process, and stratification would allow for specialization and faster development. Ideally, individuals could work at individual levels: one could focus on part design without worrying about how genetic oscillators work, while others string together parts to construct whole systems for possible biosensor applications. Image originally made by Drew Endy.
References
Synthetic Biology 1.0: The First International Meeting on Synthetic Biology. (2004). Massachusetts Institute of Technology.

Borrell, B. (2009, July 14). Clean dreams or pond scum? ExxonMobil and Craig Venter team up in quest for algae-based biofuels. Scientific American. Retrieved from http://www.scientificamerican.com/blog/60-second-science/post.cfm?id=clean-dreams-or-pond-scum-exxonmobi-2009-07-14

Ro, D.-K., Paradise, E.M., Ouellet, M., Fisher, K.J., Newman, K.L., Ndungu, J.M., . . . Keasling, J.D. (2006). Production of the antimalarial drug precursor artemisinic acid in engineered yeast. Nature, 440(7086), 940-943. doi: 10.1038/nature04640

Tucker, J.B., & Zilinskas, R.A. (2006, Spring). The promise and perils of synthetic biology. The New Atlantis, 12, 25-45.

Registry of Standard Biological Parts. Retrieved from http://parts.mit.edu/

Morton, O. (2005, January). How a BioBrick works. Wired, 13(01). Retrieved from http://www.wired.com/wired/archive/13.01/mit.html?pg=5

Hasty, J., McMillen, D., & Collins, J.J. (2002). Engineered gene circuits. Nature, 420(6912), 224-230. doi: 10.1038/nature01257

Levskaya, A., Chevalier, A.A., Tabor, J.J., Simpson, Z.B., Lavery, L.A., Levy, M., . . . Voigt, C.A. (2005). Synthetic biology: Engineering Escherichia coli to see light. Nature, 438(7067), 441-442. doi: 10.1038/nature04405

Atkinson, M.R., Savageau, M.A., Myers, J.T., & Ninfa, A.J. (2003). Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell, 113(5), 597-607.

Endy, D. (2005). Foundations for engineering biology. Nature, 438(7067), 449-453. doi: 10.1038/nature04342

Canton, B. (2005). Engineering the Interface Between Cellular Chassis and Integrated Biological Systems. Ph.D. thesis, Massachusetts Institute of Technology.

Alon, U. (2003). Biological networks: The tinkerer as an engineer. Science, 301(5641), 1866-1867. doi: 10.1126/science.1089072

Haseltine, E.L., & Arnold, F.H. (2007). Synthetic gene circuits: Design with directed evolution. Annual Review of Biophysics and Biomolecular Structure, 36(1), 1-19. doi: 10.1146/annurev.biophys.36.040306.132600

Tumpey, T.M., Basler, C.F., Aguilar, P.V., Zeng, H., Solorzano, A., Swayne, D.E., . . . Garcia-Sastre, A. (2005). Characterization of the reconstructed 1918 Spanish influenza pandemic virus. Science, 310(5745), 77-80. doi: 10.1126/science.1119392

Venter, J.C. (2005). Gene synthesis technology. Paper presented at the State of the Science, National Science Advisory Board on Biosecurity. http://www.webconferences.com/nihnsabb/july_1_2005.html
sPring 2010: the tower
The information transferred over computer networks is vulnerable to attackers. Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems. Electronic devices send communications across networks in the form of packets. Networks are typically represented using discrete statistical models, which are computationally expensive and utilize a significant amount of memory. A continuous piecewise polynomial model is proposed to address the shortcomings of discrete models and to further aid forensic investigators. Piecewise polynomial approximations are beneficial because sophisticated statistics are easier to perform on smooth continuous data than on unpredictable discrete data. Polynomials, moreover, utilize roughly six times less memory than a collection of individual data points, making this approach storage-friendly. A variety of networks have been modeled, and it is possible to distinguish network traffic using a piecewise polynomial approach.

These preliminary results show that representing network traffic as piecewise polynomials can be applied to the area of network forensics for the purpose of intrusion analysis. This type of analysis consists of not only identifying an attack, but also discovering details about attacks and other suspicious network activity by comparing and distinguishing archived piecewise polynomials.
Network Forensics Analysis Using Piecewise Polynomials
Sean Marcus Sanders
School of Electrical and Computer Engineering, Georgia Institute of Technology
Advisor: Henry L. Owen
School of Electrical and Computer Engineering, Georgia Institute of Technology
Introduction
Problem
Network forensics deals with the capture, recording, and analysis of network events to determine the source of security attacks and other network-related problems (Corey, 2002). One must differentiate malicious traffic from normal traffic based on patterns in the data transfers. Network communication is ubiquitous, and the information transferred over these networks is vulnerable to attackers who may corrupt systems, steal valuable information, and alter content. Network forensics is a critical area of research because, in the digital age, information security is vital. With sensitive information such as social security numbers, credit card information, and government records stored on networks, the potential threat of identity theft, credit fraud, and national security breaches increases. During July of 2009, North Korea was the main suspect behind a campaign of cyber attacks that paralyzed the websites of US and South Korean government agencies, banks, and businesses (Parry, 2009). As many as 10 million Americans a year are victims of identity theft, and it takes anywhere from 3 to 5,840 hours to repair the damage done by this crime (Sorkin, 2009). In order to effectively prosecute network attackers, investigators must first identify an attack and then gather evidence on it.
The process of identifying an attack on a network is known as intrusion detection. The two most popular methods of intrusion detection are signature detection and anomaly detection (Mahoney, 2008). Signature detection compares an archive of known attacks with current network traffic to discern whether or not there is malicious traffic. This technique is reliable on known attacks but is at a great disadvantage on novel attacks. Despite this disadvantage, signature detection is well understood and widely applied. Anomaly detection, on the other hand, identifies network attacks through abnormal activity. It is more difficult to implement than signature detection because it must both flag traffic as abnormal and discern the intent of that traffic; abnormal traffic does not necessarily imply malicious traffic.
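A minimal Python sketch of the signature approach, assuming a hypothetical archive of known-bad byte patterns, makes its strength and weakness plain: archived patterns are caught reliably, while a novel attack matches nothing.

    # Minimal sketch of signature-based intrusion detection. The signatures
    # and payloads below are hypothetical; real systems use far richer rules.
    SIGNATURES = {
        b"\x90\x90\x90\x90": "NOP sled",       # illustrative shellcode padding
        b"' OR '1'='1": "SQL injection",       # illustrative injection probe
    }

    def match_signatures(payload):
        """Return the names of all archived signatures found in a payload."""
        return [name for pattern, name in SIGNATURES.items() if pattern in payload]

    print(match_signatures(b"GET /login?user=' OR '1'='1 HTTP/1.1"))  # ['SQL injection']
    print(match_signatures(b"GET /index.html HTTP/1.1"))  # [] -- a novel attack also matches nothing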
Electronic devices such as notebooks and cellular phones communicate by transferring data across the Internet using packets. A packet is an information block that the Internet uses to transfer data. In most cases, the data being transferred across the Internet must be divided into hundreds, even thousands, of packets to be completely transferred. Similar to letters in a postal system, packets have parameters for delivery, such as a source address and a destination address. Packets include other parameters as well, such as the amount of data being sent in the packet and a checking parameter that ensures the data sent was not corrupted. The Internet is modeled as a discrete collection of individual data points because it uses individual packets to transfer data.
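The delivery parameters described above can be pictured as fields of a simple record. The Python sketch below is an illustrative simplification, not the actual binary layout of an IP or TCP header.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source: str        # sender's address, like a return address on a letter
        destination: str   # recipient's address
        length: int        # amount of data carried, in bytes
        checksum: int      # checking parameter used to detect corruption
        payload: bytes     # the fragment of data being transferred

    pkt = Packet("10.0.0.5", "93.184.216.34", 13, 0x1A2B, b"Hello, server")
    print(pkt.destination, pkt.length)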
Discrete processes are difcult to model and analyze as
opposed to continuous processes because there is not a
defnite link between two similar events. For example,
the concept of a derivative in calculus can only give a
logical result if the data is continuous. In many cases,
experimental results are given as discrete values. Scien-
tists, engineers, and mathematicians sometimes use the
least squares approximation to give a continuous model
of the data given. Continuous models that represent
discrete data are ofen preferred because they can be
used for diferent types of analysis such as interpolation
and extrapolation.
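As a minimal sketch of this idea in MATLAB (the data values and the second-order choice here are invented for illustration), a least squares fit turns discrete samples into a continuous model that supports interpolation and extrapolation:

x = 0:9;                                             % discrete sample locations
y = [2.1 2.9 4.2 5.8 8.3 10.9 14.2 18.1 22.3 27.4];  % hypothetical measurements
p = polyfit(x, y, 2);                                % least squares 2nd-order fit
y_between = polyval(p, 4.5);                         % interpolation between samples
y_beyond  = polyval(p, 12);                          % extrapolation beyond the samples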
Many forensic investigators use graphs and statistical methods, such as clustering, to model network traffic (Thonnard, 2008). These graphs and statistics help classify complex network activity into patterns. These patterns are typically stored and represented in a discrete fashion because networks transfer data in a discrete manner.
These patterns are used in combination with signature and anomaly detection techniques to identify network attacks (Shah, 2006). In many cases these network patterns are archived for extended periods of time. This storage of packets is needed to compare past network traffic with current traffic in order to effectively classify network events. Despite this necessity, storing packet captures is undesirable because they consume a significant amount of memory, a limited and costly resource. After a variable amount of time, the archived network data is deleted to free memory for future network patterns (Haugdahl, 2007). Detailed records of network patterns can be kept longer only by increasing the amount of free memory or decreasing the amount of archived traffic.
A continuous polynomial representation of a network is preferred to a discrete representation because discrete representations limit the types of analysis and statistics that can be performed. Polynomial approximations have limitations as well, such as failing to represent exact behavior, which can be vital depending on the system being modeled. To effectively differentiate traffic, a continuous polynomial approximation must be robust enough to reveal sufficient detail about network traffic. Polynomial representations of data should also require less memory than discrete representations. For instance, the polynomial y = x² could represent a million data points while occupying almost no memory. This observation is important because, in network forensics, memory storage space is a critical factor.
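A quick MATLAB check (not part of the original study) makes the storage argument concrete: one million samples of y = x² occupy about 8 MB each as double-precision vectors, while the three fitted coefficients occupy 24 bytes:

x = linspace(0, 1, 1e6);   % one million sample points (~8 MB of doubles)
y = x.^2;                  % discrete representation (~8 MB more)
p = polyfit(x, y, 2);      % polynomial representation: three coefficients
whos x y p                 % compare the Bytes column for each variable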
Related Work
Shah et al. (2006) applied dynamic modeling techniques to detect intrusions using anomaly detection. This particular form of modeling was used only for identifying intrusions, not for analyzing them or conducting a forensic investigation. Ilow et al. (2000) and Wang et al. (2007) both used modeling techniques to try to predict network traffic. Wang et al. took a polynomial approach that utilized Newton's Forward Interpolation method to predict and model the behavior of network traffic. This technique used interpolation polynomials of arbitrary order to approximate the dynamic behavior of previous network traffic. Wang et al.'s technique is useful for modeling general network behavior, but applying a polynomial approach to intrusion analysis is another issue. Wang et al. showed that general network behavior can be predicted and modeled using polynomials, but not whether individual network events can be distinguished and categorized through the use of polynomials.
Proposed Solution
Network data is discrete, scattered, and difficult to approximate; however, approximation and modeling techniques are necessary to define networks and to perform important statistics on the network data. Such statistics include the average amount of data each packet carries, the average rate at which packets arrive at a computer, and how many packets are lost before delivery. These values are used to classify network traffic as normal or malicious. When a system is approximated as a polynomial, it is faster to perform basic mathematical operations and statistics such as derivatives, integrals, standard deviation, and variance. The ease of computing a parameter allows for more efficient analysis of the data. Networks send an enormous amount of data each day, and precious time is required to process it. While a polynomial approximation is fairly accurate, forming a single long, complex polynomial is not practical for network forensics, since a network will seldom exhibit identical behavior in each session. Assuming each of the five segments of points shown in Figure 1 represents a network event (i.e., a web site visited), investigators can approximate and classify
the network activity. The network traffic modeled in all plots in this paper represents the same parameters. The x-axis represents the packet capture time, where the unit of time is not seconds (i.e., real time) but rather a time relative to the order in which the packets were captured. In other words, time in the context of this paper does not represent real time; it serves as a parameter for the data being modeled, the approximated packet data length. This parameter is referred to as time because the data being modeled is time dependent. The y-axis represents the data length of the captured packet in bytes. Throughout this paper the terms packet, capture time, and time will be used interchangeably. In reality, different network events require different amounts of time and different numbers of packets. For simplicity, all network events plotted in this paper are scaled so that each network event is modeled over an equal time interval.
If these segments were in a different order (i.e., the same web sites were visited in a different order), then the single polynomial in Figure 2 would not be able to compensate for these changes and would be unable to efficiently classify similar network traffic. Essentially, if this single polynomial method were applied, one would need 120 (5!) different polynomials to represent visiting five different websites in every possible order. To counter this issue, approximating network traffic with a piecewise polynomial is proposed. A single polynomial defines one function to represent the data for all units of time, while a piecewise polynomial defines separate functions on distinct time intervals and connects these pieces to form a single continuous representation of the data. This property of a piecewise polynomial is important in modeling network traffic because many different types of network events can occur. A piecewise polynomial can isolate and model the behavior of a single network event, while a single polynomial is limited
Figure 1. Plot of random discrete data.
Figure 2. Single polynomial approximation of the data represented in Figure 1.
to modeling clusters of events. Modeling event clusters is undesirable because it increases the difficulty of differentiating network traffic based on a single event. In such a scenario a malicious event could be clustered with a normal event, which could lead to a failure to identify an attack. A piecewise polynomial approximation should effectively classify every network event that has transpired using a unique piecewise approximation. The piecewise polynomial approximation of the data shown in Figure 1 is shown in Figure 3.
While both polynomial approximations in Figure 2 and Figure 3 can model the data represented in Figure 1, the piecewise polynomial (Figure 3) is clearly more accurate and robust than the single polynomial. A single polynomial should not be used to model more than one network event, because it cannot represent the individual network events it is composed of. If a sequence of 100 network events were defined using one single polynomial, it would be difficult to identify which network events behaved in a certain way. A piecewise polynomial model addresses this issue by modeling each network event as an individual polynomial. If the order of the network events (segments) were changed, the individual polynomials would simply occur at different time intervals, but each segment would remain the same. In other words, in a piecewise polynomial approximation each segment is represented by a distinct polynomial.
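A short MATLAB sketch (written for this discussion, not taken from the paper's Piecewise.m; the event shapes are invented) shows this key property: fitting each segment on its own local time axis yields coefficients that are unchanged when the events are reordered; only the interval where each polynomial applies moves:

t = (1:50)';                 % local time axis for one event
eventA = 0.02*t.^2 + 5;      % hypothetical concave-up event
eventB = 100 - 1.5*t;        % hypothetical decreasing event
pA = polyfit(t, eventA, 2);  % per-segment fit for event A
pB = polyfit(t, eventB, 2);  % per-segment fit for event B
% Visiting the events in the order B, A instead of A, B changes when each
% polynomial applies, but pA and pB themselves are identical either way.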
The basic concept is that while a network will not behave the same way all the time, it will behave the same way in certain pieces. If network traffic can be quantified using piecewise polynomials, investigators can apply signature and anomaly detection techniques to identify and investigate events from a forensics perspective. Piecewise polynomial approximations should be effective because they approximate the behavior pattern of a network with enough resolution to differentiate network traffic.
The primary goal is to test whether a piecewise polynomial approach can approximate network data with enough precision to distinguish network traffic. If there are no distinct differences between piecewise polynomial approximations of different network traffic, then this approach is not valid for this application. Conversely, if a piecewise polynomial approximation can effectively differentiate network traffic, then it can be applied to intrusion analysis, because intrusion analysis is primarily concerned with classifying traffic. This application is beneficial because polynomial-represented data should occupy less memory than discrete data, and polynomial data have fewer limitations on the types of analysis that can be performed.
Methodology
Tools and Algorithms
Wireshark was used to capture network traffic in packet
Figure 3. Piecewise polynomial plot of the data represented in Figure 1.
capture files. A packet capture is a collection of the network traffic that has made contact with a computer, stored in a packet capture (.pcap) file. Wireshark is an effective tool for capturing and filtering network traffic, but it does not allow for custom analysis. The libpcap library, which Wireshark uses, was therefore investigated in order to feed the captured traffic into a custom parsing algorithm. This algorithm opens .pcap files saved by Wireshark and extracts the source address, destination address, packet data length, and packet protocol into a format suitable for custom processing. After these aspects of each packet were extracted, they were saved in a comma separated value (.csv) file for processing in MATLAB. Although the parameters initially extracted (source address, destination address, packet data length, and packet protocol) are not sufficient to detect all malicious activity, they are a good starting point for a proof-of-concept implementation and analysis of this approach.
MATLAB was chosen for its versatility, variety of functions, and speed in processing large vectors. MATLAB has two built-in functions, polyfit() and polyval(), that respectively compute polynomial coefficients and evaluate polynomials from input data. In MATLAB, the input and output data of polyfit() and polyval() are represented as vectors. polyfit() uses the least squares approximation to estimate the coefficients of a best-fit, Nth-order polynomial for the given data vectors X and B. In statistics, the least squares approximation is used to estimate polynomial coefficients from discrete data. polyval() can best be viewed as a support function for polyfit(): it produces Y, the numerical values of the polynomial approximated by polyfit(). The relationship between polyfit() and polyval() is shown in equation 1.

P = polyfit(X, B, N)    (1)
Y = polyval(P, X)
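For instance (the packet lengths below are invented for illustration), fitting a third-order polynomial to a vector of packet data lengths and reconstructing the fitted curve looks like:

B = [98 120 98 512 1460 1460 512 98]';  % hypothetical packet data lengths (bytes)
X = (1:numel(B))';                      % relative packet capture times
P = polyfit(X, B, 3);                   % coefficients of the 3rd-order fit
Y = polyval(P, X);                      % fitted packet lengths at each capture time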
Piecewise.m is a custom script written in MATLAB. Essentially, Piecewise.m uses polyfit() and polyval() to create piecewise polynomials. The script was designed to use packet data length as the parameter on the y-axis and packet capture time as the parameter on the x-axis.
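The source of Piecewise.m is not reproduced in the paper. A minimal sketch of the behavior it describes, assuming the capture has already been split into per-event segments whose start indices the investigator supplies, might look like the following (the function name and interface are hypothetical):

function [coeffs, fitted] = piecewise_fit(t, len, starts, order)
% piecewise_fit  Fit one polynomial of the given order per segment.
%   t, len : packet capture times and packet data lengths
%   starts : index where each segment begins (last segment runs to the end)
fitted = zeros(size(len));
nSeg = numel(starts);
coeffs = cell(nSeg, 1);
for k = 1:nSeg
    lo = starts(k);
    if k < nSeg, hi = starts(k+1) - 1; else, hi = numel(t); end
    coeffs{k} = polyfit(t(lo:hi), len(lo:hi), order);  % per-segment least squares
    fitted(lo:hi) = polyval(coeffs{k}, t(lo:hi));      % evaluate that segment
end
end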
Important Decisions and Causes for Error
An important parameter in approximating the data is the order of the polynomial. Typically, the higher the order of the polynomial, the more accurate the approximation; in an approximation of network behavior patterns, though, modeling exact behavior is unnecessarily complex, whereas approximating behavior is more useful. Thus, the orders of the piecewise polynomials are chosen manually based on the predicted complexity of the network traffic: more complex traffic should be approximated with a higher order polynomial than less complex traffic. This assumption is used to designate the order of a polynomial given the type of network being modeled. Network traffic was also modeled using different orders to determine the effect that changing the order has on the approximation of traffic.
When approximating polynomials, it is important to ensure that there are enough data points to create a reliable approximation. For example, a first-order polynomial fit to a single data point would be meaningless, because at least two points are needed to define a line. The general rule is that the accuracy of the polynomial approximation depends directly on the order of the polynomial and the number of data points used to define it; the number of data points must be at least one more than the desired order to yield an accurate approximation. In most cases, the higher the order of the polynomial, the more accurate the approximation. On the other hand, a polynomial of too high an order may yield unrealistic results. Thus it is important to find an order that yields results that are both accurate and realistic.
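This point-count rule can be checked directly in MATLAB (a toy check, not from the paper): polyfit warns when the requested order is not strictly below the number of data points, because the coefficients are then not uniquely determined.

x = [1 2 3];  y = [2 4 9];   % three data points (values invented)
p_ok  = polyfit(x, y, 2);    % order 2 with 3 points: uniquely determined
p_bad = polyfit(x, y, 3);    % order 3 needs at least 4 points; MATLAB warns
                             % that the polynomial is not unique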
Experiments
Closed/Controlled Network Behavior
The first step in determining whether a polynomial can accurately approximate and differentiate network behavior is to analyze the behavior of a closed/controlled network. As opposed to open networks, closed networks are not connected to the Internet. The closed network designed for this experiment was composed of two Macbooks, with four virtual machines distributed across the two Macbooks. Figure 4 gives a visual representation of the designed closed network.
A virtual machine is a software implementation of a machine that executes programs like a physical machine. Virtual machines operate on a separate partition of a computer and run their own operating system. Due to hardware limitations, a virtual machine and its physical host do not execute commands simultaneously. From a networking perspective this is not a problem because, once connected, networks use protocols to send, and sometimes regulate the flow of, network traffic. In other words, the network does not know that a virtual machine is operating on a physical machine, and it thus supports multiple simultaneous network connections.
Packet captures were performed using Wireshark on the ethernet interface of the Macbook running three virtual machines. A variety of packet captures were made to compare and contrast network behavior using web pages. If the resulting piecewise polynomials can effectively compare and contrast network traffic based on various behaviors, then the polynomial approximation will be considered a success. The packet capture files are described below.
• Idleclosed.pcap — a .pcap file that captures the random noise present when the network is idle.
• Icmpclose.pcap — a .pcap file composed primarily of ping commands from one Macbook to the other. Ping commands test whether a particular computer is reachable across a network by sending packets of identical length to that computer and waiting for a reply.
• Httpclose.pcap — a .pcap file that includes a brief ping command sent from one Macbook to the other but is dominated by HTTP traffic (basic website traffic). This file also includes a period of idle behavior where the network is at rest.
• Packet Capture A — a .pcap file that contains the network data for visiting a specific site hosted on one Macbook.
Figure 4. Visual representation of the designed closed network with virtual machines (VMs circled).
• Packet Capture B — a separate .pcap file that contains the network data for visiting, at a different time, the same site visited in Packet Capture A, hosted on the same Macbook.
Idleclosed.pcap and Icmpclose.pcap yield piecewise polynomials that model the behavior of idle and ping traffic, respectively. These piecewise polynomials should identify both the idle and the ping behavior found in Httpclose.pcap. The piecewise polynomials that model two separate .pcap files visiting the same pages (i.e., Packet Capture A and Packet Capture B) should resemble each other in behavior. A second-order piecewise polynomial is used for the closed network analysis because closed network events are assumed not to be especially complex. Higher orders are avoided wherever possible for the reasons explained in Important Decisions and Causes for Error.
Open/Internet Network Behavior
While experimenting with a controlled network is useful, a network that is connected to the Internet behaves differently from one that is not. To investigate a more realistic scenario, one Macbook was used to make different packet captures under conditions similar to those in Closed/Controlled Network Behavior, but with a connection to the Internet. The packet capture files are described below.
• Internet.pcap — a .pcap file that contains network data captured while actively browsing the Internet.
• Packet Capture C — a .pcap file that contains the network data for visiting a sequence of three web sites on the Internet in a particular order (google.com, gatech.edu, and facebook.com).
• Packet Capture D — a separate .pcap file that contains the network data for visiting the same web sites as Packet Capture C but in a different order (gatech.edu, facebook.com, and google.com).
Internet.pcap was used to show the effect the order of a polynomial has on the approximation, because it contains the most complex network traffic. Packet Capture C and Packet Capture D were used to determine whether different web sites exhibit distinguishable behavior under piecewise and single polynomial models. These models test the proposed benefit of piecewise polynomials over single polynomials, similar to the example in the Proposed Solution. Fourth-order piecewise and single polynomials are used for the open network analysis, as opposed to second order, because open network events are assumed to be more complex than closed network events.
Results
Closed Network Analysis
Ping Analysis. In the closed network case, as defined in Closed/Controlled Network Behavior, Httpclose.pcap and Icmpclose.pcap both contained the same type of ping traffic, captured in different files at different times. The piecewise polynomial describing this traffic in both packet captures was the constant 98, meaning that every packet captured had a data length of 98 bytes. A constant piecewise polynomial is an acceptable result because the ping command repeatedly sends packets of identical length to a single destination.
Traffic Analysis. Packet Capture A and Packet Capture B are two different .pcap files that captured the same network activity over approximately the same time interval, and they are represented as second-order piecewise polynomials. According to Figure 5, the two packet captures are represented in a very similar manner. This result is interesting because, while the representations are similar, they are not identical. The mismatch is not damaging, as Figure 5 shows the relationship between the two data files. The
relationship is that the first segments of both captures are constant around the same value, while the second segments are both decreasing, concave down, and share similar values.
Traffic Analysis of Open Networks
The similar packet capture files, Packet Capture C (the upper plot) and Packet Capture D (the lower plot), are plotted in Figure 6 using a fourth-order single polynomial. Figure 7 shows the same captures plotted using a fourth-order piecewise polynomial.
Packet Capture C visits google.com first, followed by gatech.edu, and ends with facebook.com, while Packet Capture D visits gatech.edu first, followed by facebook.com, and ends with google.com. Figure 7 shows that the piecewise polynomial gives each website visited a unique behavior that can be identified by visual inspection. Google.com behaves in a sinusoidal manner, gatech.edu is represented as a concave-down parabola, and facebook.com exhibits strongly linear behavior with a small positive slope. Although the three web sites visited can be clearly identified in Figure 7, this is not the case in Figure 6. In the single polynomial approximation the data looks relatively uniform, and it is difficult to discern which part of the polynomial represents which website. This result shows that different network events can be approximated and distinguished using a piecewise polynomial approach, whereas a single polynomial approximation is not sufficient to distinguish network events.
Significance of Order
Internet.pcap was plotted using zeroth, second, and fifth orders to discern the effect order has on the approximation of a polynomial.
Figure 8 shows that the higher the order of the
Figure 5. Second-order relationship of similar packet capture files.
Figure 6. Single polynomial comparison plots of similar out-of-order traffic.
polynomial, the more detail is revealed about the network. Despite revealing more detail, Figure 8 does not show which order yields better results; it is shown only to illustrate the effect order has on the approximation of network traffic. More detail is not necessarily better, because an approximation with too much detail may not be robust enough to identify similar future network traffic and may be difficult to interpret.
Memory Savings
Internet.pcap was saved in two separate files: one using Internet.pcap's polynomial representation, and one using its representation as a collection of individual data points (i.e., packets). The polynomial file was 12 KB, while the individual data point file was 72 KB. This size difference indicates that saving network
Figure 7. Piecewise polynomial comparison of similar out-of-order traffic.
Figure 8. Internet.pcap plots of varying orders.
traffic as polynomials instead of as a collection of individual points saves memory.
Discussion of Results
The plots in Results are intended to show whether piecewise polynomials can effectively differentiate and link network traffic. The ping traffic analyzed in Ping Analysis was approximated by piecewise polynomials that exhibited constant behavior. Although this result is desired, ping traffic is the simplest type of network traffic and is not sufficient to prove the validity of a piecewise polynomial approach. Traffic analysis of the closed network yielded results similar to the ping analysis, successfully differentiating and linking network traffic. Although the closed network analysis was a success, in reality most network traffic occurs on the Internet; thus the open network results are of primary interest.
The open network single polynomial approximation was unable to differentiate and link network events, as shown in Figure 6. The plot in Figure 6 shows two similar curves for differently ordered network traffic. Although this result is not desired, it was expected that a single polynomial approximation would not classify out-of-order traffic effectively. Conversely, Figure 7 shows that a piecewise polynomial approximation was able to distinguish each section of the captured network traffic. These results show that a piecewise polynomial approximation can be used to classify and differentiate network traffic.
Memory storage is also of primary concern when modeling network data. The Internet packet capture shows that the discrete representation of the data used 72 KB of storage, while the polynomial representation used 12 KB. This result suggests that polynomial representations use roughly one-sixth the memory of discrete representations, indicating that storing network traffic as polynomials instead of as collections of individual points saves significant memory. This outcome is important in network forensics because network events can be archived longer than before, allowing more extensive and detailed investigations.
Conclusion
Networks can be approximated using piecewise polynomials with enough detail to aid forensic investigators. The precision of the approximation depends directly on the order of the polynomial used to approximate the data; in general, the higher the order, the more detail is revealed. Networks behave differently, and therefore every network analyzed needs its own set of polynomials to approximate its respective network events. The use of piecewise polynomials is also beneficial because polynomials use roughly one-sixth the memory of individual data points.
Future Work
Piecewise polynomials will be applied to network forensics for intrusion analysis. This analysis will require a collection of known data classified as either malicious or normal. Also, more information about packets will have to be quantified in order to further classify and distinguish network traffic, because approximating packet lengths and protocols is not sufficient for a thorough analysis. The malicious data will be modeled as piecewise polynomials and used for signature detection; the normal network traffic will likewise be modeled as piecewise polynomials and used for anomaly detection.
Future research also includes identifying what certain traffic patterns represent, such as web browsing, video streaming, or file downloading. This classification of network events will enhance a forensic investigator's ability to quickly determine what events have transpired on a network.
Acknowledgements
This research was conducted with the guidance of Kevin Fairbanks and Henry Owen and supported in part by a Georgia Tech President's Undergraduate Research Award as part of the Undergraduate Research Opportunities Program. This research was also supported in part by the Georgia Tech Department of Electrical and Computer Engineering's Opportunity Research Scholars Program.
References
Corey V, Peterman C, Shearin S, Greenberg MS, Bokkelen J (2002) Network Forensics Analysis. IEEE Internet Computing, http://computer.org/internet/, December 2002.
Haugdahl JS (2007) Network Forensics: Methods, Requirements, and Tools. www.bitcricket.com, November 2007.
Ilow J (2000) Forecasting Network Traffic Using FARIMA Models with Heavy Tailed Innovations. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Volume 6, IEEE Computer Society, Washington DC, pp. 3814-3817.
Mahoney MV, Chan PK (2008) Learning Models of Network Traffic for Detecting Novel Attacks. Florida Institute of Technology, www.cs.fit.edu/~mmahoney/paper5.pdf.
Parry RL (2009) North Korea Launches Massive Cyber Attack on Seoul. The Times, http://www.timesonline.co.uk/tol/news/world/asia/article6667440.ece, July 2009.
Shah K, Jonckheere E, Bohacek S (2006) Dynamic Modeling of Internet Traffic for Intrusion Detection. EURASIP Journal on Advances in Signal Processing, Volume 2007, Hindawi Publishing Corporation, May 2006.
Sorkin (2009) Identity Theft Statistics. spamlaws.com.
Thonnard O, Dacier M (2008) A Framework for Attack Pattern Discovery in Honeynet Data. The International Journal of Digital Forensics and Incident Response, Science Direct, Baltimore, Maryland, August 2008.
Wang J (2007) A Novel Associative Memory System Based Modeling and Prediction of TCP Network Traffic. Advances in Neural Networks, Springer Berlin, July 2007.
Platelet aggregation plays an important role in controlling bleeding by forming a hemostatic plug in response to vascular injuries. GPIbα is the platelet receptor that mediates the initial response to vascular injuries by tethering to von Willebrand factor (vWF) on exposed subendothelium. When this occurs, platelets roll and then firmly adhere to the surface through the GPIIb-IIIa integrin present on the platelet surface. A hemostatic plug then forms by the aggregation of bound and free platelets, which seals the injury site.
vWF is a multimer of many monomers, each containing eleven domains. In this experiment, the biomechanics of two constructs drawn from these domains, gain of function (GOF) R687E vWF-A1 and wild type (wt) vWF-A1A2A3, were studied using videomicroscopy under varying shear stresses. The experiment used a parallel flow chamber coated along one surface with the vWF ligand. A solution containing platelets or Chinese Hamster Ovary (CHO) cells was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) and cell-ligand interactions were recorded.
Results showed that GOF R687E vWF exhibited slip bond behavior with increasing shear stress, whereas wt A1A2A3 vWF displayed a catch-slip bond transition with varying shear stresses. Interestingly, wt A1A2A3 vWF displayed two complete cycles of catch-slip bond behavior, which could be attributed to the structural complexity of the vWF ligand. However, more experiments need to be performed to further substantiate these claims. Information on the bonding behavior of each vWF construct can aid understanding of the biomechanics of the entire vWF molecule and associated diseases.
Characterization of the Biomechanics of the GPIbα-vWF Tether Bond Using von Willebrand Disease Causing Mutations R687E and wt vWF A1A2A3
Venkata Sitarama Damaraju
School of Biomedical Engineering
Georgia Institute of Technology
Advisor:
Larry V. McIntire
School of Biomedical Engineering
Georgia Institute of Technology
Introduction
Circulating platelets play an important role in healing vascular injuries by tethering, rolling, and adhering to the vascular surface in response to injury. Under normal physiological conditions, platelets respond to a series of signaling events that cause bound platelets to aggregate and spread across the exposed surface to form a hemostatic plug (Andrews, 1997). These responses are mediated by receptor-ligand interactions between the platelet and the molecules exposed on the surface. GPIbα is the platelet receptor that mediates this initial response to vascular injuries. In arteries this response is initiated when platelet receptor GPIbα tethers to von Willebrand factor, a blood glycoprotein, on exposed subendothelium, the surface between the endothelium and the artery membrane. When GPIbα initially tethers to von Willebrand factor (vWF), platelets first roll and then firmly adhere to the surface through the GPIbα and GPIIb-IIIa receptors present on the platelet. GPIbα and GPIIb-IIIa are the first two platelet receptors to interact with the vWF molecule (Kroll, 1996). Aggregation of bound platelets with additional platelets from the plasma forms a hemostatic plug that seals the injury site (Ruggeri, 1997).
Mutations in either of these binding partners can alter the initial step of the vascular healing process. Diseases associated with these mutations are called von Willebrand diseases (VWD); the mutations can either decrease (loss of function) or enhance (gain of function) the binding activity between the GPIbα and vWF molecules. VWD results in a platelet dysfunction that can cause nose bleeding, skin bruises and hematomas, prolonged bleeding from trivial wounds, oral cavity bleeding, and excessive menstrual bleeding. Though rare, severe deficiencies in vWF can have symptoms characteristic of hemophilia, such as bleeding into joints or soft tissues including muscle and brain (Sadler, 1998).
vWF is a multimer of many monomers, each containing eleven (11) domains (Figure 1) (Berndt, 2000). In this experiment, the biomechanics of two constructs drawn from these domains, gain of function (GOF) R687E vWF-A1 and wild type (wt) vWF-A1A2A3, were studied. The biomechanics of the GPIbα-vWF tether bond of these molecules was studied using videomicroscopy in parallel plate flow chamber experiments. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand (Figure 2). Fluid
Figure 1. The vWF molecule. It is a multimer of many monomers, each containing 11 domains. Image adapted from Sadler.
containing either platelets or Chinese Hamster Ovary cells was perfused at varying shear stresses across this ligand-coated surface, and the interactions were recorded using high speed videomicroscopy. Analysis of these interactions with cell tracking software gave insight into the bond lifetime of the cells and helped suggest the type of bond present (Yago, 2004).
Studying the biomechanics of individual vWF domains allows a better understanding of the whole vWF molecule and, more importantly, VWD. With this enhanced understanding of vWF, better and more accurate treatments for VWD can be designed in the future. This knowledge can also be used in studying and preventing life-threatening thrombosis and embolism.
Materials and Methods
All materials were obtained from the McIntire laboratory stock room. Proper sterile techniques and precautions were used for each of the following procedures.
Cells Used
Either Chinese Hamster Ovary (CHO) cells or fresh platelets were used to study interactions with the vWF ligand. Fresh platelets were isolated from blood donors an hour before the experiment. For CHO cells, two specific lineages, αβ9 and β9, were used. CHO αβ9 cells contain the specific receptor that interacts with the vWF ligand, whereas β9 cells do not. Hence, β9 cells served as a control group when CHO cells were used instead of platelets.
Preparation of Growth Media
Two types of growth media were prepared for the two types of CHO cells, αβ9 and β9. Both media formulations consisted of alpha-Minimum Essential Medium (α-MEM) solution (with 2 mM L-glutamine and NaHCO3), 10% Fetal Bovine Serum (FBS) solution, penicillin solution (50X), streptomycin solution (50X), G-418 (Geneticin) solution (50 mg/mL), and methotrexate powder. The only difference between the two media types was the addition of hygromycin B solution (50 mg/mL) to the αβ9 media.
Passaging Cells
Proper sterile techniques and precautions were used while passaging CHO αβ9 and CHO β9 cells. CHO cells were cultured in 75 cm² flasks and incubated at 37° Celsius and 5% CO2 using the growth media prepared above. The cells were passaged every 2-3 days in order to maintain 80-90% confluency at all times.
Figure 2. Parallel plate flow chamber setup. The floor (bottom) plate was a 35-mm tissue culture dish coated with vWF ligand. Fluid containing either CHO cells or platelets expressing GPIbα was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) across the ligand-coated surface; interacting platelets roll along the coated floor while non-interacting platelets flow freely past toward the upper chamber surface.
Hepes-Tyrode Buffer Formulation
Hepes-Tyrode buffer (also referred to as 0% Ficoll) was
prepared by mixing the following chemicals in pure deionized water until completely dissolved: sodium chloride (135 mM), sodium bicarbonate (12 mM), potassium chloride (2.9 mM), sodium phosphate monobasic (0.34 mM), Hepes (5 mM), D-glucose (5 mM), and BSA (1% weight per volume). CHO cells and platelets were suspended in this buffer for the flow chamber experiments. This solution of cells and buffer was pumped through the flow chamber at various shear stresses.
Ficoll Solution
A more viscous Hepes-Tyrode buffer was prepared by adding 6% Ficoll (weight per volume). The final viscosity was 1.8 times that of Hepes-Tyrode buffer. This Ficoll solution is also referred to as 6% Ficoll.
Parallel Plate Flow Chamber Experiments
A parallel plate flow chamber was used in this experiment. One of the two surfaces of the flow chamber was a 35-mm tissue culture dish coated with the vWF ligand. Fluid containing either CHO cells or platelets was perfused at varying shear stresses (0.5 dynes/cm² to 512 dynes/cm²) across the ligand-coated surface (Figure 2). The interactions were recorded as 4-second videos at 250 frames/second using high speed videomicroscopy (Figure 3). The parallel plate flow chamber setup was maintained at 37° Celsius for all experiments.
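The paper does not state how each target shear stress was produced. For a parallel plate flow chamber, the standard relation between wall shear stress and volumetric flow rate (a textbook formula assumed here, not quoted from the authors) is

τw = 6µQ / (w h²),

where µ is the fluid viscosity, Q the volumetric flow rate, w the chamber width, and h the gap height; each target shear stress therefore corresponds to a particular pump flow rate.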
Tracking Cell Interactions
MetaMorph Offline software was used to track the interactions captured with videomicroscopy. Each 4-second video was opened in the software and a square was drawn around the CHO cell or platelet (hereafter referred to as the "cell") of interest. Each cell was tracked for at least 250 continuous frames (1 second). In addition, the video was observed to ensure that no other cell bumped into the cell of interest while it was being tracked.
Data Analysis
The tracking results from MetaMorph Offline were saved as number strings in Microsoft Excel and processed in MATLAB to compute the mean rolling velocity for each shear stress. The mean rolling velocity indicates how fast a cell rolls while interacting with the vWF ligand at a given shear stress. These rolling velocities were graphed against shear stress in Microsoft Excel on a logarithmic scale.
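The analysis script itself is not reproduced in the paper; a minimal MATLAB sketch of the computation described, assuming pos is an N-by-2 matrix of tracked centroid positions in micrometers at 250 frames/second (a hypothetical, cleaned-up form of the MetaMorph export), might look like:

fps = 250;                          % videomicroscopy frame rate
step = sqrt(sum(diff(pos).^2, 2));  % distance moved between frames (um)
v = step * fps;                     % instantaneous rolling velocity (um/s)
v_mean = mean(v);                   % mean rolling velocity at this shear stress
sem = std(v) / sqrt(numel(v));      % standard error of the mean (error bars)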
Results
GPIbα and von Willebrand factor (vWF) interactions were recorded using videomicroscopy in parallel plate flow chamber experiments. These interactions were then tracked using MetaMorph Offline for at least 250 continuous frames (1 second).
Figure 3. A snapshot of the videomicroscopy recording of CHO cells interacting on a vWF-coated surface. Non-rolling cells flow freely over the surface without interacting; in contrast, rolling cells visibly flow more slowly as they interact with the vWF ligand.
Figure 4. Mean rolling velocity of platelets on gain of function (GOF) R687E A1. The x-axis represents the logarithmic shear stress (dynes/cm²) and the y-axis the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The increasing velocity suggests slip bond behavior of GOF R687E.
Figure 5. Mean rolling velocity of platelets on gain of function (GOF) R687E A1, from a separate experiment with a different donor. Axes and error bars as in Figure 4. The increasing velocity again suggests slip bond behavior of GOF R687E.
The results from MetaMorph Offline were processed in MATLAB to compute the mean rolling velocities for each shear stress used (0.5 dynes/cm² to 512 dynes/cm²). To learn about the GPIbα-vWF tether bond interaction, mean rolling velocities were plotted versus shear stress for each individual experiment.
Plotting the results for platelets interacting on the gain of function (GOF) mutant R687E vWF-A1 molecule revealed a trend of increasing mean rolling velocity with increasing shear stress (Figure 4). The x-axis represents the logarithmic shear stress (dynes/cm²) and the y-axis the mean rolling velocity (µm/s). The error bars are the standard error of the mean (SEM), calculated by dividing the standard deviation by the square root of the number of samples (stdev/√N).
Intuitively, with increasing shear stress the bond lifetime decreases for each individual bond (a one-to-one molecular interaction between GPIbα and the vWF ligand), causing the mean rolling velocity to increase at higher shear stresses. This increase in mean rolling velocity is characteristic of a slip bond interaction, because the molecules tend to "slip off" the ligand more readily at higher shear stress than at lower shear stress.
A separate experiment performed with platelets from a different donor on GOF R687E vWF showed a similar slip bond interaction (Figure 5). Although fewer data points were collected in this experiment, it showed a similar increase in mean rolling velocity with increasing shear stress. A statistical analysis of the two data sets revealed a Pearson correlation coefficient of 0.98 and a p-value greater than 0.05 for a paired t-test, indicating no significant difference between them. The reproducibility of this trend affirmed the slip bond characteristic of the GOF R687E vWF-A1 molecule.
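As an illustration of that analysis (the velocity values below are invented, and ttest requires the Statistics Toolbox), the two checks could be reproduced in MATLAB as:

v1 = [12 10 9 11 14 18 25]';   % hypothetical donor-1 mean velocities by shear
v2 = [13 10 8 12 15 19 26]';   % hypothetical donor-2 mean velocities by shear
R = corrcoef(v1, v2);          % Pearson correlation; R(1,2) is near 1 here
[h, p] = ttest(v1, v2);        % paired t-test; p > 0.05 indicates no significant
                               % difference between the paired data sets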
Outliers at high shear stress are attributed to the fact that bond lifetime decreases significantly at higher shear stress. As a result, fewer platelets interact at those shears, and thus fewer data points were collected at higher shear stresses than at lower ones. This is reflected in the large SEM bars for data points at the high shear stress end. Similarly, mean rolling velocities at the lowest shear stress are also variable because of
Figure 6a and 6b. Mean rolling velocity of platelets on wt A1A2A3 vWF. The y-axis in both 6a and 6b represents the mean rolling velocity (µm/s); the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b the logarithmic shear rate (s⁻¹). Error bars represent the standard error of the mean (SEM). The cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.
the difficulty of distinguishing interacting platelets from non-interacting platelets. For both experiments, platelets were suspended in Hepes-Tyrode buffer.
In contrast, platelets interacting on the wild type (wt) A1A2A3 vWF molecule (Figures 6a and 6b) showed a different trend from GOF R687E vWF. The y-axis in both Figures 6a and 6b represents the mean rolling velocity (µm/s); the x-axis in 6a represents the logarithmic shear stress (dynes/cm²) and in 6b the logarithmic shear rate (s⁻¹). The error bars represent the SEM. As the graphs illustrate, the mean rolling velocity initially decreased, then increased, decreased again, and finally increased with increasing shear stress (and shear rate). This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand. A decrease in mean rolling velocity correlates with an increase in the lifetime of individual bonds, indicating a catch bond because the platelet is "caught" by the ligand. Likewise, an increasing mean rolling velocity implies a decrease in individual bond lifetime, a slip bond interaction. Platelets on wt A1A2A3 vWF exhibited two complete cycles of catch-slip bond interaction over the range of shear stresses measured (0.5 dynes/cm² to 512 dynes/cm²). For this particular experiment, platelets were suspended in Hepes-Tyrode buffer (0% Ficoll) and in 6% Ficoll solution. Suspending platelets in a more viscous solution made it possible to verify whether the catch-slip bond was force dependent or transport dependent.
A similar catch-slip bond interaction was observed with Chinese Hamster Ovary (CHO) cells interacting on wt A1A2A3 vWF (Figure 7). Although fewer data points were collected for this experiment, it still demonstrated two cycles of decreasing and then increasing mean rolling velocity with increasing shear stress. No statistical analysis was performed between these two results because CHO cells carry isolated GPIbα receptors, whereas platelets have many molecules on their surface; the mean rolling velocities are therefore not comparable.
Discussion
Fresh platelets and wild type (wt) Chinese Hamster Ovary (CHO) cells were perfused over gain of function (GOF) R687E vWF or wt A1A2A3 vWF in order to study aspects of the GPIbα-vWF tether bond. Parallel plate flow chamber experiments were identical for each vWF molecule; the only difference was whether the fluid passing through contained platelets or wt CHO cells. All rolling interactions were observed at 250 frames per second.
It was previously found that wild type-wild type (wt GPIbα on wt vWF) interactions differ from wt-GOF (wt GPIbα on GOF vWF) interactions. An additional experiment (Appendix A, Figure A1) shows platelets on wt vWF-A1. This graph shows a transition of bonding behavior from a region of decreasing rolling velocity to one of increasing rolling velocity as the shear stress increases. This trend is indicative of a catch-slip bond transition, because the rolling velocity decreases (catch behavior) and then increases (slip behavior) with increasing shear stress.
However, results from platelets on GOF R687E vWF (Figures 4 and 5) showed an increase in rolling velocity with increased shear stress, indicating only slip bond behavior. This suggests that a catch bond governs low-force binding between wt GPIbα and wt vWF-A1, whereas a slip bond governs binding of GOF R687E at high shear stresses. One possible reason is a differential force response of the bond lifetime.
Results for platelets rolling on wt A1A2A3 vWF (Figures 6a and 6b) showed two complete cycles of bonding
behavior transitioning from a region of decreasing rolling velocity to one of increasing rolling velocity as shear stress increased. This cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand. When the rolling velocity decreases, it indicates a catch bond, suggesting the bonds are stuck, or caught, on the ligand and hence slow the cell. Likewise, when the rolling velocity increases, it indicates a slip bond, because the bond comes off, or slips off, much more quickly and consequently increases the rolling velocity. Based on previous knowledge (Figure A1), this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the wt A1A2A3 vWF ligand. However, the two observed complete cycles of catch-slip bonding might be due to the structural complexity of the complete A1A2A3 vWF ligand.
Also, the viscosity of the fluid in which platelets were suspended was increased 1.8-fold. Comparing the results from the two solutions (0% Ficoll and 6% Ficoll) helped determine whether the catch-slip bond interaction was force dependent or transport dependent. Figures 6a and 6b show a boxed
Figure 7. Mean rolling velocity of CHO cells on wt A1A2A3 vWF. The x-axis represents the logarithmic shear stress (dynes/cm²) and the y-axis the mean rolling velocity (µm/s). Error bars represent the standard error of the mean (SEM). The cycle of decreasing and increasing mean rolling velocity is indicative of a catch-slip bond interaction between GPIbα and the vWF ligand.
Figure A1. Results of platelets on wt-A1 vWF. The y-axis in both the left and right plots represents the mean rolling velocity (µm/s); the x-axis in the left plot represents the logarithmic shear stress (dynes/cm²) and in the right plot the logarithmic shear rate (s⁻¹). These plots show a catch-slip bond interaction: the rolling velocity decreases and then increases with increasing shear stress (and shear rate).
region where the data points for the two solutions (0% Ficoll and 6% Ficoll) align or overlap when plotted together. Since the shear stress data align better than the shear rate data, force, which sets the shear stress, probably governs this catch-slip bond interaction.
A similar catch-slip bond interaction was observed between wt CHO cells and wt A1A2A3 vWF (Figure 7). The bond behavior transitions from a region of decreasing rolling velocity to a region of increasing rolling velocity. CHO cells carry isolated GPIbα receptors, which allows the GPIbα receptor's contribution to the rolling velocity to be isolated, since platelets have many molecules on their surface. Thus, this trend can be attributed to the GPIbα receptor's interactions with the vWF molecules and the A1A2A3 structure.
Overall, the bond behaviors of the two vWF constructs, GOF R687E and wt A1A2A3, were successfully characterized. Although the bonding trends of the vWF ligand appear clear, more testing will help further substantiate these claims. Based on the results, the next step would be to assess how these bond types adversely affect platelet aggregation in the presence of a vascular injury. Determining the adverse effects of the different bond types in each vWF domain will further the understanding of VWD and its causes, and could potentially lead to a treatment.
Conclusion
Valuable information on the GPIbα-vWF tether bond, specifically for GOF R687E vWF and wt A1A2A3 vWF, was acquired from the four sets of experiments performed. Results from two experiments revealed pure slip bond behavior for platelets rolling on GOF R687E (Figures 4-5). Statistical analysis showed a strong correlation and a p-value greater than 0.05 between the two experiments involving GOF R687E vWF, confirming the reproducibility of the slip bond behavior. This slip bond behavior is attributed to the differential force response of the bond lifetime between GPIbα and the GOF vWF ligand with increasing shear stress.
In addition, studying platelets and wt CHO cells on wt A1A2A3 vWF revealed two complete cycles of catch-slip bond behavior (Figures 6-7). Based on previous knowledge, this catch-slip bond behavior can be identified with the presence of the wt A1 domain in the A1A2A3 ligand. However, the two cycles of catch-slip bond behavior may be due to the structural complexity of the A1A2A3 vWF ligand.
In future studies, more experiments need to be performed with wt A1A2A3 vWF on platelets and CHO cells in order to confirm the reproducibility of the results achieved. More data is needed to support the claim that the two cycles of catch-slip bonding can be attributed to the structural complexity of the A1A2A3 vWF ligand. Similarly, more experiments involving GOF R687E vWF on platelets and CHO cells will further substantiate the slip bond behavior of GOF vWF. Studying the biomechanics and bond behavior of each domain of the vWF molecule will allow a better understanding of vWF and VWD.
References
Andrews RK, Lopez JA, Berndt MC (1997) Molecular Mechanisms of Platelet Adhesion and Activation. International Journal of Biochemistry and Cell Biology 29: 91-115.
Berndt M, Ward CM (2000) Platelets, Thrombosis, and the Vessel Wall. Vol. 6. Harwood Academic.
Kroll M, Hellums D, McIntire L, Schafer A, Moake J (1996) Platelets and Shear Stress. The Journal of The American Society of Hematology 88.5: 1525-1541.
Ruggeri Z (1997) Von Willebrand Factor - Cell Adhesion in Vascular Biology. The American Society for Clinical Investigation 99: 559-564.
Sadler J (1998) Biochemistry and Genetics of von Willebrand Factor. Annual Reviews 67: 395-424.
Yago T, Wu J, Wey C, Klopocki A, Zhu C, McEver R (2004) Catch Bonds Govern Adhesion Through L-Selectin At Threshold Shear. The Journal of Cell Biology 166: 913-924.
This paper addresses one of the major causes of the sub-prime mortgage crisis prevalent in large American mortgage houses by the end of 2006. The moral hazard scenario and consequent malpractices are examined with respect to the soft budget constraint. The analysis first considers the Dewatripont and Maskin model (1995) and then modifies it to fit the scenario at a typical mortgage lender. This simple model provides useful insight into how heightened bailout expectations, created by precedent actions of the Federal Reserve, fueled risky behavior at banks that thought themselves "too-large-to-fail."
Moral Hazard and the Soft Budget Constraint: A Game-Theoretic Look at the Primal Cause of the Sub-Prime Mortgage Crisis
Akshay Kotak
School of Economics and School of Industrial & Systems Engineering
Georgia Institute of Technology
Advisor:
Emilson C. Silva
School of Economics
Georgia Institute of Technology
Introduction
Over the last two decades there has been considerable interest in the study of financial crises and instability, owing largely to the prevalence of financial crises in the recent past. As Alan Greenspan observed, after the collapse of the Soviet Bloc at the end of the Cold War, market capitalism spread rapidly through the developing world, largely displacing the discredited doctrine of central planning (Greenspan 2007). This abrupt transition led to explosive growth that was at times too hot to handle and inadequately controlled, causing several crises in the Third World, most notably in East Asia in 1997 and Russia in 1998. Additionally, there have been periods of economic tumult in the developed world, including the near collapse of Japan in the 1990s, the bailout of Long Term Capital Management by the Federal Reserve in 1998, and most recently, the subprime mortgage crisis of 2007-08.
As Dimitrios Tsomocos highlights in his paper on financial instability, "[t]he difficulty in analyzing financial instability lies in the fact that most of the crises manifest themselves in a unique manner and almost always require different policies for their tackling" (Tsomocos 2003). Most explanations, however, are modeled on a game-theoretic framework involving a moral hazard scenario brought about by asymmetric information. This choice of framework has been popular because of its ability to predict equilibrium behavior (under reasonable assumptions) for a given scenario and to explain, qualitatively and mathematically, why and when deviations from this behavior occur.
This paper aims to perform a similar introductory analysis of one of the underlying causes of the current global economic crisis, subprime mortgage lending activity in the US from 2001-07, in light of the soft budget constraint (SBC). The soft budget constraint syndrome, identified by János Kornai in his study of the economic behavior of centrally-planned economies (1986), has been used to explain several phenomena and crises in the capitalist world. While initially used to explain shortage in socialist economies, the SBC has since been used to provide explanations for the Mexican crisis of 1994, the collapse of the banking sector in East Asian economies in the 1990s, and the collapse of the Long Term Credit Bank of Japan.
The soft budget constraint syndrome is said to arise when a seemingly unprofitable enterprise is bailed out by the government or its creditors. This injection of capital in dire situations 'softens' the budget constraint for the enterprise: the amount of capital it has to work with is no longer a hard, fixed amount. There is a host of literature, primarily developed from a model designed by Mathias Dewatripont and Eric Maskin, that focuses on the moral hazard issues brought about when a government or central bank acts as the lender of last resort to financial institutions (Kornai et al. 2003).
Background
The subprime mortgage crisis of 2007 was marked by a sharp rise in United States home foreclosures at the end of 2006 and became a global financial crisis during 2007 and 2008. The crisis began with the bursting of the speculative bubble in the US housing market and high default rates on subprime adjustable-rate mortgages made to higher-risk borrowers with lower income or shorter credit histories than prime borrowers.
Several causes for the proliferation of this crisis to all sectors of the economy have been delineated, including excessive speculative investment in the US real estate market, the overly risky bets investment houses placed on mortgage-backed securities and credit swaps, inaccurate credit ratings and valuations of these securities, and the inability of the Securities and Exchange Commission to monitor and audit the level of debt and risk borne by large financial institutions. It would be fair to
say, however, that one of the most fundamental causes of the entire debacle was the lending practices prevalent in US mortgage houses by the end of 2006 and the free hand given to these lenders to continue those practices. While securitization produced complex derivatives from these mortgages that were incorrectly valued and risk-appraised, it was ultimately the misguided decisions made by mortgage lenders that caused default rates to rise when the housing bubble burst, eroding the value of the underlying assets and setting off a chain reaction in the financial sector.
With housing prices on the rise since 2001, borrowers were encouraged to assume adjustable-rate mortgages (ARMs) or hybrid mortgages, believing they would be able to refinance on more favorable terms later. However, once housing prices started to drop moderately in 2006-2007 in many parts of the U.S., refinancing became more difficult. Defaults and foreclosures increased dramatically as ARM interest rates reset higher. During 2007, nearly 1.3 million U.S. housing properties were subject to foreclosure activity, up 75% versus 2006 (US Foreclosure Activity 2007).
Primary mortgage lenders had passed a lot of the de-
fault risk of subprime loans to third party investors
through securitization, issuing mortgage-backed securi-
ties (MBS) and collateralized debt obligations (CDO).
Terefore, as the housing market soured, the efects
of higher defaults and foreclosures began to tell sig-
nifcantly on fnancial markets and especially on major
banks and other fnancial institutions, both domesti-
cally and abroad. Tese banks and funds have reported
losses of more than U.S. $500 billion as of August 2008
(Onaran 2008). Tis heavy setback to the fnancial sec-
tor ultimately led to a stock markets decline. Tis dou-
ble downturn in the housing and stock markets fuelled
recession fears in the US, with spillover efects in other
economies, and prompted the Federal Reserve to cut
down short term interest rates signifcantly, from 5.25%
in August ’07 to 3.0% in February ’08 and subsequently
down all the way to 0.25% in December ’08 (Historical
Changes, 2008).
As the single largest mortgage financing institution in the US, Countrywide Financial felt the heat of the subprime crisis more than many of the other affected financial institutions. Faced with the double whammy of a housing market crash and a stiff credit crunch, the company found itself in a downward spiral, with a rise in readjusted mortgage rates increasing the number of foreclosures, which eroded profits.
In the case of Countrywide Financial and other large finance corporations that considered themselves "too-large-to-fail," the expectation of downside risk coverage was raised to a level that promoted substantial risk-taking. This expectation was based on precedent actions by the Federal Reserve in bailing out distressed large firms – dubbed the Greenspan (and now, the Bernanke) put. Thomas Walker (2008), in his article in The Wall Street Journal, aptly says,
There is tremendous irony, and common sense, in the realization that multiple successful rescues of the financial system by the Fed over several decades will eventually create a risk-taking culture that even the Fed will no longer be able to single-handedly save, at least not without serious inflationary consequences or help from foreigners to avoid a dollar collapse. Eventually the culture will overwhelm the ability of the authorities to make it all better.
Ethan Penner of Nomura Capital provides a succinct and apt description of the moral hazard dilemma: "Consequences not suffered from bad decisions lead to lessons not learned, which leads to bigger failings down the road" (Penner 2008).
The Dewatripont-Maskin (DM) Model
Mathias Dewatripont and Eric Maskin developed a model in 1995 to explain the softening of the budget constraint under centralized and decentralized credit markets (Dewatripont & Maskin 1995). The simplest version of their model is a two-period model, with the key players being a banker that serves as the source of capital to each of a set of entrepreneurs that require funding to undertake projects. At the beginning of period 1, each of the entrepreneurs chooses a project to submit for financing, and projects may be defined as one of two types: good or poor. The probability of a project being good is α. The asymmetry in information lies in the fact that, once selected, only the entrepreneur knows the type of the project, i.e. the banker is unable to monitor the project beforehand. The entrepreneur has no bargaining power, and the banker, if he decides to finance the project, makes a take-it-or-leave-it offer.
Set-up funding costs 1 unit of capital. The banker is able to learn the nature of a project once he funds set-up during period 1. A good project, once funded, yields a monetary return R_g (> 0) and a private benefit B_g (> 0) for the entrepreneur by the beginning of period 2; private benefits may include intangibles such as reputation enhancement. A poor project, on the other hand, yields a monetary return of 0 by the beginning of period 2. If the banker ends up dealing with a poor project, he has two options at the beginning of period 2. He can liquidate the project's assets to obtain a liquidation value L (> 0), in which case the entrepreneur earns a private benefit B_L (< 0), since liquidation implies a loss of reputation. Alternatively, he can refinance the project, which requires the injection of another unit of capital at the beginning of period 2; the gross return is then R_p and the private benefit to the entrepreneur is B_p (> 0).
A graphical representation of the timing and structure of the DM model is shown in Figure 1.
Figure 1. The Structure of the Dewatripont-Maskin Model (Source: Kornai et al., 2003)
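To make the mechanism concrete, the following minimal sketch encodes the banker's period-2 choice. The decision rule — refinance a poor project whenever the return net of the extra unit of capital exceeds the liquidation value — is the standard reading of the DM setup; the numerical values are purely illustrative assumptions.

```python
# Sketch of the banker's ex post decision when a project turns out poor.
# The rule (refinance iff Rp - 1 > L) follows the DM setup described above;
# all numerical values below are illustrative assumptions.

def banker_period2_choice(Rp, L):
    """Refinancing costs one more unit of capital and yields gross return Rp;
    liquidation yields L immediately."""
    return "refinance" if Rp - 1 > L else "liquidate"

print(banker_period2_choice(Rp=1.6, L=0.4))  # refinance: 0.6 > 0.4
print(banker_period2_choice(Rp=1.2, L=0.4))  # liquidate: 0.2 < 0.4

# Anticipating refinancing, an entrepreneur with a poor project still submits
# it, since the private benefit Bp > 0 while liquidation would yield BL < 0.
```

When the first branch obtains, refinancing is ex post rational even though funding the poor project was ex ante unprofitable — exactly the softening described above.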
The fairly simple model proposed by Dewatripont and Maskin, when suitably tweaked, may be used to explain a number of phenomena in both capitalist and socialist economies. The model was originally designed to assess how decentralizing the credit market (under some fairly reasonable assumptions about the comparative nature of R_g and R_p) will harden the budget constraint — making markets more efficient — by giving entrepreneurs an incentive not to submit poor projects for financing.
For specific application to this study, the model will be used to study the moral hazard scenario that comes about when financial institutions consider themselves "too-large-to-fail." These institutions, Long Term Capital Management in the late 1990s and, more recently, Countrywide Financial, are insured to some measure in the sense that their multi-billion-dollar positions can affect financial markets so heavily that, in the case of a
downturn, large private banks, the central bank, or the government would be forced to bail them out to avoid a financial meltdown. This insurance against downside risk stimulates the moral hazard scenario and gives these financial institutions an incentive to make much riskier bets with higher potential returns.
Methodology
The game-theoretic model used in this study has two key players – the borrowing entity ("borrower") and the lending entity ("the bank"). Additionally, the study looks at the effects of the presence of a lender of last resort. Borrowers, assumed to be identical, can choose from two types of loans offered by the bank – a fixed rate loan with principal L_f and an adjustable rate loan with principal L_a. Customer utility U(x) takes the typical concave functional form – increasing with decreasing marginal returns (i.e. U′ > 0, U″ < 0) – and is simplified in this model to be the natural logarithm function.
The fixed rate loan has an interest rate r_f. The adjustable rate loan is assumed to have an initial low fixed interest rate r_0^a, which is readjusted after a period λ. The remainder of the adjustable rate loan is paid off at the rate determined at the end of period λ. If market conditions are good at this time, the interest rate is adjusted to r_1^g, and if they are bad, the rate is adjusted to r_1^b. Market conditions are represented in the model by an exogenous variable θ, which is the probability of market conditions being good, i.e. of the interest rate resetting to r_1^g.
The bank and customer convene before a loan is offered to discuss the terms of the ARM. Based on the bank's expectations about the economy (i.e. θ) and the values of r_1^b and r_1^g, the bank and the customer decide on a fixed initial rate r_0^a and a period λ for which the loan is kept fixed. The computation for λ also involves a parameter, δ, which reflects the increase in default rate under bad market conditions. This revenue shrinkage factor (δ) can be thought of as an indicator of the bank's downside risk coverage. In the current framework, it is affected by two key factors:
1. Collateral requirements: Higher collateral would imply more downside risk coverage (i.e. higher δ) but would also reduce the quantity of loans demanded, since fewer people would be able to pay the required collateral for the same loan. The bank would therefore weigh the benefit (potential revenue) of additional loans against the cost (increased risk) to choose the ideal collateral requirement for the ARM. This cost-benefit analysis is, however, outside the scope of this study, and δ is therefore assumed to be exogenous.
2. Bail-out expectations: Increased expectations of a bail-out (i.e. a cash injection in case of bad market conditions) would also raise the value of δ, but without shrinkage in loan demand.
The game is played between borrowers and the bank, with equilibrium reached by the bank setting λ such that borrowers are indifferent between the two loans, and the borrowers opting for a mixed strategy. The indifferent borrower chooses a fixed rate loan with a probability α such that the expected payoff from either loan is the same for the bank.
This study analyzes the equilibrium of this game under two scenarios – with and without the presence of a lender of last resort. The presence of a lender of last resort who is expected to bail the lender out with a cash injection increases the (perceived) value of δ even though the level of protection offered to the bank through collateral remains the same. So, in this case, the revenue shrinkage for the second collection period is reduced (Figure 2).
Figure 2. Readjustment of adjustable rate loans after the initial period, λ.
The optimal loan amount for a fixed rate loan (L_f*) maximizes net utility for the borrower. Net utility is the difference between the utility gained from the loan amount and the total interest paid over the lifetime of the loan. The borrower therefore solves

$$\max_{L_f} \left\{ U(L_f) - r_f L_f \right\}$$

i.e.

$$\max_{L_f} \left\{ \ln(L_f) - r_f L_f \right\}$$

which yields

$$L_f^* = \frac{1}{r_f} \qquad (1)$$

With an adjustable rate loan, the interest payment for the average borrower would be

$$L_a \left[ \lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right) \right]$$

Therefore, the optimal loan amount for an adjustable rate loan is

$$L_a^* = \frac{1}{\lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right)} \qquad (2)$$

In order to ensure that a mixed strategy is employed at equilibrium, i.e. to have 0 < α < 1, the bank sets λ such that borrowers are indifferent between fixed and adjustable rate loans:

$$U(L_f^*) - r_f L_f^* = U(L_a^*) - \left[ \lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right) \right] L_a^*$$

Substituting values from (1) & (2), we obtain

$$-\ln(r_f) - 1 = -\ln\left[ \lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right) \right] - 1$$

i.e.

$$r_f = \lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right)$$
giving us,

$$\lambda^* = \frac{r_f - \left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right)}{r_0^a - \left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right)} \qquad (3)$$
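As a numerical sanity check on equations (1)–(3), the short sketch below evaluates them with the interest rates later used in Figure 3; the values of θ and δ are illustrative assumptions.

```python
# Interest rates as in Figure 3; theta and delta are illustrative choices.
r_f, r0_a, r1_g, r1_b = 0.062, 0.045, 0.072, 0.10
theta, delta = 0.8, 0.5

# Expected post-reset rate, with bad-state revenue shrunk by delta.
post_reset = theta * r1_g + (1 - theta) * delta * r1_b

# Equation (3): lambda* that leaves borrowers indifferent between the loans.
lam = (r_f - post_reset) / (r0_a - post_reset)

# Equations (1) and (2): optimal loan sizes under log utility.
L_f = 1 / r_f
L_a = 1 / (lam * r0_a + (1 - lam) * post_reset)

print(f"lambda* = {lam:.3f}")                 # ~0.248, inside (0, 1)
print(f"L_f* = {L_f:.2f}, L_a* = {L_a:.2f}")  # equal: the net rates coincide
```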
Analysis
In deriving equation (3) above, we also find that, at equilibrium, the net interest rate charged for a fixed loan and for an adjustable loan is the same, i.e.

$$r_f = \lambda\, r_0^a + (1-\lambda)\left(\theta\, r_1^g + (1-\theta)\,\delta\, r_1^b\right)$$
Since all interest rates and parameters are positive, and since r_0^a is assumed to be less than r_f, the above can only hold true if

$$r_0^a < r_f < \theta\, r_1^g + (1-\theta)\,\delta\, r_1^b \qquad (4)$$

Also, it must hold that

$$r_0^a < r_f < r_1^g < r_1^b \qquad (5)$$
This is derived from equation (4) and from the fact that, as market conditions worsen, liquidity becomes harder to obtain and the cost of debt therefore increases.
The equilibrium behavior of interest is how λ changes with the exogenous parameters θ and δ. The rate of change of λ with respect to θ is

$$\frac{\partial \lambda}{\partial \theta} = \frac{\left(r_f - r_0^a\right)\left(r_1^g - \delta\, r_1^b\right)}{\left(r_0^a - \delta\, r_1^b - \theta\left(r_1^g - \delta\, r_1^b\right)\right)^2} \qquad (6)$$
Given condition (4), the sign of the above expression depends on the sign of (r_1^g − δ·r_1^b). Therefore, if

$$\delta < \frac{r_1^g}{r_1^b}$$
then the right hand side of equation (6) is positive, implying that an increase in the probability of good market conditions causes an increase in the amount of time that the loan is kept at the low fixed rate r_0^a. This makes intuitive sense because if

$$\delta < \frac{r_1^g}{r_1^b}$$
then the bank is not adequately covered against downside risk, so even though the probability of good market conditions increases, the bank keeps the loan at the fixed low rate longer and decreases the length of the period of uncertain collection, which is subject to downside risk.
One concern that arises is why the bank takes any risk in the first place by offering an adjustable rate loan when the payoff is the same as that of the less risky fixed rate loan. The reasoning here is that adjustable rate loans earn higher commissions, which compensates to some degree for this risk. Additionally, ARMs are preferred by more customers and therefore add intangible value in terms of higher volumes, which may lead to lower costs, better customer satisfaction, and a broader clientele. Also, since λ* is a rational function of θ (see equation 3), the values of r_0^a, r_1^b, and r_1^g need to fall within a certain range to ensure that an ARM is feasible, i.e. that λ lies between 0 and 1.
Conversely, if

$$\delta > \frac{r_1^g}{r_1^b}$$
then the bank is covered against downside risk. The right hand side of equation (6) is now negative, so an increase in the probability of good market conditions extends the length of the period of uncertain collection. Additionally, if

$$\delta = \frac{r_1^g}{r_1^b}$$
then the bank's position is independent of the nature of market conditions, i.e. independent of θ. However, since δ is not set arbitrarily by the bank, it cannot always pursue this strategy of hedging against market risk (Figure 3).
Figure 3. Period of fixed rate collection, λ, vs. probability of good conditions, θ; r_f = 6.2%, r_0^a = 4.5%, r_1^b = 10%, and r_1^g = 7.2%; δ = 0 to 1, in increments of 0.1.
As mentioned earlier, the presence of a lender of last resort inflates the value of δ without any increase in collateral. From Figure 3 we see that as δ increases, there are three ways in which the bank begins to take on more risk. First, a rise in the value of δ increases the feasibility of adjustable rate loans – loans that were not feasible for a given economic outlook (i.e. θ value) start becoming feasible even though the bank is not adequately covered against this higher level of risk by its collateral collection. Second, a rise in δ decreases the sensitivity of λ with respect to θ; a higher δ therefore prompts less vigilant observation of market conditions, as small changes in market outlook now mandate less significant changes in loan structure. Finally, if δ is raised to a high enough value,

$$\delta > \frac{r_1^g}{r_1^b}$$
then the bank begins to make counter-intuitive decisions, and a decrease in the probability of good market conditions now actually brings about an increase in the period for which the loan is kept at the low fixed rate.
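A minimal sweep over θ for several values of δ, using the parameter values stated in the Figure 3 caption, reproduces these three effects:

```python
# Comparative statics of lambda* (equation 3) with the Figure 3 parameters.
r_f, r0_a, r1_g, r1_b = 0.062, 0.045, 0.072, 0.10

def lam_star(theta, delta):
    post_reset = theta * r1_g + (1 - theta) * delta * r1_b
    return (r_f - post_reset) / (r0_a - post_reset)

for delta in (0.3, 0.72, 0.9):  # below, at, and above r1_g / r1_b = 0.72
    feasible = [(theta, round(lam_star(theta, delta), 3))
                for theta in (0.5, 0.7, 0.9)
                if 0 < lam_star(theta, delta) < 1]
    print(delta, feasible)
```

With these rates, r_1^g / r_1^b = 0.72. Below that value only optimistic outlooks (high θ) admit a feasible λ*; at it, λ* is flat in θ; and above it, λ* falls as θ rises — the counter-intuitive region described above.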
Conclusion
This model, albeit simplistic, provides interesting results. The model is designed to mimic the basic setup at a typical mortgage house offering a fixed rate loan and a two-step adjustable rate loan. We are able to show,
mathematically, that an increase in the expectation of a bailout by a lender of last resort tends to encourage risky behavior in such mortgage-offering agencies in multiple ways. That being said, there is plenty of scope for further elaboration and sophistication of the model. The market structure currently under investigation is both simplistic and insular, but a more elaborate structure of markets and corresponding interactions could be designed; a good example of possible market stratification is illustrated in Tsomocos (2003). Also, the current loan structure is a two-period model, with loans changing rates at the end of period one to a new fixed rate for period two. A more complex, multi-period loan structure could be investigated, with the adjustable rate set as a random variable and a Markov chain approach used to study the equilibrium behavior in this scenario.
In their investigation of "federalism," Qian and Roland (1998) observe that giving fiscal authority to local governments instead of the central government works to limit the effects of the soft budget syndrome. They propose a three-tiered structure with local governments working between the central government and state and non-state enterprises. The competition among local governments to attract enterprises forces funds to be diverted into infrastructure development, increasing the opportunity cost of a bailout and thereby hardening the budget constraint for enterprises. A similar scenario could be envisioned in which the Federal Reserve distributes the decision-making authority (and funds) to bail out corporations among the twelve regional Federal Reserve Banks; it would be of interest to study the subsequent change in the behavior of the lending banks.
References
Demyanyk, Y. & Van Hemert, O. (2008). Understanding the Subprime Mortgage Crisis. Retrieved December 9, 2008, <http://ssrn.com/abstract=1020396>
Dewatripont, M. & Maskin, E. (1995). Credit and Efficiency in Centralized and Decentralized Economies. The Review of Economic Studies, 62(4), 541-555.
Greenspan, A. (2007, December 12). The Roots of the Mortgage Crisis. The Wall Street Journal, p. A19.
Historical Changes of the Target Federal Funds and Discount Rates: 1971 - present. (2008). Retrieved December 12, 2008, <http://www.newyorkfed.org/markets/statistics/dlyrates/fedrate.html>
Kornai, J. (1979). Resource-Constrained versus Demand-Constrained Systems. Econometrica, 47(4), 801-819.
Kornai, J. (1980). Economics of Shortage. Amsterdam: North-Holland.
Kornai, J. (1986). The Soft Budget Constraint. Kyklos, 39(1), 3-30.
Kornai, J., Maskin, E. & Roland, G. (2003). Understanding the Soft Budget Constraint. Journal of Economic Literature, 41(4), 1095-1136.
Onaran, Y. (2008, August 12). Banks' Subprime Losses Top $500 Billion on Writedowns. Retrieved December 12, 2008, <http://www.bloomberg.com/apps/news?pid=20670001&sid=a8sW0n1Cs1tY>
Penner, E. (2008, April 11). Our Financial Bailout Culture. The Wall Street Journal, p. A17.
Qian, Y. and Roland, G. (1998). Federalism and the Soft Budget Constraint. American Economic Review, 88(5), 1143-62.
Tsomocos, D. (2003). Equilibrium analysis, banking and financial instability. Journal of Mathematical Economics, 39, 619-655.
U.S. Foreclosure Activity up 75 Percent in 2007. (2007). Retrieved March 14, 2008, <http://www.realtytrac.com/ContentManagement/RealtyTracLibrary.aspx?a=b&ItemID=4118&accnt=64953>
Walker, T. (2008, April 23). Our Bailout Culture Creates a Huge Moral Hazard. The Wall Street Journal, p. A16.
The objective of the research is to address the power density issue of electric hybrids and the energy density issue of hydraulic hybrids by designing a drive system. The drive system will utilize new enabling technologies such as the INNAS Floating Cup pump/motors and the Toshiba Super Charge Ion Batteries (SCiB). The proposed architecture initially included a hydraulic-electric system, in which the high braking power is absorbed by the hydraulic system while energy is slowly transferred from both the Internal Combustion Engine (ICE) drive train and the hydraulic drive train to the electric accumulator for storage. Simulations were performed to demonstrate the control method for the hydraulic system with in-hub pump motors. Upon preliminary analysis it is concluded that the electric system alone is sufficient. The final design is an electric system that consists of four in-hub motors. Analysis is performed on the system and MATLAB Simulink is used to simulate the full system. It is concluded that the electric system has no need for a frictional braking system if the Toshiba SCiBs are used. The regenerative braking system will be able to provide an energy saving from 25% to 30% under the simulated conditions.
Compact Car Regenerative Drive Systems: Electrical or Hydraulic
Quinn Lai
School of Mechanical Engineering
Georgia Institute of Technology
Advisor:
Wayne J. Book
School of Mechanical Engineering
Georgia Institute of Technology
Introduction
With around 247 million on-road vehicles traveling around 3 trillion miles every year (Highway Statistics, 2009), the efficiency of on-road vehicles is of major concern. As a result, hybrid drive trains, which dramatically increase the urban driving efficiency of vehicles, have been developed and implemented. Existing on-the-road hybrids have their secondary regenerative systems (electric motors and batteries) installed on their primary drive trains (the ICE drive train) to provide regenerative braking capability. Recently, effort has been put into designing drive train systems that have either hydraulic or electric components as integral parts of the system. For example, in the Chevy Volt, a series electric hybrid system, the ICE is used to charge the electric accumulator, which in turn drives the electric motor (Introducing Chevrolet, 2009).
Electric hybrid drive trains have been implemented in passenger vehicles, while hydraulic hybrids have been implemented in commercial vehicles. Since electric hybrid systems can operate quietly, enhancing passenger comfort, they are well suited to passenger vehicles. However, current battery technologies on the market prevent high power charging and thus prevent the electric system from replacing frictional brakes. As a result, a significant amount of braking energy is lost to the surroundings as heat. Hydraulic hybrids, in contrast, have the ability to capture most of the braking power. However, due to the characteristics of hydraulic components, hydraulic systems suffer from the accumulator's low energy density; the Noise, Vibration and Harshness (NVH) also significantly affect the driving experience.
In an attempt to address the charging power density challenge faced by electric hybrids and the energy density challenge faced by hydraulic hybrids, different drive systems were designed. The Braking Power Analysis section presents a simple braking power analysis as the foundation for the calculations in the other sections. The initial approach to the problem was to incorporate an electrical system into an existing hydraulic hybrid system. The Hydraulic Hybrid Drive System section presents the engineering-level analysis of the hydraulic hybrid system, and the Hydraulic Accumulator Analysis section investigates the hydraulic accumulator. It was confirmed that the hydraulic accumulator does not have a sufficient energy density for braking energy capture, and therefore electrical accumulators were introduced to capture the excess energy. The Battery Analysis section investigates the electrical accumulators. Upon completion of the analysis, it was concluded that the electrical system alone is sufficient. As a result, an in-hub motor driven electric drive system (Figure 7) was chosen, and the analysis and simulations are presented in the Electrical Systems section of the paper.

Braking Power Analysis
Braking power analysis is performed to serve as a foundation for the accumulator analyses in the Hydraulic Accumulator Analysis and Battery Analysis sections. The analysis is conducted under the assumption of negligible rolling friction, air drag, and other losses. The driving analysis is performed on a mid-size passenger sedan, such as a Honda FCX Clarity. The Honda FCX Clarity fuel cell car was selected because the weight of the components in the car closely resembles the weight of the suggested drive system. The assumed vehicle mass is 1625 kg (Honda at the Geneva, 2009). The ECE-15 Driving Schedule is shown in Figure 1.
A 6-second 35 mph to 0 mph deceleration is assumed; this braking slope resembles rapid urban braking and is more rapid than the braking in the ECE-15 Urban Driving Schedule. A rapid 60 mph to 0 mph deceleration is also assumed. Under normal driving conditions, a passenger vehicle will take about 200 ft to decelerate (Driver's Manual, 2009). The deceleration time involved can be obtained using Equation (1) and Equation (2)
$$v_f^2 = v_i^2 + 2as \qquad (1)$$

where v_f is the final velocity of the vehicle, v_i is the initial velocity of the vehicle, a is the acceleration of the vehicle, and s is the displacement involved in the acceleration. The time is obtained using Equation (2)

$$v_f = v_i + at \qquad (2)$$

where t is the deceleration time. Using these equations, we obtain 11 seconds as the deceleration time. The energy dissipated can be obtained using Equation (3)

$$KE = \frac{1}{2}mv^2 \qquad (3)$$

where KE is the kinetic energy of the vehicle, m is the mass of the vehicle, and v is the velocity of the vehicle. The braking power can then be determined using Equation (4)

$$P = \frac{d(KE)}{dt} = mv\frac{dv}{dt} = mva \qquad (4)$$

where P is the power involved in the braking. The deceleration details calculated are tabulated in Table 1.
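As a check on Table 1, this sketch evaluates the peak braking power (Equation 4 at the initial speed) for the 6-second urban stop, under the same assumptions:

```python
# Peak braking power for a constant deceleration from v_i (mph) to rest.
MPH_TO_MS = 0.447  # 1 mph in m/s

def peak_braking_power_kw(mass_kg, v_mph, t_s):
    v = v_mph * MPH_TO_MS         # initial speed, m/s
    a = v / t_s                   # constant deceleration magnitude, m/s^2
    return mass_kg * v * a / 1e3  # Equation (4) evaluated at v_i, in kW

print(f"{peak_braking_power_kw(1625, 35, 6):.1f} kW")  # ~66 kW, as in Table 1
```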
Hydraulic Hybrid Drive System
The initial approach was to incorporate an electrical system into an existing hydraulic hybrid system. The selected hydraulic system is the INNAS HyDrid; the architecture of the HyDrid system (Achten, 2007) is presented in Figure 2. INNAS claims that 77 Miles Per Gallon (MPG) is possible for the HyDrid system (HyDrid, 2009) because the secondary power plant allows engine-off operation, and the Infinitely Variable Transmission (IVT) allows the engine to rotate at the optimum RPM for efficiency.
Figure 1. ECE-15 Driving Schedule; x-axis: time (s); y-axis: vehicle velocity (mph).
The INNAS HyDrid utilizes the INNAS Hydraulic Transformer (IHT) (Achten, 2002) in a Common Pressure Rail (CPR) (Vael et al., 2000). The IHT is claimed to have unmatched efficiency due to the Floating Cup Principle it utilizes; the starting torque efficiency, according to Achten (2002), is 90% or above. The control method of the HyDrid is not published; therefore, a possible control method is presented to demonstrate how the IHT functions as an IVT, converting the varying pressure from the accumulator into the desired pressure for the in-wheel constant displacement motor/pumps. When accelerating, either the pressure accumulator or the ICE provides the required pressure in the CPR, which is in turn transmitted by the IHT to drive the in-wheel constant displacement motor/pump. The IHT is assumed to be a variable pump coupled with a variable pump/motor. A possible method of controlling the acceleration is to vary the stroke of the variable pump in the IHT while keeping the pump motor stroke and the ICE RPM constant. During braking, the pump (CPR side) stroke is kept constant while the pump motor stroke is varied to charge the accumulator. This control method is presented in Figure 3.
A simulation is performed to demonstrate the control method. The system assumes that the vehicle has a 4-cylinder gasoline engine, a 0.85 volumetric efficiency for the variable pump/motors, and a 0.92 volumetric efficiency for the constant displacement pumps and inactive pressure accumulators. Ideal pipe lines are also assumed, and no force is involved in varying the pump stroke. The simulation shows how, by varying the IHT pump stroke, the vehicle speed can closely follow a desired trajectory with minimal ICE rpm variation. The ICE rpm and the pump stroke variation are shown in Figure 4 and Figure 5, respectively.
Figure 2. HyDrid Drive system adapted from Achten.
v_i (mph)   v_f (mph)   t (s)   Braking Power (kW)
35          0           6       66.0
60          0           11      99.8
Table 1. Deceleration details for the assumed vehicle.
The resulting vehicle velocity is shown in Figure 6.
The simulated vehicle velocity closely matches the desired velocity trajectory, which is the ECE-15 driving schedule (Figure 1). The simulated velocity trajectory is idealized because of the idealized assumptions made in creating the simulation model. The pressure values provided by the simulation are also observed to be faulty, so the simulation's values cannot be used for quantitative purposes; it is, however, sufficient to demonstrate the relationship between the stroke of the pump in the IHT and the vehicle velocity.
Hydraulic Accumulator Analysis
The hydraulic accumulator has sufficient power density but a low energy density. An attempt was made to quantify the energy storage capacity of a typical-size hydraulic accumulator for a hydraulic hybrid vehicle so that the proposed additional battery pack could be correctly sized. A 38L EATON hydraulic accumulator (Product Literature, 2009) is assumed (used in CCEFP Test Bed 3: Highway Vehicles). The parameters used for the energy calculations are tabulated in Table 2.
Figure 3. Suggested Control Method for HyDrid system.
Volume (m³)                                 0.038
Precharge Pressure (MPa)                    10.7
Precharge Nitrogen Volume (m³)              0.038
Maximum Nitrogen Pressure (MPa)             20.6
Nitrogen Volume at Maximum Pressure (m³)    0.0176
Table 2. EATON 38 L hydraulic accumulator.
Figure 4. Engine RPM for HyDrid simulation; x-axis: time (s); y-axis: ICE rpm.
Figure 5. Pump stroke variation for HyDrid simulation; x-axis: time (s); y-axis: stroke (m).
The assumed relationship between pressure and volume is shown in Equation (5)

$$pV^n = \text{constant} \qquad (5)$$

where p is the pressure of the nitrogen in the accumulator, V is the volume of nitrogen in the accumulator, and n is an empirical constant. Using this relationship, the total energy involved in completely pressurizing or depressurizing the accumulator is shown in Equation (6)

$$W = \frac{p_f V_f - p_i V_i}{n - 1} \qquad (6)$$

where p_i is the initial pressure, p_f is the final pressure, V_i is the initial volume, V_f is the final volume, and W is the energy involved. Using Equation (5) and Equation (6) we can calculate the total energy storage of the EATON 38L pressure accumulator, which is 293.6 kJ. Using Equation (3) and assuming a vehicle weight of 1625 kg (from the Braking Power Analysis section), a 38L accumulator is sufficient for acceleration from 0 mph to 42.5 mph. It is assumed that no energy is lost to friction, drag, and inertia changes. As the main purpose of the hydraulic system in a hydraulic hybrid is to capture urban braking energy and to accelerate the car to a velocity at which the ICE can be started, 293.6 kJ is sufficient. However, if the vehicle is braking from a speed higher than 42.5 mph, or the duration of braking is long, the hydraulic system will not be able to capture all of the braking energy. Therefore an electrical system is introduced to capture the excess energy.
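A sketch of this calculation, assuming the polytropic exponent n is inferred from the two nitrogen states in Table 2:

```python
import math

# Nitrogen states from Table 2.
p_i, V_i = 10.7e6, 0.038   # precharge: Pa, m^3
p_f, V_f = 20.6e6, 0.0176  # at maximum pressure: Pa, m^3

# Infer n from p * V^n = constant (Equation 5) holding at both states.
n = math.log(p_f / p_i) / math.log(V_i / V_f)

# Equation (6): energy stored between the two states.
W = (p_f * V_f - p_i * V_i) / (n - 1)
print(f"n = {n:.2f}, W = {W / 1e3:.0f} kJ")  # n ~0.85; ~296 kJ, in line with
                                             # the 293.6 kJ quoted above

# Speed a 1625 kg vehicle could reach on this energy alone (Equation 3):
v_mph = math.sqrt(2 * W / 1625) / 0.447
print(f"~{v_mph:.1f} mph")  # ~42.7 mph, matching the ~42.5 mph figure
```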
Figure 6. Vehicle velocity variation for HyDrid simulation; x-axis: time (s); y-axis: velocity (mph).
Battery Analysis
The batteries are proposed to serve as secondary energy storage to capture excess energy that cannot be captured by the hydraulic system. The two electric accumulators analyzed are the Sony Olivine-type Lithium Iron Phosphate (LFP) cells (Sony Launches High-Power, 2009) and the Toshiba SCiB cells (Toshiba to Build, 2009). Both cells exhibit impressive recharge cycle counts and high charging power density. Cell specifications are shown in Table 3.

                       SCiB Cell              LFP
Nominal Voltage (V)    2.4                    3.2
Nominal Capacity (Ah)  4.2                    1.1
Size (mm)              approx. 62 x 95 x 13   d = 18, h = 65
Weight (g)             approx. 150            40
Charging time          90% in 5 min           99% in 30 min
Table 3. Sony LFP and Toshiba SCiB cell specifications.

The charging power density can be obtained using Equation (7)

$$\text{Charging power density} = \frac{(\%\,\text{Charge}) \cdot (\text{Energy density})}{t_{charging}} \qquad (7)$$

where t_charging is the charging time for one cell and %Charge is the fraction of capacity reached in that time. Energy density can be described using Equation (8)

$$\text{Energy density} = \frac{C \cdot V \cdot 60 \cdot 60}{m_{cell}} \qquad (8)$$

where C is the cell capacity in Ah, V is the nominal voltage, and m_cell is the mass of one cell. The energy density and charging power density values obtained are tabulated in Table 4.

                                 SCiB Cell   LFP
Energy Density (kJ/kg)           242         316.8
Charging Power Density (W/kg)    726         198
Table 4. Sony LFP and Toshiba SCiB charging power density and energy density.
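A minimal sketch of equations (7) and (8) applied to the SCiB column of Table 3:

```python
# Energy density (Eq. 8) and charging power density (Eq. 7) for the SCiB cell.
C, V, m_cell = 4.2, 2.4, 0.150        # Ah, volts, kg, from Table 3
charge_frac, t_charge = 0.90, 5 * 60  # 90% charge in 5 minutes, in seconds

energy_density = C * V * 60 * 60 / m_cell                # J/kg
power_density = charge_frac * energy_density / t_charge  # W/kg

print(f"{energy_density / 1e3:.0f} kJ/kg")  # 242 kJ/kg, as in Table 4
print(f"{power_density:.0f} W/kg")          # 726 W/kg, as in Table 4
```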
As shown in Table 4, the Sony LFP outperforms the Toshiba SCiB in terms of energy density by a factor of 1.3, while the SCiB outperforms the LFP in terms of charging power density by a factor of about 3.7. As the major limitation of electric hybrid systems is the charging power density of the batteries, the SCiB cell is used for further analysis.
As calculated in the Braking Power Analysis section, 66.0 kW is the maximum braking power occurring in a 6-second 35 mph to 0 mph city stop. Assuming that the SCiB cells are arranged such that the power load is equally shared among the cells, and assuming ideal electronic components, 90.8 kg of SCiB cells is required. 90.8 kg of SCiB has a total capacity of 22,000 kJ. According to the Braking Power Analysis calculations, each city braking event involves 197.7 kJ of braking energy, which is 0.9% of the battery pack's capacity. According to Toshiba, the capacity loss after 3,000 cycles of rapid charge and discharge is less than 10% (Toshiba to Build, 2009). Using the assumptions in the Braking Power Analysis section and treating the capacity loss after 3,000 cycles as negligible, we can obtain 334 thousand cycles as an approximate limit
for the regenerative braking and driving cycles allowed before the capacity of the SCiB drops below 90%. Using the same assumptions, we also find that 137.6 kg of SCiB is sufficient for the maximum charging power involved in the 11-second 60 mph to 0 mph highway emergency braking. The required battery pack is slightly heavier than the 70 kg battery pack in a Toyota Prius electric hybrid vehicle.
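The pack sizing above follows directly from these densities; a sketch under the same assumptions (evenly shared load, ideal electronics):

```python
# Size an SCiB pack to absorb a given peak braking power (Table 4 densities).
POWER_DENSITY = 726.0   # W/kg
ENERGY_DENSITY = 242e3  # J/kg

def pack_mass_kg(peak_power_w):
    return peak_power_w / POWER_DENSITY

urban = pack_mass_kg(66.0e3)    # ~90.9 kg for the 35 mph urban stop
highway = pack_mass_kg(99.8e3)  # ~137.5 kg for the 60 mph stop
capacity_kj = urban * ENERGY_DENSITY / 1e3
print(f"{urban:.1f} kg, {highway:.1f} kg, {capacity_kj:.0f} kJ")  # ~22,000 kJ

# Each ~197.7 kJ urban stop uses ~0.9% of the pack; scaling Toshiba's 3,000
# rapid full cycles accordingly gives roughly 334,000 braking cycles.
print(f"{3000 / (197.7 / capacity_kj):,.0f} cycles")
```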
Electrical Systems
As shown in the Battery Analysis calculations, the Toshiba SCiBs have a power density that is more than sufficient for regenerative braking. As a result, neither the hydraulic system nor the frictional braking system is necessary in an electric vehicle equipped with the Toshiba SCiBs. A mechanical emergency brake should still be installed to prevent accidents in case of regenerative braking system failure.
The simplest possible design is a plug-in electric or a fuel cell vehicle with 4 in-hub motors. The simplified system is shown in Figure 7. As shown in the figure, the 4 in-hub wheel motors are directly connected to the wheels. With mechanical components such as the ICE, differentials, and the transmission removed, the vehicle weight can be reduced, and the efficiency of the whole drive train can be increased by at least a factor of 3 (Clean Urban Transport, 2009). Some efficiency values (Achten, 2009; Valøen & Shoesmith, 2007) are provided in Table 5 for comparison. The 4 in-hub motor design also allows the vehicle to enjoy a very small turning radius and other advantages of 4WD vehicles, such as increased traction performance and precision handling.
To validate the design, a simulation is performed for the suggested system. Because of the removed mechanical components, a lighter car is selected for simulation. The selected vehicle is a Honda Civic, with a vehicle mass of 1246 kg (Complete Specifications, 2009) and a CdA value of 0.682 m². The air drag of the vehicle can be calculated using Equation (9) (Larminie & Lowry, 2003)

$$F_{ad} = \frac{1}{2}\,\rho\, C_d A\, v^2 \qquad (9)$$

where F_ad is the drag force, ρ is the air density, C_d is the drag coefficient, A is the cross-sectional area of the front of the vehicle, and v is the velocity of the vehicle. The rolling friction of the vehicle can be obtained using Equation (10) (Larminie & Lowry, 2003)

$$F_{rr} = \mu_{rr}\, m g \qquad (10)$$

where F_rr is the rolling friction force and μ_rr is the rolling friction coefficient, assumed to be 0.015 for radial ply tires (Larminie & Lowry, 2003); m is the mass of the vehicle, and g is the acceleration due to gravity.
Figure 7. Selected Electric Car architecture.
                          Approx. Efficiency
ICE                       20%
Transmission (automatic)  85%
Transmission (manual)     92% to 97%
Differential              90%
Motor                     90%
Battery recharge          80% to 90%
Table 5. Efficiency values of integral components of the ICE drive train and electric drive train.
As in-hub motor specifications are not readily available, the 4 in-hub motors are assumed to resemble the Tesla Roadster drive train system (2010 Tesla Roadster, 2009), which involves only a motor and a fixed gear set with a gear train ratio of 8.28. The 375 V AC motor has a 215 kW peak power and 400 Nm of torque. All of these parameters, together with Equation (9) and Equation (10), are included in the simulation.
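A sketch of the road-load model (Equations 9 and 10) with the Civic parameters above; the air density and the 50 km/h evaluation speed are assumed for illustration:

```python
# Road-load forces and power for the simulated Honda Civic.
RHO = 1.225           # air density, kg/m^3 (assumed sea-level value)
CDA = 0.682           # Cd * A, m^2
MASS, G = 1246, 9.81  # vehicle mass (kg), gravitational acceleration (m/s^2)
MU_RR = 0.015         # rolling friction coefficient, radial ply tires

def road_load_w(v_ms):
    f_drag = 0.5 * RHO * CDA * v_ms ** 2  # Equation (9)
    f_roll = MU_RR * MASS * G             # Equation (10)
    return (f_drag + f_roll) * v_ms       # power to overcome both, W

print(f"{road_load_w(50 / 3.6):.0f} W at 50 km/h")  # a few kW at urban speed
```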
The model simulates the vehicle attempting to follow the ECE-15 Driving Schedule shown in Figure 1. The controller assumed is a PID controller with Kp = 10.4, Ki = 0.546, and Kd = −0.386 (MATLAB Simulink tuned for a 2.06 second response time). The resultant velocity trajectory and the power variation are shown in Figure 8 and Figure 9, respectively.
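For readers without Simulink, a minimal discrete-time stand-in for this control loop is sketched below; the time step, the force scaling on the controller output, and the lossless vehicle model are assumptions, while the gains are the tuned values quoted above.

```python
# Minimal discrete PID speed-tracking loop (a stand-in for the Simulink model).
KP, KI, KD = 10.4, 0.546, -0.386  # tuned gains quoted above
DT = 0.1                          # s, assumed integration step

def simulate(target_speeds_ms, mass_kg=1246.0):
    v, integ, prev_err = 0.0, 0.0, 0.0
    trace = []
    for v_ref in target_speeds_ms:
        err = v_ref - v
        integ += err * DT
        # Assumed scaling: controller output interpreted as kN of traction.
        force = 1000.0 * (KP * err + KI * integ + KD * (err - prev_err) / DT)
        prev_err = err
        v += force / mass_kg * DT  # drag and rolling losses ignored for brevity
        trace.append(v)
    return trace

ramp = [min(0.17 * i, 4.2) for i in range(300)]  # ramp to ~15 mph and hold
print(f"{simulate(ramp)[-1]:.2f} m/s")           # settles near the 4.2 m/s target
```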
As expected, the power stays below 66 kW, and there are negative values in the power plot, since deceleration is involved in the ECE-15 Driving Schedule. Because of the losses, the negative peaks that correspond to braking power have magnitudes relatively smaller than the positive peaks that correspond to accelerating power. From the simulation, the highest braking power observed is a little over 10 kW, which can easily be fully captured using the 70 kg SCiB battery pack (refer to the Battery Analysis section for calculations). Simulink scopes are added to the system to observe the energy change with and without regenerative braking. The results are shown in Figure 10 and Figure 11.
Comparing Figures 10 and 11, we can observe a 25% energy saving for the system with regenerative braking. Tuning the PID controller can increase the energy savings to 30%, and using a better controller has the potential to increase the energy savings further.
Figure 8. Vehicle velocity variation for the electric drive system; x-axis: time (s); y-axis: velocity (mph).
Figure 9. Power plot for the electric drive system simulation; x-axis: time (s); y-axis: power (W).
Figure 10. Energy required to complete the ECE-15 driving cycle without regenerative braking; x-axis: time (s); y-axis: energy (J).
Recommended Future Work
The hydraulic drive system may deserve a more in-depth analysis. If the accumulator energy density can be improved, the hydraulic drive system may be a more environmentally friendly option than its electrical counterpart, as the production and disposal of batteries have detrimental effects on the environment. It is recommended that the simulation model be improved to better capture the actual response of the HyDrid system.
For the electrical system, effort should be invested in research on in-hub motors, which produce significantly less torque than a regular AC motor coupled with an 8.28-to-1 gear train. Parameters within the Simulink model should be selected to better represent in-hub motors, and the batteries should be modeled in greater detail, as different arrangements of the cells will result in different power densities. Losses involved with the electrical components should also be investigated. One challenge that the electric drive system must overcome is the energy density of the batteries, which is significantly lower than that of gasoline. Efforts should also be invested in technologies related to battery recycling.
Conclusion
An attempt was made to design a compact car drive system to address the charging power density challenge faced by electric hybrid vehicles and the energy density challenge faced by hydraulic hybrid vehicles. The initial approach was to incorporate an electrical system into an existing hydraulic hybrid system. The INNAS HyDrid was used as the foundation architecture for analysis, and simulations were performed to understand the control dynamics of the HyDrid. After performing quantitative analysis on hydraulic accumulators, it was confirmed that hydraulic accumulators cannot provide a sufficient energy density for braking energy storage.
Figure 11. Energy required to complete the ECE-15 driving cycle with regenerative braking; x-axis: time (s); y-axis: energy (J).
In one of the intermediate designs, electrical accumulators were introduced into the system to capture excess energy that cannot be captured by the hydraulic system. The Sony LFP and the Toshiba SCiB were considered, and the Toshiba SCiB was chosen as a result of its superior charging power density. Upon further analysis, it was concluded that the batteries have a sufficient charging power density to capture braking power. It was then suggested that the electric system can fully replace the hydraulic components, the ICE drive train, and the frictional braking system. With the convoluted hybrid system, which consists of many inefficient components, replaced by a simple electric-only drive train, the vehicle drive train efficiency can be increased. An electrical system was simulated; the simulated models showed energy savings of around 25-30% with regenerative braking. The final drive system design consists of an electric/fuel cell vehicle with four in-hub motors.
References
Highway Statistics. (2007). Washington, D.C.: Federal Highway Administration. Retrieved from http://www.fhwa.dot.gov/policyinformation/statistics/2007/.
Introducing the Chevrolet Volt. (2010). Retrieved December 5, 2009, from http://www.chevrolet.com/pages/open/default/future/volt.do
Honda at the Geneva Motor Show. Retrieved December 5, 2009, from http://world.honda.com/news/2009/c090303Geneva-Motor-Show/
Larminie, J., & Lowry, J. (2003). Electric Vehicle Technology Explained. Chichester: John Wiley & Sons Ltd.
Driver's Manual. (2009). Government of Georgia. Retrieved from http://www.dds.ga.gov/docs/forms/FullDriversManual.pdf.
Achten, P.A.J. (2007). Changing the Paradigm. Paper presented at the Proc. of the Tenth Scandinavian Int. Conf. on Fluid Power, SICFP'07, Tampere, Finland.
HyDrid. Retrieved December 5, 2009, from http://www.innas.com/HyDrid.html
Achten, P.A.J. (2002). Dedicated design of the hydraulic transformer. Paper presented at the Proc. IFK.3, IFAS Aachen.
Vael, G.E.M., Achten, P.A.J., & Fu, Z. (2000). The Innas Hydraulic Transformer, the Key to the Hydrostatic Common Pressure Rail. Paper presented at the International Off-Highway & Powerplant Congress & Exposition, Milwaukee, WI, USA.
Accumulators Catalog. (2005). In Eaton (Ed.), Vickers.
Sony Launches High-power, Long-life Lithium Ion Secondary Battery Using Olivine-type Lithium Iron Phosphate as the Cathode Material. (2009). News Releases. Retrieved December 5, 2009, from http://www.sony.net/SonyInfo/News/Press/200908/09-083E/index.html
Toshiba to Build New SCiB Battery Production Facility. (2008). News Releases. Retrieved December 5, 2009, from http://www.toshiba.co.jp/about/press/2008_12/pr2401.htm
Berdichevsky, G., Kelty, K., Straubel, J.B., & Toomre, E. (2006). The Tesla Roadster Battery System. In Tesla Motors (Ed.).
Electric Vehicles. (2009). Retrieved from http://ec.europa.eu/transport/urban/vehicles/road/electric_en.htm.
Valøen, L.O., & Shoesmith, M.I. (2007). The effect of PHEV and HEV duty cycles on battery and battery pack performance. Paper presented at the Plug-in Hybrid Electric Vehicle 2007 Conference, Winnipeg, Manitoba. http://www.pluginhighway.ca/PHEV2007/proceedings/PluginHwy_PHEV2007_PaperReviewed_Valoen.pdf
Complete Specifications. Civic Sedan. Retrieved December 5, 2009, from http://automobiles.honda.com/civic-sedan/specifications.aspx
Performance Specifications. (2010). Tesla Roadster. Retrieved December 5, 2009, from http://www.teslamotors.com/performance/perf_specs.php
A switchable solvent is a solvent capable of reversibly switching its properties between a non-ionic liquid and an ionic liquid, which is highly polar and viscous. Switchable solvents have applications for the Heck reaction, the chemical reaction of an unsaturated halide with an alkene in the presence of a palladium catalyst to form a substituted alkene. The objective of this research was to apply a switchable solvent system to the Heck reaction in order to optimize the reaction and separations by eliminating multiple reaction steps. Switchable solvents reduce the need to add and remove multiple solvents because they are capable of switching properties and dissolving both the inorganic and organic components of the reaction. This reversal of chemical properties by a switchable solvent provides for easier separation of the product, minimizes cost by eliminating the need for multiple solvents, and reduces the overall environmental impact of the industrial process. Specifically, the cost is lowered by the ability to recycle the catalyst and solvent from the system. In addition, the "switch" that initiates the formation of the ionic liquid switchable solvent is carbon dioxide, which is cheap and nontoxic. In conclusion, we were able to use a switchable solvent system to obtain good product yields of E-stilbene, the desired product of the Heck reaction, and to recycle the remaining catalyst + solvent, which also produced good product yields at a lower economic and environmental cost.
Switchable Solvents: A Combination of Reaction & Separations
Georgina W. Schaefer
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology
Advisor:
Charles A. Eckert
School of Chemical and Biomolecular Engineering
Georgia Institute of Technology
Introduction
A common problem for chemical synthesis is the reaction of an inorganic salt with an organic substrate, which is an important reaction in the production of many industrial chemicals and pharmaceutical products. Typically, a phase transfer catalyst (PTC), such as a quaternary ammonium salt, is used and must subsequently be separated from the product after the reaction has proceeded. However, the separation of a PTC from the product is very difficult. In fact, solvents such as dimethyl sulfoxide (DMSO) or ionic liquids (liquid salts at or near room temperature) that are capable of dissolving both the organic and inorganic components of the reaction still inhibit simple separation of the product from the catalyst (Heldebrant et al., 2005).
Now imagine a smart solvent that can reversibly change its properties on command through a built-in "switch". Our goal in designing such a solvent is to minimize the economic and environmental impact of such industrial processes while creating a solvent that remains highly polar. These solvents are able to dissolve both the organic and inorganic components of the reaction while highly polar and then change properties for easier separation and effective product isolation after the reaction is complete.
Switchable solvent systems are capable of doing just that. These systems involve a non-ionic liquid, an alcohol and amine base, which can be converted to an ionic liquid upon exposure to a "switch". The switch chosen to induce this change in solvent properties is carbon dioxide. CO2 reacts with the alcohol-amine mixture to form an ammonium carbonate. Furthermore, it is cheap, readily available, benign, and easily removed by heating and purging with nitrogen or argon. Switchable solvent systems therefore should facilitate chemical syntheses involving reactions of inorganic salts and organic substrates by eliminating the need to add and remove different solvents after each synthetic step in order to achieve different solvent properties (Heldebrant et al., 2005; Phan et al., 2007).
Industrial chemical production usually requires multiple reaction and separation steps, each of which usually requires the addition and subsequent removal of a different solvent. For example, the synthesis of Vitamin B12 takes 45 steps. The application of switchable solvent systems to industrial production processes for major chemicals and pharmaceuticals would significantly lower the associated pollution and cost of these processes by eliminating the need to add and remove multiple solvents for each reaction step (Heldebrant et al., 2005; Phan et al., 2007).
Project Description
Switchable solvents convert between a non-ionic liquid, which has varying polarity, and an ionic liquid, whose properties include higher polarity and higher viscosity. As discussed in previous research, the ideal properties of the solvent as a reaction medium include a usable liquid range, chemical stability, and the ability to dissolve both organic species and inorganic salts. In terms of the solvent's role in separations, the solvent should be decomposable at moderate conditions with a reasonable reaction rate, the decomposition products should have very high or very low vapor pressures, and recombination to form the solvent should be relatively easy. Our principal aims in designing a switchable solvent system to optimize reactions and separations were to eliminate multiple reaction steps, reverse solvent properties to facilitate better separations, and minimize the cost and environmental impact by optimizing catalyst and solvent recycle (Heldebrant et al., 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).
Ionic liquids have gained popularity in their technological applications as electrolytes in batteries, photoelectrochemical cells, and many other wet electrochemical
devices. They are particularly attractive solvents because dramatic changes in properties such as polarity may be elicited through a "switch". On the other hand, changes in conditions such as temperature and pressure usually elicit only negligible to moderate changes in a conventional solvent's properties, making the use of multiple solvents for a single process necessary. In addition, ionic liquids have low vapor pressures, essentially eliminating the risk of inhalation. In particular, guanidinium-based ionic liquids have low melting points and good thermal stability, properties which make these high-nitrogen materials attractive alternatives for energetic materials (Xiao, Twamley, & Shreeve, 2004; Gao, Arritt, Twamley, & Shreeve, 2005).
Our research focused on the application of switchable solvents to the Heck reaction in order to optimize the reaction and separation. Specifically, we studied the reaction of bromobenzene and styrene in the presence of a palladium catalyst (PdCl2(TPP)2) and base to form E-stilbene, an important intermediate in the production of many anti-inflammatory pharmaceuticals (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004).
As illustrated in Figure 3, the non-ionic liquid can be "switched" to an ionic liquid by exposure to carbon dioxide and reversed back to a non-ionic liquid by exposure to argon or nitrogen. The reaction of bromobenzene and styrene in the presence of palladium catalyst is run in the highly polar ionic liquid, which is able to dissolve both the organic and inorganic components of this system. The ionic liquid is a particularly effective medium for this reaction in that it is able to immobilize the palladium catalyst while preserving the overall product yield. In addition, ionic liquids are nonvolatile, non-flammable, and thermally stable, making them an attractive replacement for volatile, toxic organic solvents (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Xiao, Twamley, & Shreeve, 2004; Phan et al., 2007).
Figure 1. The Heck reaction of bromobenzene and styrene in the presence of palladium catalyst and ionic liquid.
Figure 2. Synthesis of the palladium catalyst used in the Heck reaction (Figure 1).
Figure 3. Ionic liquid formation; note the reversibility of this reaction under argon.
Figure 4. Switchable solvent system comprised of 1,8-diazabicyclo-[5.4.0]-undec-7-ene (DBU) and hexanol.
In a previous study, a guanidine acid-base ionic liquid was determined to be an effective medium for the palladium catalyzed Heck reaction of bromobenzene and styrene (Li et al., 2006). The guanidine acid-base ionic liquid served as the solvent, ligand, and base in this reaction, with the guanidine acting as a strong organic base able to complex with the Pd(II) salt and as a replacement for a phosphine ligand. High activity, selectivity, and reusability were all observed under moderate conditions, and thus we expect our system to function best at similar conditions (Li, Lin, Xie, Zhang, & Xu, 2006; Gao, Arritt, Twamley, & Shreeve, 2005).
Project Description
Once the reaction is run in the ionic liquid, the homogeneous product-ionic liquid solution is extracted and separated into a two-phase product and catalyst/ionic liquid system. After the product is removed by extraction with heptane, the system is exposed to argon and the ionic liquid is reversibly switched back to a non-ionic liquid, where the system again separates into two phases: a salt precipitate byproduct and a catalyst/non-ionic liquid solution. In this final stage the catalyst and solvent may be removed and recycled back into the process.
Our studies of the palladium catalyzed Heck reaction of bromobenzene and styrene were performed in a 10 mL window autoclave.
Figure 5. Process Diagram for the Heck reaction performed in a switchable solvent system.
First the catalyst solution was added and the solvent vacuumed out. Next the ionic liquid, bromobenzene, and styrene were added, and the system was pressurized. The autoclave was then left stirring and heating for three days until the reaction was complete. After three days, the autoclave was allowed to cool down and depressurize. Once the system was back at room temperature and atmospheric pressure, the homogeneous ionic liquid/product/catalyst solution was extracted from the autoclave with heptane under carbon dioxide in order to sustain the formation of the ionic liquid. The product-in-heptane phase was then removed, and the remaining ionic liquid/catalyst phase was exposed to argon and heat. After exposure to argon and heating, the byproduct salt precipitated out of the catalyst/reverted non-ionic solvent solution. The catalyst and solution were then removed from the salt byproduct and recycled. Any remaining product left in the autoclave from the heptane extraction was then extracted with dichloromethane (DCM) for later mass balance calculations.
Results and Conclusions
The Heck reaction was optimized by running at various temperature and pressure conditions. In order to assess the success of each system, the conversion percentages were compared.
Based on Figure 6, the optimal conditions for product formation and catalyst + solvent recovery were a temperature of 115˚C and a pressure of 30 bar. The reactions run at these conditions show very acceptable and repeatable results, such as 83% and 84% yields, the highest observed. However, the variability of the product yields at these conditions can be explained by the poor extraction methods for this system. Extracting with large amounts of heptane (greater than 50 mL) led to higher product yields (greater than 60%), whereas extracting with less heptane (10-50 mL) led to lower product yields due to product loss in the system. Therefore, there was a tradeoff between the amount of heptane used in the extraction to recover the product, which adds to the
Figure 6. Palladium (PdCl2(TPP)2) catalyzed Heck reaction of bromobenzene and styrene in a switchable ionic liquid system. The catalyst and ionic liquid solution used in the reaction run at T = 115˚C and P = 30 bar, which had a 55% yield, was recycled from the reaction run at T = 115˚C and P = 30 bar, which had an 83% yield, demonstrating that the catalyst in ionic liquid remained active in the reaction and that the recycle was effective.
overall cost of the process, and the amount of product we were able to recover from the system.
The catalyst in solvent was also extracted from the reaction which had an 83% product yield and recycled for a second reaction performed at the same conditions. The observed yield of 55% from the recycle reaction demonstrates that the recycled catalyst activity was preserved and that the solvent was able to "switch" back to an ionic liquid a second time to run the reaction successfully. In general, the reactions which experienced lower percent yields did so as a result of too-harsh reaction conditions, such as excessive temperatures and/or pressures. We predict that the palladium catalyst loses activity and perhaps decomposes at higher temperatures and pressures, which accounts for the lower yields at harsher conditions. In the reactions with higher percent yields, the ionic liquid and catalyst extracted from the system were usually light yellow in color, whereas in the reactions with lower percent yields they were brown or black, indicating catalyst decomposition. In particular, two of the reactions run at T = 140˚C and P = 50 bar, which had product yields of 6% and 9% respectively, contained black particles within the catalyst and solvent mixture. Given the extremely low percent yields, we believe the palladium catalyst complex decomposed, and the black particles were possibly palladium nanoparticles (palladium black).
In order to improve this system, further investigations into the extraction methods should be performed to find a work-up method that minimizes product loss by maximizing the amount of product that can be washed out of the system. The variability of the product yield results is a reflection of this difficult extraction procedure. In our work, we extracted with heptane (between 10 and 50 mL) because of its low boiling point, which made it easy to remove the leftover heptane from the system by heating after the product had been extracted. Different solvents could also be tried for this extraction. In addition, it was often difficult to extract all of the products and reactants from the autoclave due to the viscosity of the ionic liquid. A better extraction solvent would facilitate this step and minimize product, catalyst, and solvent loss. Another difficulty we encountered with our system was evaporating the solvent out of the catalyst solution before adding the other reactants. Changing the solvent from chloroform to toluene was attempted, but toluene was even more difficult to evaporate from the system. The solution we found was to dissolve the catalyst directly into 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU); then, after the catalyst and DBU solution was completely homogeneous, we added hexanol and exposed the mixture to carbon dioxide to convert it to an ionic liquid, which was added directly to the system. NMR samples of the catalyst in DBU and hexanol were taken to test that the activity of the catalyst was preserved in this solution, by confirming that the catalyst's molecular structure remained unchanged. In addition, the percent yields of the reactions run with this method confirm that the palladium catalyst retains its activity in the DBU-hexanol solution. This method also eliminates the addition of an extra solvent and its difficult removal from the system.
In conclusion, the application of switchable solvents to the Heck reaction of bromobenzene and styrene was successful in optimizing product ((E)-stilbene) yield and catalyst + solvent recovery for a good recyclable system. (E)-stilbene is an important intermediate in the synthesis of some pharmaceuticals, such as anti-inflammatories, and is used in the manufacture of dyes and optical brighteners. A switchable solvent system could therefore be implemented for chemical synthesis of Heck reaction products in order to reduce the economic and environmental impact of this industrial process (Heldebrant, Jessop, Thomas, Eckert, & Liotta, 2005; Heldebrant et al., 2005).
References
Heldebrant, D. J.; Jessop, P. G.; Thomas, C. A.; Eckert, C. A.; Liotta, C. L. The Reaction of 1,8-Diazabicyclo[5.4.0]undec-7-ene (DBU) with Carbon Dioxide. J. Org. Chem. 2005, 70, 5335-5338.
Heldebrant, D. J.; Jessop, P. G.; Li, X.; Eckert, C. A.; Liotta, C. L. Green Chemistry: Reversible Nonpolar-to-Polar Solvents. Nature 2005, 436, 1102.
Xiao, J.-C.; Twamley, B.; Shreeve, J. M. An Ionic Liquid-Coordinated Palladium Complex: A Highly Efficient and Recyclable Catalyst for the Heck Reaction. Org. Lett. 2004, 6 (21), 3845-3847.
Li, S.; Lin, Y.; Xie, H.; Zhang, S.; Xu, J. Brønsted Guanidine Acid-Base Ionic Liquids: Novel Reaction Media for the Palladium-Catalyzed Heck Reaction. Org. Lett. 2006, 8 (3), 391-394.
Phan, L.; Chiu, D.; Heldebrant, D. J.; Huttenhower, H.; John, E.; Li, X.; Pollet, P.; Wang, R.; Eckert, C. A.; Liotta, C. L.; Jessop, P. G. Switchable Solvents Consisting of Amidine/Alcohol or Guanidine/Alcohol Mixtures. American Chemical Society, 2007.
Gao, Y.; Arritt, S. W.; Twamley, B.; Shreeve, J. M. Guanidinium-Based Ionic Liquids. Inorg. Chem. 2005, 44 (6), 1704-1712.
Submission Guidelines
The Tower accepts papers from all disciplines offered at Georgia Tech. Research may discuss empirical or theoretical work, including, but not limited to, experimental, historical, ethnographic, literary, and cultural inquiry. The journal strives to appeal to readers in academia and industry. Articles should be easily understood by bachelor's-educated individuals of any discipline. Although The Tower will review submissions of highly technical research for potential inclusion, submissions must be written to educate the audience, rather than simply report results to experts in a particular field. Original research must be well supported by evidence, arguments must be clear, and conclusions must be logical. More specifically, The Tower welcomes submissions under the following three categories: articles, dispatches, and perspectives.
Formatting
Articles
An article represents the culmination of an undergraduate research project, in which the author addresses a clearly defined research problem from one, or sometimes multiple, approaches.
A properly formatted article must: be between 1500 and 3000 words (not including title page, abstract, and references); include an abstract of 250 words or less; and have at least the following sections:
• Introduction/Background Information
• Methods & Materials/Procedures
• Results
• Discussion/Analysis
• Conclusion
• Acknowledgements
Dispatches
A dispatch is a manuscript in which the author reports recent progress on a research challenge that is relatively narrow in scope, but critical to his or her overall research aim.
A dispatch should: not be more than 1500 words (not
including title page and references); and have at least
the following sections:
• Introduction/Background Information
• Methods & Materials/Procedures
• Results
• Discussion/Analysis
• Future Work
• Acknowledgements, as a separate page
Perspectives
A perspective reflects active scholarly thinking in which the author provides personal viewpoints and invites further discussion on a topic of interest through literature synthesis and/or logical analysis.
A perspective should: not be more than 1500 words (not including title page and references); and address some of the following questions: Why is the topic important? What are the implications (scientific, economic, cultural, etc.) of the topic or problem? What is known about this topic? What is not known about this issue? What are possible methods to address this issue?
General
The following formatting requirements apply to all types of submissions and must all be satisfied before a submission will be reviewed. All papers must: adhere to APA formatting guidelines as specified in the Publication Manual of the American Psychological Association, 5th ed. (Washington, DC: American Psychological Association, 2001); be submitted in Microsoft Word format; be set in 12-point Times New Roman font, double-spaced; not include identifying information (name, professor, department) in the text, reference section, or on the title page (papers will be tracked by special software that will keep author information separate from the paper itself); be written in standard U.S. English; utilize standard scientific nomenclature; define new terms, abbreviations, acronyms, and symbols at their first occurrence; acknowledge any funding, collaborators, and mentors; not use footnotes (if footnotes are absolutely necessary to the integrity of the paper, please contact the AESR at review@gttower.org); reference all tables, figures, and references within the text of the document; adhere to the Georgia Institute of Technology Honor Code regarding plagiarism and proper referencing of sources; and keep direct quotations to an absolute minimum, paraphrasing unless a direct quote is absolutely necessary.
Deadlines
Submissions are accepted on a rolling basis throughout the year. The Tower publishes one issue per semester. Due to the review and production process, submissions must be received before the publicized deadline, which can be found at gttower.org, to be considered for a given issue. Submissions received after this deadline will be considered for the following issue. If meeting the deadline would compromise a submission's quality, authors are encouraged to further develop their work and submit it only once it is fully realized.
Eligibility
Submitters must be enrolled as undergraduate students at the Georgia Institute of Technology to be eligible for publication. Authors have up to three months after graduation to submit papers regarding research completed as an undergraduate. The principal investigator may not be included among the co-authors.
Submitting
To submit a paper, authors must register on our Online Journal System (OJS) at http://ejournals.library.gatech.edu/tower. Once the author fills out the required information and registers as an author, he or she will have access to the submission page to begin the multi-step submission process.
For more detailed submission guidelines, as well as current deadlines and news, please visit gttower.org.
Advertisements
IBEW Local 613 Electricians
Do Everything from Wiring Electrical Outlets
to Programmable Logic Controls
Your electrical system is the lifeblood of your business. If it fails, your phones don't ring, computers won't work. Your BUSINESS STOPS.
Don't take chances with your electrical system. Demand the best trained Electrical Professionals available from an IBEW Local Union 613 Electrical Contractor. IBEW Local 613 members complete a rigorous 5-year classroom and on-the-job training program. They know how to get your electrical system back up and running as quickly as possible.
Power Up. Call IBEW Local 613 for the name of Qualified Union Electrical Contractors near you.
International Brotherhood of Electrical Workers Local 613
501 Pulliam Street, SW, Suite 250, Atlanta, Georgia 30312
(404) 523-8107
www.ibew613.org
Max Mount Jr, President
Gene R. O'Kelley, Business Manager
We are looking for chemical, electrical and mechanical engineers.
For more information on employment opportunities,
log on to www.yokogawa.com/us
Yokogawa Corporation of America
800-447-9656 www.yokogawa.com/us
800-524-SERV (7378)
We know your eyes are on the future.
So, look at Yokogawa.
Merial,
a world leading
animal health company,
is a proud
contributor to
Georgia’s growth
in the
biotech sector.
We are dedicated to enhancing the health and well-being of animals.
Compliments of
Hoover Foods Inc.
www.hooverfoods.com
Hoover Foods is a franchisee of Wendy's International, Inc.
Compliments of
21 East Broad Street
Savannah, GA 31401
912.236.1865
Fax: 912.238.5524
A Family Business
for 110 Years
www.barnhardt.net
Manufacturing & Engineering Facility
1995 Air Industrial Park Road
Grenada, MS 38901
1-800-848-2270
Sales, Marketing & Customer Service
2175 West Park Place Blvd.
Stone Mountain, GA 30087
High-Efficiency Air Handlers
Transportation • Environmental • Planning
Civil • Construction • Program Management
Parsons Brinckerhoff
3340 Peachtree Rd. NE
Suite 2400, Tower Place 100
Atlanta, GA 30326
(404) 237-2115
www.pbworld.com

The World’s Leading Conveyor Belt Company
For internship and career opportunities please
E-Mail renee.speers@fennerdunlop.com
325 Gateway Drive
Lavonia, GA 30535
1-706-356-7607
Fax: 1-706-356-7657
www.fennerdunlop.com
METROPOWER, INC.
1703 Webb Drive
Norcross, GA 30093
Phone: 770-448-1076
Fax: 770-242-5800
ON SITE ELECTRICAL SERVICE
AND CONSTRUCTION
Compliments of
Piedmont Center
Aderans Research is dedicated to developing state-of-the-art
cell engineering solutions for hair loss, a pervasive condition
with extremely negative effects on the lives of millions of people.
Located in Atlanta GA and Philadelphia PA, Aderans Research
is the pioneering arm of two great companies in the hair
restoration world: Aderans Company, Ltd, the world’s largest
manufacturer of wigs; and Bosley, the largest hair transplant
company in the world.
Aderans Research
2211 Newmarket Parkway, Suite 142
Marietta, GA 30067
1-678-213-1919
www.aderansresearch.com
1391 Cobb Parkway N.
Marietta, GA 30062
770-422-7118
www.stasco-mech.com
Plumbing and piping systems
construction, engineering and design.
"Stasco people make the difference!"
3970 Johns Creek Ct
Ste 500
Suwanee, GA 30024
770-871-4500
www.fishersci.com
Albany ~ Augusta ~ Atlanta
Environmental Engineering
Water Resources Engineering
Sewer and Stormwater
Civil Site
Transportation Design
Operations and Permitting
Surveying / GIS / Mapping
Funding and Planning Assistance
www.speng.com
5036 B.U. Bowman Drive
Buford, GA 30518
Phone: 770-904-4444
Fax: 770-904-0888
Cunningham Forehand
Matthews & Moore
Architects, Inc.
2011 MANCHESTER STREET, N.E.
ATLANTA, GEORGIA 30324
404.873.2152
