
NASA / OHIO SPACE GRANT

CONSORTIUM

2011-2012 ANNUAL
STUDENT RESEARCH SYMPOSIUM
PROCEEDINGS XX


Transit of Venus, June 5, 2012

April 20, 2012
Held at the Ohio Aerospace Institute
Cleveland, Ohio








TABLE OF CONTENTS
Page(s)
Table of Contents ............................................................................................................................................ 2-7
Foreword ............................................................................................................................................................ 8
Member Institutions ........................................................................................................................................... 9
Acknowledgments ............................................................................................................................................ 10
Agenda ........................................................................................................................................................ 11-14
Group Photograph ............................................................................................................................................ 15
Other Symposium Photographs .................................................................................................................. 16-25
Student Name College/University Page(s)

Adams, Christopher A. ............................... Wilberforce University ....................................................... 26-27
Automated Automobile System

Ash, Stephanie D. ...................................... Ohio Northern University..................................................... 28-31
Low Energy Electron Diffraction Structural Analysis of Au(111)-(5x5)-7S

Balderson, Aaron M. ........................................ Marietta College ............................................................ 32-35
Gas Invasion into Freshwater Aquifers in Susquehanna County, Pennsylvania

Barga, Alena M. ......................................... The University of Toledo ..................................................... 36-38
Thermoelectric Solution for Temporomandibular Joint Pain

Bendula, Laura M. ........................................... Miami University ........................................................... 39-43
Increasing Wind Turbine Efficiency

Bennett, Heather M. ................................... University of Cincinnati ...................................................... 44-45
What a Drag: Parachute Design

Bishop, Cory M. ......................................... The University of Toledo ..................................................... 46-47
Gravity: Helping Children Understand What It Is and What Its Effects Are

Black, Winston L., II ..................................... University of Dayton ........................................................ 48-49
Surfactant Drag Reducing Agents in a Mock Aviation Loop

Boothe, Matthew C. ......................................... Marietta College ............................................................ 50-51
Fayetteville Shale Positive Displacement Motor Optimization

Bourne, Harrison W. ........................................ Miami University ........................................................... 52-53
Ionosphere Induced Error Self-Correction for Single Frequency GPS Receivers

Bradford, Robyn L. ....................................... University of Dayton ........................................................ 54-60
Processing CNT-Doped Polymer Fibers Using Various Spinning Techniques

Brant, Garrick M. .................................. Youngstown State University .................................................. 61-65
Detection of NO2 Using Carbon Nanotubes

Brinson, Tanisha M. ................................... Wilberforce University ....................................................... 66-67
How Is the United States of America Being Protected by the United States Air Force With
Computer Systems Programming?

Brooks, Chellvie L. ...................................... Central State University ...................................................... 68-69
Atmospheric Temperature Analysis and Aerial Photography through High Altitude Ballooning

Bryant, Rachel L. ........................................ Wright State University ...................................................... 70-73
Wireless Charging System

Burse-Wooten, Beatrice M. ........................ Central State University ...................................................... 74-76
Process Design of Reinforced Polymeric Resins Impregnated With Multi-Walled Carbon Nanotubes

Carlson, Audra D. .................................. Youngstown State University .................................................. 77-78
Creating a Lunar Biosphere

Charvat, Robert C. ..................................... University of Cincinnati ...................................................... 79-81
The Sierra Project: Unmanned Aerial Systems for Emergency Management Demonstration Results

Conley, Darren M. .............................Columbus State Community College ............................................ 82-83
Biodegradable vs. Non-Biodegradable Slurry in Deep Trench Excavations

Cotto-Figueroa, Desireé ..................................... Ohio University ............................................ 84-90
Radiation Recoil Effects on the Dynamical Evolution of Asteroids

Davidoff, Robert W. ...................................... University of Dayton ........................................................ 91-94
Wind Tunnel Blockage Corrections for Flapping Wing Models

DiBenedetto, Joseph M. .................................... Ohio University ............................................................ 95-97
Commercial UAV Autopilot Testing and Test Bed Development

Edwards, Kristen D. ................................... Central State University ...................................................... 98-99
Nutritional Disparity Among African Americans

Endicott, Derick S. .................................... Ohio Northern University................................................. 100-105
Computational Study of Cylinder Heating/Cooling for Boundary Layer Control

Foster, Daniel R. E. ................................... The Ohio State University ................................................ 106-114
Elastic Constants of Ultrasonic Additive Manufactured Al 3003-H18

Furman, Brea R. ........................................ University of Cincinnati ................................................. 115-118
Microgravity on the ISS

Garasky, Martha E. ......................................... Miami University ....................................................... 119-121
Summary Web Service

Garland, Aubrey A. ................................ Youngstown State University .............................................. 122-124
Analysis of a Passive Adaptive Hydrodynamic Seal

Gerlach, Adam R. ....................................... University of Cincinnati .................................................. 125-131
Approximate Inverse Dynamics for Dynamically Constrained Path-Planning

Gill, Anna J. ...................................................... Marietta College ........................................................ 132-135
Spectroscopy: Explorations with Light


Gowker, Benita I. ......................................... Wright State University .................................................. 136-138
Where's Waldo? A Design Paradigm in Pattern Recognition, Machine Learning and Image Processing

Gras, Courtney A. ...................................... The University of Akron ................................................. 139-141
Battery Management for a Lunar Rover

Guzman, Nicole D. .................................... The Ohio State University ................................................ 142-153
Characterization and miRNA Analysis of Breast Cancer Cell-Secreted Microvesicles

Hall, Pierre A. ............................................. The University of Akron ................................................. 154-157
Battery Technology

Haraway, Malcolm X. ................................. Wilberforce University ................................................... 158-159
Design Analysis and Modeling of Hazardous Waste Containment Vessels in Support of Safe Nuclear
Waste Transport and Storage

Hatcher, Kevin M. ....................................... Wright State University .................................................. 160-161
Neck and Head Injury Criteria and How It Relates to Pilot Ejection

Horrigan, Emily S. ................................... Cleveland State University ................................................ 162-163
Can Another Planet Support the Human Lifestyle?

Houck, James P. ............................................... Marietta College ........................................................ 164-165
Diversity of Shale

Iliff, Christopher J. ................................... Ohio Northern University................................................. 166-167
Feeling Down? The Effect of Gravity on Our Weight

Issa, Hadil R. ................................................. University of Dayton .................................................... 168-171
Interaction Between Osteoblasts Cells and Fuzzy Fibers for Possible Medical Use

Jennings, Alan L. ........................................... University of Dayton .................................................... 172-177
Optimal Inverse Functions Created via Population Based Optimization

Johnson, Candace A. ................................... Central State University ......................................................... 178
Medicinal Plants and Bacteria Growth

Johnson, Phillip E. ................................ Cuyahoga Community College ............................................ 179-180
Harmful Effects of 35mm Film vs. Digital Formats

Jones, Nicholas S. ....................................... Ohio Northern University................................................. 181-182
The Effects of CG Shifting During Flight

Kakish, Carmen Z. .............................. Case Western Reserve University .......................................... 183-186
Oncology Therapeutics: Hyperthermia Using Self-Heating Nanoparticles

Karnes, Michael P. ........................................... Miami University ....................................................... 187-194
Design and Testing of an Adaptive Magnetorheological Elastomers Vibration Isolation System

Kemp, Evan R. .............................................. University of Dayton .................................................... 195-198
Doping of Phthalocyanines for Use As Low Temperature Thermoelectric Materials


Kendall, Isiah A. .......................................... Wright State University .................................................. 199-202
NFL Concussions: The Effect on the Brain

Kingen, Logan M. ..................................... Ohio Northern University................................................. 203-204
Airfoil Investigation

Kirievich, Krista M. .................................... University of Cincinnati .................................................. 205-216
Wind Turbine Design Studies

Kish, Charles D. ..................................... Youngstown State University .............................................. 217-220
Newton's Law in Action

Knapke, Robert D. ...................................... University of Cincinnati .................................................. 221-224
Harmonic Balance and Conjugate Heat Transfer Methods for Unsteady Turbomachinery Simulations

Lai, Stephen E. ................................................. Miami University ....................................................... 225-235
Experimental Framework for Analysis of Curved Structures Under Random Excitation Effect of
Boundary Conditions and Geometric Imperfections

Lyon, Jennifer J. ........................................... Cedarville University .................................................... 236-237
Mathematics of Rockets

Masters, Jennifer E. ......................................... Marietta College ........................................................ 238-239
Analytical Modeling for Waterfloods in the Appalachian Basin

Mayhew, Eric K. ................................. Case Western Reserve University .......................................... 240-243
Thermal Conductivities of Heat-Treated and Non-Heat-Treated Carbon Nanofibers

Miller, Caroline I. ............................................ Miami University ....................................................... 244-245
The Planets and Me: An Exploration Webquest

Mitchener, Michelle M. ................................ Cedarville University .................................................... 246-249
Regulation of Genes by ETS Family Transcription Factors

Montion, Joseph P. ...................................... The University of Toledo ................................................. 250-251
Synthesis of Polymeric Ionic Liquid Particles and Iron Nano-Particles

Morris, Nathaniel J. .................................... Central State University .................................................. 252-255
The Study of Wildfires Using Remote Sensing

Munguia, Gerald A. ................................ Sinclair Community College .............................................. 256-258
How Ailerons Generate a Rolling Motion for an Aircraft

Nichols, Justin S. ........................................... Cedarville University ................................................... 259-260
ACTH Signaling in Tetrahymena thermophila

Oty, Leah M. .......................................Columbus State Community College ............................................... 261
Green Roofing vs. Traditional Roofing

Reilly, Kathryn R. ....................................... Wright State University .................................................. 262-263
NASA Weightless Wonder: C-9 Jet and Equations of Motion


Rischar, Stephanie A. ............................... Cleveland State University ................................................ 264-265
The Electromagnetic Spectrum: Exploring the Chemical Composition of Our Solar System

Roberts, Dominique N. ............................... Central State University ......................................................... 266
Investigation and Design of a Powered Parafoil and Its Applications in Remote Sensing

Sadey, David J. ......................................... Cleveland State University ................................................ 267-270
Artificial Electrocardiogram Parameter Identification and Generation

Scheidegger, Carr D. .............................. Cleveland State University ................................................ 271-274
Electrocardiogram Parameter Identification for Diagnosing Heart Arrhythmias

Seitz, Ciara C. ........................................... Cleveland State University ................................................ 275-278
Thermally Responsive Elastin-like Polypeptides

Sink, Matthan B. ......................................... Wright State University .................................................. 279-281
Semantic Framework for Data Collection and Query for Knowledge Discovery

Sinko, Robert A. ............................................... Miami University ....................................................... 282-285
Sensing and Energy Harvesting Capabilities of Hard Magnetorheological Elastomers (H-MRE)

Sollmann, Leslie A. ....................................... University of Dayton .................................................... 286-288
Bleed Hole Location, Sizing, and Configuration for Use in Hypersonic Inlets

Solomon, Steven E. ..................................... The University of Toledo ................................................. 289-290
Geology of the Moon

Stahl, Brian J. ...................................... Case Western Reserve University .......................................... 291-296
Thermal Stability and Performance of Foil Thrust Bearings

Studmire, Brittany M. M. ........................ Cleveland State University ................................................ 297-300
Optimization of Scenedesmus dimorphus in an Open Pond System

Szczecinski, Nicholas S. ...................... Case Western Reserve University .......................................... 301-304
Neural Controller for Legged Robots: System Behavior Characterization via the Permutation Engine

Taugir, Usaaman ........................................ The University of Akron ................................................. 305-309
Developing Traffic Data Collection Software with Multi-Touch Technology

Tillie, Charles F. ....................................... Cleveland State University ................................................ 310-311
Characterization of Thin Film Deposition Processes

Tocchi, Zachary M. .................................... The University of Akron ................................................. 312-313
Rocket Power

Vick, Tyler J. ............................................... University of Cincinnati .................................................. 314-318
Flapping Flight Micro Air Vehicles

Walker, Alex R. ........................................... University of Cincinnati .................................................. 319-321
Fuzzy Control of Two Two-Degree-of-Freedom Systems


Watson, Erkai L. ........................................... Cedarville University .................................................... 322-324
Model Stirling Engine Manufacturing Project

Webster, Victoria A. ........................... Case Western Reserve University .......................................... 325-328
An Analog Robotic Controller Using Biologically Inspired Neuron and Motor Voxels

White, Marcia J. .......................................... Wright State University .................................................. 329-330
In The Middle: Earth's Weather Correlation with Venus and Mars

Williams, Mahogany M. ............................. Wilberforce University ................................................... 331-333
Prediction of the Potential Classification of Solar Flares on Earth Using Fuzzy Logic Application

Williams, Michael D. ................................... Wilberforce University ................................................... 334-335
Configuration of Field-Programmable Gate Array Integrated Circuits

Wo, Chung Y. ...................................... Case Western Reserve University .......................................... 336-341
Wind Measurement and Data Analysis Applied to Urban Wind Farming

Wukie, Nathan A. ........................................ University of Cincinnati .................................................. 342-345
Comparison of Simulations and Models for Aspiration in a Supersonic Flow Using Overflow

Yeh, Benjamin D. .......................................... Cedarville University .................................................... 346-348
Stability of the Uncemented Hip Stem in THA

Zook, Caitlin M. ........................................ Ohio Northern University................................................. 349-350
Applications of Ellipses




These Proceedings are dedicated to the memory of Dr. Gerald T. Noel,
Sr., who passed away suddenly on April 1, 2012. Dr. Noel served as the
Associate Director of the Ohio Space Grant Consortium (OSGC) and also
as the Campus Representative from Central State University. We thank
you for your countless contributions to STEM research, education, and
especially to our students!




Proceedings may be cited as:
Ohio Space Grant Consortium (April 20, 2012) 2011-2012 Annual Student Research Symposium
Proceedings XX. Ohio Aerospace Institute, Cleveland, Ohio.

Articles may be cited as:
Author (2012) Title of Article. 2011-2012 Ohio Space Grant Consortium Annual Student
Research Symposium Proceedings XX, pp. xx-xx. Ohio Aerospace Institute, Cleveland, Ohio,
April 20, 2012.
FOREWORD

The Ohio Space Grant Consortium (OSGC), a member of the NASA National Space Grant
College and Fellowship Program, awards graduate fellowships and undergraduate scholarships to
students working toward degrees in Science, Technology, Engineering and Mathematics
(STEM) disciplines at OSGC-member universities. The awards are made to United States
citizens, and the students are competitively selected. Since the inception of the program in 1989,
847 undergraduate scholarships and 148 graduate fellowships have been awarded.

Matching funds are provided by the member universities, the Ohio Aerospace Institute (OAI),
and private industry. Note that this year approximately $600,000 will be directed to scholarships
and fellowships representing contributions from NASA, the Ohio Aerospace Institute, member
universities, and industry.

By helping more students to graduate with STEM-related degrees, OSGC provides more
qualified technical employees to industry. At the Doctoral level, students have a government
co-advisor in addition to their faculty mentor, and perform research at one of the following Ohio
federal laboratories: NASA Glenn Research Center or the Air Force Research Laboratory at
Wright-Patterson Air Force Base. The research conducted for the Master's and Doctoral degrees
must be of interest to NASA. A prime aspect of the scholarship program is the undergraduate
research project that the student performs under the mentorship of a faculty member. This
research experience is effective in encouraging U.S. undergraduate students to attend graduate
school in STEM. The Education scholarship recipients are required to attend a workshop
conducted by NASA personnel where they are exposed to NASA educational materials for use in
their future classrooms.

On Friday, April 20, 2012, all OSGC Scholars and Fellows reported on these projects at the
Twentieth Annual Student Research Project Symposium held at the Ohio Aerospace Institute in
Cleveland, Ohio. In seven different sessions, Fellows and Senior Scholars offered 15-minute
oral presentations on their research projects and fielded questions from an audience of their peers
and faculty, and received written critiques from a panel of evaluators. Junior, Community
College, and Education Scholars presented posters of their research and entertained questions
from all attendees during the afternoon poster session. All students were awarded Certificates of
Recognition for participating in the annual event.

Research reports of students from the following schools are contained in this publication:

Affiliate Members:
The University of Akron
Case Western Reserve University
Cedarville University
Central State University
Cleveland State University
University of Dayton
Miami University
Ohio Northern University
The Ohio State University
Ohio University
University of Cincinnati
The University of Toledo
Wilberforce University
Wright State University

Participating Universities:
Marietta College
Youngstown State University

Community Colleges:
Columbus State Community College
Cuyahoga Community College
Sinclair Community College
MEMBER INSTITUTIONS
Ohio Space Grant Consortium
Director ..............................................................................................................Dr. Gary L. Slater
Associate Director ...................................................................................... Dr. Gerald T. Noel, Sr.
Program Manager ......................................................................................... Ms. Laura A. Stacko
Program Assistant ............................................................................................Ms. Arela B. Leidy
Lead Institution Representative
Ohio Aerospace Institute ................................................................. Ms. Ann O. Heyward
Affiliate Members Campus Representative
Air Force Institute of Technology .................................................. Dr. Jonathan T. Black
Case Western Reserve University ........................................ Dr. Jaikrishnan R. Kadambi
Cedarville University ......................................................................... Dr. Robert Chasnov
Central State University ................................................................ Dr. Gerald T. Noel, Sr.
Cleveland State University ................................................. Dr. Pamela C. Charity-Leeke
Miami University ................................................................................... Dr. Tim Cameron
Ohio Northern University .................................................................. Dr. Jed E. Marquart
Ohio University ............................................................................. Dr. Shawn Ostermann
The Ohio State University .................................................................. Dr. Füsun Özgüner
The University of Akron .............................................................. Dr. Craig C. Menzemer
University of Cincinnati .......................................................................... Dr. Kelly Cohen
University of Dayton ........................................................................... Dr. John G. Weber
The University of Toledo................................................................ Dr. Lesley M. Berhan
Wilberforce University ................................................................. Dr. Edward A. Asikele
Wright State University .................................................................. Dr. P. Ruby Mawasha
Participating Institutions Campus Representative
Marietta College .......................................................................... Prof. Ben W. Ebenhack
Youngstown State University .................................................................. Dr. Hazel Marie
Community Colleges Campus Representative
Columbus State Community College ...................................... Prof. Jeffery M. Woodson
Cuyahoga Community College............................................... Dr. Donna Moore-Ramsey
Lakeland Community College ..................................................... Dr. Margaret F. Bartow
Lorain County Community College ........................................ Dr. George Pillainayagam
Owens Community College ......................................................................Dr. Renay Scott
Terra Community College ................................................................. Dr. James Bighouse
Government Liaisons Representatives
NASA Glenn Research Center:
- Ms. Dovie E. Lacy
- Ms. Darla J. Jones
- Dr. M. David Kankam
- Ms. Susan M. Kohler
Wright-Patterson Air Force Base Research:
- Mr. Wayne A. Donaldson
Wright-Patterson Air Force Base Education:
- Ms. Kathleen Schweinfurth
- Ms. Kathleen A. Levine

ACKNOWLEDGMENTS
Thank you to all who helped with the OSGC's Twentieth Annual Research Symposium!
Ohio Aerospace Institute Evaluators
Mark Cline
Matthew Grove
John M. Hale
Craig Hamilton
Michael L. Heil
Ann O. Heyward
Deborah Kalchoff
Gary R. Leidy
Joshua Allen
Edward Asikele
Eric Bobinsky
Karin Bodnar
Liangyu Chen
Kelly Cohen
Danitra Donatelli
Ben Ebenhack
Patricia Grospiron
Hazel Marie
Ashlie McVetta
Cathy Mowrer
Jay N. Reynolds
Aaron Rood
Brian Tomko
Andrew Trunek
George Williams
Campus Representatives
Affiliate Members:
Dr. Jonathan T. Black, Air Force Institute of Technology
Dr. Jaikrishnan R. Kadambi, Case Western Reserve University
Dr. Robert Chasnov, Cedarville University
Dr. Gerald T. Noel, Sr., Central State University
Dr. Pamela C. Charity-Leeke, Cleveland State University
Dr. Tim Cameron, Miami University
Dr. Jed E. Marquart, Ohio Northern University
Dr. Shawn Ostermann, Ohio University
Dr. Füsun Özgüner, The Ohio State University
Dr. Craig C. Menzemer, The University of Akron
Dr. Gary L. Slater, University of Cincinnati, Director, Ohio Space Grant Consortium
Dr. John G. Weber, University of Dayton
Dr. Lesley M. Berhan, The University of Toledo
Dr. Edward A. Asikele, Wilberforce University
Dr. P. Ruby Mawasha, Wright State University
Participating Institutions
Prof. Ben W. Ebenhack, Marietta College
Dr. Hazel Marie, Youngstown State University
Community Colleges
Prof. Jeffery M. Woodson, Columbus State Community College
Dr. Donna Moore-Ramsey, Cuyahoga Community College
Dr. Margaret F. Bartow, Lakeland Community College
Dr. George Pillainayagam, Lorain County Community College
Dr. Renay Scott, Owens Community College
Dr. James Bighouse, Terra Community College
Special thanks go out to the following individuals:
Michael L. Heil and the Ohio Aerospace Institute for hosting the event.
Ann O. Heyward, Ohio Aerospace Institute, for all of her contributions to the OSGC.
Jay N. Reynolds, Cleveland State University, for organizing the Poster Session.
Ohio Aerospace Institute staff whose assistance made the event a huge success!
Silver Service Catering (Scot and Mary Lynne)
Sharon Mitchell Photography
NASA on celebrating 50 years!



2012 OSGC Student Research Symposium
Hosted By: Ohio Aerospace Institute (OAI)
22800 Cedar Point Road • Cleveland, OH 44142 • (440) 962-3000
Friday, April 20, 2012

AGENDA




8:00 AM – 8:30 AM Sign-In / Breakfast / Refreshments / Student Portraits .......................................... Lobby

8:30 AM – 8:40 AM Welcome and Introductions – Gary L. Slater ............................... Forum (Lobby Level)
Director, Ohio Space Grant Consortium

8:40 AM – 8:45 AM Ann O. Heyward ........................................................................... Forum (Lobby Level)
Vice President for Research and Educational Programs, Ohio Aerospace Institute

8:45 AM – 10:30 AM Oral Presentations – All Senior Scholars and Fellows (105 minutes)
Session 1 (Groups 1, 2, 3, and 4)
Group 1 ...................................................................................... Forum (Lobby Level)
Group 2 ..................................................................... Presidents' Room (Lower Level)
Group 3 .......................................................................... Industry Room A (2nd Floor)
Group 4 .......................................................................... Industry Room B (2nd Floor)

10:30 AM – Noon Poster Presentations (90 minutes) ......................................................................... Lobby
All Junior, Community College, and Education Scholars
Jay N. Reynolds, Cleveland State University, Coordinator of Poster Session

12:05 PM – 1:00 PM Luncheon Buffet ......................................................... Atrium / Sunroom (Lower Level)

1:00 PM – 1:30 PM Luncheon Speaker – Gary L. Slater .................................................................. Sunroom

1:30 PM Presentation of Best Poster Awards .................................................................. Sunroom

1:35 PM Group Photograph ................................................................... Lobby / Atrium Stairwell

1:45 PM – 3:30 PM Oral Presentations (Continued) – All Senior Scholars and Fellows (105 minutes)
Session 2 (Groups 5, 6, and 7)
Group 5 ...................................................................................... Forum (Lobby Level)
Group 6 ..................................................................... Presidents' Room (Lower Level)
Group 7 .......................................................................... Industry Room A (2nd Floor)

3:30 PM Symposium Adjourns







STUDENT ORAL PRESENTATIONS
SESSION 1 – 8:45 AM to 10:30 AM (105 minutes)

Group 1 – Aerospace Engineering
FORUM (AUDITORIUM – LOBBY LEVEL)
Evaluators: George Williams

8:45 Krista M. Kirievich, Senior, UCincinnati
9:00 Nathan A. Wukie, Senior, UCincinnati
9:15 Brian J. Stahl, Masters 1, Case Western
9:30 Robert C. Charvat, Masters 2, UCincinnati
9:45 Robert D. Knapke, Doctoral 1, UCincinnati
10:00 Adam R. Gerlach, Doctoral 2, UCincinnati

Group 2 – Mechanical Engineering
PRESIDENTS' ROOM (LOWER LEVEL)
Evaluators: Hazel Marie, Ashlie McVetta

8:45 Laura M. Bendula, Senior, Miami University
9:00 Robert W. Davidoff, Senior, UDayton
9:15 Derick S. Endicott, Senior, Ohio Northern
9:30 Stephen E. Lai, Senior, Miami University
9:45 Nicholas S. Szczecinski, Senior, Case Western
10:00 Victoria A. Webster, Senior, Case Western
10:15 Benjamin D. Yeh, Senior, Cedarville University

Group 3 – Computer Science / Computer Engineering
INDUSTRY ROOM A (2ND FLOOR)
Evaluators: Joshua Allen, Edward Asikele, Brian Tomko

Computer Science:
8:45 Tanisha M. Brinson, Senior, Wilberforce
9:00 Kristen D. Edwards, Senior, Central State
9:15 Matthan B. Sink, Senior, Wright State

Computer Engineering:
9:30 Martha E. Garasky, Senior, Miami University
9:45 Malcolm X. Haraway, Senior, Wilberforce
10:00 Michael D. Williams, Senior, Wilberforce

Group 4 – Biomedical Eng. / Chemical Eng. / Chemical & Biomol. Eng. / Chemical & Biomedical Eng.
INDUSTRY ROOM B (2ND FLOOR)
Evaluators: Ben Ebenhack, Aaron Rood

Biomedical Engineering:
8:45 Isiah A. Kendall, Senior, Wright State

Chemical Engineering:
9:00 Hadil R. Issa, Senior, University of Dayton
9:15 Evan R. Kemp, Senior, University of Dayton
9:30 Ciara C. Seitz, Senior, Cleveland State
9:45 Charles F. Tillie, Senior, Cleveland State

Chemical and Biomedical Engineering:
10:00 Brittany M. M. Studmire, Masters 1, Cleve. State

Chemical and Biomolecular Engineering:
10:15 Nicole D. Guzman, Doctoral 2, Ohio State






STUDENT POSTER PRESENTATIONS
10:30 AM to Noon (90 minutes)
Jay N. Reynolds, Cleveland State University, Coordinator of Poster Session
Evaluators: Eric Bobinsky; Karin Bodnar, NASA Glenn Research Center; Liangyu Chen, Ohio Aerospace Institute;
Kelly Cohen, University of Cincinnati; Danitra Donatelli; Patricia Grospiron, Ohio Aerospace Institute;
Cathy Mowrer, Marietta College; Andrew Trunek, Ohio Aerospace Institute
Junior Scholarship Recipients:
Christopher A. Adams, Junior, Electrical Engineering, Wilberforce University
Winston L. Black, II, Junior, Chemical Engineering, University of Dayton
Matthew C. Boothe, Junior, Petroleum Engineering, Marietta College
Harrison W. Bourne, Junior, Electrical Engineering, Miami University
Garrick M. Brant, Junior, Chemical Engineering, Youngstown State University
Chellvie L. Brooks, Junior, Manufacturing Engineering, Central State University
Beatrice M. Burse-Wooten, Junior, Manufacturing Engineering, Central State University
Aubrey A. Garland, Junior, Mechanical Engineering, Youngstown State University
Courtney A. Gras, Junior, Electrical Engineering, The University of Akron
Kevin M. Hatcher, Junior, Biomedical Engineering, Wright State University
James P. Houck, Junior, Petroleum Engineering, Marietta College
Jasmine N. Irvin, Senior, Computer Engineering, Wilberforce University
Nicholas S. Jones, Junior, Mechanical Engineering, Ohio Northern University
Carmen Z. Kakish, Junior, Biomedical Engineering, Case Western Reserve University
Michael P. Karnes, Junior, Mechanical Engineering, Miami University
Logan M. Kingen, Junior, Mechanical Engineering, Ohio Northern University
Joseph P. Montion, Junior, Chemical Engineering, The University of Toledo
Justin S. Nichols, Junior, Cedarville University
Dominique N. Roberts, Junior, Manufacturing Engineering, Central State University
Leslie A. Sollmann, Junior, Mechanical Engineering, University of Dayton
Tyler J. Vick, Junior, Aerospace Engineering, University of Cincinnati
Alex R. Walker, Junior, Aerospace Engineering, University of Cincinnati
Erkai L. Watson, Junior, Mechanical Engineering, Cedarville University
Mahogany M. Williams, Junior, Computer Engineering, Wilberforce University
Community College Scholarship Recipients:
Darren M. Conley, Sophomore, Construction Management, Columbus State Community College
Phillip E. Johnson, Sophomore, Visual Communications and Graphic Design, Cuyahoga Community College
Gerald A. Munguia, Sophomore, Aviation Technology, Sinclair Community College
Leah M. Oty, Sophomore, Construction Engineering Technology, Columbus State Community College
Education Scholarship Recipients:
Heather M. Bennett, Senior, Middle Childhood Education, Science/Mathematics, University of Cincinnati
Cory M. Bishop, Graduate Student, Middle Childhood Education, Science, The University of Toledo
Audra D. Carlson, Junior, Middle Childhood Education, Science/Mathematics, Youngstown State University
Brea R. Furman, Senior, Early Childhood Education, University of Cincinnati
Anna J. Gill, Senior, Middle Childhood Education, Science, Marietta College
Emily S. Horrigan, Senior, AYA Education, Science, Cleveland State University
Christopher J. Iliff, Senior, AYA Education, Mathematics, Ohio Northern University
Charles D. Kish, Senior, AYA Education, Integrated Science, Youngstown State University
Jennifer J. Lyon, Junior, AYA Education, Mathematics, Cedarville University
Caroline I. Miller, Sophomore, Early Childhood Education, Miami University
Kathryn R. Reilly, Junior, Grades 6-12, Physics, Wright State University
Stephanie A. Rischar, Post-Bacc., AYA Education, Science, Cleveland State University
Steven E. Solomon, Junior, AYA Education, Science, The University of Toledo
Zachary M. Tocchi, Senior, AYA Education, Mathematics, The University of Akron
Marcia J. White, Senior, Middle Childhood Education, Science/Mathematics, Wright State University
Caitlin M. Zook, Senior, AYA Education, Mathematics, Ohio Northern University

STUDENT ORAL PRESENTATIONS (Cont.)

SESSION 2 – 1:45 PM to 3:30 PM (105 minutes)

Group 5 – Electrical Engineering
FORUM (AUDITORIUM – LOBBY LEVEL)
Evaluators: Karin Bodnar, Liangyu Chen

1:45 Rachel L. Bryant, Senior, Wright State
2:00 Joseph M. DiBenedetto, Senior, Ohio U
2:15 Benita I. Gowker, Wright State University
2:30 Pierre A. Hall, Senior, University of Akron
2:45 David J. Sadey, Senior, Cleveland State
3:00 Carr D. Scheidegger, Senior, Cleveland State
3:15 Alan L. Jennings, Doctoral 3, UDayton

Group 6 – Bioengineering / Biology / Chemistry / Physics / Petroleum Eng. / Astrophysics
PRESIDENTS' ROOM (LOWER LEVEL)
Evaluators: Jay Reynolds

Bioengineering:
1:45 Alena M. Barga, Senior, University of Toledo
Biology:
2:00 Candace A. Johnson, Senior, Central State
Chemistry:
2:15 Michelle M. Mitchener, Senior, Cedarville
Physics:
2:30 Stephanie D. Ash, Senior, Ohio Northern
Petroleum Engineering:
2:45 Aaron M. Balderson, Senior, Marietta College
3:00 Jennifer E. Masters, Senior, Marietta College
Astrophysics/Physics/Astronomy:
3:15 Desireé Cotto-Figueroa, Doctoral 1, Ohio U

Group 7 – Manufacturing Engineering / Mechanical Engineering /
Materials Engineering / Welding Engineering

INDUSTRY ROOM A (2ND FLOOR)

Evaluators: Brian Tomko

Manufacturing Engineering:
1:45 Nathaniel J. Morris, Senior, Central State

Mechanical Engineering:
2:00 Eric K. Mayhew, Senior, Case Western
2:15 Matthew G. Smith, Senior, Ohio Northern
2:30 Chung Y. (C. Y.) Wo, Senior, Case Western

Materials Engineering:
2:45 Robyn L. Bradford, Masters 1, UDayton

Welding Engineering:
3:00 Daniel R. E. Foster, Doctoral 3, Ohio State


Welcome Session
OSGC Director Dr. Gary L. Slater
(University of Cincinnati) welcomes everyone to
the Symposium!
Ann O. Heyward, Vice President for Research and
Educational Programs, Ohio Aerospace Institute,
addresses the audience.
Robert C. Charvat (University of Cincinnati) presents
The Sierra Project: Unmanned Aerial Systems for
Emergency Management Demonstration Results.
Adam R. Gerlach (University of Cincinnati) explains
Approximate Inverse Dynamics for Dynamically
Constrained Path-Planning.
Brian J. Stahl (Case Western Reserve University)
discusses Thermal Stability and Performance of Foil
Thrust Bearings.
Martha E. Garasky (Miami University)
explains Summary Web Service.
Derick S. Endicott (Ohio Northern University)
presents Computational Study of Cylinder
Heating/Cooling for Boundary Layer Control.
Nathan A. Wukie (University of Cincinnati) discusses
Comparison of Simulations and Models for Aspiration
in a Supersonic Flow Using Overflow.
Hadil R. Issa (University of Dayton) presents
Interaction Between Osteoblasts Cells and Fuzzy
Fibers for Possible Medical Use.
Evan R. Kemp (University of Dayton) presents
Doping of Phthalocyanines for Use As Low
Temperature Thermoelectric Materials.
Malcolm X. Haraway (Wilberforce University)
discusses Design Analysis and Modeling of
Hazardous Waste Containment Vessels in
Support of Safe Nuclear Waste Transport
and Storage.
Matthan B. Sink (Wright State University) explains
Semantic Framework for Data Collection and
Query for Knowledge Discovery.
Alena M. Barga (The University of Toledo) presents
Thermoelectric Solution for Temporomandibular
Joint Pain.
Stephanie D. Ash (Ohio Northern University) presents
Low Energy Electron Diffraction Structural Analysis of
Au(111)-(5x5)-7S.
Desireé Cotto-Figueroa (Ohio University) discusses Radiation
Recoil Effects on the Dynamical Evolution of Asteroids.
Benita I. Gowker (Wright State University) explains Where's
Waldo? A Design Paradigm in Pattern Recognition, Machine
Learning and Image Processing.
Pierre A. Hall (The University of Akron)
discusses Battery Technology.
Alan L. Jennings (University of Dayton)
presents Optimal Inverse Functions Created
via Population Based Optimization.
Rachel L. Bryant (Wright State University)
explains Wireless Charging System.
Robyn L. Bradford (University of Dayton)
discusses Processing CNT-Doped Polymer
Fibers Using Various Spinning Techniques.
Daniel R. E. Foster (The Ohio State University)
presents Elastic Constants of Ultrasonic Additive
Manufactured Al 3003-H18.
Cory M. Bishop (The University of Toledo)
presents Gravity: Helping Children
Understand What It Is and What Its
Effects Are.
Joseph P. Montion (The University of Toledo)
explains Synthesis of Polymeric Ionic Liquid
Particles and Iron Nano-Particles.
From left to right: Nathaniel J. Morris and
Dr. Augustus Morris, Jr.
(Central State University)
Christopher A. Adams (Wilberforce University)
discusses Automated Automobile System.
Heather M. Bennett (University of Cincinnati)
presents What a Drag: Parachute Design.
Garrick M. Brant (Youngstown State University)
discusses his research Detection of NO2 Using
Carbon Nanotubes with fellow Scholar
Hadil R. Issa (University of Dayton).
Brea R. Furman (University of Cincinnati)
presents Microgravity on the ISS.
Anna J. Gill (Marietta College) presents
Spectroscopy: Explorations With Light.
Nicholas S. Jones (Ohio Northern University)
presents The Effects of CG Shifting During Flight.
Carmen Z. Kakish (Case Western Reserve
University) discusses Oncology Therapeutics:
Hyperthermia Using Self-Heating Nanoparticles.
From left to right: James P. Houck and
Matthew C. Boothe (Marietta College)
Steven E. Solomon (The University of Toledo)
discusses his poster on Geology of the Moon.
Columbus State Community College Faculty and
Scholars (from left to right):
Darren M. Conley, Professor Dean Bortz,
and Leah M. Oty.
Robyn L. Bradford (Fellow,
University of Dayton) and
Gorgui Nado
(Central State University)
Marietta College Faculty and Scholars:
Back Row: Aaron M. Balderson and Matthew C. Boothe;
Middle Row: Prof. Ben Ebenhack, Anna J. Gill, and
James P. Houck; Front Row: Jennifer E. Masters
and Dr. Cathy Mowrer.
Dr. Jay N. Reynolds (Cleveland State University)
Dr. Gary L. Slater and Robert W. Davidoff
(University of Dayton)
Everyone enjoying the Poster Session!
From left to right: Dr. Michael DiBenedetto
(Ohio University)
and Dr. Gary L. Slater.
Aubrey A. Garland (Youngstown State University)
discusses Analysis of a Passive Adaptive
Hydrodynamic Seal with Dr. Kelly Cohen,
Campus Representative,
University of Cincinnati.
Beatrice M. Burse-Wooten (left) (Central State
University) discusses Process Design of Reinforced
Polymeric Resins Impregnated With Multi-Walled
Carbon Nanotubes with Patricia Grospiron (right)
(Ohio Aerospace Institute).
Joseph P. Montion (The University of Toledo)
discusses his research Synthesis of Polymeric
Ionic Liquid Particles and Iron Nano-Particles
with Nicole D. Guzman
(Fellow, The Ohio State University).
University of Dayton Fellow Alan L. Jennings
(left) and Dr. John G. Weber (right)
(Campus Representative from the University
of Dayton).
Dr. Gary L. Slater (far right) posing with the Poster Contest winners. From left to right:
Leah M. Oty (Columbus State Community College), Erkai L. Watson (Cedarville University),
and Anna J. Gill (Marietta College).
Erkai L. Watson receives his Certificate of
Recognition from Dr. Gary L. Slater
as Dr. Jay N. Reynolds looks on.
Anna J. Gill accepts her Certificate of Recognition
as Dr. Gary Slater and Dr. Jay N. Reynolds look on.
Leah M. Oty receives her Certificate of Recognition
from Dr. Jay N. Reynolds as Dr. Gary L. Slater looks on.
Automated Automobile System

Student Researcher: Christopher A. Adams

Advisor: Dr. Edward Asikele

Wilberforce University
Department of Electrical Engineering

When you are driving a vehicle, many adjustments come with being road ready in all driving
conditions. When it starts to rain you must turn on your windshield wipers to reduce visual
impairment, and when it becomes dark you must turn on your headlights for the same reason. These
adjustments can be a distraction, taking your eyes and attention away from the road while you adapt the
vehicle to the atmospheric conditions. The driver should be able to focus on the roadway at all times,
without having to adjust the automobile for changing conditions.

With this problem in mind, my goal is to find a solution that will be beneficial to auto owners. I have
researched sensors that are used to detect rain levels and came across the RG-11 optical rain gauge
sensor, created by Hydreon Corporation. This sensor detects the amount of rain in the atmosphere by
using several infrared beams that project across a rounded surface. The way the beams detect rain is
very simple: the infrared light is projected onto the surface at a series of four different angles. You can
see this in the picture below:




The beams cover 360 degrees around the inside of the sensor. They are sent from two parallel
LED sources and absorbed by an LED receiver. SW 5 must be turned on to initiate the sensor's raining
mode. When a drop hits the outside of the sensor, a beam escapes; these escaped beams measure the size
and number of drops by infrared dissipation. The dissipation of a beam lasts only 50 ms before
another infrared beam is emitted in place of the first. This is how the sensor is able to measure how many
drops of water have fallen on the surface. A switch inside the sensor must be set for high,
medium, or low rain amounts. The circuit for these systems operates using an operator switch and an
open or closed relay; the voltage is either high or low based on the amount of precipitation and the
operator control switch setting. The sensor also has the ability to detect light or dark levels in the
atmosphere and can transmit a signal if the illuminance falls below 2000 lux. This, however, cannot be
used simultaneously with the rain sensing because the device has only a single relay, so the switch is either
open or closed.
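
To make the description above concrete, here is a minimal Python sketch of how escaped-beam events
might be mapped to the sensor's high/medium/low rain settings. The event rates and thresholds are
illustrative assumptions, not values published by Hydreon; only the 50 ms dissipation time comes from
the text above.

    # Hypothetical sketch of the rain-level classification described above.
    # Thresholds are assumptions for illustration, not Hydreon specifications.

    def classify_rain(beam_escapes_per_second: float) -> str:
        """Map the rate of escaped infrared beams to a rain-level setting."""
        if beam_escapes_per_second == 0:
            return "zero"
        elif beam_escapes_per_second < 2:
            return "low"
        elif beam_escapes_per_second < 10:
            return "medium"
        else:
            return "high"

    # Each escaped beam dissipates for about 50 ms before a new beam is emitted,
    # so a single beam path can register at most 1 / 0.050 = 20 events per second.
    print(classify_rain(5.0))  # -> "medium"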

The RG-11 sensor is a great idea, but I have found that it needs improvement. Using the same
concept of optics to sense rain levels, I have researched how a circuit could be designed to adjust
wiper speeds based on the level of rain in the atmosphere. The RG-11 can sense the
amount of rain and turn wipers on automatically, but it cannot control the speed of the windshield wiper
blades. This is a significant limitation; however, I have designed a solution. We already know
that infrared beams will dissipate at a given amount of rainfall, so we can design a circuit to
respond to the number of dissipated beams through a series of resistances. First, the circuit will have a
variable resistance (Rv). This variation of resistance provides the
variable speeds of the wiper blades. Five resistors make up Rv, and the resistances
correspond to zero rain, light rain, medium rain, high rain, and very high rain. Speed
is adjusted by the amount of rain and how much resistance is present in the circuit at the given
time: the more resistance in the circuit, the slower the wiper blades will go. The motor will also
have some resistance, which is added to the total resistance. There will be a voltage drop across the resistor and
motor, so we can calculate the current if we know the voltage. Since we will be using a car battery,
which normally supplies 12 V, we can find the current using the equation I = V/R. The resistance
contributed by the sensor is measured at 8 mΩ. The current can then be calculated once the
equivalent resistance, which includes the motor, is found. The current travels to a semiconductor device called a
transistor. The transistor is used to amplify signals and switch electronic power. The current that travels
through one set of terminals is switched to another set of terminals; because the output power is higher
than the input power, the signal is amplified. The current enters the transistor at the base, is
collected by the collector, and the signal is then sent out through the emitter. Transistors are used either
to amplify the current or to draw current from the circuit. This feature gives us the ability to switch
between resistances (wiper speeds) and handle all the current in the circuit simultaneously. A view of the
circuit is below:

The circuit is not very complex; however, it gives us the ability to integrate with an automobile's circuitry
so that the windshield wipers work automatically at the correct speed based on rain levels.
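
As a rough illustration of the resistance-based speed control described above, the following Python
sketch computes the motor current from I = V/R for each rain level. Only the 12 V battery and the five
rain levels come from the text; the individual resistance values and the motor resistance are assumptions
made up for the example.

    # Sketch of the variable-resistance wiper circuit described above.
    # Resistance values are illustrative assumptions; higher total resistance
    # means less current and therefore slower wiper blades.

    RAIN_LEVEL_RESISTANCE = {  # ohms (assumed values)
        "zero rain":      1000.0,
        "light rain":      470.0,
        "medium rain":     220.0,
        "high rain":       100.0,
        "very high rain":   47.0,
    }

    V_BATTERY = 12.0  # volts, standard car battery (from the text)
    R_MOTOR = 8.0     # ohms, assumed wiper-motor resistance

    def motor_current(rain_level: str) -> float:
        """Current through the motor, I = V / (Rv + Rmotor)."""
        r_total = RAIN_LEVEL_RESISTANCE[rain_level] + R_MOTOR
        return V_BATTERY / r_total

    for level in RAIN_LEVEL_RESISTANCE:
        print(f"{level:>14}: {motor_current(level) * 1000:.1f} mA")

More rain selects a smaller resistance, so the current, and with it the wiper speed, increases.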

Another problem I have come across is that the dimensions of the RG-11 sensor are unappealing and
blunt looking on automobiles. I have found a way to use the same optical technique in a more efficient
and visually friendly way. The sensor transmits beams of infrared light at various angles. When there is a
change in light intensity, a beam escapes and dissipates for 50 ms; the infrared then returns for the next
reading. The infrared beams can be emitted across a flat surface as well. They project from a light
emitting diode (LED) and are received by a silicon photodiode (SPD). When a drop accumulates on the
windshield it blocks light, thereby casting a shadow. The SPD reads the shadow as condensation
buildup on the window. When raindrops reach the surface, the light intensity changes according to drop
size; the degree of change determines how much rain is falling, and wiper speeds are adjusted
accordingly. This arrangement can be set up on the roof of the vehicle or in the headlamps of the
automobile, and it will not be as visible as the RG-11 sensor.
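
A minimal Python sketch of how the flat-surface LED/SPD arrangement might translate a photodiode
reading into a wiper speed. The baseline reading and the speed thresholds are invented for illustration;
only the idea that shadows from drops reduce the light reaching the SPD comes from the text.

    # Hypothetical shadow-detection logic for the LED/SPD windshield sensor.

    BASELINE = 1.00  # normalized SPD reading with a dry windshield (assumed)

    def shadow_fraction(spd_reading: float) -> float:
        """Fraction of light blocked by raindrops; grows with rain amount."""
        return max(0.0, (BASELINE - spd_reading) / BASELINE)

    def wiper_speed(spd_reading: float) -> int:
        """Map the shadow fraction to a discrete wiper speed (0 = off)."""
        shading = shadow_fraction(spd_reading)
        thresholds = (0.05, 0.15, 0.30, 0.50, 0.75)  # assumed cut-offs
        for speed, limit in enumerate(thresholds):
            if shading < limit:
                return speed
        return len(thresholds)

    print(wiper_speed(0.97))  # light shading -> speed 0 (off)
    print(wiper_speed(0.40))  # heavy shading -> speed 4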

In the future I hope to design an automated headlight system as well as an in-car temperature control
system.

References
http://apps.volvocars.us/ownersdocs/1992/1992_960/92960_1_6.html
http://www.sensiblesolutions.se/articles/ims2004_the_smart_diaper.pdf
http://www.phaseivengr.com/p4main/Sensors/WirelessPressureSensors.aspx
http://ask.cars.com/2010/07/which-carssuvs-have-a-full-glass-roof.html
http://parts.digikey.com/1/parts-kws/light-sensor
http://www.ecnmag.com/news/2011/06/RFIDprotection/Albis-Technologies-Powers-Protection-With-Active-RFID-for-Museums-in-Real-Time.aspx
http://auto.howstuffworks.com/wiper1.htm
http://www.modulatedlight.org/Modulated_Light_DX/OpticalComms4Amateur79.html
http://www.rainsensors.com/
Low Energy Electron Diffraction Structural Analysis of Au(111)-(5x5)-7S

Student Researcher: Stephanie D. Ash

Advisor: Dr. Mellita Caragiu

Ohio Northern University
Department of Physics and Astronomy

Abstract
The clean Au(111) surface is known to undergo several structural changes when exposed to adsorbates, in
particular sulfur. As the sulfur coverage increases towards 1 ML, the structure of the gold (111) surface
has been observed to go through a range of changes as follows: unreconstructed (1x1), followed by a
(5x5)-7S structure, then a (√3x√3)R30°-S phase, and eventually an incommensurate "complex" phase.
The current LEED study focuses on the intermediate Au(111)-(5x5)-7S phase. The 7 sulfur atoms in each
unit cell are found to occupy fcc hollow sites. There is considerable rumpling of the adsorbed sulfur layer,
as well as of the top gold layers in the surface, which results in an average S-Au distance of 1.54±0.06 Å,
followed by a next Au-Au average interlayer spacing of 2.37±0.01 Å. When the latter value is compared
to the bulk interlayer spacing of clean gold, 2.35 Å, a slight expansion is noticed. The results are
compared to the structural information obtained by other studies of the same Au(111)-(5x5) phase.

Project Objectives
The surface under investigation is Au(111)-(5x5)-7S, that is, a clean gold surface, cut along the (111)
crystallographic plane, upon which sulfur has been adsorbed, for a total coverage of 0.28 ML. The study
investigates the position of the adsorbed atoms on the surface by LEED (Low-energy electron
diffraction). Also, the study draws conclusions about the interlayer spacings: the average distance
between the sulfur layer and the top-most gold layer, as well as average distances between consecutive
gold planes.

Methodology
Experimental Method
In studying the surface of materials it is very important that the work is done under high vacuum conditions.
To ensure high vacuum, besides pumping the air out of the chamber, the vacuum chamber itself has to
be baked so that any molecules on the chamber walls are desorbed and afterward pumped out of the
system. Secondly, it is important that the surface of the sample under study is clean. This is achieved by
bombarding the sample with Ar⁺ ions to remove any molecules that are adsorbed onto the surface of the
sample. The sample is then annealed, or heated, to allow the surface atoms to rearrange themselves into
a smooth surface.

Once the surface of the material is prepared in the manner explained above, an electron gun sends
electrons perpendicular to the surface of the crystal. The electrons, upon interacting with the top few
layers of the crystal, emerge from the crystal (they are elastically back-scattered) and hit a fluorescent
screen, creating bright diffraction spots called beams. A mirror reflects the image of these diffraction
spots into the detector of a camera. Each image is stored on a computer and analyzed later. The intensity
of each beam is extracted as a function of the energy of the electrons forming that particular beam. Thus,
the experimental Intensity versus Energy plots - I(E) - are obtained (see figures 3 and 4 for some typical
experimental curves, shown with continuous lines).

Computational Method
The computational analysis consists of calculating theoretical I(E) curves using the Barbieri/Van Hove
SATLEED package and the Barbieri/Van Hove phase shift package [1]. Each I(E) curve corresponds to
very particular positions of the atoms in the surface layers. When the theoretical curves calculated for a
certain set of coordinates of the atoms in the surface turn out to be similar enough to the experimental
curves, one can assume that the "guessed" coordinates correspond to the real positions of the
atoms in the surface. Thus, the atomic coordinates are found. Similarity between the theoretical and
experimental curves (mainly the positions of the peaks) is judged by a reliability factor, the R-factor, which
should take values as small as possible, i.e., less than 0.3, for a good match.
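
The analysis itself uses the Pendry R-factor as computed by the SATLEED package; purely as an
illustration of the idea of comparing curve shapes, a much cruder reliability factor between an
experimental and a theoretical I(E) curve could be written in Python as follows (synthetic Gaussian
peaks stand in for real data):

    import numpy as np

    def simple_r_factor(i_exp: np.ndarray, i_theo: np.ndarray) -> float:
        """Crude reliability factor: normalized mean-square deviation of two
        intensity curves sampled on the same energy grid. (The actual study
        uses the Pendry R-factor, which compares logarithmic derivatives and
        so emphasizes peak positions over absolute intensities.)"""
        i_exp = i_exp / np.trapz(i_exp)    # normalize away overall intensity
        i_theo = i_theo / np.trapz(i_theo)
        return float(np.sum((i_exp - i_theo) ** 2) / np.sum(i_exp ** 2))

    # Synthetic example: two curves whose peaks are shifted by a few eV.
    energy = np.linspace(50, 300, 500)
    i_exp = np.exp(-((energy - 120) / 15) ** 2) + 0.5 * np.exp(-((energy - 210) / 20) ** 2)
    i_theo = np.exp(-((energy - 123) / 15) ** 2) + 0.5 * np.exp(-((energy - 208) / 20) ** 2)
    print(f"R = {simple_r_factor(i_exp, i_theo):.3f}")

Smaller values indicate better agreement; in the Pendry convention used in this study, a value below
about 0.3 is considered a good match.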

Data and Results
The LEED experiment was performed by the Surface Physics Group at Penn State University. The I(E)
experimental curves extracted from the LEED images by our collaborators were sent to us for analysis.
Two sets of experimental data were taken at Penn State, the main difference being the way the
Au(111) sample was treated just before the LEED images were acquired. For one set, the sample was
heated (annealed) before the LEED images were taken; for the second set, the sample was not heated
(un-annealed sample) before the extraction of the I(E) curves. Other distinguishing factors between the
two sets are listed in the table in figure 1.

Based on the LEED pattern, one can deduce that the adsorbed sulfur forms an overlayer with a unit cell whose sides are five times larger than those of the unit cell corresponding to the clean Au(111) surface; thus the (5x5) notation of the unit cell. Also, based on the coverage of 0.28 ML of adsorbed sulfur, it is deduced that there should be 7 sulfur atoms in each unit cell (a (5x5) cell contains 25 gold sites, and 0.28 ML x 25 = 7). A model of the optimal (5x5) surface is represented in Figure 2. In order to understand the exact placement of each of the seven sulfur atoms, multiple surface structures were investigated. The structure with the best R-factor was found for sulfur adsorbed in fcc adsorption sites, in the particular arrangement shown in Figure 2.

For both sets of experimental data, the experimental I(E) curves were compared to the theoretical I(E)
curves, as shown in Figures 3 and 4. In the case of the un-annealed sample, only 7 experimental beams were extracted, while the annealed data set has 9 beams. This would explain why the R-factor of
0.28 obtained for the un-annealed data is slightly better (lower) than the 0.33 R-factor obtained for the
annealed data set. The R-factors for individual beams are written on the graph next to each beam.

Knowing the sulfur atomic coordinates, as well as the coordinates for the gold atoms in the top-most
layer, and subsequent layers, one can calculate the average interlayer spacings. Figure 5 displays the
results for both the un-annealed and the annealed samples; the un-annealed results are written first
followed by the annealed ones. The spacings are similar for both.
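
The spacing calculation itself is just a difference of mean layer heights; a minimal Python sketch with made-up placeholder z coordinates (not the refined coordinates of this study):

    import numpy as np

    # Made-up z coordinates (angstroms) for the atoms in each layer.
    layers = {
        "S":   np.array([3.95, 4.05, 4.00, 3.98, 4.02, 4.01, 3.99]),  # 7 S atoms
        "Au1": np.array([2.40, 2.35, 2.38, 2.36]),
        "Au2": np.array([0.02, -0.01, 0.00, 0.01]),
    }

    def spacing(upper, lower):
        # Average interlayer spacing = difference of the mean layer heights.
        return layers[upper].mean() - layers[lower].mean()

    print(f"d(S-Au1)   = {spacing('S', 'Au1'):.2f}")
    print(f"d(Au1-Au2) = {spacing('Au1', 'Au2'):.2f}")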

Conclusion
The results obtained compare well with studies of the same structure done by two other groups, Yu et al. [2] and Rodriguez et al. [3].
a) The S-Au₁ interlayer spacing of 1.57±0.14 Å (annealed data set) or 1.65±0.11 Å (un-annealed data set) compares well with the 1.56±0.07 Å value found by Yu et al. and the 1.71 Å value resulting from DFT calculations by Rodriguez et al.
b) The S-Au bond length varies between 2.29±0.1 Å (annealed data set) and 2.34±0.08 Å (un-annealed data set), both slightly larger than the 2.28±0.04 Å found by Yu et al., but still smaller than the calculated value of 2.39-2.45 Å by Rodriguez et al.
c) As expected for fcc (111) metal surfaces, the Au₁-Au₂ relaxation is very small: the 2.36±0.02 Å and 2.34±0.03 Å topmost interlayer spacings are to be compared to the 2.355 Å value of the bulk interlayer spacing.




Figures

                                 Un-annealed exp. data              Annealed exp. data
LEED images taken at:            T = 80 K                           T = 118 K
Energy range                     ΔE = 987 eV                        ΔE = 1929 eV
Inner potential                  V_i = -5 eV; V_r = 4.79±0.05 eV    V_i = -5 eV; V_r = 6.50±0.15 eV
Nr. of phase shifts used         l_max = 12                         l_max = 12
Debye temperature                Θ_D(S) = 150 K, Θ_D(Au) = 170 K    Θ_D(S) = 150 K, Θ_D(Au) = 170 K
Overall R-factor                 R_P = 0.28                         R_P = 0.33

Figure 1. Table of experimental conditions and computational parameters which distinguish the two
analyses: for the un-annealed Au(111) sample, and for the annealed one.



Figure 2. The optimal model (5x5) surface structure when the sulfur atoms are located in the fcc
adsorption sites. The three layers of gold atoms are marked and the dark gray atoms on the surface
represent the sulfur atoms. The parallelogram is the overlayer unit cell.

Figure 3. Intensity vs. energy curves, for the un-annealed data set, for seven beams; the solid lines
represent the experimental data and the dashed lines represent the theoretical data.



Figure 4. Intensity vs. energy curves, for the annealed data set, for nine beams; the solid lines represent
the experimental data and the dashed lines represent the theoretical data.



Figure 5. Average interlayer spacings for the (5x5) structure: d(S-Au₁) = 1.65±0.11 / 1.57±0.14 Å, d(Au₁-Au₂) = 2.36±0.02 / 2.34±0.03 Å, d(Au-Au) = 2.3555 Å (fixed bulk value). The first values correspond to the un-annealed data set, and the second values correspond to the annealed data set.

Acknowledgments
Thanks to the Ohio Space Grant Consortium Scholarship for supporting my learning and research
experience. Thanks to Dr. Renee Diehl, Heekeun Shin and Garry McGuirk at Penn State University, and
to James Thompson at Lincoln University for the experimental data. Thanks to Dr. Mellita Caragiu for
teaching me the computational aspect of LEED.

References
1. A. Barbieri, M. A. Van Hove, private communication - to acknowledge the use of the "Barbieri/Van
Hove SATLEED package" and the "Barbieri/Van Hove phase shift package".
2. Yu, M., et al., The Structure of Atomic Sulfur Phases on Au(111). Journal of Physical Chemistry C,
2007. 111(29): p. 10904-10914.
3. Rodriguez, J.A., et al., Coverage Effects and the Nature of the Metal-Sulfur Bond in S/Au(111): High-
Resolution Photoemission and Density-Functional Studies. Journal of the American Chemical
Society, 2003. 125(1): p. 276-285.
Gas Invasion Into Freshwater Aquifers In Susquehanna County, Pennsylvania

Student Researcher: Aaron M. Balderson

Advisor: Ben Ebenhack

Marietta College
Petroleum Engineering Department

Abstract
Three wells operated by Cabot Oil & Gas (Baker #1, Gesford #3 and Gesford #9) were ordered to be
plugged by the Pennsylvania Department of Environmental Protection due to natural gas invasion into
fresh water aquifers (Watson). These wells were all located in Dimock Township in Susquehanna
County, Pennsylvania (Consent). The gas invasion affected 19 homes (Consent), and water tests showed dissolved methane contents of 50 mg/L (Water). Cabot Oil & Gas was required to deliver water to homes after plugging the wells until water tests showed a decreased volume of dissolved methane (Consent). In addition to the cost of these actions, a fine of $500,000 was assessed (Consent).
testing revealed that the methane originated from an Upper Devonian formation known as the Catskill
Formation. A review of the drilling and the completion process for these wells was undertaken to
possibly find remedial actions that could have been taken to avoid this situation. Unfortunately, no remedial actions were found, but it was concluded that one step needs to be added to the drilling process: water samples need to be taken around every location before drilling commences.
reviews the possible sources for the gas found in these Cabot Oil & Gas wells and demonstrates the need
for sampling to occur prior to drilling.

Project Objectives
This project intends to review the drilling and completion processes for the three Cabot Oil & Gas wells
mentioned in the abstract. This review is intended to extract a remedial action in the drilling or
completion process that would better protect the fresh water aquifers. Protecting the aquifers would allow
the petroleum industry to be viewed as more environmentally and communally friendly. Furthermore, it
would allow for court cases such as the one in which Cabot Oil & Gas has been involved to be avoided.
If a process were found that should be included in everyday drilling practices, it could eventually ease the minds of community members worldwide.

Methodology Used
To review these wells some basic data must be obtained initially, namely the drilling and completion
notes and any geological information available. The data must be obtained for each well because
localized geology can be different even in wells relatively close together.

The Gesford #3 well was completed on December 16, 2008. At this location fresh water was detected at 350 feet. Pipe was set and cemented across the fresh water zone before Cabot Oil & Gas drilled into a gas-bearing zone located at 1459 feet. There was steel pipe and cement from 892 feet to the surface protecting the aquifer. After the cement set, drilling continued through the gas-bearing zone at 1459 feet. The zone was expelling 900 million cubic feet of gas per day. Drilling continued to 1673 feet, where pipe
was set and cemented to provide another string of protection for the fresh water aquifers and to shut in the
gas bearing zone. The well was completed into the target zone to a depth of 7058 feet. Once the
production string (the last string of pipe set in which the gas flows to the surface) and cement was set the
well began producing (Watson).
During the drilling of the Gesford #9 well, a fresh water zone was encountered at 350 feet. In order to protect the zone, 857 feet of pipe was set and cemented. After the cementing was initially complete, the operator noticed that the space between the pipe and the well bore was not full of cement. Cement was placed in this area by grouting. A logging tool was run in the casing that determined cement adequately filled the space between the pipe and the well bore (Watson). Therefore, the fresh water zone was protected by a layer of steel and a layer of cement. Drilling only proceeded to a depth of 1911 feet
(Exhibit) before the well was ordered to be plugged. A production string was never set because the target
formation was never encountered.

At the third well, the Baker #1, fresh water was discovered at 990 feet while drilling, and pipe and cement were set at 1094 feet to protect
this zone. Another pipe was set at 1534 feet which further protected the water zone. The first recorded
gas show was at 5908 feet, in the Mahantango Shale. Drilling continued until a depth of 7450 feet. While
cementing the final string of casing in the well bore, minor cementing problems occurred. The cement
locked up, ending the initial cement job. A logging tool was run, and the top of the cement was determined
to be at only 7100 feet. Cabot Oil & Gas elected to do a squeeze job. After the first squeeze job another
logging tool was run which indicated complete fill-up had still not occurred. Cabot Oil & Gas elected to
do another squeeze job to ensure a proper cement job was complete (Watson).

Looking at possible causes, there are a variety of factors to consider.

Fractures in the Cement:
The logging tools run in the wells indicated cement filled the annulus but the report did not specifically
say anything about fractures within the cement. Fractures occurring after the cement sets could have
allowed methane to migrate up the well bore into the freshwater zones. The logging tool run rules out the
possibility of no cement existing between the pipe and the well bore, but does not rule out fractured
cement. However, these tools can determine whether fractured cement is in place, so this should have been known.

Natural Fractures:
Natural fractures could provide a path for the methane. Natural gas and oil seeps have been known to exist for generations. If such fractures existed, natural gas should have been present possibly decades before drilling occurred. The Marcellus Shale does contain natural fracturing, which is necessary to obtain gas from this unconventional reservoir economically, but if these natural fractures allowed flow into the aquifer, gas would have been there years before drilling, since gas has been present in the formation for centuries.

Near Surface Gas Generation:
Gas generated near the surface can contaminate freshwater aquifers. Hydrocarbon analysis may rule out or confirm the possibility of near-surface gas contaminating the aquifers. Along with the analysis, the concentration of methane in the freshwater aquifers may also rule out the possibility of near-surface gas generation. Nonetheless, this possibility remains until testing confirms or rejects this theory.

Other Sources:
Dimock, Pennsylvania, has a history of coal mining, and methane is generated in coal beds, providing another source from which methane can originate. If this methane is contaminating the aquifers, then Cabot Oil & Gas is not at fault, and oil and gas companies do not need to change procedures to protect aquifers. Another possibility is that gas contamination was present before drilling commenced. YouTube videos make this claim (Cabot), but the credibility of these videos is questionable. When individuals have an opportunity to benefit financially from a situation, many take advantage. This could very well be the case in Dimock, Pennsylvania. If so, natural fracturing or coal bed methane could have been at work before Cabot Oil & Gas drilled the three wells in consideration.

Results Obtained
According to an article published in the Oil & Gas Journal in December 2011, titled "Methane in Pennsylvania Water Wells Unrelated to Marcellus Shale Fracturing," more than 1,700 samples gathered prior to drilling activity in 2006 show that natural gas has existed in the fresh water aquifers. This gas is thought to originate from the Catskill formation, into which many water wells are actually drilled. This formation is a highly fractured sandstone that allows for movement of fresh water and gas. Therefore, gas has likely existed in this water-bearing formation for centuries, but that does not automatically rule out that gas from the Marcellus is also invading.

Isotopic analysis can distinguish one type of gas from another. The Pennsylvania Department of Environmental Protection isotopically determined a significant difference between gas originating from the Marcellus Shale and gas from a formation of an earlier age. When samples were obtained from the area of the gas wells, the water contained thermogenic gas. In most regions, thermogenic gas would signal that the deeper formations were invading the aquifers. However, that was not the case here.

Recent studies have shown that the Catskill contains thermogenic gas and that the gas concentrations in
water wells depend on the topography of the landscape. Water wells in valleys have a higher
concentration of natural gas than those on hills. The figure in the figure/charts section shows the
topographical map illustrating this fact. Combining all this information leads us to believe that the gas
invading the freshwater aquifers has actually been there all along and does not originate from the
Marcellus Shale.

Significance and Interpretation of Results
Cabot Oil and Gas had to plug three wells based on the Department of Environmental Protection's findings. Upon further study, the gas is now thought likely not to have been caused by Marcellus Shale development. If Cabot Oil and Gas had had data prepared to defend themselves completely when the issue arose, these drastic measures may not have needed to be taken. Therefore, water samples should have been taken before drilling occurred and tests run on these samples. This recommendation should be applied universally and not just regionally, preventing any future potential problems with methane invasion into freshwater aquifers.

Figures/Charts

Oil and Gas Journal. December 2011.

Acknowledgments
Acknowledgments are in order for Ben Thomas and Ben Ebenhack for guidance on this project.

References
1. Watson, Robert W. Report of Cabot Oil & Gas Corporation's Utilization of Effective Techniques for
Protecting Fresh Water Zones/Horizons During Natural Gas Well Drilling, Completion and Plugging
Activities. 31 Mar. 2011. <http://www.cabotog.com/pdfs/WatsonRpt.pdf>.
2. Consent Order and Settlement Agreement. 31 Mar. 2011. Commonwealth of Pennsylvania:
Department of Environmental Protection. 15 Dec. 2010 <http://www.cabotog.com/pdfs/FinalA_12-
15-10.pdf>.
3. Exhibit 1. 31 Mar. 2011. Cabot Oil & Gas Corporation. 24 May 2010
<http://www.cabotog.com/pdfs/WatsonEx1.pdf>.
4. Water Well Test Data. 31 Mar. 2011. Department of Environmental Protection and Cabot Oil & Gas.
<http://www.cabotog.com/pdfs/WaterWellTestData.pdf>.
5. Cabot and Clean Water in Dimock, PA. YouTube. Web. 8 Apr 2011.
<http://www.youtube.com/watch?v=J5fHlYML8lQ>.
Thermoelectric Solution for Temporomandibular Joint Pain

Student Researcher: Alena M. Barga

Advisor: Dr. Ronald Fournier

The University of Toledo
Bioengineering Department

Abstract
An estimated 10 million Americans suffer from persistent jaw pain and clicking centered at the temporomandibular joint [1]. This condition is often given the umbrella term temporomandibular joint (TMJ) disorder or syndrome. Treatment options consist of anti-inflammatory drugs, jaw exercises, heat
and cold therapy, and, as a last resort, surgery. The proposed solution for this widespread issue was to
design a headpiece that could allow the user to apply heat and cold therapy sequentially without the need
for two separate devices.

The therapeutic prototype discussed in the following paper incorporated a thermal unit with a virtual and
physical circuit. The thermal unit was comprised of a thermoelectric module or Peltier module, a heat
sink, and a small fan. The physical circuit acted as a bridge between the thermal unit and the virtual
circuit, and it was comprised of a Wheatstone bridge circuit, a transistor, and a non-inverting and
inverting operational amplifier circuit. The virtual circuit, which was constructed using the program,
LabVIEW, allowed the user to control whether heat or cold therapy would be administered, as well as at
what thermal intensity. More specifically, the virtual circuit used a hysteresis loop to maintain the
temperature desired by the user.

This project incorporated material research, heat transfer investigations, circuit construction, and
headpiece design. Over the last four months, the final design shows promise for being an effective
treatment for pain and inflammation caused by TMJ syndrome.

Project Objective
Currently, there are no combined cold and heat therapy devices marketed towards temporomandibular
pain. Therefore, this gap in the market created a specific opportunity for the development of a new
therapeutic device that could specifically target patients with TMJ syndrome by combining the
advantages of heat and cold therapy. In combination with over-the-counter non-steroidal anti-
inflammatory drugs (NSAIDs), heat and cold therapy is one of the most highly recommended treatments
for TMJ pain. The device was also designed in conjunction with an ergonomic headpiece that would
increase comfort and user-friendliness.

To administer the appropriate therapeutic ranges to the user, a few different components were necessary
to the design including a thermal unit, a control module and a headpiece. The thermal unit was comprised
of a thermoelectric module or Peltier module, an aluminum heat sink and a small fan used to dissipate
heat. The control module was comprised of two main components: a hysteresis loop created in LabVIEW
and a physical circuit. Lastly, the headpiece design was created in the three-dimensional modeling
program, SolidWorks.

Methodology Used
The strategy and subsequent construction of the thermal unit proved to be the most challenging design
issue. During early research, it was decided that a thermoelectric module would be the basis for the
overall device. A thermoelectric module consists of two ceramic plates connected by P and N junctions.
By employing the Peltier effect, a heat flux is created between the two plates resulting in a hot side and
a cold side. By reversing the current through the module, the cold side and hot side can be
reversed. These theories and ideas became the building blocks for the design.

Design concepts discussed during the beginning stages of development were based on a thermoelectric
module being encased within a thermoplastic elastomer. However, during initial concept testing it was
realized that the elastomer alone could not dissipate sufficient amounts of heat from the hot side,
preventing the cold side from reaching desired temperatures. To improve these heat dissipation issues,
an aluminum heat sink and 5V fan were attached to the thermoelectric cooler. Upon further investigation,
it was decided by the group that the thermal conductivity of the elastomer was not large enough to conduct heat from the thermoelectric module to the user. To obtain a larger thermal conductivity, a
simple aluminum sheet was used in place of the elastomer. A depiction of the thermal unit can be found
in Figure 1 of Figure and Tables.

After designing the thermal unit, the creation of the virtual and physical circuit was conducted. The
physical circuit acted as a bridge that connected the thermal unit to the virtual circuit, while the virtual
circuit acted as a negative feedback loop, preventing malfunction of the thermal unit. Based on early
research, a thermistor was chosen to convert the temperature being experienced by the user into a
resistance that could be used to regulate the system. The thermistor also proved to be an effective way of
determining the temperature value by using the following equation, where T represents the temperature of the material being measured and R represents the resistance across the thermistor:

    [equation not reproduced in the source]

A Wheatstone bridge circuit was then used to convert the resistance value into a voltage value that could
be used by the digitizer, or data acquisition device, which input data into the virtual circuit.

After being entered into the virtual circuit program, the data from the Wheatstone bridge circuit was converted back to resistance and then temperature, which was subsequently compared with the desired temperature value set by the user. The difference between the two values was then used to operate the hysteresis loop. If the thermal unit temperature climbed above the desired temperature, the cold loop would turn on. If the thermal unit temperature dropped below the desired temperature, the hot loop would turn on. It was this process by which the applied temperature remained constant throughout use.
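
A minimal Python sketch of this sensing-and-control chain is shown below. All constants are assumed placeholder values rather than the project's actual components, and the B-parameter thermistor equation merely stands in for the unreproduced formula above:

    import math

    # Assumed placeholder constants (not from the project).
    V_SUPPLY = 5.0        # bridge excitation voltage, V
    R_FIXED = 10_000.0    # fixed divider resistance, ohms
    R0, T0, BETA = 10_000.0, 298.15, 3950.0  # thermistor B-parameter model

    def divider_to_resistance(v_out):
        # Recover thermistor resistance from the voltage across it in a divider.
        return R_FIXED * v_out / (V_SUPPLY - v_out)

    def resistance_to_celsius(r):
        # B-parameter equation: 1/T = 1/T0 + (1/B) * ln(R/R0), T in kelvin.
        return 1.0 / (1.0 / T0 + math.log(r / R0) / BETA) - 273.15

    def hysteresis_step(measured_c, setpoint_c, band_c=1.0):
        # Turn on the cold loop above the band, the hot loop below it.
        if measured_c > setpoint_c + band_c:
            return "cold loop on"
        if measured_c < setpoint_c - band_c:
            return "hot loop on"
        return "hold"

    t = resistance_to_celsius(divider_to_resistance(2.4))
    print(round(t, 1), "->", hysteresis_step(t, setpoint_c=40.0))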

The output of the virtual circuit was then transferred back through the digitizer to the transistor, which
acted as a switch to turn the thermoelectric module on or off. Because the maximum voltage output of the
digitizer was 5V, the threshold value for the transistor was also set at 5V. It is at this voltage that the
transistor will turn on or off. The 5V output from the transistor was not large enough to power the
thermoelectric module; consequently an operational amplifier circuit was utilized to produce a gain of 2.
Therefore, a voltage input of 5V was amplified to a voltage output of 10V. With the addition of the
operational amplifier circuit, the design circuit needed to power and control the thermal unit was
complete.

Finally, the headpiece that would house the thermal units was modeled in the three-dimensional modeling
program, SolidWorks. The headpiece was designed as a behind the head structure that would hold the
units over the affected area in a way that was comfortable to the user. A depiction of the headpiece
design is shown in Figure 2 of Figures and Tables. In the future final design, the headpiece will be
intended to house the printed circuit board.

Results Obtained and the Significance
Throughout the development of this project, the applied temperature range of the thermal unit was
critical. During the first few temperature range investigations, it was evident that achieving heat therapy
temperature would be straightforward. However, achieving the desired cold therapy temperature would
prove to be the challenge. Because it was much simpler for the device to reach higher temperatures, the virtual control circuit had to limit the maximum temperature to 45°C to mitigate burn risks. After studying various construction and mounting procedures, a temperature within the desired therapeutic range for cold therapy was finally reached. The thermal unit construction allowed for large heat dissipation, allowing the cold side of the thermoelectric module to reach adequate temperatures. In conjunction with the control circuit, the final constructed thermal unit was able to achieve a maximum temperature value of 45°C and a minimum temperature value of 7°C.

While investigating the ability of the unit to meet thermal demands, the virtual and physical circuits were
validated. It was verified that the user could manually set the temperature within the virtual circuit and
that the hysteresis loop could maintain the desired temperature within a few degrees over an extended
period of time. It was also verified that the physical circuit was able to turn the thermoelectric module
on and off in response to the hysteresis loop. Taken together, the therapeutic range and circuit investigations verified the effectiveness of this therapeutic device.

Figures and Tables

Figure 1. Thermal Unit with Components


Figure 2. Headpiece Design in SolidWorks

Acknowledgments
The author of this paper would like to thank the other three design group members, Chase Maag, Ashley
Muszynski, and Nicholas Oblizajek, for their contributions to the success of this project. The author
would also like to thank Dr. Brent Cameron for his suggestions and thoughts during the development of
this project. Finally, the author would like to thank Dr. Ronald Fournier for his constant support and
encouragement of the design group.

References
1. United States. National Institute of Dental and Craniofacial Research, National Institutes of
Health. TMJ Disorders. Bethesda: 2011. Web.
<http://www.nidcr.nih.gov/OralHealth/Topics/TMJ/TMJDisorders.htm>.
Increasing Wind Turbine Efficiency

Student Researcher: Laura M. Bendula

Advisor: Dr. James Van Kuren

Miami University
Mechanical Engineering Department

Abstract
This wind turbine project focuses on improving operation efficiencies by studying various blade types and
shapes. Through the use of computational fluid dynamic software, ANSYS, the drag coefficient and flow
regimes can be analyzed to select the best airfoil. To further verify results, prototypes of wind turbine
blades were created. Tests were performed and the results showed that an airfoil-like design produced the
highest voltage when the angle of the hub was set to 10 degrees. In the future, more designs will be tested
and the prime design will be created using aluminum for testing. Currently, the NACA 63415 airfoil has
the best lift-to-drag ratio and an airfoil shape produces the highest voltage at a ten degree hub angle.

Project Objectives
Studying wind turbine efficiency has become a major area of interest recently. By increasing the efficiency of wind turbines, more power can be extracted from the same wind. As most wind turbines operate below 50% efficiency, there is plenty of room for improvement. The efficiency of wind turbines can be increased by maximizing the lift-to-drag ratio of the blades and by changing the shape or angle of attack of the blades. By focusing research primarily on blade shape, size, length, material, and hub angle, a prime design can be proposed for manufacture.

Methodology
To test different airfoil shapes, two methods were used: analytical and experimental approaches. The
analytical approach used a computational fluid dynamic program called ANSYS. Two airfoil shapes
were chosen for comparison: NACA 4415 and 63415. These two were chosen because of their high lift
and low drag, which is ideal for wind turbine blade design. In ANSYS, FLUENT was used to create
geometry, create a mesh, set up initial conditions, and view results. The airfoil dimensions were
normalized and the velocity chosen for simulation was 10 m/s. Plots of the drag coefficient and velocity
and pressure contours were created.

Next, prototypes were designed and made for wind tunnel testing. Materials were researched and
prototypes were created out of balsa wood, Styrofoam, clay, and wax. Due to structural difficulties, these
materials were discarded and layered cardboard was used for final prototypes. Cardboard was much
sturdier when layered and held up during the tests in the wind tunnel.

Two designs chosen for experimental testing in the wind tunnel were a standard airfoil shape (design #1)
and an airfoil shape with whale tubercles on the leading edge of the blade (design #2). Additionally,
different styles of blades will be tested that resemble old-style wind mill blades and ovular shapes to
compare to the first two designs. The tests in the wind tunnel will be measuring the voltage produced by
the wind turbine at multiple velocities: 3 m/s, 5 m/s, 7 m/s, and 10 m/s. With this data, it will be possible
to select the design that will produce the most power.
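
Once those voltages are collected, ranking the designs is straightforward. The Python sketch below assumes the generator feeds a known resistive load (an assumed value, not from this project), so that P = V²/R, and uses made-up voltage readings:

    # Assumed load resistance (ohms); P = V^2 / R for a resistive load.
    R_LOAD = 10.0

    # Made-up voltages (V) at the four test velocities: 3, 5, 7 and 10 m/s.
    readings = {
        "design #1 (airfoil)":   [0.4, 0.9, 1.5, 2.3],
        "design #2 (tubercles)": [0.3, 0.8, 1.3, 2.0],
    }

    def mean_power(voltages):
        # Average electrical power across the tested wind speeds.
        return sum(v * v / R_LOAD for v in voltages) / len(voltages)

    best = max(readings, key=lambda d: mean_power(readings[d]))
    print(best, "->", round(mean_power(readings[best]), 3), "W")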

Results
The ANSYS analysis showed that the NACA 63415 airfoil is an ideal shape for wind turbine use. The
experimental data from the wind tunnel shows that the optimal angle is 10 degrees with blade design #1
(the shape most similar to an airfoil). Additionally, the whale tubercles used in blade design #2 did not
increase the voltage produced as was originally predicted.


Figures and Charts
Figure 1 shows the geometry sketch of one of the airfoils (NACA 4415) tested with the ANSYS program.
The airfoil was normalized to a length of 1 and was modeled as a 2-D rectangle with the airfoil shape's area subtracted.


Figure 1. Geometry sketch of NACA 4415

Figure 2 shows the mesh of one of the airfoils (NACA 4415) created with ANSYS. The mesh used a
mapped face method to refine the nodes. The mesh was refined until the skewness factor was less than 0.6, which was a reasonable value for this project's purposes.


Figure 2. Mesh of NACA 4415

Figure 3 shows the velocity vectors of the NACA 4415 series airfoil. The highest velocity occurs at the highest point of the airfoil and, predictably, the velocity at the airfoil surface is zero due to the no-slip condition.


Figure 3. Velocity vectors of NACA 4415

Figure 4 shows the velocity vectors of the NACA 63415 series airfoil. The highest velocity again occurs at the highest point of the airfoil and, again, the velocity at the airfoil surface is zero due to the no-slip condition. The flow over the NACA 63415 is smoother, with none of the trailing vortices seen with the NACA 4415 airfoil.


Figure 4. Velocity vectors of NACA 63415

Figure 5 shows Miami University's Aerolab Educational Wind Tunnel system. The flow comes in the inlet section (left), flows through the clear plexiglass test section, and is released through the exhaust end (right). The test section has doors on either side of the wind tunnel, and testing rigs can be mounted from the top or bottom of the test section. The wind tunnel can be controlled either by the local remote (a PID controller mounted on the back) or through the computer mounted on the front.


Figure 5. Wind tunnel

Figure 6 is a wax prototype of an airfoil blade made with a CNC lathe. The stock wax had a 1-inch diameter and a length of 5 inches. The blade was designed in NX 7.0 and was manufactured using RolandMX.


Figure 6. Wax prototype of an airfoil blade
Figure 7 shows clay prototypes of airfoil blades. They were molded after a test kit of propellers bought
from Hobby Lobby and were painted with yellow and black latex paint. The black stripe was intended to
be used with a strobe light to capture the speed of rotation of the blades in the wind tunnel. However, the
clay used was not strong enough to withstand the force of the wind tunnel, and an alternate material was required for tests.


Figure 7. Clay prototype of airfoil blades

Figure 8 shows the angle of the hub versus the voltage produced for blade design #1.


Figure 8. Graph of hub angle versus voltage produced for blade design #1

Figure 9 shows the angle of the hub versus the voltage produced for blade design #2.

Figure 9. Graph of hub angle versus voltage produced for blade design #2

Acknowledgments
The author of this paper would like to thank Dr. James Van Kuren and Ryan Ettenhoffer for their
assistance with research. Additionally, thanks go out to Dr. Douglas Coffin for lending the author the
wind turbine mounting device and Dr. Tim Cameron for organizing the trip.

References
1. "Energy Efficiency and Renewable Energy." EERE:. Web. 2 Apr. 2012.
<http://www1.eere.energy.gov/windandhydro/pdfs/>.
2. "Latest News." Wind Turbines: A Social Network for the Wind Turbine Community. Web. 18 Feb.
2012. <http://www.windturbines.net/>.
3. "Whalepower." Whalepower. Web. 10 Mar. 2012. <http://www.whalepower.com/drupal/>.
4. "Wind Turbine." Wikipedia. Wikimedia Foundation, 18 Mar. 2012. Web. 18 Mar. 2012.
<http://en.wikipedia.org/wiki/Wind_turbine>.
5. "Wind Turbines." - Kinetic Wind Energy Generator Technology. Web. 1 Apr. 2012.
<http://www.alternative-energy-news.info/technology/wind-power/wind-turbines/>.
What a Drag: Parachute Design

Student Researcher: Heather M. Bennett

Advisor: Tammy Waldron

University of Cincinnati
College of Education, Criminal Justice, and Human Services

Abstract
Following principles of inquiry, students engaged in parachute design, activating background knowledge
and previous experience to develop experiments to test the effects of surface area and shape on the drag
created by a falling parachute. Students then used their knowledge of mathematics and basic physics to
analyze and explain their results and extend them to design a parachute that meets determined
specifications.
Lesson
I introduced the parachute challenge to my 8th grade algebra students by drifting a large plastic bag like a parachute and asking what made a good parachute. We briefly discussed the definition of drag and
students debated whether the shape and surface area of a parachute affected its fall time. We agreed that
we could test this question by making parachutes of different shapes and the same area, and parachutes of
the same shape and different areas. Students self-selected teams and got to work creating parachutes from
a circle, an equilateral triangle, a square, a regular hexagon, and a rectangle with its length twice its width,
all of which had areas as close to a chosen area as possible. I worked with groups individually on strategies to solve the resulting linear equations and systems of linear equations, and to draw the dimensions of their figures. Once the designs were created, groups cut them out of trash bags and taped or tied strings (about 25 cm long) to the corners of the parachutes, which they then tied together around a paperclip payload. (The paperclip was twisted into an S-shape so that groups that were further along could test the effect of parachute mass on the fall time as an extension: clay could be weighed and stuck onto the paperclip. Only
one group explored this option, however.) Students practiced throwing their parachutes inside to get a
consistent technique.
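
The side-length computations the groups worked through can be checked directly from the area formulas; a short Python sketch (the 500 cm² target is an assumed example, not the class's chosen area):

    import math

    A = 500.0  # target area in cm^2 (assumed example value)

    circle_radius = math.sqrt(A / math.pi)                  # A = pi * r^2
    square_side = math.sqrt(A)                              # A = s^2
    triangle_side = math.sqrt(4 * A / math.sqrt(3))         # A = (sqrt(3)/4) * s^2
    hexagon_side = math.sqrt(2 * A / (3 * math.sqrt(3)))    # A = (3*sqrt(3)/2) * s^2
    rect_width = math.sqrt(A / 2)                           # length = 2 * width

    for name, value in [("circle radius", circle_radius), ("square side", square_side),
                        ("triangle side", triangle_side), ("hexagon side", hexagon_side),
                        ("rectangle width", rect_width)]:
        print(f"{name}: {value:.1f} cm")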

Next, groups tested their parachutes outside by dropping them from the back of the football stadium
bleachers and timing their descent. The windy weather gave students a chance to analyze how not only
errors in their techniques but also conditions beyond their control affected the performance of designs that
had been consistent in lab tests. Students assigned and shared the roles of timer, recorder, parachute
releaser and retriever within their groups. Their goal was to drop each parachute five times with
successful opening and take the average of these results.

Finally, we compiled their data using Microsoft Excel and created scatter plots of fall time vs. area for each shape and for all the data. Students used lines of best fit and correlation coefficients to answer questions of how the area affected the fall time and to predict how long a 100 cm² parachute would take to hit the ground from the same height.
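
The Excel analysis can be mirrored in a few lines; a Python sketch with made-up sample averages (not the students' data):

    import numpy as np

    area = np.array([400.0, 500.0, 600.0, 700.0, 800.0])  # cm^2, made-up
    fall_time = np.array([1.9, 2.1, 2.4, 2.5, 2.8])       # s, made-up averages

    slope, intercept = np.polyfit(area, fall_time, 1)     # line of best fit
    r = np.corrcoef(area, fall_time)[0, 1]                # correlation coefficient
    t_100 = slope * 100.0 + intercept                     # predict a 100 cm^2 chute

    print(f"t = {slope:.4f}*A + {intercept:.2f}, r = {r:.2f}, t(100 cm^2) = {t_100:.2f} s")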

Objectives
Students will solve area formulas that include pi and radicals for side lengths, to create parachutes of different shapes with approximately the same area.
Students will use scatter plots of the data collected from their parachute drop experiment to analyze the appropriateness of lines of best fit and analyze patterns of association between the area of the parachute and its fall time.

Alignment with Academic Content Standards (From March 2011 Common Core Standards)
Grade 8 Mathematics. Expressions and Equations: 7. Solve [linear] equations in one variable.
Grade 8 Mathematics. Statistics and Probability: 1. Construct and interpret scatter plots for bivariate measurement data to investigate patterns of association between two quantities. Describe patterns such as clustering, outliers, positive and negative association, linear association, and nonlinear association.

Underlying Theory
Constructivism in math has long advised that students learn from concrete to abstract, in the order of
Bruner's Stages of Representation. In this unit, students have a concrete representation of equal areas in
the plastic bags they cut into parachutes. While some were immediately ready to dive into solving
algebraic equations to find the side length of their hexagon, I encouraged other students to begin by
cutting out one shape (a circle, for instance) and estimating the sizes of other shapes by drawing figures that produced approximately the same amount of gap as overlap. Overlaying the shapes and then moving to
think about how the areas would be calculated helped many students transition to solving equations.

Lev Vygotsky's theories of social constructivism also come into play as students learn and work together.
The collaborative learning groups, based on friendships rather than ability, encouraged positive
interdependence, particularly during the experiment. Each student had an important role, and no group
could function effectively without teamwork.

This unit also employs all five NCTM math process standards. As seen, communicating mathematical
ideas (with peers and the teacher) and representing them (concretely, through pictures and with equations)
formed the basis of the entire activity. Connections abounded between geometric formulas, hands-on
measurement skills, data analysis, solving equations, opposing forces and the familiar, real-world
parachute scenario. Finally, the problem was non-routine and encouraged diverse perspectives and
divergent thinking, in both the parachute design (e.g., determining how to create an equilateral triangle)
and in analysis of the results. Students needed to build skills in both reasoning/proof and problem-solving.

Student Engagement
Students did not participate equally in the equation-solving phase; however, they asked each other and me questions about how to solve for the side length of the regular hexagon, for example, and they all worked together to create the actual parachutes, sometimes developing creative strategies to do so. They discussed with their group the relative merits of taping on the string or tying it through holes and made such decisions collaboratively.
back of the football stadium, and worked together well, though some groups were notably more efficient.
Students were not eager to analyze their data, but the connection with having actually collected it did
increase engagement in comparison with previous instruction in lines of best fit.

Resources
This unit required 8-gallon white trash bags, permanent markers, rulers, scissors, string or embroidery
thread, tape, paper clips, modeling clay, clipboards, stop-watches, graphing calculators and Microsoft
Excel. I consulted the NASA educational guide Adventures in Rocket Science (EG-2007-12-179-MSFC)
page 77-80 for help on this activity.

Assessment & Results
Student achievement was analyzed through a combination of observation, written work, and a
performance task. With sufficient scaffolding and guidance in the questions, students produced
sophisticated analysis of their results and reflected on potential sources of anomalies. Though correlation
was not particularly strong (owing partially to intermittent gusts of wind) students did find a positive
correlation between parachute area and fall time, no matter how they organized their data.

Conclusion
The parachute activity adapted from NASA materials for eighth grade algebra students was successful in
helping students connect math to the real world and attain applicable Ohio standards. During the unit,
they exercised critical thinking, problem-solving, and teamwork while engaging in the mathematics of
basic aerospace engineering.

Gravity: Helping Children Understand What It Is and What Its Effects Are

Student Researcher: Cory M. Bishop
Advisor: Dr. Mark Templin
The University of Toledo
Judith Herb College of Education

Gravity is an extremely difficult concept for children to understand. Even though they experience it every day, children have many different misconceptions about what gravity really is. This is evident in their ideas that certain things have gravity and others don't, that astronauts are weightless, and that everything in the universe falls to the Earth. Through the use of NASA materials including Gravity Games, Heavy Duty Topics, and Fighting Gravity-A Matter of Balance, along with the Our World: Gravity in Space video clip, I will develop a lesson that will educate students about gravity and its presence on the Earth and in the universe, and help to clear up any misconceptions that might be present.

Lesson Plan
Engage
1. Begin lesson by dropping a basketball and having it bounce on the floor. Ask the students to
explain what they observed. (Lead students to discuss that gravity is what forced the ball to the
ground)
2. Ask the students where this force of gravity comes from. (Lead students to say that it is the Earth
that creates this force)
3. Ask the students what the benefits of gravity are. Is there any way to escape Earth's gravitational pull? Do astronauts experience gravity? (Questions can inform the teacher of misconceptions as well as
lead into activities)

Explore
Students will work together on the NASA activity: Fighting Gravity-A Matter of Balance. This will
introduce the children to the concept of center of gravity.
Students will work together on the NASA activity: Heavy Lifting. This will exercise the students
inquiry skills and the students will begin thinking about how gravity affects objects on and near Earth.
Students will watch the NASA video: Our World: Gravity in Space

Explain
1. Have the students explain what force keeps objects on the surface of the Earth.
2. Ask the students what causes gravity to happen. (This will lead to a discussion of mass and weight.
Explain to the students that objects with more mass have a stronger gravitational pull than objects
with less mass. This explains why we stay on the Earth's surface and that weight is just the force of the Earth's gravitational pull on our mass).

Elaborate
1. Ask students if they have ever been on a rollercoaster, and if they have, to explain what it felt like
when they went over the first drop. (Students should state that they felt weightless, or that they felt
like they came up off the seat, or something similar)
2. Ask students if there are other times when they might have felt this feeling.
3. Show students Toys in Space pg. 6. This will discuss microgravity with the students and introduce
them to the idea of riding in an elevator. The teacher can explain to the students that when we ride up
an elevator we feel a heavier weight, when we ride down an elevator we feel a lighter weight, and if
the elevator were to begin free-falling we would feel weightless within the car.
4. Introduce the class to the Toys in Space activity and have them work on the activity. This will help
the students understand microgravity in more detail and build an understanding of how ordinary toys,
and subsequently, ordinary objects would be affected by microgravity.

Evaluate
1. Student understanding will be assessed throughout the various activities through:
A. Discussions
B. Classroom presentations of projects and findings
C. Formative assessments i.e.: admittance/exit slips

Learning Objectives
1. Describe what gravity is, what it creates, and where it is present.
2. Define center of gravity.
3. Explain how objects are kept in orbit.
4. Compare gravity on a roller coaster to the gravity experienced in the International Space Station.
5. Construct a two-dimensional object, identify its center of gravity, and support your conclusion.
6. Explain what microgravity is, and summarize why it is different from zero gravity.

Alignment with Ohio Academic Content Standards
5
th
Grade Earth and Space Science
(Designed for a 5
th
grade classroom, but could be modified for other grade levels)
Standard
Students demonstrate an understanding of how the concepts and principles of energy, matter,
motion and forces explain Earth systems, the solar system and the universe.
Benchmark
Explain the characteristics, cycles and patterns involving Earth and its place in the solar system.
Indicator
Describe the characteristics of Earth and its orbit about the sun

Underlying Pedagogy
I believe that one of the best ways, if not the best way, for children to learn is through inquiry-based learning. By simply defining phenomena and moving on in the lesson, we can lose the children's interest, fail to have them fully understand the concepts we would like them to know, and create misconceptions that can follow them throughout their adolescence and into their adulthood. Through inquiry-based learning we can not only increase engagement and create excitement in the science classroom, but also empower our students to think beyond the simple definition and begin to ask more questions and strive to
learn more about the content. This can lead to a deeper understanding of the material that can benefit
them later in their education.

The benefit of the materials from NASA is that so many of the resources available center around an
inquiry based classroom. The materials that I have decided to use for the preceding lesson look to have
the students find the answers and figure out how to get there. They are given the power to find out for
themselves how various phenomena happen in our universe. I believe through lessons like these, we will
empower our students to delve into the world of science and find out for themselves all that it has to offer.
I believe that part of my responsibility as an elementary and middle school science teacher is to expose
my students to the field of science and help them to understand how exciting, interesting, and fun it can
be, and encourage them to be proactive in high school, college, and a possible career in a science field. The arena of science has much to offer our students, and I feel that by creating lessons like these, which focus on inquiry, our students will be excited to see what's next.
Surfactant Drag Reducing Agents in a Mock Aviation Loop

Student Researcher: Winston L. Black, II

Advisor: Dr. Robert Wilkens

University of Dayton
Chemical and Materials Engineering

Abstract
In an aircraft moving at high speed and burning highly engineered fuels, the ability to move heat
effectively and efficiently to heat sinks becomes essential. Surfactant Drag Reducing Agents (SDRAs)
have shown themselves to be promising candidates for application in the field of thermal management as
a method for decreasing the pumping power required to circulate coolants. SDRAs are unique in their ability to auto-assemble in solution into constructs known as micelles. This ability is critical in high shear stress environments such as those found in a pump.

Project Objectives
This project, as a result of working under the guidance of a PhD student, has a very broad scope and a long-term methodology. The objective of the PhD thesis is to enhance the heat transfer coefficient of
poly-alpha-olefin, 2 centistokes (PAO-2) in a coolant loop similar to those in airplanes, without
increasing the pumping power of the system. Achieving this objective requires that many experiments be
carried out, both on the fluid and on the fluid in the coolant loop. The objective of this project was to
determine the viscosity of several concentrations of SDRAs in aqueous solutions. This is part of the effort to characterize the properties of the SDRAs and determine which concentration leads to the greatest increase in efficiency.

Methodology Used
Viscosities were determined through the use of a rotational viscometer which featured a cup through
which water could be continuously circulated to maintain constant temperature. Two different
concentrations of the surfactant Ethoquad O/12 (EO12) and counterion sodium salicylate (NaSaI) were
prepared in a 500 mL volumetric flask. The viscometer was calibrated according to the manufacturer
instructions and the cup and cone were both cleaned with acetone. Special care was taken to insure that
the volume of fluid delivered to the cup was both as consistent and close to the required amount as
possible. As SDRA fluids are known to exhibit non-Newtonian behavior, viscosity and percent of total
power were recorded for the widest range of shear rates which the apparatus could handle. Two data
points were taken at each shear rate.

Results Obtained
The graph below shows how the viscosity of the two SDRA/counterion concentrations changed with shear rate at 20°C. Both data sets exhibit an overall decrease in viscosity as shear rate increases. This is consistent with a shear-thinning non-Newtonian fluid. This result supports the ability of SDRAs to decrease drag in high stress environments. Lower viscosities allow for easier pumping of fluids.
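
Shear-thinning data of this kind is often summarized with an Ostwald-de Waele power-law fit, μ = K·γ̇^(n-1) with flow index n < 1. A minimal Python sketch with made-up readings standing in for the measured points in Figure 1:

    import numpy as np

    # Made-up (shear rate, viscosity in cP) pairs for illustration only.
    shear = np.array([10.0, 20.0, 40.0, 80.0])
    visc = np.array([50.0, 38.0, 29.0, 22.0])

    # Fit log(mu) = log(K) + (n - 1) * log(shear) with a straight line.
    slope, intercept = np.polyfit(np.log(shear), np.log(visc), 1)
    n, K = slope + 1.0, np.exp(intercept)

    print(f"flow index n = {n:.2f} (n < 1 means shear thinning), K = {K:.1f}")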


Figure 1. Viscosities of Ethoquad O/12 and Sodium Salicylate as a Function of Shear Rate at 20°C (EO12 5 mM / NaSal 12.5 mM and EO12 3 mM / NaSal 7.5 mM, with a power-law trendline on the 5 mM data)

These results are only the initial results in what will become a PhD thesis. As such, they are very limited. Viscosities must be measured for more concentrations of surfactant and counterion, at additional temperatures as well. More experimentation is also needed to determine other physical and fluid properties.

References
1. Bird, R. B., W. E. Stewart, E. N. Lightfoot (2007). Transport Phenomena. Wiley.
2. Choi, H. J. and M. S. Jhon. (1996). Polymer Induced Turbulent Drag Reduction, Ind. Eng. Chem.
Res., 35, pages 2993-2998.
3. Hoyt, J. W. (1986). Drag Reduction in Encycl. Polym. Sci. Eng., 5, Kroschwitz, J. I. (ed.), Wiley,
New York, pages 129-151.
Fayetteville Shale Positive Displacement Motor Optimization

Student Researcher: Matthew C. Boothe

Advisor: Dr. Ben W. Ebenhack

Marietta College
Petroleum Engineering and Geology

Abstract
This research project combines my own statistical analysis with other publications to examine 3-, 5- and 7-stage positive displacement drilling motors in unconventional shale plays. I will be using data from the Fayetteville Devonian shale located in the south central region of the country. The age and nature of this rock should make it comparable to the Marcellus Shale of Pennsylvania and the Utica Shale of Ohio.

Project Background and Objectives
Motors have three main components: the rotor, the stator and the stator liner. The rotor consists of lobes, as seen on the diagram, and the stator consists of void spaces called cavities for the rotor lobes to fill. The rotor will always have one less lobe than the stator has cavities, so that there is always a place where hydraulic energy and pressure will cause rotation. As the number of lobes on the rotor increases, so does the torque and power delivered to the bit.

The scope of my research aims to develop an optimization function based on specific formation characteristics and rates of penetration to find a practical motor assembly that can operate more efficiently. Essentially, I would like to find the breaking point between drilling time (which translates to capital) and the increased rate of penetration. Combining this with other variables will yield an answer that can hopefully provide a faster and better drilling practice in the lateral portion of the well.

Methodology
My project challenge was to compare the rates of penetration across the Desoto field with various positive displacement motors (3-, 5- and 7-stage). From these comparisons, I was to find the break-over points for higher-power PDMs vs. the improvement in ROP.

To start, I limited the data to wells that had been drilled since January 1st of 2010. I then removed any outliers, which included wells that required multiple trips, experienced bit damage or motor damage, side-tracks, etc. Finally, I set my sample size to 30 wells because this is the point at which the data will normalize. (This analysis only considers the lateral section of the well.)

From this gathered data, I will analyze the average time required to drill a given number of feet in the lateral
section by area of the Fayetteville (East, West and South). This will lead to a break-even time, based on
the rig rate in cost per hour (proprietary information) and the cost of the motor.
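
The break-even comparison reduces to weighing extra motor cost against rig hours saved. A Python sketch with assumed placeholder figures (actual rig rates are proprietary, as noted above; the ROP values come from the Rig 2 case study below):

    # Assumed placeholder numbers, not from the study (rig rate is proprietary).
    LATERAL_LENGTH_FT = 5000.0
    RIG_RATE_PER_HR = 1500.0      # $/hr, assumed
    MOTOR_COST_PREMIUM = 20000.0  # extra cost of a 7-stage over a 5-stage, assumed

    def lateral_cost(rop_ft_per_hr, motor_premium=0.0):
        # Rig time cost to drill the lateral plus any extra motor cost.
        hours = LATERAL_LENGTH_FT / rop_ft_per_hr
        return hours * RIG_RATE_PER_HR + motor_premium

    savings = lateral_cost(90.0) - lateral_cost(152.0, MOTOR_COST_PREMIUM)
    print(f"estimated savings per lateral: ${savings:,.0f}")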

Results and Conclusions
If the motors are analyzed by rig, it is not clear which motor drills the fastest in the lateral. This can only be attributed to personnel and operating techniques. Consider drilling techniques, for example: if you are drilling wood with a hand drill, you have to have the perfect balance of pressure and rotational speed. If you push too hard, you get stuck and can't drill ahead. The same concept applies here.

In the case study, Rig 2 was the rig that had the most aggressive rig hands and operators. However, we still see an increase in efficiency with the use of a seven-stage motor, from 90 ft/hr to 152 ft/hr: an increase of nearly 70%!

The cost of each lateral with respect to time, motor inspection, stator liner and rental rate adds up to quite a large sum of money. But with the improvement in ROP offered by the 7-stage motor, we can estimate a cost savings due to that increased efficiency. Because most operations use a 5-stage motor upon re-entry, this is the most important savings figure.

Acknowledgments
First, I would like to acknowledge Mr. Mike Engle and Southwestern Energy for providing me with this
information that has allowed me to complete this research project. Secondly, I would like to acknowledge
Dr. Ben Ebenhack and the Marietta College Petroleum and Geology faculty for assisting me with any
questions and concerns that I may have had during this process.

References
N/A- (Primary Research)
[Figure: Rig motor rate of penetration comparison - average ROP (ft/hr) by rig (Rigs 1, 2, 5, 6, 7, 8, 9, 10 and 26) for 3-stage and 5-stage motors.]
Ionosphere Induced Error Self-Correction for Single Frequency GPS Receivers

Student Researcher: Harrison W. Bourne

Advisor: Dr. Jade Morton

Miami University
Electrical and Computer Engineering

Abstract
GPS is a technology of increasing importance in consumer electronics, from dedicated navigation devices to essentially every cell phone. However, these devices use small, cheap single frequency receivers instead of the more expensive dual frequency receivers, and single frequency receivers are susceptible to error caused by the ionosphere. The typical method for correcting ionosphere error in a single frequency receiver involves using a model of ionosphere behavior. Even the best of these models cannot correct more than 75%-80% of the error, due to the extreme variability of the ionosphere (Morton et al. 2007). Developing a software algorithm to correct the error without modeling would allow for more precise location information without needing to replace existing hardware.

Project Objectives
This project has three major goals. The first is to design and implement the algorithm in software. The second is to test the software using simulated GPS data to ensure the error correction method performs as expected under ideal conditions. To evaluate the effectiveness of the algorithm, the amount of error in the position solution is compared to the position solution error obtained using established methods of error correction. The metric of performance is whether the solution is consistently better than the current method of correcting ionosphere error for single frequency receivers. The third objective is to verify the simulated results using real GPS data collected by receivers around the world. As with the simulated data, the objective of this testing is to show that the algorithm is capable of better performance than the established model. Due to the inherent uncertainties in real data, these tests will prove more difficult, but will provide conclusive evidence that the project has been successful.

Methodology Used
The method currently being developed does not rely on a model to predict the ionosphere error, rather it
allows the total electron content (TEC) of the ionosphere above the receiver to be an additional variable in
the equation which describes the measured distance between the GPS receiver and a satellite. This
equation is normally given as

\rho = r + c\,(\delta t_r - \delta t_s) + I + T + M + \varepsilon

where \rho is the measured range, r the true range, c the speed of light, \delta t_r the receiver clock error, \delta t_s the
satellite clock error, I the ionosphere error, T the troposphere error, M the multipath error, and \varepsilon all other
errors and noise. In a standard single frequency receiver only the true range and receiver clock error must
be solved, which requires range information from at least four satellites. All other terms are provided by
information in the navigation message sent to every GPS receiver or some other source. However in this
project the ionosphere error must also be solved, thus leading to the following equation

\rho = r + c\,(\delta t_r - \delta t_s) + F\left(I_0 + \frac{\partial I}{\partial \phi}\,\Delta\phi + \frac{\partial I}{\partial \lambda}\,\Delta\lambda\right) + T + M + \varepsilon

where I_0 is the ionosphere error directly above the receiver, F the obliquity factor, \partial I/\partial\phi the spatial
derivative of the ionosphere error with respect to latitude, and \partial I/\partial\lambda the spatial derivative of the
ionosphere error with respect to longitude. In this case there are seven unknowns to solve for, which
indicates seven satellites are needed. There is a less accurate version of this equation which requires five
satellites and another, more accurate version, which requires nine satellites. The code which solves this
equation is implemented in MATLAB and is tested with simulated range measurements. The simulated
measurements are generated by fixing an arbitrary position for the receiver and using ephemeris
parameters for each satellite to calculate its position. Knowing these two positions the distance between
the receiver and the satellites can be calculated. The ionosphere error is found from a map provided by the
International GPS Service for Geodynamics and added, along with noise, to the range measurement. The
position of the receiver is then found using the new method. The position solution is then calculated using
the dual frequency method to evaluate the effectiveness of the new algorithm.
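
The report states that the solver is implemented in MATLAB; that code is not reproduced here. The Python sketch below illustrates the core idea, a linearized least-squares solve for the seven unknowns (receiver position, receiver clock bias, I_0, and the two spatial derivatives). The input names, the pierce-point offsets, and the omission of the satellite clock, troposphere, and multipath terms (assumed already corrected) are simplifying assumptions, not the author's implementation.

```python
# Sketch: iterative least-squares solve with ionosphere unknowns.
# Assumed inputs (illustrative, not from the paper): sat_pos (N x 3 ECEF
# satellite positions), pr (N corrected pseudoranges), obliq (N obliquity
# factors), dlat/dlon (N ionosphere pierce-point offsets from the receiver).
import numpy as np

def solve_position(sat_pos, pr, obliq, dlat, dlon, x0, iters=10):
    # state: [x, y, z, c*dt_r, I0, dI/dlat, dI/dlon] -> needs >= 7 satellites
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)   # geometric range
        los = (x[:3] - sat_pos) / rho[:, None]          # range gradient
        pred = rho + x[3] + obliq * (x[4] + x[5] * dlat + x[6] * dlon)
        H = np.column_stack([los, np.ones_like(rho),
                             obliq, obliq * dlat, obliq * dlon])
        dx, *_ = np.linalg.lstsq(H, pr - pred, rcond=None)
        x += dx                                         # Gauss-Newton update
    return x
```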

Results Obtained
The figure below shows the position solution of our new method in blue, while the results with no
ionosphere error are shown in red. Although the new algorithm produces results which are less accurate
than the calculation devoid of ionosphere error, it is only a few tens of nanometers less
accurate. Obviously these simulated results are much more accurate than could be hoped for in the real
world. However, they indicate that this algorithm is a viable method of correcting for ionosphere
induced errors and that its accuracy is comparable even to the dual frequency method.

Figure 1. Receiver Position Calculated for Every Hour for One Day
References
1. Morton J., Zhou Q., Cosgrove M., A Floating Vertical TEC Ionosphere Delay Correction Algorithm
for Single Frequency GPS Receivers, 2007 ION-GPS Meeting, Cambridge, April 2007.
Processing CNT-Doped Polymer Fibers Using Various Spinning Techniques

Student Researcher: Robyn L. Bradford

Advisor: Dr. Khalid Lafdi

University of Dayton
Department of Chemical and Materials Engineering

Introduction
Today there is growing interest in developing low-cost tissue engineering scaffolds made of synthetic
materials. These scaffolds are essentially support structures designed to mimic the configuration and
environment of the in vivo extracellular matrix. Suitable scaffolds must be biodegradable, biocompatible,
capable of three-dimensionality, and robust to support cell activity.

Engineering scaffolds involves constructing and testing substrates composed of various material
combinations. These materials are made of natural or synthetic polymers in the form of fibers or
membranes. Currently, electrospinning is one of the most widely used techniques because it is relatively
inexpensive and reliable. However, it requires high voltage and can be challenging to scale up. As a
result, researchers find it necessary to develop both new materials and new spinning techniques. One
method that is garnering more attention uses a centrifugal force approach, much like how cotton candy is
made. This comparative study explores the techniques of electrospinning and centrifugal force spinning.
The work presented here is a precursor to a larger study that will investigate using carbon nanotube
(CNT)-doped polymer fibers as scaffolds for tissue engineering applications.

Abstract
For this project, different concentrations of electrospun CNT-doped polymer fibers were seeded with
human fibroblasts and cultured. A cell proliferation assay was performed to determine the number of
viable cells. Results indicate the existence of a 2 wt % percolation threshold. Of particular interest is
cellular response in terms of nucleation and orientation. These will be monitored post-culture via
fluorescent microscopy of cell bodies and nuclei relative to fibers.

Parallel to the electrospinning investigation, this study also examines how human fibroblasts respond to
CNT-doped polymer fibers that are made using centrifugal force spinning. These fibers are being made
using an un-modified, table top cotton candy machine. Again, different concentrations of CNTs are being
used. Specimens produced thus far are discontinuous, randomly oriented and have large diameter ranges.
The process is being fine-tuned in order to generate fibers that are continuous and aligned with smaller
diameters. Once achieved, these fibers will be seeded, cultured and monitored for cellular response as
well.

Project Objective
The overall goal of this project is to investigate cellular attachment and growth on substrates made using
two different fabrication techniques: electrospinning and centrifugal force spinning.

Methodology

Electrospinning
A typical electrospinning arrangement is represented schematically in Figure 1. The main components
include 1) a high voltage source; 2) a polymer solution; 3) a syringe mounted to a syringe pump (not
shown) which houses the solution; 4) an exit (needle); and 5) a grounded rotating target. The usual
polymer-solvent solution is composed of an appropriate polymer dissolved in a volatile solvent to
facilitate its evaporation.

With electrospinning, several parameters can be adjusted to achieve different nanofiber diameters, surface
characteristics (i.e., pitted, pored surfaces) and thicknesses. For example, fiber diameter can be controlled
by tuning the source-target distance, polymer concentration and applied voltage. Table 1 provides a
categorical listing of these parameters [1].

Table 1. Parameters that Affect the Electrospinning Process
Solution Parameters: Viscosity; Conductivity/polarity; Surface tension
Process Parameters: Applied electric voltage; Tip-to-target distance; Needle tip diameter; Feed/pump rate; Hydrostatic pressure
Ambient Parameters: Temperature; Air velocity; Humidity

Electrospinning uses high voltage to create an electric field that charges the particles in the solution, thus
creating a repulsive force. An oppositely charged rotating surface is placed a certain distance away from
the needle tip. As the syringe pump pushes the solution to the needle tip, repulsive forces overcome the
solution's surface tension once a threshold voltage is reached, and a polymer jet is created. The jet is
attracted to and collects on the grounded target. As the jet moves along its trajectory towards the target,
air exposure quickly evaporates the solvent, leaving behind polymer fibers; and nanoscale diameters are
produced by the jet's stretching and bending [2].


Figure 1. Schematic diagram showing the components of a typical electrospinning setup.

The electrospinning arrangement shown in Figure 1 can produce a single continuous polymer thread that
accumulates on the grounded rotating target with diameters that range from several microns to 100 nm.
The resulting nanofiber matrix is referred to as a scaffold.

For this study, aligned, high heat treated electrospun CNT-doped polymer fibers of 1, 2 and 4 wt %
concentrations were purchased from Applied Sciences, Inc., Cedarville, Ohio. To study cellular response,
they were seeded and cultured with human fibroblasts. The next few sections of this report describe how
the fibers were prepared for cell culture and the cell proliferation assay.

Preparation of Electrospun CNT-Doped Polymer Fibers
Alignment and continuity for the 1, 2 and 4 wt % samples were confirmed with environmental scanning
electron microscopy (ESEM) (Figure 2). The average diameters were 3.94, 3.82 and 5.9 microns,
respectively.


Figure 2. ESEM images of electrospun CNT-doped polymer fibers. From left to right: 1 wt %, 2 wt %,
and 4 wt %. (magnifications: x5k, x4k and x3k, respectively).

Once characterized, ten 1 cm² samples were cut from 11 cm × 7.5 cm thin sheets using scissors for short
cuts and a paper cutter for longer cuts. Long cuts were made in the direction of the fibers and short cuts
were made transverse to fiber direction. Afterwards, they were transferred into petri dishes according to
weight percent and cleaned with alcohol (70% ETOH) for one minute and then air dried. Next, the
samples were placed fiber side up into a 24-well plate for cell culture. All procedures, starting with the
alcohol rinse, were performed under a sterile fume hood.

Cell Growth and Harvesting
Human fibroblasts were grown in media in cell culture flasks under environmental conditions (37 °C, 5%
CO2) until 75-100% confluent. This cell line was chosen for its easy growth characteristics. Media was
changed every 3-4 days. Once confluent, the media was removed and 2 mL of trypsin was added. The
flask was then placed in an environmental incubator for 5 minutes to aid cells in detachment. Next, 10
mL of fresh media was added and gently mixed by pipetting up and down.

The solution of cells, media and trypsin was pipetted into a 15 mL centrifuge tube and spun down in an
MSE Mistral 2000 centrifuge for 5 minutes at 1000 rpm to obtain a cell pellet. The media was then
removed from the centrifuge tube. Approximately 2 mL of fresh media was added to the tube and the cell
pellet was gently re-suspended in the new media by pipetting up and down. A 1:10 solution of 0.4 %
trypan blue to cells in media solution was prepared to stain dead cells. A 10 µL sample of the trypan blue
and cell mixture was used with a hemocytometer to count the number of viable cells per mL of solution.
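
For reference, the conventional hemocytometer arithmetic is cells/mL = mean live count per large square × dilution factor × 10^4. The sketch below is that standard formula with hypothetical counts, not a detail taken from this report.

```python
# Standard hemocytometer calculation (counts below are hypothetical).
def cells_per_ml(live_counts, dilution_factor=10):
    """live_counts: live-cell counts from each large square.
    dilution_factor: 10 for the 1:10 trypan blue dilution used here."""
    mean_count = sum(live_counts) / len(live_counts)
    return mean_count * dilution_factor * 1e4  # 1e4 converts square volume to mL

print(cells_per_ml([42, 38, 45, 40]))  # about 4.1e6 viable cells per mL
```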

Cell Culture of Electrospun CNT-Doped Polymer Fibers
The cell proliferation design involved the three different fibers, three time points, three control wells per
time point, and 6 standards of various cell counts ranging from 1.5k to 200k cells. A schematic of the
plate maps is given in Figure 3.


Figure 3. 24-well plate maps showing locations of CNT-doped polymer samples (), human fibroblasts
(C), assay wells (A), stain wells (S), control wells, and standards (given by cell count in thousands)

Culture plates were prepared according to the plate maps given in Figure 3. Afterwards, one mL of fresh
media was added to each standard well. One mL of cells (8k cells) was then added to 27 wells (the two
assay wells per time point per sample and the one stain well per time point per sample). One mL of cells
was also added to the positive control well per time point. Once prepared, the plates were labeled and
incubated at 37 °C, 5% CO2 for 24 hours. These procedures were performed under a sterile fume hood.

Cell Proliferation Assay of Electrospun CNT-Doped Polymer Fibers
To determine cell viability, a cell proliferation assay was done using CellTiter 96 AQueous One Solution.
This procedure is a color-indicating method that establishes the number of healthy cells. After the
sections were cultured for each time point, they were transferred into a new 24-well plate. Also, the old
culture media was removed from the appropriate time point controls.

Next, 100 µL of fresh media was added to the controls in the old plate and the sections in the new plate.
An extra 100 µL was added to the sections to ensure that they were covered completely. A 1:10 dilution
of assay reagent to media was then pipetted into the new plate and controls under a darkened sterile hood
(20 µL to the controls and 40 µL to the samples). The plates were wrapped with aluminum foil and
incubated for one hour at 37 °C in a humidified, 5% CO2 environment. After incubation, 100 µL from the
section and control wells was pipetted into a 96-well microplate in the dark. Absorbance was measured at
490 nm with a 96-well plate reader (Wallac Victor2). The absorbance values are presented in the Results
section of this report.

Centrifugal Force Spinning
The mechanics behind centrifugal spinning are straightforward, and perhaps the best example of the
process is a cotton candy machine, as shown in Figure 4. Typical cotton candy makers are designed with a
spinning head that houses a reservoir into which a feedstock of sugar is loaded for spinning. There are
heaters near the reservoir, and the sugar melts as the head rotates at about 3,450 revolutions per minute.


Figure 4. Cotton candy machine.

The spinning action forces the sugar melt through small orifices into a collecting bowl that surrounds the
spinning head. Once the melt is forced through the holes and is exposed to air, the sugar melt quickly
solidifies as it spins and forms a web on the collecting surface.

Preparation of Samples for Centrifugal Spinning
The first polymer solutions being tested are sugar-water solutions. These are made by dissolving 1 gram of
granulated table sugar into 20 mL of tap water in a 50 mL beaker (approximately 5 wt% sugar) at room temperature.
To aid dissolution, the solution is mixed with either a glass stirring rod or the beaker is gently agitated by
hand. Next, about 0.02 grams of carbon nanotubes are added to the beaker (0.1 wt% CNT). The CNTs
were purchased from Pyrograf Products, Inc., Cedarville, Ohio.

The beaker is then placed in a Branson 3510 Sonicator for a 15 minute ultrasonic bath to evenly disperse
the CNTs in the liquid solution. After the bath, the beaker is placed in a fume hood to evaporate the water
because the desired state for the spinner is a dry solid solution. Once dry, the samples are carefully
scraped off of the bottom of the beaker with a stainless steel lab scoop in the fume hood. The samples are
then transferred to a mortar where they are crushed into a fine powder with a pestle.

The Chef Buddy cotton candy machine (Figure 4) is used to spin the crushed samples. Different
collecting surfaces are made using either copy paper and/or wide packing tape. The packing tape was
placed adhesive side up on long, wide strips of copy paper and set in place with small pieces of
transparent tape. The length was just enough to wrap around the inside of the collecting bowl and wide
enough to reach about 0.25 inches above the rim.

The machine is first heated for 5 minutes and then the collecting surface is placed inside the machine
along the edge of the collecting bowl. Next, the sample is poured into the reservoir with the machine in
the on position (heater on and spinning head rotating). The heating element melts the solid and
centrifugal force squeezes the melt through tiny holes in the spinning head and onto the collecting
surface.

Once spun, the samples are characterized using a Hitachi Tabletop TM-1000 ESEM. The next step after
characterization will be cell culture with human fibroblasts. However, since the process to make smaller
diameter centrifugal specimens is still being fine-tuned, cell cultures have not yet begun. Images for one
of the characterized samples are presented in the Results section.

Results

Centrifugal Force Spun CNT-Doped Polymer Fibers
ESEM images of a fiber sample using the cotton candy maker are illustrated in Figure 5. These thick,
short fibers are randomly oriented, with small, irregularly shaped globules unevenly dispersed throughout the
matrix. The fibers range in diameter from about 11 to 38 microns. The globule diameter range is from
172 to 228 microns or more. This particular sample was collected on white copy paper, and the white
granular areas in the image are paper particles.


Figure 5. ESEM images of a 0.1 wt % centrifugal force spun CNT-doped fiber sample. The first image is
at a magnification of x200 and the second image is x1k.

Electrospun CNT-Doped Polymer Fibers
Cell proliferation results for the electrospun fibers are given in Figure 6. This plot represents absorbance
versus incubation times of 20, 44, and 68 hours for the three different samples. A discussion of these
results is provided in the Interpretation of Results section.


Figure 6. A chart of absorbance versus incubation times for electrospun CNT-doped polymer fibers.

Figure 7 is a chart of absorbance versus the standards. The initial standard plate wells contained various
amounts of cells that ranged from 1.5k to 200k. After 20 hours, a 100 µL sample was pipetted from each
well into a microplate and absorbance was measured. The results are consistent with expectations,
meaning that the higher the cell count, the higher the absorbance.


Figure 7. Absorbance versus cell count for the electrospun 0.1 wt % CNT standards.


[Figure 6 data: absorbance at 20/44/68 hours was 0.242/0.299/0.245 for 1 wt% CNT, 0.346/0.935/0.734 for 2 wt% CNT, and 0.365/0.896/0.750 for 4 wt% CNT.]

[Figure 7 data: absorbance versus standard cell count (thousands of cells at 20 hours) was 0.545 at 200, 0.476 at 100, 0.274 at 50, 0.223 at 25, 0.193 at 3, and 0.145 at 1.5.]
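
Because the standards form a calibration curve, viable-cell numbers can in principle be estimated from sample absorbances. The sketch below fits a least-squares line to the Figure 7 values and inverts it; treating the assay response as linear over this range is an assumption made for illustration, not part of the reported analysis.

```python
# Fit a calibration line to the Figure 7 standards and invert it to
# estimate cell count (in thousands) from a measured absorbance.
import numpy as np

cells_k = np.array([200, 100, 50, 25, 3, 1.5])    # standards (thousands)
absorb = np.array([0.545, 0.476, 0.274, 0.223, 0.193, 0.145])

slope, intercept = np.polyfit(cells_k, absorb, 1)  # linear fit (assumption)

def estimate_cells_k(a490):
    """Invert the calibration line: absorbance at 490 nm -> thousands of cells."""
    return (a490 - intercept) / slope

print(estimate_cells_k(0.299))  # rough estimate for one sample absorbance
```
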
Interpretation of Results and Discussion
The most interesting feature about the column chart in Figure 6 is the dramatic difference in absorbance
for the 1 and 2 wt % CNT electrospun samples. The 2 wt % absorbance values are larger than the 1 wt %
values by factors of 1.43 and 3.13 at the 20 and 44 hour time points, respectively. A corresponding decrease
in absorbance is noted at the 68 hour time point. These results might be explained by the 2 wt % percolation threshold that
marks the onset of electrical conductivity in polymeric materials. This conductivity is the effect of
polymer networks and will be investigated further.

Also in Figure 6, note the increase in absorbance for each sample from 20 to 44 hours. This means that
cell populations grew. In contrast, there was an absorbance decrease for each sample from 44 hours to 68
hours, indicating cell death. One possible explanation for this cell demise is the volumetric constraint
of the culture plates. In other words, at some point between the 44 and 68 hour
mark, cell growth reached a maximum and then declined as cells began to die off because they ran out of
room to grow.

Next steps for the electrospinning portion of this project are to conduct another cell culture and
proliferation assay. This will aid in confirming the observations made thus far regarding the 2 wt %
percolation threshold. For the centrifugal force spun part of the project, the spinning process is being
fine-tuned in order to obtain smaller diameter-sized fibers. Once this is accomplished, the next step will
be cell culture and proliferation studies. For both groups, fluorescent staining of cell bodies and nuclei
will be conducted to analyze cell response (i.e., cell attachment and orientation).

Acknowledgments
The author would first like to thank God for providing research opportunities and support through the
Ohio Space Grant Consortium, the University of Dayton Graduate Materials Engineering Department,
and the University of Dayton Research Institute. Special thanks are extended to Dr. Khalid Lafdi for his
mentorship and continuous support as well as Dr. Panagiotis for the use of his biology laboratory. The
author also expresses her appreciation to Dr. Charles Browning; Dr. John G. Weber; Jerry Czarnecki, MS;
and Asha Waller.

In addition, the author expresses her deepest sympathy at the passing of Dr. Gerald T. Noel, Sr. He will
be remembered for his strong dedication and support of students as the Ohio Space Grant Consortium
Associate Director and Campus Representative at Central State University, the author's undergraduate
alma mater.

References
1. Gonsalves, Kenneth E., Halberstadt, Craig R., Laurencin, Cato T., and Nair, Lakshmi S. (2008).
Biomedical Nanostructures. Hoboken, N.J.: Wiley-Interscience.
2. Matthews, Jamil A., Wnek, Gary E., Simpson, David G., and Bowlin, Gary L. (2002).
Electrospinning of Collagen Nanofibers. Biomacromolecules, 3(2), 232-238.
3. Zhang, Qinghua, Chang, Zhenjun, Zhu, Meifang, Mo, Xiumei, and Chen, Dajun. (2007). Electrospun
Carbon Nanotube Composite Nanofibres with Uniaxially Aligned Arrays. Nanotechnology, 18(11).
4. Badrossamay, Mohammad R., McIlwee, Holly A., Goss, Josue A., and Parker, Kevin K., (2010).
Nanofiber Assembly by Rotary Jet-Spinning. Nano Letters 10(6), 2257-2261.
5. Bognitzki, Michael, Czado, Wolfgang, Frese, Thomas, Schaper, Andreas, Hellwig, Michael,
Steinhart, Martin, Greiner, Andreas, and Wendorff, Joachim H. (2001). Nanostructured Fibers via
Electrospinning. Advanced Materials, 13(1), 70-72.
6. Hwang, C.M., Park, Y., Park, J.Y., Lee, K., Sun, K., Khademhosseini, A., and Lee, S.H. (2009).
Controlled Cellular Orientation on PLGA Microfibers with Defined Diameters. Biomedical
Microdevices, 11(4), 739-746.
7. Reneker, Darrell H. and Chun, Iksoo. (1996). Nanometre Diameter Fibres of Polymer, Produced by
Electrospinning, Nanotechnology 7(3), 216-223.

Detection of NO2 Using Carbon Nanotubes

Student Researcher: Garrick M. Brant

Faculty Advisor: Dr. Pedro Cortes

Youngstown State University
Department of Civil/Environmental and Chemical Engineering

Abstract
Fast and accurate detection of hazardous chemicals is an important aspect of the public health and safety
field. NO2 is a pollutant compound formed by internal combustion engines. The health effects associated
with NO2 exposure are eye, nose, and throat irritation, and it may cause impaired lung function and
increased respiratory infections in young children [1]. The objective of this research project is to
investigate the use of Carbon Nanotubes (CNTs) physically modified with polymers to detect NO2. The
physical modification of the CNTs, by wrapping them in a number of polymers, appears to yield detection
properties similar to those of chemically modified nanotubes. The ratio of polymer to CNTs is also an important
factor in NO2 detection. CNTs and polymers alone appear incapable of detecting NO2. In
contrast, the combination of both materials appears to yield a sensing nanostructure. The sensing
properties of relatively inexpensive Multi-Wall Carbon Nanotubes (MWCNTs) are also investigated
and compared to those exhibited by expensive Single-Wall Carbon Nanotubes (SWCNTs) in order to
explore a more cost-effective detection platform.

Introduction
The objective of this research is to develop a NO2 detection platform using CNTs. CNTs inherently have
very low resistance, whereas most polymers have very high resistance. Hence, the junction of these two
materials appears to yield a conjugated CNT-polymer complex with semiconductor properties that can
detect hazardous gases [2]. The change in the electrical properties of the conjugated polymer-CNTs
appears to be due to gas absorption, swelling of the polymer, or charge transfer between the
polymer and the molecules of the analyte [2]. Previous efforts in the area of CNT-based sensors have
concentrated on the detection of gases [3-5]. Kong et al. [6] studied the electrical response of CNTs as
chemical detectors for NO2 and NH3, and found that whereas the conductivity of the nanotubes increases
when in contact with NO2, it decreases in the presence of NH3. In that study, a voltage gate was used to
shift the Fermi levels of the nanotubes in order to modulate the resistance of the nanomaterial. Someya et
al. [7] also worked on detecting alcohol vapors using carbon nanotube based field-effect transistors (FETs),
and found that CNTs are sensitive to a wide range of alcohols. They also showed that the change in the
electrical properties of the nanotubes caused by the analytes strongly depends on the gate voltage applied
to the nanodevice. Additional research on CNT films based on a two-terminal resistor approach has also
been reported by Valentini et al. [8]. They showed that carbon nanotubes are capable of sensing several
gases including NO2, NH3 and C2H5OH. Further research work on detection has been done by coating
CNTs with specific polymers [9-11]. The change in the electrical properties of the conjugated polymer-
CNTs appears to be due to adsorption, polymer swelling, or charge transfer between the polymer and the
molecules of the analyte [12]. Philip et al. [13] developed a CNT-PMMA (polymethylmethacrylate) composite for
detecting a range of gases and found that detection takes place at room temperature, based on polymer
volume expansion and polar vapor-nanotube interaction. Li et al. [14] have introduced organic coatings to
detect Cl2 and HCl, suggesting that the detection properties of nanotubes can be tailored based on the
polymer used. Although the sensing properties of CNTs have been continuously studied, most of the
studies have concentrated on individual single wall carbon nanotubes, as well as on chemical
modifications in order to introduce selectivity into the nanotubes. However, sensors based on individual
SWCNTs require complex instruments as well as an expensive nanotube source. Additionally, chemical
treatments can damage the sidewall structure of the nanotubes. Hence, the present project investigates
the sensing properties of a bed of relatively inexpensive nanotubes (MWCNTs) based on a physical
functionalization. Here, the sensing properties of MWCNTs are compared with those reported for
expensive SWCNTs. Additionally, several polymers are investigated in order to study their effects on
the detection properties of the nanotubes.
Methodology
The MWCNTs (outer diameter 50-100 nm, inner diameter 5-10 nm, length 5-10 µm, from Nanoamor,
Houston, TX) were wrapped in a specific polymer by initially suspending the CNTs in a water solution,
then taking 0.5 mL of the solution and mixing it with 0.5 mL of a selected polymer to make a 1:1
(CNTs:polymer) ratio solution. The polymers investigated were: polyaniline emeraldine base (PA)
(Mw ca. 5,000, from Sigma Aldrich), polyethylene glycol (PEG) (Mw 5,000, from Sigma Aldrich) and
poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEPSS) (from Sigma Aldrich). This process
was repeated on SWCNTs (1-2 nm diameter and 5-30 µm length, from Nanoamor, Houston, TX) for
comparison purposes. Additional 1:2, 1:4, 2:1 and 4:1 (CNT:polymer) ratios were also investigated. The
conjugated complex was placed on a two-way electrode and baked at 60 °C for 30 minutes. The
electrodes were then placed into a plastic container of a known volume, connected to a data acquisition
system and exposed to the NO2 gas. When the sensor was exposed to the gas, the electrical signal of the
responsive electrodes changed and the data acquisition system recorded the electrical output.
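
The responses in the next section are reported as a percentage of voltage change. A minimal sketch of that normalization is shown below; defining the baseline as the mean of the pre-exposure samples is an assumption, since the paper does not state how the baseline was taken.

```python
# Normalize a recorded electrode voltage trace as percent change from a
# pre-exposure baseline (baseline definition here is an assumption).
import numpy as np

def percent_voltage_change(v, n_baseline=50):
    v = np.asarray(v, dtype=float)
    v0 = v[:n_baseline].mean()       # baseline voltage before gas exposure
    return 100.0 * (v - v0) / v0     # negative values indicate a voltage drop
```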

Results and Discussion
Figure 1A shows the electrical change of the PEPSS-wrapped nanotube electrodes after being
exposed to the NO2 gas. The electrical change is represented as a percentage of voltage change in order to
normalize the results. From Figure 1A, it is observed that the conjugated CNT:PEPSS complex appears
able to detect the presence of NO2, indicated by a large change in the electrical output. Included in the
figure is the electrical response of the plain polymer as a control. It can be seen that these materials by
themselves are not capable of detecting NO2, since no prominent electrical change was recorded
during the experiment. The electrical output of the plain MWCNTs as a control is also shown in Figure 1A.
Here, the detection by the nanotubes lacks specificity, since it did not display any particular
trend. Indeed, the absence of the functionality imparted by a coating polymer yielded a nanostructure
without selectivity. Figure 1A also displays the influence of the CNTs:polymer ratio. It can be observed
that by increasing the fraction of nanotubes in the complex (4:1), a higher sensitivity in the electrode is
achieved. Figure 1B shows the degree of detection of the MWCNTs:PEPSS system. It can be observed
that the nanotube complex is able to detect as little as 270 ppm of NO2 gas. Figure 1C shows the different
degrees of detection between MWCNTs and SWCNTs wrapped in PEPSS. It can be seen that the MW
nanostructures offered a higher selectivity. Although research is in progress to further analyze this outcome,
it is assumed that the degree of purity of the single wall nanotubes resulted in a lower sensitivity than that
shown by the Multi-Wall nanotubes of 95% purity.





Figure 1. Electrical response of electrodes based on PEPSS-wrapped MWCNTs. A) Electrical output of
PEPSS-wrapped MWCNTs using different polymer ratios and controls after being subjected to 1081 ppm
of NO2. B) Electrical output of 4:1 MWCNTs:PEPSS. C) Electrical output of different PEPSS-wrapped
carbon nanotubes following an exposure to 1081 ppm of nitrogen dioxide.

The MWCNTs were also wrapped in PEG. Figure 2A shows the electrical output of the PEG-coated
MWCNTs when exposed to 1081 ppm of NO2. Figure 2A again shows that the CNT:polymer complex is
able to detect the NO2 gas. Also, the figure shows that by increasing the volume fraction of nanotubes in
the complex (4:1), a higher sensitivity in the electrode is achieved. These data suggest that PEG has less
selectivity than PEPSS, a factor that could be associated with the positive charges on the sulfur atom in
the PEPSS polymer. Indeed, the large electron cloud of the NO2 gas seems to show a
higher affinity towards the positive charge of PEPSS. In Figure 1A the (4:1) MWCNT:PEPSS based
electrode yielded a -61% change in voltage, whereas the (4:1) MWCNT:PEG based electrode (Figure 2B)
yielded only a -43% change in voltage. Figure 2A also shows that the PEG control was not able to detect
the presence of NO2 and that the lack of functionality of the plain nanotubes again led to a non-selective
nanostructure. Figure 2B shows the degree of detection of the MWCNT:PEG system. Again, this response
was less than the one obtained from the MWCNT:PEPSS system (Figure 1B). Figure 2C shows the degree of
detection between the MWCNTs and SWCNTs wrapped in PEG (1:1) exposed to 1081 ppm NO2 gas.
Once again the MWCNT:polymer system displayed higher sensitivity in this study.



Figure 2. Electrical response of electrodes based on PEG-wrapped MWCNTs. A) Electrical output of
PEG-wrapped MWCNTs using different polymer ratios and controls after being subjected to 1081 ppm of
NO2. B) Electrical output of 4:1 MWCNTs:PEG. C) Electrical output of different PEG-wrapped carbon
nanotubes following an exposure to 1081 ppm of NO2.

SWCNTs were also wrapped in PEG and tested. Figure 3 shows the electrical output of PEG-coated
SWCNTs when exposed to 1081 ppm of NO2. The results in Figure 3 show a similar trend to Figure 2A;
however, the data show that the MWCNTs displayed higher selectivity, for the aforementioned reasons.
Also, Figure 3 shows that by increasing the polymer fraction in the complex (1:4), a higher selectivity is
achieved, with a -30% change in voltage. This is the opposite of the behavior shown in Figure 2A, in which
a higher fraction of nanotubes (4:1) achieved higher selectivity. These results could be associated with an
increase in the selectivity of SWCNTs through polymeric functionalization. It should be noted that
the PEG control line does not appear in Figure 3 because it lies behind the SWCNT control line.


Figure 3. Electrical response of electrodes based on PEG-wrapped SWCNTs using different polymer
ratios and controls after being subjected to 1081 ppm of NO2.

SWCNTs were also wrapped in PA. Figure 4A shows the electrical output of PA-coated SWCNTs when
exposed to 1081 ppm of NO2. The results in Figure 4A show a similar trend to Figure 3. Indeed, when the
fraction of polymer in the conjugate was increased to a ratio of 1:2, a -45% change in voltage was
achieved. When the electrodes are exposed to 1081 ppm of NO2 gas, the response of the SWCNT:PA (1:2)
electrodes sits between that of the MWCNT:PEPSS (4:1) electrodes, which resulted in a -61% change in
voltage (Figure 1A), and that of the MWCNT:PEG (4:1) electrodes, which yielded a -43% change in
voltage (Figure 2B). These results are again encouraging, since they suggest that relatively inexpensive
nanotubes can be used for detection of hazardous gases. Figure 4B shows the degree of detection between
the MWCNTs and SWCNTs wrapped in PA (1:1) exposed to 1081 ppm NO2 gas. Once again the
MWCNT:polymer system yielded higher selectivity in this study.



Figure 4. Electrical response of electrodes based on PA-wrapped SWCNTs. A) Electrical output of
PA-wrapped SWCNTs using different polymer ratios and controls after being subjected to 1081 ppm of NO2.
B) Electrical output of 1:2 SWCNTs:PA.

Figure 5A shows the electrical output of the three MWCNT:polymer (1:1) systems studied here when
exposed to 1081 ppm NO2 gas. It appears that PA and PEPSS have similar selectivity, with around a -45%
change in voltage. PEG had a lesser response, with a -39% change in voltage. This could be associated
with the positive charges associated with PA and PEPSS. Figure 5B shows the electrical output of the
SWCNT:polymer (1:1) electrodes when exposed to 1081 ppm NO2 gas. It appears that PA and PEPSS
still have similar selectivity, with around a -32% change in voltage. PEG again had a lesser response, with
a -18% change in voltage.



Figure 5. Electrical response of electrodes based on polymer-wrapped CNTs. A) Electrical output of
polymer-wrapped MWCNTs using a (1:1) CNTs:polymer ratio after being subjected to 1081 ppm of NO2.
B) Electrical output of polymer-wrapped SWCNTs using a (1:1) CNTs:polymer ratio after being subjected
to 1081 ppm of NO2.

Conclusion
The objective of this research was to create a NO2 detection platform using CNTs and polymers. From the
data gathered, it appears that a CNT:polymer NO2 detection system is viable. It also seems that a
CNT:polymer conjugate is required for the detection of NO2. Finally, it appears that it may be possible to
use relatively inexpensive MWCNTs as the core material in sensing devices. It has been shown that the
electronegativity of the polymers seems to contribute to the selectivity toward the gases to be detected.
Acknowledgments
The authors are grateful to the URC-YSU and the Ohio Space Grant Consortium for funding this
research project.


References
1. An Introduction to Indoor Air Quality (IAQ), United States Environmental Protection Agency,
www.epa.gov/iaq/no2.html
2. Pedro Cortes, Garrick M. Brant and Geofrey B. Smith, "Detection of Chemical Analytes Using
Carbon Nanotubes," in XXXII National Meeting and First International Congress AMIDIQ, May
2011.
3. J. Li, Y. Lu, Q. Ye, et al., "Carbon nanotube sensors for gas and organic vapour detection," Nano
Letters, vol. 3, no. 7, pp. 929-933, 2003.
4. J. Suehiro, G. Zhou, and M. Hara, "Fabrication of a carbon nanotube-based gas sensor using
dielectrophoresis and its application for ammonia detection by impedance spectroscopy," Journal of
Physics D, vol. 36, no. 21, pp. L109-L114, 2003.
5. J. Suehiro, H. Imakiire, S. Hidaka, et al., "Schottky-type response of carbon nanotube NO2 gas
sensors fabricated onto aluminum electrodes by dielectrophoresis," Sensors and Actuators B, vol. 114,
no. 2, pp. 943-949, 2006.
6. J. Kong, N. R. Franklin, C. Zhou, et al., "Nanotube molecular wires as chemical sensors," Science,
vol. 287, no. 5453, pp. 622-625, 2000.
7. T. Someya, J. Small, P. Kim, C. Nuckolls, and J. T. Yardley, "Alcohol vapor sensors based on
single-walled carbon nanotube field effect transistors," Nano Letters, vol. 3, no. 7, pp. 877-881, 2003.
8. L. Valentini, C. Cantalini, I. Armentano, J. M. Kenny, L. Lozzi, and S. Santucci, "Highly sensitive
and selective sensors based on carbon nanotubes thin films for molecular detection," Diamond and
Related Materials, vol. 13, no. 4-8, pp. 1301-1305, 2004.
9. J. K. Abraham, B. Philip, A. Witchurch, V. K. Varadan, and C. Channa Reddy, "A compact wireless
gas sensor using a carbon nanotube/PMMA thin film chemiresistor," Smart Materials and Structures,
vol. 13, no. 5, pp. 1045-1049, 2004.
10. Y. Wanna, N. Srisukhumbowornchai, A. Tuantranont, A. Wisitsoraat, N. Thavarungkul, and P.
Singjai, "The effect of carbon nanotube dispersion on CO gas sensing characteristics of polyaniline
gas sensor," Journal of Nanoscience and Nanotechnology, vol. 6, no. 12, pp. 3893-3896, 2006.
11. L. Valentini, V. Bavastrello, E. Stura, I. Armentano, C. Nicolini, and J. M. Kenny, "Sensors for
inorganic vapor detection based on carbon nanotubes and poly(o-anisidine) nanocomposite material,"
Chemical Physics Letters, vol. 383, no. 5-6, pp. 617-622, 2004.
12. M. C. Petty and R. Casalini, "Gas sensing for the 21st century: the case for organic thin films,"
Engineering Science and Education Journal, vol. 10, no. 3, pp. 99-105, 2001.
13. B. Philip, J. K. Abraham, and A. Chandrasekhar, "Carbon nanotube/PMMA composite thin films for
gas-sensing applications," Smart Materials and Structures, vol. 12, pp. 935-939, 2003.
14. J. Li, Y. Lu, and M. Meyyappan, "Nano chemical sensors with polymer-coated carbon nanotubes,"
IEEE Sensors Journal, vol. 6, no. 5, pp. 1047-1051, 2006.
How Is the United States of America Being Protected by the United States Air Force With
Computer Systems Programming?

Student Researcher: Tanisha M. Brinson

Advisor: Dr. Edward Asikele

Wilberforce University
Computer Information Systems

Abstract
The responsibilities of Computer Systems Programming personnel are to supervise and perform
duties as computer analysts, coders, testers, and managers in the design, development, maintenance,
testing, configuration management, and documentation of application software systems, client-server
and web-enabled software, and relational database systems critical to war fighting capabilities. The
Air Force Specialty Code (AFSC) is an alphanumeric code used by the United States Air Force
(USAF) to identify an Air Force Specialty (AFS). Declining fiscal resources, expanding diversity of
mission, and ever changing technologies in the Air Force are impacting the availability of its most
valuable resource, people. These constraining factors will continue to exist in the future, making it
essential for the work force to be effectively and efficiently trained to perform duties within each
skill level of an Air Force Specialty. In order to meet the challenges of tomorrow, the Air Force must
place a greater emphasis on career field training. This Communications-Computer Systems
Programming Career Field Education/Training Plan provides a comprehensive core training
document that identifies life-cycle training/education requirements, support resources, and minimum
core task requirements.

Project Objectives
The objectives of this project will deal with the aspects of the United States Air Force and the
standards for security that protect operating systems, application software, files, and databases from
unauthorized access to sensitive information, or misuse of communication-computer resources. The
research effort will pinpoint the duties of Computer Systems Programming personnel in the USAF.
Also, it will describe USAF processes for protecting operating systems, software, and files, and will
describe how these systems are being attacked.

Methodology Used
In order for airmen to become Computer Systems Programming personnel they must graduate from
Air Force Technical School with the award equivalent to that of a 3-skill level apprentice. In a USAF
Technical School they learn how to work with systems using software methodologies such as
distributed processing, systems networking, advanced information storage and retrieval, and
management techniques. Student candidates will also develop and maintain system specifications.
Then they will conduct and participate in system reviews and technical interchanges. Further, as their
career progresses, they will develop and implement policy to enable effective information discovery,
indexing, storage, life-cycle management, retrieval, and sharing in a collaborative enterprise
information environment. Finally, they will harness capabilities of systems designed to collect, store,
retrieve, process, and display data to ensure information dominance. The United States Air Force,
along with other military branches, uses OPSEC. OPSEC is a military capability within Information
Operations (IO). IO is the integrated employment of three operational elements: influence operations
(IFO), electronic warfare operations and network warfare operations. Today, OPSEC is an
established methodology used by Military, Federal entities and Civilian Agencies and
Businesses. More and more private sectors are realizing the importance of Operations Security in
day to day operations. This helps to protect proprietary and sensitive information from accidental
disclosure, corporate espionage, internal espionage and more.
Results Obtained
To conclude, the United States Air Force plays a critical role in the defense of the United States
through control of air and space. One major way is by protecting computer security. Operations
Security (OPSEC) is simply denying an adversary information that could harm you or benefit them.
Another form of OPSEC, although not as widely accepted, is the intentional misinformation of
an adversary, designed to protect your true secrets. OPSEC is a process, but it is also a mindset.
By educating oneself on OPSEC risks and methodologies, protecting sensitive information
becomes second nature.

OPSEC is unique as a discipline, because it is understood that the OPSEC manager must make
certain decisions when implementing OPSEC measures. Most of these measures will involve a
certain expenditure of resources, so an estimate must be made as to whether the assumed gain in
secrecy is worth the cost in those resources. If the decision is made not to implement a measure,
then the organization assumes a certain risk. This is why OPSEC managers or Commanders must
be educated and aware of the OPSEC process. OPSEC is not only for Military or Government
entities. More individuals and Corporations are realizing the importance of protecting trade
secrets, personal security and intentions. Whatever the organization and purpose, OPSEC can,
and will, increase the overall security posture.

References
1. http://mr-gadget.hubpages.com/hub/PROTECTING-COMPUTER-SECURITY-SYSTEMS-
IN-THE-UNITED-STATES-AIR-FORCE
2. http://www.internet-security.ca/internet-security-news-archives-036/u-s-military-spy-drones-
will-now-run-on-linux-instead-of-windows.html
3. http://www.docstoc.com/docs/23925709/Communications-Computer-Systems-Programming
4. http://www.opsecprofessionals.org/
Atmospheric Temperature Analysis and Aerial Photography through High Altitude Ballooning

Student Researcher: Chellvie L. Brooks

Advisor: Dr. Augustus Morris, P. E.

Central State University
Manufacturing Engineering Department

Abstract
A major goal of the Central State University balloon satellite program is to launch scientific payloads,
using helium-filled weather balloons, to altitudes reaching 100,000 ft. In support of this effort, a
payload enclosure was designed to accommodate the instrumentation and power needed, automatically
collecting atmospheric temperature data and aerial photographs during the balloon's mission. A Hobo
data logger is a small and inexpensive device that was easily programmed to sample and store the
temperature data. A Canon camera was programmed with a simple electronic timer circuit to take
photographs of the landscape at fixed intervals of time. The temperature data collected was compared
with the U.S. standard atmospheric model of the Troposphere referenced by the National Oceanic and
Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), and
the United States Air Force (USAF).

Project Objective
There are three different objectives that pertain to the project. Acquiring knowledge and familiarity with
the basic balloon system is one objective. Conducting research and a direct hands-on approach allows the
first objective to be achieved. Since experiments conducted with the balloon system rely greatly on the
weather and time of day, establishing a reliable procedure for filling, launching, and recovering data from
the balloon becomes the natural second objective. Designing an experiment to collect data on
atmospheric temperature vs. altitude and compare with the U.S. atmospheric temperature model is the last
objective of the project.

Method Used
The balloon system consisted of three primary components: a 6-foot neoprene weather balloon used to lift
the entire system; a payload shell constructed using foam core board, foam insulation, and tape; and
1000 feet of 100 lb. string wrapped around a 3-foot wooden winder as the basic tethering system.
The balloon was inflated with helium from a regulated helium tank through a PVC coupling system
designed to allow easy balloon inflation.

A HOBO data logger device was used to record temperature measured with an external temperature
probe; each measurement was stamped with a time and date. An aerial photography system was added to the
payload using a modified Canon camera triggered by an electronic timer circuit.

Prior to each experiment, the HOBO data logger was set up to record temperature every 5 seconds. The
camera system was designed to take pictures 4 times every minute. Both devices were placed into the
payload structure and sealed. Meanwhile the balloon was inflated with helium to a diameter capable of
lifting the total balloon system and payload weight. Once the payload and tethering device were attached
to the inflated balloon, the camera system and the data logger were turned on.

As each experiment took place, the balloon system was allowed to ascend to altitudes in increments of
100 feet. At each level, sufficient time was allowed to record the steady state temperature. The
inclination angle of the system attached to the tether was also recorded to calculate the altitude accurately.
A fluorescent flag was attached to the tether every 100 feet to comply with FAA regulations. This
process continued through a maximum altitude of 1000 feet. The recording process also continued
during the balloon's descent.
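
For context, the altitude at each level follows from the tether length paid out and the recorded inclination angle, and the U.S. standard atmosphere supplies the reference troposphere temperature for comparison. The sketch below assumes a straight tether and the standard 6.5 K/km lapse rate from a 15 °C sea-level baseline; the straight-tether treatment is a simplification and not necessarily the exact procedure used here.

```python
# Altitude from tether length and inclination angle (straight-tether
# assumption), plus the U.S. standard atmosphere troposphere temperature.
import math

def altitude_ft(tether_ft, inclination_deg):
    return tether_ft * math.sin(math.radians(inclination_deg))

def std_atm_temp_c(alt_m, t0_c=15.0, lapse_k_per_m=0.0065):
    """Troposphere model: T = T0 - 6.5 K per km of altitude."""
    return t0_c - lapse_k_per_m * alt_m

h_ft = altitude_ft(500, 60)              # e.g. 500 ft of tether at 60 degrees
print(h_ft, std_atm_temp_c(h_ft * 0.3048))
```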


Results
At the present time there are no conclusive results for this project.

Acknowledgments
Dr. Augustus Morris
Dr. Abayomi Ajayi-Majebi
National Science Foundation
Ohio Space Grant Consortium

References
http://scipp.ucsc.edu/outreach/balloonfest2003/index.html
http://spacegrant.montana.edu/borealis
www.nasa.com
www.noaa.com

Wireless Charging System

Student Researcher: Rachel L. Bryant

Advisor: Dr. Yan Zhuang

Wright State University
Department of Electrical Engineering

Abstract
Electronics require a cable connection to a power supply when charging their battery, which can be
inconvenient and restrictive. Problems can also arise with the cable, which can easily be lost, or in the
charging port, which can easily break. In the past, people have used three main types of wireless charging
techniques: antennas, transformers, and resonance coupling. Often these techniques have issues with
adequate range and power transfer efficiency, which makes using them impractical. Using resonance
coupling, a technique in which frequencies at the transmitter are matched to the frequencies at the
receiver to increase the inductive properties of the system, wireless power transfer can be made more
efficient over a short distance. A wireless charging system that is efficient could be a convenient and less
restrictive option. Free from the limitations caused by cable connections all people could benefit.

Project Objectives
The purpose of this project was to create a matched network capable of maximizing the efficiency of
power transferred between two existing coils. The goal was to reach better than 40% efficiency with 3
feet between the coils, and wirelessly charge a 3.7 volt lithium-ion polymer battery. To achieve the main
objective of the project, smaller goals needed to be met. The first goal was to simply excite an
electromagnetic standing wave between the two copper test coils. The next goal was to measure the
resonant frequency of the coils and the impedance of each coil at this resonant frequency. With this
information, calculations needed to be performed to create circuit prototypes that would maximize the
power transfer to the load. The last small goal was to build a circuit that allowed the battery to charge and
observe the output simultaneously with a spectrum analyzer. After these small objectives were met, the
matched network was implemented and tested, monitoring how effectively the lithium-ion battery
was charged.

Methodology Used
Maximum power transfer occurs when the load impedance is matched to the source impedance. A
matching network can be placed between a load and a transmission line, and is designed so that the
characteristic impedance of the transmission line is equal to the impedance seen looking into the matching
network (Pozar, 2005, p. 222). This project required a network to be designed matched to 50 Ω, which is
the impedance of the signal generator that was used during testing. The resonant frequency of the copper
coils first needed to be measured, because energy is transferred easiest at this frequency. To find the
resonant frequency, the transmitter coil was connected to a Hewlett-Packard impedance analyzer. This
step in the project required an above average understanding of the impedance analyzer and a specialized
test fixture. The impedance analyzer was set up to show the magnitude and phase of the impedance of the
coil as many frequency sweeps were performed. Table 1 in the Figures/Tables section shows data
gathered during one short frequency sweep. The receiver coil went through the same testing. The
magnitude and phase data were transformed into real and imaginary parts. The resonant frequency was
then determined by looking at when the imaginary part of the impedance was zero. The impedance of the
coils at the resonant frequency was observed. The measurements were verified by hand calculations and
the inductance of each five turn coil was estimated.
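
A minimal sketch of that step is shown below, using the magnitude and phase values from Table 1; interpolating linearly to the Im(Z) = 0 crossing is an added detail, since the report only states that the zero of the imaginary part was inspected.

```python
# Find the resonant frequency from a magnitude/phase impedance sweep
# (values below are read from Table 1) at the Im(Z) = 0 crossing.
import numpy as np

f_mhz = np.array([10.5, 10.6, 10.7, 10.8, 10.9])
mag_kohm = np.array([0.964, 0.942, 0.934, 0.932, 0.939])
phase_deg = np.array([162.8, 168.4, 174.1, -179.8, -173.5])

z = mag_kohm * 1e3 * np.exp(1j * np.deg2rad(phase_deg))  # complex impedance
im = z.imag
i = np.where(np.diff(np.sign(im)) != 0)[0][0]            # sign-change index
# linear interpolation for the Im(Z) = 0 crossing
f_res = f_mhz[i] - im[i] * (f_mhz[i + 1] - f_mhz[i]) / (im[i + 1] - im[i])
print(f_res)  # ~10.8 MHz for this sweep; 10.7 MHz was reported over many sweeps
```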

The first attempt at making a matching circuit was to design a transmission line. The transmission line
approach would have allowed for the most accurate impedance matching, because there would not have
been limitations due to real-world component values. For this technique, the impedance of the copper
coils at resonant frequency then needed to be normalized using

z_L = \frac{Z_L}{Z_0}

where Z_0 is the impedance of the power amplifier, Z_L is the impedance of the coils at resonant frequency, and z_L
is the normalized impedance. The normalized impedance was then plotted on the Smith Chart and
the chart was used to help characterize the transmission line impedance matching prototypes.
Unfortunately, the prototypes developed using this method would have resulted in milled circuit boards
over one meter in length. The more practical option then became creating a matched network using
lumped components.

Again the Smith Chart was consulted to determine which components and values to use. This method
resulted in two different prototypes. The first prototype consisted of a 3.3 µH inductor in series with the
coil and a 72 pF capacitor in parallel with the inductor. The alternative was a 75 pF capacitor in series
with the coil and a 3.1 µH inductor in parallel with the capacitor. A set of prototypes was constructed.
One was connected to the output of the transmitter coil and one to the input of the receiver coil. The
system was tested, and the matched network transferred power with an efficiency of 2%. (The unmatched
network had a power transfer efficiency of less than 0.02%.) After several more tests, the 75 pF capacitors
were removed and replaced with 30-65 pF variable capacitors so that the circuit could be tuned more
quickly and more easily. Using an LC checker and many more tests, the impedance matching circuits
were hand-tuned further. The variable capacitor on both matching circuits was turned to its smallest
setting, 30 pF. The 3.3 µH inductor on the transmitter coil was replaced with a 2.7 µH inductor in series
with a 5.6 µH inductor. The inductor on the receiver coil remained unchanged. Tests with this matched
network yielded the most stable and consistent results. This matched network also provided the greatest
power transfer efficiency.
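
For reference, the textbook L-network formulas give a first-cut set of matching values for a mostly resistive load. The sketch below is illustrative only (the final circuit was hand-tuned on hardware, as described above), but with the measured 933 Ω load at 10.7 MHz it lands near the roughly 70 pF and 3 µH prototype values.

```python
# First-cut L-network values to match a resistive load to a 50-ohm source.
# Standard textbook formulas; treating the 933-ohm coil impedance as purely
# resistive at resonance is a simplifying assumption.
import math

def l_match(r_load, r0=50.0, f_hz=10.7e6):
    q = math.sqrt(r_load / r0 - 1)            # valid for r_load > r0
    x_shunt = r_load / q                      # shunt reactance across the load
    x_series = q * r0                         # series reactance toward the source
    c_shunt = 1 / (2 * math.pi * f_hz * x_shunt)   # realize shunt as a capacitor
    l_series = x_series / (2 * math.pi * f_hz)     # realize series as an inductor
    return c_shunt, l_series

c, l = l_match(933.0)
print(round(c * 1e12, 1), "pF,", round(l * 1e6, 2), "uH")  # ~67 pF, ~3.1 uH
```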

The variation between the calculated component values and the component values used in the final
impedance matching network can be attributed to the circuit board that was used. The board itself is made of
copper layers separated by a fiberglass core, which creates a large parallel plate capacitor. The
capacitance of the board is very low, but with components of such small values and the
exacting nature of the project, the capacitance of the board made a difference. This is not the only
reason the discrepancy occurred. The environment that the tests were run in was not an ideal one.
Having fluorescent lights on in the lab caused the noise floor to be raised and could have affected the
accuracy of the small RF measurements. When the coils were in operation, the system was affected by
everything in the room, such as the metal equipment housings. If the SMA connectors were tampered with
from test to test, the results would also turn out largely different. The sensitive nature of the system
created many challenges that were overcome by the adjustment of the matched network, resulting in
straying from the original calculated prototype.

Another circuit had to be built so that the battery could charge effectively, while the power transfer
efficiency could be monitored on the spectrum analyzer. A full wave rectifier was built to utilize the
negative side of the input signal and create direct current to increase the efficiency of the charging circuit.
After building the basic rectifier, a yellow LED and a 10 Ω resistor were added in series. The small
resistor was added to allow for open circuit measurements, while the LED was added as a visual check
that the battery was actually charging. In parallel with these components, the battery connector was
added in series with a single diode. The diode was incorporated to help prevent the battery from
discharging into the circuit when insufficient power is present. At the end of the circuit board an SMA
connector was added so that the spectrum analyzer could monitor the power transfer efficiency during
battery charging and future tests. With everything designed and built, the system was tested fully by
attempting to charge the 3.7 V lithium-ion polymer battery mentioned above.

Results Obtained
The resonant frequency of the coils was measured through several sweeps on the impedance analyzer to
be 10.7 MHz, and at this frequency the impedance of each coil was measured to be 933 Ω. The total
power transfer efficiency of the network without impedance matching circuitry was measured to be 0.02%
at a distance of 3 feet. The greater the distance between the coils, the smaller the power transfer
efficiency. This can be observed on Figure 1 of the Figures/Tables section. After connecting the first
lumped component prototypes to each coil, the maximum repeatable power transfer efficiency measured
was 2% at the same distance. Much lower than expected, the matching circuits were heavily tested and
tuned. The system most often operated with about 10% power transfer efficiency at 3 feet. The highest
power transfer efficiency measured was 12%.
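
Since both the applied and received powers are read in dBm on the spectrum analyzer, the efficiency figures quoted here follow from the standard decibel conversion; a short sketch with illustrative numbers is:

```python
# Percent power-transfer efficiency from spectrum-analyzer dBm readings.
def efficiency_pct(p_in_dbm, p_out_dbm):
    return 100.0 * 10 ** ((p_out_dbm - p_in_dbm) / 10.0)

print(efficiency_pct(-10.0, -20.0))   # a 10 dB loss corresponds to 10%
```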

When the charging circuit and battery were introduced into the system, the circuit was measured to have
20 volts at the output. The battery was measured to charge at around 4 volts and the current passing
through it ranged from 14 to 20 mA. The resonant frequency slightly changed once the battery was
connected for charging and was measured to be 10.6 MHz. From the spectrum analyzer, it was
determined that the power was transferred most effectively at 10.625 MHz when the battery was
charging. This phenomenon can be seen in Figure 2 of the Figures/Tables section.

Figures /Tables

Table 1. Data gathered during a short frequency sweep using the impedance analyzer.
Frequency (MHz)    |Z| (kΩ)    θ (°)
10 1.176 144.5
10.1 1.125 146.7
10.2 1.08 149.6
10.3 1.03 153.2
10.4 0.995 157.6
10.5 0.964 162.8
10.6 0.942 168.4
10.7 0.934 174.1
10.8 0.932 -179.8
10.9 0.939 -173.5
11 0.957 -167.2

Power Transfer as a Function of Distance (-10 dBm Applied)

Figure 1. Data taken from the spectrum analyzer as the distance between the coils was steadily increased.

Power Transfer Test While Charging Battery

Figure 2. Frequency sweep during battery charge process to show greatest power transfer efficiency.

References
1. Pozar, David. (2005). Microwave Engineering (3rd ed.). New Jersey: John Wiley & Sons.
Process Design of Reinforced Polymeric Resins Impregnated
With Multi-Walled Carbon Nanotubes

Student Researcher: Beatrice M. Burse-Wooten

Advisor: Dr. Abayomi Ajayi-Majebi, P. E., CMfgE

Central State University
Department of Manufacturing Engineering

Problem Statement
Polymeric resins without fiber reinforcement are weak and cannot deliver acceptable engineering
performance. Even when reinforced, they may not be strong enough to withstand a given force. Utilizing
RTM (fiber-reinforced resin transfer molding) is needed to increase the strength of composite
materials.

Problem Significance
Millions of dollars are tied up in the deployment of composite materials in the aerospace and
automotive industries. Weak composite materials have led to the loss of many lives; one example is the
airplane crash in a New York City neighborhood caused by a failed composite component. Failures of
this nature can be avoided with the right experimentation and testing to increase the strength of the
composite materials being used today.

Problem Solution
In this experiment, using carbon nanotubes and nanotechnology infusion approaches, reinforced
polymeric resinous materials can be made even stronger. A process design study will determine the
percentage strength increase. This study exploits the observation that a carbon nanotube fiber, with a
characteristic structure on the order of one-billionth of a meter, is sixteen times stronger than
steel at one-sixth of its weight. By varying the nanotube loading of the polymeric materials from 1%
to 4%, with 0% as a control, we will test the strengths using IZOD testing and ANOVA. Using this
information we will determine the best alternative. After this experiment is conducted, we believe the
improvements will lead to better quality, reliability, and performance, which, according to
Dr. W. Edwards Deming, leads to higher profits for the organization.

Project Execution Strategy
Several steps will be taken to carry out the experiment. These steps include:
1. Specify appropriate statistical experimental designs to save cost, increase testing efficiency and
guarantee desired statistical soundness of designed experiments.
2. Fabricate samples using multi-walled carbon nanotubes.
3. Conduct the tests on braid-reinforced multi-walled carbon nanotube samples and traditionally
reinforced samples.
4. Determine impact strength of the nanotechnology materials.
5. Develop empirical models.
6. Provide a rational explanation for the impact strength increases resulting from the selected
nanomaterials, fabricated at CSU using materials obtained from the Zyvex Nanotechnology
Manufacturing Company, under various nanomaterial loadings.

Design Process
The first step in the design process is the (1) Mold Design. The mold is made of three aluminum plates
with dimensions of 24 × 20. The plates will be fastened together with 106 screws, forming a mold that
can hold up to 42 samples. The second step is the (2) Mixture Design. The mixtures will be determined
by a computer program giving the percentages of resin, fiber, hardener, and nanomaterial to mix. There
will be different mixtures for the samples: one with 0% nanomaterial, and four more with 0.5%, 1%, 2%
and 4%. Next is the (3) Testing of Samples. The testing will be performed using the
Izod Impact Tester and the Tensile Tester shown in Figure 1. The final step is the (4) Data Analysis.
Data will be collected and put into an SPC program using either the t-test or ANOVA methods.
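
As an illustration of the mixture design step, the sketch below computes component masses for one
batch at a target nanotube weight fraction. It is a hypothetical helper, not the project's actual
program; the 500 g batch size and the 4:1 resin-to-hardener ratio are assumptions, and the fiber
reinforcement is omitted for brevity.

def batch_masses(batch_mass_g, nano_wt_frac, resin_to_hardener=4.0):
    # Split a batch into nanotube, resin, and hardener masses by weight.
    nano = batch_mass_g * nano_wt_frac
    remainder = batch_mass_g - nano
    hardener = remainder / (1.0 + resin_to_hardener)
    resin = remainder - hardener
    return {"nanotube_g": nano, "resin_g": resin, "hardener_g": hardener}

# One batch at each loading studied (0% control, then 0.5%, 1%, 2%, 4%):
for frac in (0.0, 0.005, 0.01, 0.02, 0.04):
    print(frac, batch_masses(500.0, frac))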


Figure 1. Tensile Tester used to test the samples from step 2.

Sample Characteristics, Preparation and Testing

Sample Characteristics
Property         | Multi-Walled Carbon Nanotubes in "Conterra"** | Diethyltoluenediamine (Epikure) hardener
Flash Point      | 249 °F                                        | 275 °F
Specific Gravity | 1.15                                          | 1.02
Boiling Point    | 260 °C                                        | 308 °C

** Conterra is the material purchased already mixed with the different nanotube percentages.

The table above displays characteristics of the materials used. Other characteristics include the
viscous, clear, light-colored appearance of the carbon nanotube mixture and the pungent smell of the
diethyltoluenediamine (Epikure) hardener.

When preparing the molds, a mold release agent needs to be applied to the surface of the mold for ease
of sample removal after mixing, pouring and curing take place. Using IZOD sample molds, 42
nanotechnology samples per batch are fabricated at high temperatures. The high-precision temperature
oven shown in Figure 2 is used to receive and cure nanomaterial-functionalized epoxy resin systems
prepared by Zyvex¹ at either the 4% or 2% by weight formulation.


Figure 2. High Precision Temperature Oven

¹ Zyvex High Performance Materials is collaborating with Central State University on this project.
While the resins are in the oven, they are monitored using various sensors and chart recorders. When
the heated samples come out of the oven, the numbers recorded will be used in analyzing the strength
of the materials. Using ANOVA at a 5% level of significance, the analysis will be made and the best
alternative will be chosen.
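
A minimal sketch of that analysis, assuming scipy is available, is shown below; the impact-strength
numbers are purely illustrative placeholders, not project results.

from scipy import stats

izod_0pct = [2.1, 2.3, 2.0, 2.2]   # hypothetical Izod impact strengths per loading
izod_1pct = [2.6, 2.8, 2.7, 2.5]
izod_4pct = [3.1, 3.3, 3.0, 3.2]

# One-way ANOVA across nanotube loadings at the 5% significance level.
f_stat, p_value = stats.f_oneway(izod_0pct, izod_1pct, izod_4pct)
if p_value < 0.05:
    print(f"Reject H0 (p = {p_value:.4f}): loading affects impact strength.")
else:
    print(f"Fail to reject H0 (p = {p_value:.4f}).")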

Estimated Cost


Above is an estimate of the cost to complete this experiment. We are currently working on grants
alongside the advisor of the project. After acquiring the equipment needed to finalize the experiment,
we will put the samples into the oven and collect our data.

Acknowledgments
1. Dr. Abayomi Ajayi-Majebi, PE. CMfgE, Project Advisor
2. Melvin Shirk, Lab Technician
3. Two (2) CSU Engineering Students (Sophomore/Junior/Senior)
4. Dr. Allen Jackson of the Wright State University Multifunctional Nanotechnology Center for
work on field emission SEM.
Creating a Lunar Biosphere

Student Researcher: Audra D. Carlson

Advisor: Karen Henning

Youngstown State University
Beeghly College of Education

Abstract
In this lesson students are challenged to create a self-sustaining enclosed biosphere containing plants and
animals. In preparation for the project, students will learn about the unique features of our Earth that
allow it to sustain life. These features will be compared to the environments of the two most popular
destinations for humans to visit in the future: the Moon and Mars. Upon reaching the understanding that
the harsh environments of the Moon and Mars cannot sustain life as we know it, students will be asked to
design a biosphere like those that may one day be used by astronauts in order to survive for prolonged
periods of time on other planets.

Lesson
This lesson is an adaptation of the Lunar Biosphere lesson featured in Exploring the Moon: A Teacher's
Guide with Activities. Students will use 2-liter bottles to house a miniature ecosystem while working in
groups of 3 or 4. After learning about the complex interactions between organisms and their
environments, each small group will choose from various soil compositions, amounts of water, light, and
small animals to inhabit its ecosystem. The goal of each ecosystem is to allow the inhabitants to survive
and thrive during the 2 week testing period.

Detailed notes will be kept by each student regarding soil composition, amounts of water added, and plant
and animal characteristics. Students will develop a hypothesis predicting the performance of their
biospheres based on the various elements added and their knowledge of ecosystems. Bottles will be sealed
until the end of the 2 week period. Students will then measure and record any growth of organisms and
compare results with their hypothesis.

Pedagogy
The underlying theory for this hands-on activity is constructivist. Within the context of this theory,
students learn best from experiences in which they are not simply given the answers, but guided towards
them. This encourages students to utilize their existing knowledge in order to help them process a new
understanding of real life situations.

The students gain background information on ecosystems and biomes before constructing their
biospheres. This provides a basis and framework for new knowledge to be gained during the activity. The
activity allows the students to interact with the material and to draw connections between the hands-on
experience and the content. The activity is then the culminating part of their study of ecosystems and
biomes, making the concepts more concrete for students. They can apply what they know to making their
biomes more hospitable and discuss possible variables that might have changed the outcome of the
biomes that they and their classmates have constructed.

Objectives
Students will be able to understand that an organism's habitat directly affects its ability to thrive
and survive.
Students will be able to conduct a scientific investigation using scientific techniques and
appropriate instruments.
Students will be able to communicate their findings.

Ohio Content Standards
Grade Five Life Sciences Benchmark:
C. Compare changes in an organism's ecosystem/habitat that affect its survival.
Grade Five Life Sciences: The Diversity and Interdependence of Life indicators 4 and 6.
Grade Five Scientific Inquiry Benchmarks:
A. Use appropriate instruments safely to observe, measure and collect data when conducting a scientific
investigation.
B. Organize and evaluate observations, measurements and other data to formulate inferences and
conclusions.

Student Engagement
The beginning of the lesson required a more passive role for the students as they listened to a lecture
regarding ecosystems and interdependence. Later in the lesson, student groups completed a vocabulary
worksheet and participated in a brainstorming activity leading up to the introduction of the biosphere
project. During the construction of the biosphere students were very involved. They were sharing ideas,
asking questions and formulating hypotheses.

Materials Used
Soil Materials: Vermiculite, organic low-quality topsoil, organic premium topsoil, organic
fertilizer, and water; sand, gravel, and charcoal are also good choices to have available.
Plants: 3 fast-growing seasonal varieties; for early spring in Ohio, Dusty Miller was chosen for its
sensitivity to excess moisture, Swiss chard for its overall hardiness, and pansy due to its
sensitivity to poor soil. These will vary depending on hardiness zone.
Animals: Sow bugs and earthworms; also spiders, beetles, snails.
Project materials: 2-liter bottles with lids and bottoms cut off by an adult using an X-Acto knife or
razor, waterproof tape, measuring cups and spoons, index cards, pencils, yarn, funnel, data sheets, rulers

Students worked together to fill the bottom of the 2-liter bottles with about 2 cups of soil mixture.
They then added the three plants, each carefully measured from the base of the stem to the top of the
leaves. Once the plants were in place, the tops of the bottles were taped back on with waterproof tape
to keep water from escaping. A funnel was used to add the desired amount of water and the small
organisms through the spout at the top of the bottle. The bottles were sealed and kept in a specified
area to ensure the same amount of sunlight until opened to check results.

Results
The students greatly anticipated the chance to measure the growth of their specimens. While all of the
plants did stay alive, several of the biomes lost their sow bugs. Students were very active in offering their
own ideas as to why this happened as well as why some plants grew better in some biospheres, while
others did not. This helped the class realize that different organisms require different environments;
therefore one soil composition would not suit all plants.

Assessment
Students were asked to present their biospheres to the class. In the presentation, students were expected to
include their soil composition, amount of water, amount of sunlight, all of the organisms included, the
beginning and ending measurements, and their hypothesis. Students also had to explain whether or not
their hypothesis was accepted as well as what they would change about their experiment (if anything).

Conclusion
This activity allowed students to solidify the concepts of an interconnected web of life within an
ecosystem. Students had the opportunity to apply knowledge gained while gathering evidence to help
them reach their own conclusions. The role of the teacher as a partner in gaining knowledge rather than a
person dispensing facts encouraged students to be inquisitive and to take responsibility for their own
learning. When data was compared, it was concluded that each plant was successful in a different
composition of soil and amount of water. Even the absence of earthworms made a difference in one of the
projects. By the end of the lesson, students were able to conclude that each ecosystem is fragile and each
component plays an important role in a system's health.
The Sierra Project: Unmanned Aerial Systems for Emergency Management Demonstration Results

Student Researcher: Robert C. Charvat

Advisors: Dr. Kelly Cohen, Dr. Manish Kumar

The University of Cincinnati
School of Aerospace Systems and Fire Science

Abstract
The SIERRA (Surveillance for Intelligent Emergency Response Robotic Aircraft) Project is an
organization of University of Cincinnati faculty, graduate students, and undergraduate students who
are currently developing next-generation UAS (Unmanned Aerial Systems) for emergency management
applications in the area of wildland fires. The team has partnered with both the West Virginia
Division of Forestry and the Marcus UAV Corporation to develop a program that can provide
cost-effective solutions to wildland fire management. The program features the implementation of both
a tactical UAS and a computer-based EMRAS (Emergency Management Resource Allocation System), which is
expected to lead the way to a new generation of affordable technology to improve wildland fire
response and management. This paper is the 2nd iteration of the NASA-OSGC paper on this topic and
presents the demonstration results.

Introduction
Wildland fires are a natural occurrence throughout much of the country and are an essential aspect of
a healthy ecosystem. These fires are also known nationwide to cause large amounts of destruction. Each
year the United States spends an estimated $1 billion on wildland fires and suffers as much as $20
billion in damages. Though federal and state agencies have a significant amount of resources to battle
these fires, the fires continue to threaten natural resources and citizens.


Figure 1. Typical Fire Time Line
Figure 1 above addresses several main concepts of fire management that are the focus of this project.
As Figure 1 shows, decreasing the time needed to safely manage a fire contains it more quickly, which
leads to less fire growth and less damage from the fire. Table 1 accounts for the land loss per year
and shows an increasing trend.

Project Objectives
The main objective of this project was to utilize Unmanned Aerial System (UAS) technology with an
active fire organization by working with government and a manufacturer. The technology would be
demonstrated to show its capability and studied to better understand how it could be used. This study
required several flights involving all project participants. At the conclusion of the study, the goal
was for the technology to have been demonstrated and its possible uses more clearly understood for
follow-up research. The end goal of this research is to support the potential for active fires to be
monitored by UAS technology to improve first responder effectiveness.

Figure 2. SIERRA Team Partnerships
Figure 3. Vehicle setup and capacities

Figure 2 above shows the four main types of partnerships in this project. The partnerships among
manufacturing, emergency management, government, and a research organization provided an environment
for all parties to collaborate during development and promoted positive working relationships around
the technology. Figure 3 on the right shows the 4.5-ft wingspan Zephyr UAS used during the research.
This vehicle featured a computer interface for control and returned video/GPS data streams to identify
a fire and provide its location.



Figure 4. Situational optimality program
Figure 5. Coopers Rock Forest burn site

Figure 4 shows the situational optimality program, a potential benefit enabled by the autonomous fire
tracking a UAS could provide. By merging data from many different sources, the program could make
recommendations to fire managers on where and how to fight a fire in a live scenario. This could
decrease the time needed to react to a fire, increase safety, and save money. Figure 5 shows Coopers
Rock State Forest, the location of the first demonstration flight at a 40-acre controlled burn. This
burn was conducted to clear out thick vegetation and rehabilitate the area. The blue dot shows the
launch point of the UAS, which was very close to the fire to demonstrate the vehicle's ability to be
launched from virtually anywhere.

Results and Discussion
The Zephyr UAS by Marcus UAV Corporation was selected for flight based on three fundamental design
aspects. First, its high wing loading made it more stable in flight and less susceptible to wind
gusts. Second, the Zephyr is a flying wing, which requires fewer moving parts to control. Lastly, the
Zephyr came standard, ready to fly, with all the electronics required to support implementation of
more advanced ground technologies from its data streams. These are all key benefits of the platform
that have been prioritized in the development of this vehicle type for further use in wildland fires.

Downsides of the Zephyr involved training, operation, and repair issues. The main training issue was
the difficulty of flying a flying-wing aircraft: to gain a high level of expertise, the team would
need to practice with the system before flight and would most likely have several crashes before
becoming effective pilots. Additionally, a well-developed operational doctrine for using the aircraft
was not available, which resulted in many complications such as battery failures, incompatibility
between computer platforms, and difficulty implementing the vehicle in the field environment. Lastly,
the aircraft was not easy to repair, which prevented training on it, and though it is considered low
in price for a UAS, its cost of $10,000 was still enough to preclude boundary-condition testing, which
would have to be better understood before field trials. Overall these downsides did not prevent the
project from completing its missions, but they clearly demonstrated areas of improvement for this
technology.


Figure 6. Student UAS Launch
Figure 7. UAS Surveillance Flight over burn



Figure 8. Computer interface for UAS Flight
Figure 9. Photo above Command Area

The vehicle provided aerial surveillance in two flights in West Virginia, flying in support of the
West Virginia Division of Forestry. These flights were successful in showing the major concepts of the
vehicle: take-off and landing from a remote location, live video feed of wildland fire activity,
long-endurance flights with low risk to personnel, and the ability to save money versus manned aerial
flights. Burn 1 was specifically successful at showing the robustness of the platform and its ability
to fly in a hazardous environment. Burn 2 was successful at showing the ease of using the platform and
its true capability for low-altitude flight while accurately collecting information.

In conclusion, by completing these tasks the vehicle clearly demonstrated its potential to change the
way future generations approach wildland fires, using UAS technology to increase safety and reduce
costs. The project provided positive outcomes for all parties by demonstrating areas of improvement in
the technology, new possible technology uses, advanced capabilities, and a model of success for future
technology implementation. Importantly, combined with the potential of situational optimality
programs, the vehicle proved to be a possible new data source, providing data sets that previously
could not easily be obtained. As UAS technology has proven this potential in wildland fire fighting,
it is expected to do the same in many other applications, and it represents a fundamental change in
the future of aerospace technology and how it can positively influence our lives.

Acknowledgments
The author would like to thank the OSGC for its support, as well as the State of West Virginia
Division of Forestry, the University of Cincinnati, the Marcus UAV Corporation, NASA (various
offices), and Dr. Kelly Cohen, Dr. Gary Slater, Mr. Eugene Rutz, and Dr. Manish Kumar for their
continued support of the project.

References
Available by request: R. Charvat.
Biodegradable vs. Non-Biodegradable Slurry in Deep Trench Excavations

Student Researcher: Darren M. Conley

Advisor: Dean M. Bortz M.A., CSI, CDT

Columbus State Community College
Department of Construction Sciences

Project Objectives
The objective of this project is to examine the issues concerning biodegradable slurry mixtures and non-
biodegradable slurry mixtures in deep trench excavation and construction. Slurry construction can be used
to seal excavated walls against water intrusion or as a barrier to contain deposited contaminants which
may threaten the fresh groundwater supply and/or the surrounding soil deposits.

Methodology Used
I used a simple compare/contrast framework to examine different types of slurry that are used with certain
construction activities.

Findings were that biodegradable slurry and non-biodegradable slurry are used in situations where a
prescribed construction activity dictates the type of slurry employed.

In construction, slurry is a term often used to refer to a thick, watery semi-solid held in suspension
that's used in projects where large-scale sub-grade filtration, shoring, or containment is required.

When constructing trenches, slurry has two chief responsibilities. First, it's used to shore up the
walls of an excavation by applying its own hydraulic pressure to counteract any external pressures
from surrounding soil and/or groundwater. Second, depending on how the excavated wall will be used,
slurry will temporarily or, in some cases, permanently seal the walls to prevent any groundwater
intrusion. Biodegradable slurry consists of either a guar-gum or polyacrylamide polymer mixed in
proportion with water.

Guar gum, made from the guar bean grown chiefly in India and Pakistan, is the most commonly used
polymer because of its low cost as well as its high viscosity and stability. A major limitation is an
effective life of approximately one day.

Guar gum naturally deteriorates due to enzyme action, leaving only sugars (galactose and mannose)
behind to be consumed by micro-organisms present in the surrounding soil. This limitation is corrected
by supplying chemical additives (biocides and/or pH control) to the slurry mix, which help to slow the
biological activity. This in turn extends the slurry's effective life (workability) to an average of
fourteen days.

Once installation of the products into the trench has been completed, the slurry's viscosity is broken
down by the application of enzyme degradation chemicals, which allows the groundwater to flow freely.
Residual amounts of polymer not broken down are consumed by microbes in the soil.

Polyacrylamide is an inert, synthetic polymer that isn't as cost-effective or as viscous as guar gum.
An advantage is that it degrades very slowly and thus has a much longer natural effective life. Unlike
guar gum, polyacrylamide does not require any type of biocide to prevent its premature degradation.
When faced with an excavation in an environmentally sensitive area, polyacrylamide is a popular choice
because its usage does not require any chemicals or materials that may come into conflict with local
laws restricting the use of biocides.

Non-biodegradable slurry, known for high impermeability, consists of a commonly used mixture of
sodium bentonite and water.

Sodium bentonite is a naturally occurring, clay-like rock formed from volcanic ash and found mostly in
the northwestern United States and Greece. Attractive to the construction industry for its high
absorbency (it expands fifteen to eighteen times its dry volume when combined with water), its
clumping tendencies, and its use as a sealant, it is also relatively inexpensive to refine. It
penetrates adjacent soil from a few inches to several feet depending on the characteristics of the
soil.

The Environmental Protection Agency (EPA) considers sodium bentonite to be very effective in sealing
porous soils that surround toxic landfills, abandoned oil and water wells, sewage lagoons and most other
hazardous waste areas. Effective mixing ratios of sodium bentonite to water also vary with the
characteristics of the soil, and failure of the slurry can occur if the bentonite is not ground finely
enough, is mixed with chemically contaminated water, or comes in contact with soils containing salt
water.
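
As a purely illustrative piece of arithmetic based on the expansion figure above, the sketch below
estimates the dry bentonite volume needed to fill a trench of a given size; the trench volume is a
hypothetical value, not from this study.

trench_volume_ft3 = 100.0          # hypothetical trench volume to fill
for swell in (15.0, 18.0):         # wet expansion factors cited above
    dry_ft3 = trench_volume_ft3 / swell
    print(f"Swell {swell:.0f}x -> about {dry_ft3:.1f} ft^3 of dry bentonite")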

Two common methods have been developed to contain sub-surface hazardous waste: the installation of
impermeable sub-surface barriers in the form of synthetic, high-density polyethylene sheets, or the
use of a soil/cement bentonite slurry mixture.

High-density polyethylene (HDPE) sheets are installed vertically into the ground (usually in
thirty-foot increments) by way of a special trencher that also backfills as the sheets are installed.
This form of containment minimizes handling of hazardous waste and also any soil disruption at the
site. The second method of containing sub-surface waste is the application of either soil bentonite
slurry (SB) or cement bentonite slurry (CB).

SB walls are constructed using normal slurry methods, the difference being that a blend of excavated
soil, dry sodium bentonite, and water is poured into the trench and eventually sets up into a
clay-like semi-solid. Advantages of this method include its flexibility with ground movement, its high
resistance to contaminated groundwater, and a very low permeability.

CB walls are constructed with sodium bentonite mixed with cement. This slurry has a low viscosity,
which allows it to flow more easily through sub-surface voids and cracks, thus increasing its
effectiveness. This method is referred to as the "one-step" method because, after excavating, the
slurry is left to harden overnight with no back-filling needed. Other advantages of cement bentonite
are increased trench stability and strength, as well as its ability to be used in areas of narrow or
restricted access.

Results Obtained
Comparing biodegradable and non-biodegradable slurries shows that each suits different applications,
with the prescribed construction activity dictating the type of slurry employed.

References
1. Dunlap, David W. "Holding Back the Flood." The New York Times, Oct. 8, 2005. (p. 1)
2. Gutberle, L. C. "Slurry Walls." Virginia Tech Civil Engineering Department, 1994. (p. 2)
3. Nelson, Bryn. "Retaining Walls Keeping Water Away." Newsday, Sept. 18, 2001. (p. 2)
4. Paul, David B.; Davidson, Richard R.; Cavalli, Nicholas J. Slurry Walls: Design, Construction and
Quality Control. ASTM, Philadelphia, PA, 1992. (pp. 2-3)
5. Ryan, C.; Spaulding, C. "Strength and Permeability of a Deep Soil Bentonite Slurry Wall."
Proceedings of the GEO-Congress 2008, ASCE, March 2008, New Orleans, LA. (pp. 2-3)
Radiation Recoil Effects on the Dynamical Evolution of Asteroids

Student Researcher: Desire Cotto-Figueroa

Advisor: Thomas S. Statler

Ohio University
Department of Physics and Astronomy

Abstract
Gravitational forces and collisions were once considered the primary mechanisms governing the evolution
of asteroids, but today it is understood that non-gravitational forces play a critical role. Radiation recoil
forces are caused by the anisotropic emission of thermal photons from the surface of a rotating object that
is heated by sunlight. The Yarkovsky effect is a radiation recoil force that results in a semimajor axis
drift in the orbit that can cause main belt asteroids to be delivered to powerful resonances from which
they could be transported to Earth-crossing orbits. This force depends on the spin state of the object,
which is modified by the YORP effect, a variation of the Yarkovsky effect that results in a change of the
spin rate and obliquity (i.e., the angle between the spin axis of the object and the normal to its orbital plane). The
results obtained by this research will help to explain how the Yarkovsky and the YORP effects act among
Near-Earth Asteroids (NEAs) and therefore will contribute to the understanding of the dynamical
evolution of the NEA population.

Project Objectives
Asteroids are leftover pieces from the formation of the solar system about 4.6 billion years ago, and
the vast majority are found within the main belt between the orbits of Mars and Jupiter. Near-Earth Asteroids
(NEAs) are asteroids whose orbits bring them in close proximity with the Earth. As their typical lifetime
is far less than the age of our Solar System, it is thought that NEAs are objects from the main belt,
constantly delivered to their current orbits by various mechanisms. Gravitational forces and collisions
were once considered the sole mechanisms for asteroid delivery, but today it is understood that radiation
recoil forces are a critical factor in the evolution of the smaller objects (D < 40 km). The Yarkovsky and
YORP effects are a force and torque, respectively, imparted by the recoil kick from infrared photons
radiated from the Sun-warmed surface of an asteroid. The Yarkovsky effect can modify the semimajor
axis of an asteroid, which is the average radius of the orbit. The YORP effect can modify the spin rate and
the obliquity, the angle between the spin axis of an asteroid and the normal to its orbital plane. In the classical
picture of the YORP cycle, the YORP effect gradually increases the obliquity of an asteroid while
making it spin up to a certain point. Then the asteroid starts to slow down, asymptotically
approaching a stable obliquity.

Most asteroids larger than a few hundred meters are aggregates. Aggregates are agglomerations of
fragments, ranging in size, that are held together by gravitational forces and/or material strength.
Simulations to date have assumed a constant YORP torque that continuously spins up the object past the
point where mass shedding and possible re-accumulations of the shed mass occur. As a result, the YORP
effect is a preferred candidate for the formation of binary asteroids. However, from the results found by
T. S. Statler (2009) we know that this scenario is not realistic. The YORP effect is extremely
sensitive to the topography of an asteroid, and a minor change in the surface of an aggregate asteroid
can stochastically change the YORP torques. Using identical objects and adding the same crater or
boulder in different positions, T. S. Statler (2009) showed that the resulting torques could differ by orders
of magnitude or even change sign. Moreover, K. A. Holsapple (2010) showed that if an object is spun up
and allowed to deform continuously, the deformation increases the moment of inertia sufficiently that
the increase in angular momentum results in a decreasing spin rate. Therefore, the idea that an object
would evolve continuously through the YORP cycle seems to be too simple.

The time scales over which mass reconfiguration can occur are much shorter than the time scales over
which the YORP effect changes the spin rates and obliquities. If the continuous reconfiguration leads to a
shape of the aggregate that is nearly symmetric, the YORP torques could become negligibly small or even
vanish. On the other hand, due to the extreme sensitivity of the YORP effect on the topography of
asteroids, reconfiguration of an asteroid that for example is spinning up could lead to a change in sign of
the YORP torques and make the asteroid spin down. Both cases would imply a self-limitation in the
evolution of the spin state due to the YORP effect and the objects would not follow the classical YORP
cycle. Moreover, subsequent reconfigurations could lead to random variations in the YORP torques
making the evolution of the spin state completely stochastic. One objective of my research is to model
self-consistently the YORP effect on the spin states of dynamically evolving aggregates. I will do an
extensive analysis of how shape changes affect the spin state evolution of aggregate objects by doing a
correct calculation of the YORP effect rather than simply applying a constant torque. This analysis will
let us test whether the YORP acceleration is self-limiting and whether the shape changes interrupt the
YORP cycle and make the spin evolution stochastic.

The change in the obliquities of the asteroids should leave a distinctive signature, driving the spin
axes of most asteroids to obliquity values of 0, 90 and 180 degrees. But obtaining a direct
measurement of their distribution of obliquities is not an easy task. Determining the rotation poles
of asteroids requires radar observations or multiple lightcurves at different illumination and orbital
phases for each NEA. So far, rotation poles have been determined for only about 20 NEAs. Instead of obtaining a
direct measurement, the obliquity of an NEA can be inferred if the orbital semimajor axis drift rate due to
the Yarkovsky effect is known. From the linear heat diffusion theory for a spherical body, the semimajor
axis drift rate varies linearly with cosine obliquity. Therefore if we obtain the semimajor axis drift rates
of the NEAs from astrometric data, we can estimate their distribution of obliquities. Another objective of
my research is to search for this signature of the YORP effect in the population of small NEAs by
constraining the obliquity distribution. If the distribution shows that the obliquities of the
asteroids tend towards those values, it will provide supporting evidence for the significance of the
YORP effect as the main physical process in the spin evolution of small NEAs.
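
For reference, the linear dependence mentioned above can be written schematically (a standard form
for a spherical body, not an equation taken from this paper) as

    \frac{da}{dt} = \left(\frac{da}{dt}\right)_{\max} \cos\varepsilon ,

where ε is the obliquity, so retrograde rotators (cos ε < 0) drift inward and prograde rotators
(cos ε > 0) drift outward.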

Methodology Used
Obliquity Distribution of NEAs:
The Yarkovsky effect, which is by definition a secular effect, manifests itself as a steady drift in the
semimajor axis. The best estimate of the semimajor axis drift is obtained when the shape and spin state of
a NEA are known, in which case a finite element nonlinear thermal model can be used to estimate the
heat radiated by each facet of its surface. An estimate can still be made without knowing the shape of the
NEA by use of a linear heat transfer model if the spin state of the NEA is known. But for the great
majority of NEAs, there is no information available about their shape or rotation pole. When there is no
information available, we can still establish the presence of a semimajor axis drift through the estimation
of a transverse non-gravitational acceleration. The dynamical model used in the Jet Propulsion
Laboratory (JPL) Comet and Asteroid Precision Orbit Determination Package (henceforth referred to as
ODP) written by Steven R. Chesley and others at JPL, already includes non-gravitational forces, which
are important in the orbital evolution of comets and asteroids. ODP uses optical and radar astrometric
observations to fit orbits of objects. The best fit is the orbit that minimizes the sum of squares of the
residuals, which are the difference between the measured and the computed sky positions. Recently, S.
R. Chesley et al. (2008) used this approach to estimate the semimajor axis drift rates and uncertainties of
683 NEAs. Many indicated an implausible drift rate, presumably due to astrometric errors.

Subsequently, S. R. Chesley et al. (2010) developed an astrometry debiasing technique to remove the
astrometric errors that come from systematic local errors in the star catalogs that are used in the data
reduction. Here, we follow the same approach and select a total of 801 NEAs that may possibly reveal
the presence of a semimajor axis drift, based on the following criteria: a diameter smaller than 10 km
(since the Yarkovsky effect strengthens for smaller objects), and optical astrometric observations
extending over a time interval greater than ten years for objects without radar observations, or
greater than three years for objects with them. We apply
the new astrometry debiasing technique to the data in order to remove astrometric errors. The astrometric
observations are also weighted using different values considering more or less accurate observations
depending on the observing program, the star catalog and the epoch of observation. All available optical
and radar astrometry data is then used to fit the orbits of the NEAs using ODP.

We estimated the semimajor axis drift rates and associated uncertainties due to the Yarkovsky effect
for the sample of 801 NEAs. In order to determine which cases are spurious, we estimated a maximum
possible semimajor axis drift rate for each NEA. We consider a semimajor axis drift to be spurious
when it is more than 1.5σ above 1.25 times the maximum possible semimajor axis drift rate. We chose
1.25 to allow for a possibly greater semimajor axis drift rate under a different combination of
parameter values than those assumed. On this basis we find that 149 out of 801 are spurious cases (see
Figure 1). The semimajor axis drift rates and the respective uncertainties for the non-spurious cases
are shown in Figure 2. Objects with negative drifts indicate retrograde rotation (90° ≤ obliquity ≤
180°) and objects with positive drifts indicate prograde rotation (0° ≤ obliquity ≤ 90°). If an object
has a semimajor axis drift rate close to zero, its obliquity must be close to or statistically
consistent with 90°, its mass or diameter is rather high, or its thermal inertia is very low. Among
those objects with a significant Yarkovsky signal, retrograde rotation predominates. This predominance
of retrograde rotators is well explained by studies of the delivery of NEAs from the main belt, which
have found that NEAs come through the ν6 resonance generally only in retrograde rotation. The ν6
resonance is a secular resonance between asteroids and Saturn.
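
A minimal sketch of that spurious-case screen is given below; the variable names are assumptions, not
the project's code.

def is_spurious(drift, sigma, max_drift, scale=1.25, n_sigma=1.5):
    # Flag a measured semimajor axis drift rate as spurious when it exceeds
    # scale * max_drift by more than n_sigma standard deviations.
    return abs(drift) - n_sigma * sigma > scale * abs(max_drift)

# Example in arbitrary units: drift 40 with sigma 5 against a maximum possible
# drift of 20 gives 40 - 7.5 > 25, so the case is flagged as spurious.
print(is_spurious(40.0, 5.0, 20.0))  # True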

Although there is a simple dependence between the semimajor axis drift rate and the obliquity for
spherical bodies with a fixed spin rate and thermal inertia, the true scenario is more complicated.
This simple dependence can't be used to infer the distribution of obliquities since, as shown in
Figure 1, an absolute magnitude of 1 is exceeded, implying unphysical values of cosine obliquity. We
will develop a code to explore a wide variety of models for the distribution of obliquities of the
NEAs. Our goal is to identify the intrinsic obliquity distribution that is consistent with the results
obtained for the semimajor axis drift rates of NEAs. For each NEA, the code will take as input the
semimajor axis drift rate, its uncertainty, and the physical parameters on which the semimajor axis
drift depends. A trial intrinsic obliquity distribution will be defined and folded through all
observational errors and selection biases, simulating the observational process. The resulting
predictions for the semimajor axis drift rates will be compared against the actual distribution of
semimajor axis drift rates. A new trial intrinsic obliquity distribution will then be selected and the
process repeated. After many trials, the results will be compared in order to identify which intrinsic
obliquity distributions are most consistent with the data and to rule out those that are not.
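
A minimal sketch of this forward-modeling loop is given below; the structure, distributions, and
numbers are assumptions for illustration, not the planned code itself.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_drifts(cos_eps, max_drift, sigma):
    # Predicted drift rates for sampled cos(obliquity) values, with
    # Gaussian measurement noise folded in.
    return max_drift * cos_eps + rng.normal(0.0, sigma, cos_eps.size)

# Trial intrinsic distribution: obliquities clustered near 0 and 180 degrees.
cos_eps = np.clip(rng.choice([-0.95, 0.95], size=5000)
                  + rng.normal(0.0, 0.03, 5000), -1.0, 1.0)
simulated = simulate_drifts(cos_eps, max_drift=20.0, sigma=5.0)

# Compare simulated and observed drift distributions, e.g., with a KS test;
# the observed sample here is a placeholder for the non-spurious NEA drifts.
observed = rng.normal(-15.0, 8.0, 652)
print(stats.ks_2samp(simulated, observed).pvalue)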

Aggregate Dynamics:

A minor change in the surface of an asteroid could stochastically change the YORP torques. We will do
an extensive analysis of how shape changes affect the spin state evolution of aggregate objects. For this
analysis we will combine two codes, TACO and pkdgrav. TACO is a thermophysical asteroid code
developed by Thomas S. Statler, the author and other students at Ohio University. TACO models the
surface of an asteroid using a triangular facet representation, which is the typical method used due to its
geometrical simplicity. This code can self-consistently compute the torques from the YORP effect. The
code pkdgrav is a cosmological N-body code modified by Derek Richardson and others at the University
of Maryland to simulate the dynamical evolution of asteroids represented as aggregates of spheres using
gravity and collisions. Each code uses the most appropriate representation of an asteroid to do its
calculations. Spheres are convenient to simulate the dynamical evolution of aggregates but the triangular
tiling is the method of choice for calculating the YORP torques due to its geometrical simplicity.

We have developed an algorithm to fit a triangular tiling over an object created for pkdgrav, composed of
spheres, in order to be used with TACO to compute the YORP torques. An example of a triangular tiling
fit obtained for an aggregate of spheres is shown in Figure 3. Besides this new algorithm, there are
essential aspects of physics that have to be correctly represented by the code. We have to obtain a
realistic representation of the surface of an asteroid and, among other requirements, need the ability
to modify the representation of the surface as minor changes occur. We have developed the code to
handle these issues properly. The last step, which we are currently working on, is to merge the
routines of TACO that
compute the self-consistent YORP torques with the code pkdgrav, which will be used to simulate the
dynamical evolution of aggregates.

Once we have completely merged the two codes, we will test if an aggregate can evolve to a shape for
which the YORP effect becomes negligibly small. As the YORP effect causes an asteroid to spin up, the
asteroid could be forced to change into new shapes. If the shape of the aggregate becomes nearly
symmetric, the YORP torques could become negligibly small or even vanish. On the other hand, the YORP
effect is extremely sensitive to the topography of asteroids. As mentioned in the Project
Objectives, T. S. Statler (2009) showed that using identical objects and adding the same crater or boulder
in different positions resulted not only in torques that differed by orders of magnitude but also by sign.
Therefore, a minor change in the surface of an asteroid that is spinning up could reverse the sign of the
YORP torques and cause the asteroid to spin down, limiting the evolution of the spin state through the
YORP cycle. The timescales on which minor changes in the surface occur are much shorter than the
timescales required for an object to evolve through the YORP cycle. The continuous changes in the
surface of an aggregate could then cause a different evolution of the YORP torques, and therefore the
object may not necessarily evolve through the YORP cycle as was previously thought.

We will simulate a variety of realistic aggregates and let them evolve in order to find configurations
for which the YORP effect becomes self-limiting. These simulations will let us estimate numerically
the probability that YORP torques on realistic aggregates become self-limiting, and for which shapes
this occurs. We will also simulate a variety of realistic aggregates and follow their evolution in
order to compute the sequence of YORP torques through which they evolve. We will determine whether
changes in the surface result in a spin evolution that is stochastic, or to what degree the YORP cycle
is altered.

Significance and Interpretation of Results
The results obtained by this analysis will help to explain how the YORP effect acts among the NEAs, and
contribute to the understanding of the dynamical evolution of the NEA population. Extensive analyses of
the basic behavior of the YORP effect have been previously conducted, in the context of the classical
YORP cycle. Here we explore the possibility of a new behavior of the YORP effect, in which a minor
change in the surface of an aggregate asteroid can stochastically change the YORP torques. The
statistical probabilities that the YORP effect on aggregate asteroids indeed becomes self-limiting, or
that the YORP cycle is interrupted by minor changes in the surface, could provide a new diagnostic of
the material properties of an asteroid population. For monolithic asteroids, we would expect the basic
behavior of the YORP cycle; therefore, we would expect most obliquities to be driven towards the
asymptotic values of 0, 90 and 180 degrees for a population consisting of monolithic asteroids. For
aggregate asteroids, the spin state evolution may be stochastic and the obliquities may not tend towards
the asymptotic values.

The predictions made from the statistical probabilities can be tested against the obliquity distribution of
NEAs obtained from the semimajor axis drift rates due to the Yarkovsky effect. To find that most
asteroids have obliquity values near the asymptotic values would imply that most are monolithic and that
the YORP cycle has played an important role in their evolution. The rest of the objects could possibly
consist of aggregate asteroids. To find a random distribution in the obliquities of NEAs would imply that
the population consists of aggregate asteroids and that minor changes in the surface have modify their
spin evolution due to the extreme sensitivity of the YORP effect on the topography of asteroids. To find
a random obliquity distribution of NEAs along with peaks towards the asymptotic values would imply a
population of asteroids that consist of a mix of aggregate and monolithic asteroids.

Further observational evidence of whether a population consists of monolithic asteroids or not could be
obtained through the use of lightcurves. A light curve is the measurement of the change in brightness of
an object over a period of time. From our simulations, we may be able to predict characteristic shapes to
which aggregate asteroids evolve. Statistical analysis of these shapes and their simulated light curves
would allow us to identify what kind of light curves may be characteristic of aggregate asteroids. A
simple example would be when the continuous reconfiguration of an aggregate leads to a shape that is
nearly symmetric, in which case we would expect a light curve with a very small amplitude.

The discovery of NEAs is very important, but it is also of great importance to characterize these objects,
and to understand their origin and their evolution. This analysis will provide essential insight about the
material strength of the NEAs and whether they are monolithic, aggregates or a mix of monolithic and
aggregate asteroids. It is important to characterize them in order to develop a correct strategy in the
future to deflect a threatening object away from Earth in case of an imminent impact and to understand
how the object would respond. This analysis will also help to explain how the Yarkovsky and the YORP
effects act among the NEAs, and contribute to the understanding of the origin and evolution of NEAs and
therefore to the origin and evolution of our Solar System.

Figures

Figure 1. Absolute magnitude of the semimajor axis drift rates and the associated uncertainties,
normalized by the maximum semimajor axis drift rate. Black, green, blue and red asterisks represent
NEAs with SNR ≤ 1, 1 < SNR ≤ 2, 2 < SNR ≤ 3 and SNR > 3, respectively. The slanted solid lines
indicate an SNR equal to 1, 2 and 3. The vertical blue dashed line indicates a semimajor axis drift
rate that is 1.25 times the maximum semimajor axis drift. All the NEAs that do not reach the vertical
line within 1.5σ lie below the slanted blue dashed line. These are spurious cases and are represented
by crosses.


Figure 2. Semimajor axis drift rates and the associated uncertainties. Black, green, blue and red
asterisks represent NEAs with SNR ≤ 1, 1 < SNR ≤ 2, 2 < SNR ≤ 3 and SNR > 3, respectively. The slanted
solid lines indicate an SNR equal to 1, 2 and 3 to each side. Only non-spurious cases are plotted. A
negative value of the semimajor axis drift rate indicates a retrograde rotation, while a positive
value indicates a prograde rotation. Among those objects with a significant Yarkovsky signal,
retrograde rotation predominates.









Figure 3. Example of the triangular fit obtained for an aggregate of spheres made to represent the asteroid
25143 Itokawa.

References
S. R. Chesley et al. (2003) Science, 302:1739-1742.
S. R. Chesley et al. (2008) Bulletin of the American Astronomical Society, p. 435.
S. R. Chesley et al. (2010) Icarus, 210:158-181.
M. Cuk and J. A. Burns (2005) Icarus, 176:418-431.
K. A. Holsapple (2010) Icarus, 205:430-442.
A. Kryszczynska et al. (2007) Icarus, 192:223-237.
D. C. Richardson et al. (2000) Icarus, 143:45-59.
D. C. Richardson et al. (2009) Plan. Space Sci., 57:183-192.
D. P. Rubincam (2000) Icarus, 148:2-11.
J. G. Stadel (2001) PhD thesis, University of Washington.
T. S. Statler (2009) Icarus, 202:502-513.
D. Vokrouhlicky (1998) Astron. Astrophys., 335:1093-1100.
D. Vokrouhlicky and P. Farinella (1998) Astron. Astrophys., 335:351-362.
D. Vokrouhlicky (1999) Astron. Astrophys., 344:362-366.
D. Vokrouhlicky et al. (2000) Icarus, 148:118-138.
D. Vokrouhlicky and D. Capek (2002) Icarus, 159:449-467.
K. J. Walsh et al. (2008) Nature, 454:188-191.
D. K. Yeomans (1994) ACM, IAU Symposium 160, p. 241.

Wind Tunnel Blockage Corrections for Flapping Wing Models

Student Researcher: Robert W. Davidoff

Advisor: Dr. Aaron Altman

University of Dayton
Department of Mechanical and Aerospace Engineering

Abstract
This aerodynamics project tests flapping-wing models to better characterize the effect of tunnel
blockage on force and moment data. To enable this, data on 3-inch and 6-inch plates of aspect ratio 2
moving in two kinds of motion, pitch and plunge, were collected in the AFRL Horizontal Free-surface
Water Tunnel (HFWT). Experiments were performed at two Reynolds numbers (25k and 50k) with a variety
of amplitudes and frequencies. Blockage effects could then be observed by comparing normalized force
and moment data between similar runs of the 3-inch and 6-inch flat plates.

Project Objectives
In aerodynamics, wind tunnel testing provides a critical check stage for designs headed to the real
world, as well as an invaluable source of small-scale data that can be used to refine theoretical
models. However, this testing has its own requirements. Whereas the real situations being modeled
concern external flows around an air vehicle, wind tunnel flows are internal flows inside the tunnel
boundaries and thus are affected by the additional factors of blockage and wall interference. In order
to produce wind tunnel data that has value in design testing or serves as a useful base for theory,
these effects must be understood and compensated for. Such blockage corrections have a long history
for stationary models, such as wing test sections or models of aircraft and aircraft components,
refined over more than a century of aerodynamic wind tunnel testing. However, flapping wings, with
their associated plunge, pitch, and even more complex motions, severely test the bounds of current
understanding of blockage and wall effects on moving models. Much of the current testing in this field
applies corrections based on work originally done in 1979, which in turn is based on earlier work,
none of which was specifically designed to deal with flapping-wing motions. Not surprisingly, there
are indications that these corrections may not be optimal for the motions involved in flapping-wing
problems.

The objective of this research is to produce a better understanding of blockage and wall effects on
flapping-wing flows. Such an understanding is critical to allowing research on flapping wings to be
conducted in a way that better avoids or corrects for the effects of blockage of the tunnel volume or
of the presence of the tunnel confines. To this end, the performance of 6-inch and 3-inch flat plate
wing segments in similar flows was examined in pure-pitch and pure-plunge motion cases of varying
characteristics, both centered in the tunnel and off-center, to examine the effects of the tunnel
walls on the flow. By comparing force and moment coefficients in the two sets of scenarios, the
conditions under which blockage and tunnel wall effects become significant can be characterized, and
methods of correcting for them can be derived for flapping wings.

Methodology
Flapping wing testing involves investigating a wider variety of parameters than conventional
steady-flow models. In addition to conventional parameters like Reynolds number and angle of attack,
flapping wing testing also involves periodic parameters such as frequency and amplitude in several
potential types of motion. Since varying these parameters radically alters the flow characteristics,
the blockage and wall effects in collected force and moment data could also vary radically.

To characterize the variance in blockage effects with changes in these parameters, existing data
collected on 3-inch and 6-inch rectangular plates of aspect ratio 2 by Indra Sai Kiran Jangam at the
AFRL Horizontal Free-surface Water Tunnel (HFWT) was used. This facility consists of a test section
46 cm wide and 61 cm high, with a length of 300 cm, and is capable of flow velocities ranging from
3-45 cm/s. A three-degree-of-freedom motion rig enables motion of the model in three axes. Two plunge
rods allow the model to be moved in plunge (vertical motion) or pitched (rotated about a horizontal
axis perpendicular to the stream). A carriage allows the model to be moved in the streamwise direction
to generate surge motions. While rather intrusive to the flow, this setup allows controlled but
aggressive flapping motions involving any combination of the three basic movements to be generated for
testing.

For the collection of the data used in this project, the pitch-plunge rig was programmed to generate
several varieties of pure-pitch and pure-plunge motion. To compare the effects of pitch, tests were
performed with amplitudes of 5 and 30 degrees about a mean angle of attack of 0 degrees. To observe
the effects of plunge, tests were performed with motion amplitudes of 10% and 50% of the chord length
of the flat plate. These tests were performed at several frequencies between 0.25 and 3 Hz at Reynolds
numbers of 25k and 50k. In addition, tests at 10 degrees pitch amplitude and 10% chord length plunge
amplitude were performed at a 100% chord length offset from the tunnel center in order to examine
whether this wall proximity introduced significant additional wall or blockage effects.
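
To give a feel for the blockage levels involved, the sketch below estimates the solid-blockage ratio
of each pitching plate, taking the projected frontal area as span x chord x sin(pitch angle); the
plate sizes and the 46 cm x 61 cm test section come from the text, while the frontal-area model is a
simplifying assumption.

import math

SECTION_AREA_CM2 = 46.0 * 61.0            # HFWT test section cross-section

def blockage_ratio(chord_in, aspect_ratio, pitch_deg):
    # Projected frontal area of a flat plate over the test section area.
    chord_cm = chord_in * 2.54
    span_cm = aspect_ratio * chord_cm
    frontal = span_cm * chord_cm * math.sin(math.radians(pitch_deg))
    return frontal / SECTION_AREA_CM2

for chord in (3.0, 6.0):                   # the two plates, aspect ratio 2
    print(chord, f"{blockage_ratio(chord, 2.0, 30.0):.1%}")  # at 30 deg pitch

At the 30-degree pitch amplitude this gives roughly 2% blockage for the 3-inch plate and 8% for the
6-inch plate, consistent with the larger plate being the one expected to show blockage effects.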

The test data were normalized and inertial force effects were removed. The resulting data were then
averaged using a 10-datapoint moving average and filtered using the Matlab filtfilt command. The
moving average was intended to remove high-frequency noise, making filtfilt numerically stable and
allowing it to remove additional remaining noise. Filtfilt was used to filter the data at half order
forward and half order backward in order to avoid phase-shifting the data. All drag and lift
coefficients are based on corrected free stream dynamic pressure. Some characterization of blockage
effect variation with changes in flapping parameters is presented below.
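
Before those results, a minimal sketch of the smoothing pipeline just described is given here, written
with scipy's filtfilt (which mirrors the Matlab command named above); the filter design parameters are
assumptions.

import numpy as np
from scipy import signal

def smooth_force_history(force, fs_hz, cutoff_hz=5.0):
    # 10-point moving average, then forward-backward (zero-phase) filtering.
    averaged = np.convolve(force, np.ones(10) / 10.0, mode="same")
    b, a = signal.butter(2, cutoff_hz / (fs_hz / 2.0))  # low-pass Butterworth
    return signal.filtfilt(b, a, averaged)              # no net phase shift

# Example with synthetic noisy lift data sampled at 100 Hz:
t = np.linspace(0.0, 4.0, 400)
noisy = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.random.randn(t.size)
clean = smooth_force_history(noisy, fs_hz=100.0)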

Results Obtained
Comparison of data collected in runs at the tunnel centerline with data from runs centered 1 chord
length off the centerline suggests that the additional blockage effects caused by such wall proximity
are marginal in both pitch and plunge cases. While differences can be observed in the data presented
in Figure 2, they do not appear significant.

Frequency variation, on the other hand, has stronger effects. As can be seen in Figure 3, increasing
frequency leads to greater magnitude coefficients of lift and drag, though some alteration of the lift and
drag cycle contour can be observed.

Some phase shift of the lift cycle can be observed with increasing motion amplitude, as in the 6-inch
and 3-inch pitch amplitude comparisons shown in Figure 4. However, while some decrease in lift force
amplitude could be seen in the 6-inch 5-degree amplitude curve compared to the 3-inch 5-degree case,
the amplitude of the 6-inch case actually increased relative to the 3-inch case in the 30-degree
amplitude data. In both the 5-degree and 30-degree amplitude data, drag was relatively unchanged, with
only slight variations in the curve.

Figures/Charts

Figure 1. Images of AFRL Horizontal Free-surface Water Tunnel (HFWT) test setup showing motion rig.

Figure 2. Plots illustrating marginal effect of wall proximity in tests performed at 1 chord length off
tunnel centerline compared to tests with otherwise similar parameters at the tunnel centerline. For both
pitch and plunge, the illustrated plot shows the maximum variation observed in the dataset; other cases in
pitch and plunge showed the same or smaller effects.


Figure 3. Plots showing effects of increasing frequency on coefficient of lift and drag variation over a single cycle. The magnitude of the coefficients of lift and drag increases in pitch. With plunge, the same holds for the coefficient of lift, but the drag coefficient seems to vary less, and the 3 Hz case has an odd character.

Figure 4. Plots showing effects of increasing pitch amplitude in the 0.5 Hz case at Re 25k. Note the apparent phase shift in the lift and moment coefficient curves. Drag data is fairly consistent for pitch, though apparently shifted slightly up. While the magnitude for the plunge case is roughly consistent (though also shifted), altered curve behavior is apparent.

Acknowledgments
The author of this report would like to thank Indra Sai Kiran Jangam and Dr. Aaron Altman for their
assistance in data collection and analysis. The author would also like to thank Ms. L. Miranda Gavrin of
MIT for her ongoing support in this and other endeavors.
Commercial UAV Autopilot Testing and Test Bed Development

Student Researcher: Joseph M. DiBenedetto

Advisor: Dr. Michael Braasch

Ohio University
School of Electrical Engineering and Computer Science

Abstract
There are many commercial uses for small scale Unmanned Aerial Systems (UAS) in domestic airspace. Before
UASs can be permitted into airspace over populated areas they must be proven safe, reliable and robust. There are
many concerns to be addressed with UASs in commercial airspace. They need to be able to accurately follow a
flight path, as well as sense and avoid obstacles that they might encounter, with an emphasis on avoidance.
They need to be able to do these tasks reliably and efficiently. There are several commercial-off-the-shelf
autopilots available that have a wide variety of capabilities. With an airframe capable of providing an appropriate
test platform, analysis of a high-end commercial-off-the-shelf autopilot can be pursued. The functionality and
reliability of the unit will be tested as well as the failsafe sensors built into the system.

Project Objectives
This project strives for the development of a small scale UAS test bed and the testing of a high cost commercial-
off-the-shelf (COTS) autopilot. The autopilot chosen was the MicroPilot MP2028 designed by MicroPilot. The
development of a suitable test bed to be used for testing small scale UAS systems in an accurate manner as well as
testing the reliability and robustness of the MicroPilot MP2028 are the main objectives of this project.
Determining flight duration and payload capabilities for the test bed will also be addressed during the project.
Verifying the capabilities and testing the user interface of the MicroPilot MP2028 is also a major objective of the
project. During the project anything that may compromise the safety of other aircraft in the immediate airspace as
well as any structures or people on the ground near the area of flight will be recorded and mitigated because safety
is one of the most important factors in the National Airspace System.

Methodology Used
Due to the interchangeability of its payload bay as well as the size of the bay itself, the GALAH designed by Autonomous Unmanned Air Vehicles was selected as the airframe to serve as the base for the test bed. The GALAH airframe is a pusher design, meaning that the engine is on the back of the airframe and the payload bay is in the front of the airframe. Therefore, to increase the payload capacity of the airframe, weight needs to be added to the back of the airframe. A Desert Aircraft DA-50 two-cycle engine, which is slightly larger than the recommended engine size, was chosen to increase the power of the airframe while adding payload capacity to the front of the airframe without adding dead weight to the back of the airframe. After assembly of the GALAH was complete and all of the components needed to make the airframe air ready were added, a Certificate of Authorization (COA) was obtained from the Federal Aviation Administration (FAA). The COA is a document issued by the FAA allowing an aircraft to be flown in the national airspace. The COA document requires information about the airframe such as the communication specifications, the failsafe states, the location where the airframe will be flown, and the pilot's qualifications. During the flight tests the pilot had a class two flight physical and had passed the written portion of the pilot's test for full-scale aircraft; an observer also had a class two flight physical. To allow for the initial testing of the airframe, a ballast weight was added to the front of the payload area to obtain the proper center of gravity for the airframe. Short flight tests were conducted preliminarily; after each flight the GALAH was inspected for any signs of wear or structural fatigue. Several flights were conducted to test the engine and control surfaces of the airframe, with each flight a success. Next, extended test flights were conducted to test the engine power and control surface response in flight. These flights were also used to gather data to determine the estimated maximum flight time of the airframe with the current fuel tanks. Upon completion of these test flights the airframe was ready to start testing UASs.
Several steps were taken before the MicroPilot MP2028 was integrated into the GALAH test bed to ensure
maximum safety of the airframe and anything around the flight field. First the simulation software that
accompanied the MP2028 was used to verify as many settings of the autopilot as possible as well as all of the files
that would be used to control the MP2028 during the test flights. After all of the simulations were completed
successfully, the MP2028 remote unit was integrated into the test bed for initial testing. Several bench tests were
then conducted to verify that the MP2028 remote unit was installed properly. During the bench test the GPS
signal was faked, making the MP2028 think it had acquired GPS lock, to allow for the autopilot to fully initialize
without obtaining a GPS location. Using the sensor displays on the ground-control-station user interface
computer, the airframe was pitched, rolled, and yawed without the autopilot engaged to verify that the displays
responded properly, showing that the remote unit was configured properly with respect to how it was installed in the
airframe. Next the airframe was again rolled, pitched, and yawed, this time with the autopilot engaged, to verify
that the control surfaces responded properly to the movements, correcting for the changes in orientation to attempt
to maintain level flight on the proper heading. After all of the control surfaces were configured properly,
communication testing was conducted. The MP2028 system has a dual communication link that is used to send
control data to the airframe from the ground control station, with the primary link also serving as a data link for
flight status information. Tests were conducted to verify that the loss of one of the communication links did not
disrupt the pilot's ability to control the airframe. After communication testing was completed, ground testing of
the MP2028 started. During ground testing the MP2028 locked itself into autopilot mode several times under
different conditions, not allowing the pilot to retake control of the airframe. This raised serious concern about the
safety of testing the MP2028 in the air without a failsafe to bypass the MP2028 control.

To address these safety concerns a board was designed that would take two inputs, one from the MicroPilot
system and the other from a standard RC receiver. The board has one output to the control surfaces of the airframe
creating a way to take over control of the airframe in case the MicroPilot locked into autopilot mode. The board
was designed using relays and an XOR gate to allow the selection of one of the two input signals and to make
sure that the signals were not distorted by the board at all. After the relay board was constructed, it was integrated
into the GALAH test bed and tested to make sure that it did not interfere with the operation of any of the other
systems being used. After verifying that the relay board functioned properly and that all of the control surfaces of
the airframe responded properly under both standard RC receiver control and MicroPilot control a new COA was
obtained for this new configuration of the test bed.
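
As a logic-level illustration only (the actual board implemented this selection in hardware with relays and an XOR gate; the function and names below are hypothetical), the selection behavior amounts to routing exactly one of the two input command streams to the control surfaces:

def select_control_source(autopilot_cmds, rc_cmds, pilot_override):
    # Route exactly one input stream to the control surfaces; the
    # selected signal passes through unmodified, mirroring the relay
    # board's requirement that commands not be distorted.
    return rc_cmds if pilot_override else autopilot_cmds

# Example: with the override engaged, the safety pilot's RC commands win.
servo_out = select_control_source({"elevator": 1500}, {"elevator": 1600}, True)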

After the COA for the GALAH and MicroPilot with the relay board integrated was obtained initial flight testing
commenced with a few data collection flights. These flights used the data collection capabilities of the MP2028 to
obtain data about the flight characteristics of the GALAH that were needed by the MicroPilot system. After
collecting the needed data on rotation speed, climb speed, cruise speed, approach speed, and control surface
deflections, the needed information was input into the control file for the MP2028. The next test flights to be
conducted were designed to allow the gains of the control loops to be set. During these test flights the airframe
reacted in a very unpredictable manner when under the control of the MP2028. The gains were then reduced by
25%, in accordance with the MicroPilot manuals, and the GALAH was flown again. The flight resulted in the same
unpredictable behavior as before. The process of lowering the gains and flying the GALAH again was repeated
several times with unpredictable behavior continuing each time.

It was decided at this time that another method for finding the proper gains was needed. With the flying season
coming to an end, the decision was made that it would be more effective to try to model the airplane and calculate
the gains during the winter months. A doctoral student working with flight control algorithms was recruited to
work on the model and find the proper gains.

Results Obtained
The GALAH test bed's final airframe weight was 22 pounds, including a five pound weight in the nose of the airframe to maintain the proper center of gravity. The balance weight is reduced to four pounds when the MicroPilot MP2028 is installed in the payload bay of the GALAH. The payload capacity of the airframe was five to eight pounds, depending on where in the payload bay the payload is placed; the farther the payload is from the center of gravity, the greater its effect on the center of gravity. The maximum flight duration is approximately 20 minutes, depending on how hard the engine is run during the flight: the harder the engine is run, the lower the flight time. The maximum flight time does not include the approximately five minutes of extra fuel left in the fuel tanks for safety reasons. The airframe responds very well to the control surfaces being moved and handles very elegantly, according to the project pilot. The conclusion is that the GALAH-based test bed is reliable and robust enough to serve as a UAS test bed and meets the requirements for testing needs.

The testing of the MicroPilot MP2028 has thus far been inconclusive. The initial ground testing showed that the MP2028 functioned reliably after it was properly installed and configured. Although this cannot yet be concluded with confidence given the amount of testing that has taken place, further testing may show the MP2028 to be a robust and reliable system. Before this testing can be conducted, the gains must be set properly for the GALAH airframe. The claim that the MP2028 can be used by anyone, even someone with no prior experience in the field, has so far proven unfounded. The process of setting the gains has been very involved, and even with a basic understanding of control theory, a great deal of effort and time was spent understanding how the system gains function. The process for setting the gains provided by MicroPilot proved ineffective when the initial gain settings were not relatively close to the airframe's required values. The control files require at least a basic understanding of programming languages to facilitate anything more than basic waypoint tracking. The MP2028 will require more testing before any additional conclusions can be made concerning issues with integrating COTS autopilot systems into the National Airspace System.

Acknowledgments
The author of this paper would like to thank The Ohio University Avionics Engineering Center for supplying the
equipment and materials for this project. Also the author would like to thank Dr. Michael Braasch for the setup
and organization of this project. Finally, the author would like to thank Dr. Jim Zhu, Mr. Jamie Edwards,
Mr. David Edwards, and Mr. Tony Adami for their cooperation during this project.
Nutritional Disparity Among African Americans

Student Researcher: Kristen D. Edwards

Advisors: Dr. Asit K. Saha, Dr. Sean S. Kohles

Central State University
Department of Mathematics and Computer Science

Introduction
Vitamin D is classified as a fat-soluble pro-hormone that helps the absorption and metabolism of calcium and phosphorus in the human body. There are two sources from which the body can obtain vitamin D: sunlight and nutritional supplements. Due to the continuous development of science and technology through industrialization, the human population is gradually less exposed to sunlight compared to earlier times. As a consequence, there is now a significant reduction in the natural production of vitamin D within the human body.

Another source of vitamin D is dietary: some types of fish (salmon, tuna and mackerel) and fish liver oils are considered to be the best sources. Some vitamin D is also present in beef liver, cheese and egg yolks. Most of these contain vitamin D3; some mushrooms provide variable amounts of vitamin D2.

Five forms of vitamin D have been discovered: vitamins D1, D2, D3, D4 and D5. Vitamin D2 (ergocalciferol) and D3 (cholecalciferol) are the two forms that seem to matter to humans the most. Vitamins D2 and D3 are collectively known as calciferol.

Data collected by the National Health and Nutrition Examination Survey (NHANES), USA found that 9% (7.6 million) of children across the USA were vitamin D deficient, while another 61% (50.8 million) were vitamin D insufficient.

For humans, vitamin D is obtained in two ways: food and sun exposure. From this initial form it has to undergo two hydroxylation reactions to become active in the body. The active form of vitamin D in the body is called calcitriol (1,25-dihydroxycholecalciferol).

The flow of calcium in the bloodstream increases when calcitriol promotes the absorption of calcium and phosphorus from food in the gut and the reabsorption of calcium in the kidneys. Calcitriol also plays a key role in the maintenance of many organ systems. This prevents hypocalcemic tetany and is essential for the normal mineralization of bone. Hypocalcemic tetany is a low-calcium condition in which the patient has overactive neurological reflexes, spasms of the hands and feet, cramps, and spasms of the voice box (larynx).

Of vitamins D2 and D3, which is most important for humans?

Both vitamins D2 and D3 are used in human nutritional supplements. Pharmaceutical forms include calcitriol (1α,25-dihydroxycholecalciferol), doxercalciferol and calcipotriene. The majority of scientists state that D2 and D3 are equally effective in our bloodstream. However, some say that D3 is more effective. Animal experiments, specifically on rats, indicate that D2 is more effective than D3.
What do we need vitamin D for?

Vitamin D has various functions: it is crucial for the absorption and metabolism of calcium and phosphorus, and especially for the maintenance of healthy bones. It is also an immune system regulator.

Scientists from the University of Colorado Denver School of Medicine, Massachusetts General Hospital and Children's Hospital Boston say it may be an important way to arm the immune system against disorders like the common cold.

It may reduce the risk of developing multiple sclerosis. According to Dennis Bourdette, chairman of the
Department of Neurology and director of the Multiple Sclerosis and Neuroimmunology Center at Oregon
Health and Science University, USA, multiple sclerosis is much less common the nearer you get to the
tropics, where there is much more sunlight.

According to a study of 3000 European men between the ages of 40 and 79, Vitamin D may have a key
role in helping the brain to keep working well in later life.

According to research carried out at the Medical College of Georgia, USA, Vitamin D is probably linked
to maintaining a healthy body weight.

Researchers from Harvard Medical School, after monitoring 616 children in Costa Rica, found that it can reduce the severity and frequency of asthma symptoms, and also the likelihood of hospitalizations due to asthma.

It has been shown to reduce the risk of developing rheumatoid arthritis in women.

Radiological experts from the New York City Department of Health and Mental Hygiene say a form of
vitamin D could be one of our body's main protections against damage from low levels of radiation.

Various studies have shown that people with adequate levels of vitamin D have a significantly lower risk
of developing cancer, compared to people with lower levels. Vitamin D deficiency was found to be
prevalent in cancer patients regardless of nutritional status, in a study carried out by Cancer Treatment
Centers of America.

Sunlight and Vitamin D Requirements
The following factors may reduce your body's vitamin D synthesis: living far from the equator (sunlight exposure is lower during many months of the year), cloud cover, smog, and sunscreens.

If your body cannot produce enough vitamin D because of insufficient sunlight exposure, you will need to obtain it from foods and perhaps supplements. Experts say that people with a high risk of vitamin D deficiency should consume 25 μg (1000 IU) of vitamin D each day to maintain a good level of 25-hydroxyvitamin D in the bloodstream. Elderly people, as well as people with dark skin, should consume extra vitamin D for good health.

Reference
1. Christian Nordqvist (2009). Medical News Today, 24 Aug 2009. http://www.medicalnewstoday.com/articles/161618.php
Computational Study of Cylinder Heating/Cooling for Boundary Layer Control

Student Researcher: Derick S. Endicott

Advisor: Dr. Jed E. Marquart

Ohio Northern University
Department of Mechanical Engineering

Abstract
Computational Fluid Dynamics (CFD) has become an increasingly important field of study in the past
decade, and is becoming more prevalent as computational power is ever improving. The objective of this
research is to use CFD to investigate the topic of boundary layer control for flow over a circular cylinder.
It is well known that adding or removing energy from the boundary layer as the flow passes over the
cylinder surface can control boundary layer separation. Past research in this area has included using
blowing/suction and plasma injection as a means to control the energy of the boundary layer. The focus of
this research is to study the feasibility of using simple heating and cooling locations on the surface of a
cylinder as a means to add or remove energy to or from the boundary layer, thus delaying or speeding the
separation of the flow. The research will also focus on finding the optimal locations, size, and amount of
heat that these locations should have such to produce the greatest effect on boundary layer separation.

Project Objectives
The objective of this research is to gain an understanding of boundary layer control via the
implementation of thermal energy on the surface of a circular cylinder in cross flow. In addition to this, a
familiarity with the CFD software was developed as well as a familiarity with the Unix operating system.
These software programs included Pointwise, Cobalt, and Fieldview. Pointwise is a robust grid
generation software specifically designed for creating CFD meshes. Cobalt is a parallel, compressible
Euler/Navier-Stokes flow solver capable of providing CFD solutions to complex geometries. Fieldview is
a post-processing software that provides interactive feedback on the solution generated. Once an
understanding of the software was gained, the CFD mesh for the chosen geometry was developed,
seeking ways to balance grid quality and complexity with the number of grid cells. After a satisfactory
grid had been developed, baseline cases were compared with known experimental results to validate the
CFD solutions. A number of cases were then chosen based on different heating and cooling quantities at
different Reynolds numbers. These cases were examined to gain a better understanding of the results
obtained.

Methodology
Approximately two weeks were spent at the beginning of the project gaining familiarity with the CFD
software involved in the research project. This was accomplished by experimenting with various tutorials
available with the software used at ONU for CFD experimentation.

After a familiarity with the programs was developed an exhaustive search was completed for research
related to the topic of boundary layer control. Most research up to the beginning of this research project
was related to mass suction and/or blowing as well as plasma injection as a means to add or remove
energy from the boundary layer. A small amount of research was found focusing on surface heating of flat
plates, but to the extent of the author's knowledge no other research had been published on studying
boundary layer control via heating and cooling locations on the surface of a cylinder in cross flow.

To begin the research, a grid had to be developed while keeping in mind that a balance must be sought between grid quality and the run time that grid complexity can cause when generating solutions. Many different grid combinations were developed in trying to strike this balance between size and quality. This experimentation in grid development included changing the types of cells used in different parts of the 3-dimensional cylinder grid. The purpose of trying these different combinations was also to find the best possible way to blend a very tight boundary layer with the wake volume behind the cylinder in order to best capture the viscous flow effects as the fluid passes over the surface of the cylinder. The empty grid showing the 3-dimensional cylinder is shown below in Figure 1. Figure 2 represents the 200 individual sections of the cylinder surface running spanwise, to which a heating or cooling surface temperature can be applied.


Figure 1. Empty cylinder grid
Figure 2. Close-up of cylinder surface domains

In creating an acceptable grid, there were several parameters and criteria that had to be considered. The first was developing a grid with a y+ of less than or equal to 1. The parameter y+ is essentially a non-dimensional measurement of the distance from the surface of the cylinder to the first grid point normal to that surface. In Cobalt the parameter y+ is defined by:

y+ = (ρ · Ufric · s) / μ     (1)

where s is the wall spacing of the first cell normal to the cylinder surface, Ufric is the friction velocity at the nearest wall, ρ is the air density, and μ is the dynamic viscosity of air.
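
Rearranging Eq. (1) gives the wall spacing needed for a target y+. The sketch below is an illustration only; the sea-level air properties and the friction velocity are assumed inputs, not values from the study.

def wall_spacing(y_plus_target, u_fric, rho=1.225, mu=1.81e-5):
    # s = y+ * mu / (rho * U_fric), from rearranging Eq. (1).
    return y_plus_target * mu / (rho * u_fric)

# Example: an assumed u_fric = 0.05 m/s in air gives s of roughly
# 3.0e-4 m for a target y+ of 1.
print(wall_spacing(1.0, 0.05))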

The next important parameter when designing a CFD grid is grid quality. Grid quality is defined in Cobalt, and as a general guideline for creating an acceptable solution it should be about 90%. This grid quality is a measurement of a variety of factors, including cell skewness (height vs. width), changes in volume from cell to neighboring cell, and the angular skewness within cells. The final important parameter to consider when developing a grid is cell number. Due to the level of computing power available, special attention had to be paid to reducing the total number of cells while maintaining solution integrity, because more cells means more run time between solutions.

Some of the grids that were developed to experiment with these parameters are shown below in Table 1.

Table 1. Comparison of grid types based on cell extrusions used

Boundary Layer          | Wake                           | Pros                              | Cons
Hexahedral              | Hexahedral structured block    | Best grid quality                 | High number of cells due to structured cells
Hexahedral              | Isotropic tetrahedron baffle   | High quality wake structure       | Required a higher y+ to smoothly mesh wake and cylinder cells
Hexahedral              | Anisotropic tetrahedron baffle | Less cells overall                | Highly skewed cells between wake baffle and cylinder extrusion caused low quality
Anisotropic tetrahedron | Anisotropic tetrahedron baffle | Achieved lower y+ with less cells | Poor grid quality throughout BL and wake
There were two types of wake regions considered. The first was a structured block attached to the downstream side of the cylinder extrusion. The second type of wake region considered was a baffle that spanned the length of the grid coming from the downstream side of the cylinder extrusion. Using a wake baffle, cells were grown in the normal direction off of a rectangular domain using either isotropic or anisotropic tetrahedra. The two types of wakes are highlighted below in Figure 3 and Figure 4.


Figure 3. Baffle wake plane
Figure 4. Structured wake block

After considering the aforementioned parameters, the final grid chosen was the hexahedral boundary layer combined with the hexahedral structured wake block. This mesh did produce the highest number of cells; however, it provided the highest quality grid by far, giving an overall grid quality of 94%. This final mesh consisted of 3 blocks, the final being an unstructured isotropic tetrahedral block. This grid contains approximately 1.5 million cells and requires roughly 28 seconds per iteration running on 20 processors. This grid is shown below in Figure 5.


Figure 5. Final grid, hex BL and hex wake block

Due to the time constraints of the project and the extensive time that 3-dimensional grids require to produce a quality solution, a 2-dimensional grid was developed for the initial experiments in order to develop trends and find out which temperature/area combinations have an effect and which do not. This 2-dimensional grid is an extension of the 3-dimensional one, using the concept of a structured boundary layer and a structured wake domain. The 2-dimensional grid is shown below in Figure 6.

Figure 6. 2-dimensional grid used for developing experimental trends

In order to validate the quality of the 3-dimensional grid and the solutions developed by Cobalt, they were compared against experimental data from Achenbach (1968) for the pressure distribution on the surface of a cylinder at a Reynolds number of 5E+06. Matching that Reynolds number and using the chosen 3-dimensional grid, the plots in Figure 7 were developed, showing the accuracy of the Cobalt pressure coefficient data in comparison to the experimental data.


Figure 7. Achenbach experimental C
p
data compared
with data developed by Cobalt, Re=5E+06
Figure 8. C
p
data shown in Fieldview at
Re=5E+06

Next, a strategy had to be developed for finding the trends that would be caused by the surface heating and cooling. In order to keep the initial tests simple and identify the best course of action to pursue, 3 different Reynolds numbers were used, each with 7 different cases using different heating or cooling amounts. During these first experiments the entire surface of the cylinder was heated or cooled a certain amount above or below the free stream temperature of the air. For Reynolds numbers of 1000, 10000, and 100000, each of the following cases was performed at a free stream temperature of 519 Rankine (the resulting case matrix is sketched after the list):

1. Baseline: surface temperature of cylinder matches free stream temperature.
2. Heated 10 Above: surface of cylinder is 10R above free stream at 529R.
3. Heated 100 Above: surface of cylinder is 100R above free stream at 619R.
4. Heated 400 Above: surface of cylinder is 400R above free stream at 919R.
5. Cooled 10 Below: surface of cylinder is 10R below free stream at 509R.
6. Cooled 50 Below: surface of cylinder is 50R below free stream at 469R.
7. Cooled 100 Below: surface of cylinder is 100R below free stream at 419R.

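A minimal organizational sketch of the resulting 3 x 7 run matrix follows (illustrative only; the variable names are hypothetical):

from itertools import product

reynolds_numbers = [1000, 10000, 100000]
surface_temps_R = [519, 529, 619, 919, 509, 469, 419]  # baseline, heated, cooled

cases = [{"Re": re, "T_surface_R": t, "T_freestream_R": 519}
         for re, t in product(reynolds_numbers, surface_temps_R)]
print(len(cases))  # 21 runs, matching the 21 rows of Table 2
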
Results


As shown in Table 2 below, the cases developed a trend suggesting that cooling the entire surface generally lowers the total drag force on the surface and heating the entire surface raises the total drag force on that surface. Each case's percentage change in drag is computed relative to the baseline cylinder case for that particular Reynolds number, in which the surface temperature of the entire cylinder was the same as that of the freestream air flowing over it. The trend is much clearer in Figure 10 below, which shows the drag coefficient vs. Reynolds number for most of the different cases that were considered. The trend indicates that most heating will increase the drag and all cooling will decrease the drag. The heating and cooling seem to have a larger effect at a lower Reynolds number, which can be expected, as the flow stays attached to the cylinder surface longer at a lower Reynolds number.


Figure 10. Coefficient of Drag vs. Reynolds Number at 5 temperature cases

[Chart: drag coefficient (0.7 to 1.7) vs. Reynolds number (1000 to 100000); series: Base, Base +400R, Base +100R, Base -50R, Base -100R.]
Case | Air Temp (R) | Reynolds Number | Cylinder Temp (R) | Heating/Cooling | Cd    | Conclusion
1    | 519          | 1000            | 419               | Cooling         | 1.157 | 14.8% reduction in drag
2    | 519          | 1000            | 469               | Cooling         | 1.160 | 14.6% reduction in drag
3    | 519          | 1000            | 509               | Cooling         | 1.296 | 4.6% reduction in drag
4    | 519          | 1000            | 519               | Baseline        | 1.358 | Baseline
5    | 519          | 1000            | 529               | Heating         | 1.274 | 6.2% reduction in drag
6    | 519          | 1000            | 619               | Heating         | 1.370 | 0.88% increase in drag
7    | 519          | 1000            | 919               | Heating         | 1.525 | 12.3% increase in drag
8    | 519          | 10000           | 419               | Cooling         | 1.080 | 4.5% reduction in drag
9    | 519          | 10000           | 469               | Cooling         | 1.105 | 7.3% reduction in drag
10   | 519          | 10000           | 509               | Cooling         | 1.129 | 0.2% reduction in drag
11   | 519          | 10000           | 519               | Baseline        | 1.132 | Baseline
12   | 519          | 10000           | 529               | Heating         | 1.133 | 0.1% increase in drag
13   | 519          | 10000           | 619               | Heating         | 1.169 | 3.2% increase in drag
14   | 519          | 10000           | 919               | Heating         | 1.215 | 6.8% increase in drag
15   | 519          | 100000          | 419               | Cooling         | 0.789 | 8.1% reduction in drag
16   | 519          | 100000          | 469               | Cooling         | 0.823 | 4.22% reduction in drag
17   | 519          | 100000          | 509               | Cooling         | 0.847 | 1.46% reduction in drag
18   | 519          | 100000          | 519               | Baseline        | 0.859 | Baseline
19   | 519          | 100000          | 529               | Heating         | 0.859 | 0.13% reduction in drag
20   | 519          | 100000          | 619               | Heating         | 0.905 | 5.24% increase in drag
21   | 519          | 100000          | 919               | Heating         | 1.006 | 17% increase in drag

Table 2. Preliminary results of entire surface heating/cooling cases
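
The percentage changes in Table 2 follow directly from the tabulated drag coefficients; the sketch below (a check, not part of the CFD workflow) reproduces one entry:

def percent_drag_change(cd_case, cd_baseline):
    # Positive result = drag increase relative to the baseline case.
    return 100.0 * (cd_case - cd_baseline) / cd_baseline

# Case 7 at Re = 1000: Cd = 1.525 vs. the 1.358 baseline.
print(percent_drag_change(1.525, 1.358))  # roughly 12.3% increase
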
Conclusion
The preliminary results show a promising trend, which would require further and more in-depth research to fully uncover the mechanics at work in the boundary layer and how the temperatures take effect within the flow. Further research could examine how many of the surface heating strips are turned on and whether their location can produce an optimizing effect. A wider range of Reynolds numbers needs to be investigated, as well as more temperatures within the range of air temperatures that allow a compressible flow assumption. Once a more complete trend is established, the research could move into 3 dimensions and eventually into a wind tunnel laboratory experiment.



Elastic Constants of Ultrasonic Additive Manufactured Al 3003-H18
Student Researcher: Daniel R. E. Foster

Advisor: Dr. S. Suresh Babu

The Ohio State University
Department of Materials Science and Engineering (Welding Engineering Program)

Abstract
Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a layered manufacturing process in which thin metal foils are ultrasonically bonded to a previously bonded foil substrate to create a net part. Optimization of process variables (amplitude, normal load and velocity) is done to minimize voids along the bonded interfaces. This work, however, pertains to the evaluation of bonds in UAM builds through ultrasonic testing of the builds' elastic constants. Results from UAM parts indicate anisotropic elastic constants and also a reduction of up to 48% in elastic constant values compared to a control sample. In addition, UAM parts are approximately transversely isotropic, with the two elastic constants in the plane of the Al foils having nearly the same value, while the properties normal to the foil direction have much lower values. The above reduction was attributed to interfacial voids. In contrast, the measurements from builds made with very high power ultrasonic additive manufacturing (VHP-UAM) showed a drastic improvement in elastic properties, approaching values similar to those of bulk aluminum.

Project Objectives
Ultrasonic Additive Manufacturing (UAM), also known as Ultrasonic Consolidation (UC), is a new manufacturing process in which metallic parts are fabricated from metal foils. The process uses a rotating cylindrical sonotrode to produce high frequency (20 kHz) and low amplitude (20 to 50 μm) mechanical vibrations to induce normal and shear forces at the interfaces between 150 μm thick metallic foils [1]. The large shear and normal forces are highly localized, breaking up any oxide films and surface contaminants on the material surface, allowing for intimate metal-to-metal contact. As the ultrasonic consolidation process progresses, the static and oscillating shear forces cause elastic-plastic deformation. The above deformation also leads to localized high temperatures by adiabatic heating. The presence of high temperatures may trigger recrystallization and atomic diffusion across the interface, leading to a completely solid-state bond [2]. This process is repeated, creating a layered manufacturing technique which continuously consolidates foil layers onto previously deposited material. After every few foil layers, CNC contour milling is used to create the desired part profile with high dimensional accuracy and appropriate surface finishes [3].

Accurate values for elastic constants are needed to predict thermomechanical processes during a UAM build, as well as the final properties. Modeling of the UAM process has been done by Huang et al. [4] as well as Zhang et al. [5]. These authors assumed isotropic properties of UAM builds, similar to those of bulk metals, in their models. Assuming such a condition is indeed convenient, but it is unclear whether UAM builds satisfy this ideal condition. Hopkins et al. [6] and Hahnlen et al. [7] have shown that the strength of a UAM part depends on the orientation of the testing direction with respect to the build directions. The authors, however, did not report the elastic constants. It is quite conceivable that elastic constants also depend on material direction. Therefore, the current research focuses on the measurement of the elastic constants in the 3 material directions.

Methodology Used
Ultrasonic Testing (UT) was used to evaluate material properties. An ultrasonic transducer is used to convert an electrical signal to an elastic wave that propagates through the test sample. By comparing the time-of-flight of the propagating wave against known reference materials, the elastic wave velocity of a material can be determined. The two types of elastic waves most widely used in UT are longitudinal (compression) and transverse (shear) waves. Longitudinal waves travel through materials as a series of compressions and dilations. Transverse wave propagation causes particle vibration transverse to the wave propagation direction. The phase velocities of both types of waves in a given material are dependent on the elastic properties of the materials being tested [8].

Measuring wave velocities in a material can be used to determine elastic constants. In Figure 1, side-to-side motions parallel to the incident plane indicate shear wave propagation into the plane. The motions normal to the plane indicate longitudinal wave propagation into the plane. The velocity of each wave is dependent on the material density and directional stiffness. Note the direction coordinate system in which axis-1 is the sonotrode rolling direction, axis-2 is transverse to the rolling direction (sonotrode vibration direction), and axis-3 is the build direction. The resulting shear and longitudinal wave velocities can then be used to find the elastic constants of the sample. Using the stiffness matrix, the elastic compliance matrix can be calculated as Sij = (Cij)^-1 [9]. Knowledge of the stiffness and compliance matrices is important, as they are used to calculate stresses and strains (σi = Cij·εj and εi = Sij·σj) and the engineering constants G, E, and ν.
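
For propagation along a principal material axis, each measured wave speed maps to a diagonal stiffness constant through C = ρ·v². The sketch below illustrates this relation; the density is a nominal handbook figure, an assumption not taken from the report.

RHO_AL3003 = 2730.0  # kg/m^3, nominal density of Al 3003 (assumed)

def stiffness_gpa(wave_velocity_m_s, rho=RHO_AL3003):
    # C = rho * v^2; a longitudinal wave along axis-3 gives C33, while
    # shear waves polarized in the 1-3 or 2-3 planes give C55 or C44.
    return rho * wave_velocity_m_s**2 / 1e9

# Example: a 6.3 km/s longitudinal wave along the build direction
# implies a C33 of roughly 108 GPa, consistent with the control sample.
print(stiffness_gpa(6300.0))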

The process parameters used to create UAM samples with 37%, 59% and 65% bonded areas are listed in Table 1. All builds were made with a 149 °C preheat of the substrate. Consolidation for these samples was carried out on the Beta Ultrasonic Consolidation System created by Solidica Inc., which is capable of applying up to 2.2 kN of normal force and 26 μm of vibrational amplitude. The consolidation of each layer was performed using a tacking pass followed by a welding pass on each metallic foil. A single tape width build was used to construct the 37% and 65% bonded area builds by continuously depositing one foil on top of the previously deposited foil. The 59% bonded area samples used the same process parameters as the 37% samples, except the layering pattern was changed to the brick wall layering sequence. In this building sequence, 2 adjacent foil layers were deposited followed by 3 adjacent foil layers on top. This pattern is repeated to construct the build. Care was taken so that samples used from the brick wall build did not contain foil edges.

The 98% bonded area sample was made using the Very High Powered Ultrasonic Additive Manufacturing (VHP UAM) System produced by Edison Welding Institute and Solidica Inc. This higher-powered ultrasonic consolidation system is capable of applying up to 45 kN of normal force and 52 μm of vibrational amplitude. The additional vibrational amplitude and normal force induce a greater amount of plastic deformation at the faying interfaces, compared to the Beta system, leading to an increase in bonded area. The VHP UAM system was used to construct a single tape width UAM build without a preheat or tacking pass. The VHP UAM system used is a prototype and is not fully automated, in contrast to the fully automated Beta system. Due to the difficulties of manually aligning and laying foil, the VHP UAM sample height was limited; therefore, only V33, V44, and V55 measurements could be obtained.

It was uncertain whether bond quality varied throughout the depth of UAM builds. Variations in bonding throughout a build's height could affect the accuracy of velocity measurements, so it was important to quantify how much variation was occurring. To investigate this, a step sample was constructed with step heights increasing from 2.5 mm to 30 mm. To create this step build, a 152 mm by 25 mm by 30 mm 59% bonded area UAM block was created, in which steps of increasing height were cut using EDM. Wave velocity was measured along axis-3 of each step. Changes in measured velocity at different steps would be an indication that the bonded area varied throughout the depth.

It was decided to use percentage of bonded area instead of linear weld density (LWD) to evaluate the amount of bonding at the welding interface. The LWD measurements use a cross-section of a UAM build, as seen in Figure 2a. By comparing the length of voids at the interfaces to the total length of the interface, the amount of bonding (LWD) can be measured. This method of evaluation can be subjective, with adjacent interfaces displaying a range of bonded values [6]. Percentage of bonded area measurements evaluate interface bonding by surveying fracture surfaces (Figure 2b). Bonded areas are deformed and have greater heights than unbonded areas. Because this type of measurement has been shown to be repeatable and accurate, it was chosen as the primary technique used to determine the amount of interfacial bonding. To measure percent bonded area, high contrast images of the fracture surface were taken with a Meijer optical microscope. Then the Fiji image processing software [8] was used to determine a suitable contrast threshold at which bonded areas would be highlighted due to their darker appearance (Figure 2c). Once an appropriate threshold was found, the area fraction of the highlighted sections was used as the percent bonded area measurement. The VHP UAM sample could not be constructed to a suitable height to be fractured in order to obtain a percentage of bonded area measurement; consequently, LWD was measured and assumed to be the same as percentage of bonded area for that sample.
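
A minimal sketch of the same thresholding idea follows; the study used Fiji, so this numpy version and its threshold value are illustrative assumptions only.

import numpy as np

def percent_bonded_area(gray_image, threshold):
    # Bonded regions appear darker in the high-contrast images, so the
    # fraction of pixels below the threshold approximates the bonded
    # area fraction.
    return 100.0 * (gray_image < threshold).mean()

# Synthetic 2 x 2 example: two dark (bonded) pixels out of four -> 50%.
img = np.array([[10, 200], [30, 180]])
print(percent_bonded_area(img, 100))  # 50.0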

Results Obtained and Interpretation of Results
Characterization of step build
Measured velocities from the step build are displayed in Figure 3a. Neither longitudinal (V33) nor shear (V55) velocity measurements on the step build show significant change with step height. This observation suggests that the amount of bonded area (and therefore stiffness) does not vary significantly with build height. The minimum UAM thickness needed to get accurate measurements is 5 mm, which is consistent with recommendations from ASTM standard E494 [9]. V33 could be measured up to 23 mm before scattering of the ultrasonic waves due to the voids prevented velocity measurements, while V55 could be measured at all step heights above 5 mm.

Characterization of Control Sample
The foils used for UAM have an H18 temper designation. Under the H18 designation, the initial annealed aluminum was cold rolled to a reduction in thickness of 75%. Cold rolling can affect material properties, causing mechanical properties to be different in each material direction. To ensure that the stiffness of the UAM samples was not due to the initial condition of the foil, an Al 3003-H14 control sample was ultrasonically tested. Although the H14 temper is not cold worked to the degree of the H18 temper (50% thickness reduction compared to 75%), it can provide stiffness values that are close to those of Al 3003-H18 foil. Measurements from the control sample could also be used to rule out the possibility that observed stiffness changes in UAM builds were due to the initial rolled state of the foil.

The Al 3003-H14 control sample had slightly higher elastic constant values compared to literature values for Al 3003 [10] for all elastic constants except C66 (Table 2a). The largest differences were in C11, C22, C33, and C12, which had changes of 12%, 9%, 6% and 19%. C44, C55, and C66 differed from literature values by a maximum of 3%. The slightly larger constants obtained for C44, C55, and C66 are within acceptable experimental error (4%), but the C11, C22, C33 and C12 cases show significantly higher stiffness constants even when measurement errors are considered. Overall, cold rolling Al 3003 has a slight stiffening effect on the material.

Characterization of UAM samples
Elastic constants for UAM samples with 37%, 59% and 65% bonded areas are compared to the Al 3003-H14 control sample in Tables 2 b-d. In all cases the UAM material had lower elastic constant values than the control material. The disparity between the UAM samples and the control sample decreased as the percentage of bonded area increased.

The difference between the control sample and the UAM samples in Tables 2 b-d is especially large for C12, C33, C44 and C55. In the case of the 37% bonded area sample, each of these constants displayed a reduction between 31-51% compared to the control sample. The magnitude of the measured elastic constants increased with an increase in bonded area. For example, in the case of 59% bonded area there was a reduction of 23-37% vs. control, and in the case of 65% bonded area there was a reduction in elastic constants between 10-28% vs. control.

Constants C11, C22, and C66 also had a reduction in value that was dependent on the percentage of bonded area, but their reduction was not as extensive as that of C12, C33, C44, and C55. At 37% bonded area, the UAM samples had a 1-20% difference in C11, C22 and C66 compared to the control. As the percentage of bonded area increased, the discrepancy between the UAM and control results for these constants was reduced, with a maximum of 14% reduction in the case of 59% bonded area and 16% in the case of 65% bonded area.

The elastic constants in the VHP UAM sample were not significantly affected by the presence of voids, due to the 98% bonded area, and therefore had similar properties to those of the control sample. There was a 0.5% difference in C33 and a 7% difference in C44 and C55 compared to the control, as seen in Table 2e. Although elastic constants in the other material directions could not be tested, it is likely that those properties would be close to those of the control sample. This is because material properties in the axis-1 and axis-2 directions are less severely affected by voids at the interfaces than those in the axis-3 direction. Since the VHP UAM sample demonstrates that the properties in the axis-3 direction are similar to those of the control, the properties in the axis-1 and axis-2 directions are likely also close to those of the control. These results indicate that the multidirectional stiffness of the VHP UAM sample material is likely very close to that of the Al tapes used to construct the sample.

Stiffness values for C33, C44 and C55 from Tables 2 b-d are plotted in Figure 3b. Effective stiffness decreases linearly with a decrease in percentage of bonded area, and the data from VHP UAM supports this observation. As a result of the linear relationship between percentage of bonded area and stiffness, the bond quality of UAM components can be determined non-destructively. By measuring longitudinal or shear wave velocity and relating that value to a curve such as Figure 3c, the bond quality of a UAM part can be determined. The linear relationship between wave velocity and percentage of bonded area is of great importance because it allows UAM parts to be characterized on the shop floor in minutes, right after consolidation, without the need for time-consuming cutting, polishing and optical microscopy.
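
A minimal sketch of that shop-floor check follows. It is illustrative only: the calibration velocities below are back-calculated from the reported C33 values via C = ρ·v² with an assumed nominal density of 2730 kg/m³; they are not measured values from the report.

import numpy as np

# Calibration points: percent bonded area vs. back-calculated V33 (km/s).
bonded_area = np.array([37.0, 59.0, 65.0, 98.0])
v33_km_s = np.array([4.4, 5.0, 5.4, 6.3])

# Fit the linear velocity-vs-bonded-area trend once.
slope, intercept = np.polyfit(bonded_area, v33_km_s, 1)

def bonded_area_from_velocity(v_km_s):
    # Invert the fitted line to estimate percent bonded area from a
    # single shop-floor velocity reading.
    return (v_km_s - intercept) / slope

print(bonded_area_from_velocity(5.5))  # estimated percent bonded area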

LWD and percentage of bonded area are compared in Table 1. The LWD measurements indicate a much higher degree of bonding than the percentage of bonded area measurements: the 37% bonded area sample had a 75% LWD, the 59% bonded area sample had a 91% LWD, and the 65% bonded area sample had a 91% LWD. Stiffness vs. LWD for C33, C44 and C55 is plotted in Figure 3d. The resulting linear trend between stiffness and LWD is much poorer than that between stiffness and percentage of bonded area in Figure 3b. Plotting LWD vs. stiffness resulted in a linear trend with large deviations from the trend line, while the stiffness vs. percentage of bonded area plot followed the linear trend much more closely. This is an indication that the percentage of bonded area measurement technique is more consistent and accurate than LWD in determining bond quality in UAM components.

It is hypothesized that the change in material stiffness is due to the presence of voids at the welding interface. These void volumes contain no matrix material and thus have negligible mass and strength. As a result, when the material is loaded, the bulk foil portion of the UAM part yields a small amount, while the interface region under the same load yields excessively. This is because the load-bearing cross-sectional area at the interface is smaller, for a given load, due to the presence of the voids (Figure 4). The combined loading response of the bulk foils and interface regions results in an overall greater yielding of the part. This phenomenon creates a component with an effective stiffness that is lower than that of the foils used to construct it.

The yielding of a UAM part is analogous to springs in parallel and series. C33, C44 and C55 relate to the stiffness of a part when the partially bonded interfaces are in the load path, while elastic constants C11, C22 and C66 relate to the material stiffness when the interfacial region is parallel to the load path. When spring elements in series, as in the case of C33, C44 and C55, are strained in the 33, 13, 31, 23, or 32 directions, all elements are in the load path and experience the same load but displace different amounts based on their stiffness. The lower-stiffness interfacial regions yield excessively under the load while the bulk foil yields as expected, resulting in parts with less effective stiffness in those directions. When spring elements in parallel, as in the case of C11, C22 and C66, are strained in the 11, 22, 12, or 21 directions, the interfacial regions as well as the bulk foil regions displace the same amount. However, most of the load is carried by the bulk regions of the foil and not by the partially bonded region. This is the reason there is a smaller stiffness reduction in the axis-1 and axis-2 material directions compared to the axis-3 direction.
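
A minimal numeric sketch of this analogy follows; all stiffness values and fractions are hypothetical illustrative numbers, not measurements. It shows that a compliant interface in series with the foil reduces the effective stiffness far more than the same interface in parallel.

def series_stiffness(ks, fractions):
    # Layers in series (interfaces in the load path): compliances add,
    # weighted by thickness fraction: 1/k_eff = sum(f_i / k_i).
    return 1.0 / sum(f / k for f, k in zip(fractions, ks))

def parallel_stiffness(ks, fractions):
    # Layers in parallel (interfaces alongside the load path):
    # stiffnesses add, weighted by area fraction: k_eff = sum(f_i * k_i).
    return sum(f * k for f, k in zip(fractions, ks))

k_foil, k_interface = 100.0, 30.0  # hypothetical stiffness units
f_foil, f_interface = 0.9, 0.1     # hypothetical volume fractions

print(series_stiffness([k_foil, k_interface], [f_foil, f_interface]))    # ~81.1
print(parallel_stiffness([k_foil, k_interface], [f_foil, f_interface]))  # 93.0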

The stiffness components in the 1 and 2 directions are very similar, demonstrating that the material is approximately transversely isotropic. C11 and C22, along with C44 and C55, in each case have values that are within 3.5% of each other. This indicates an axis of symmetry and that the material can be approximated as transversely isotropic. In transversely isotropic materials an additional elastic constant can be calculated using the relation

C12 = C11 - 2*C66.     (1)

Calculations of C12 are listed in Tables 2 b-e. Ongoing work is focusing on computational simulation of such preferred deformation by prescribing position-dependent properties.

This discovery of lower effective stiffness and transverse isotropy is of great importance. Accurate elastic constants are needed to model the UAM process as well as to adopt UAM parts in general engineering design. For example, modeling the lateral displacement of a large UAM build due to the shear forces caused by the sonotrode itself has been considered important; this tendency has been used to rationalize the process's inability to make UAM builds above a critical height. Modeling such phenomena would need an accurate shear modulus in the transverse direction, which is C44 (C44 = G23). If one were to compare stress results using the 37% bonded area G23 (18.1 GPa) to the isotropic G (25.9 GPa), there would be an error of 31%. This large error in calculations could lead to simulation results that do not accurately describe the response to loads that UAM parts experience.

Conclusions
The parts made by ultrasonic additive manufacturing were evaluated with ultrasonic testing. Wave velocity, percentage of bonded area and material stiffness do not change significantly with build height; shear and longitudinal wave velocities showed no significant change with build height.

The effective stiffness of Al 3003-H18 UAM parts was reduced due to the presence of voids. When a load is applied, the interfacial welded regions that contain the voids yield more than the bulk foil regions, resulting in the overall part becoming less stiff than the aluminum used to construct it. The reduction in stiffness components can be as high as 50% in the axis-3 direction, but only up to 18% in the axis-1 and axis-2 directions. Using percentage of bonded area is more accurate than using linear weld density in determining bond quality, as it more closely follows a linear trend with stiffness.

Al 3003-H18 UAM components are approximately transversely isotropic. Material properties in the axis-1 and axis-2 material directions are approximately the same (maximum of 3.5% difference), while the material properties in the axis-3 direction are much lower. The properties measured in samples made by VHP UAM were close to those of the bulk material. The cold worked state of the Al 3003 foils used in UAM is not the cause of the stiffness reduction in Al 3003 UAM parts; cold working Al 3003 increases elastic constants by as much as 10% in the H14 state. Material velocity measurements can be used as a non-destructive test to evaluate the bond quality of UAM builds.



Figures/Charts


[Diagram: material direction coordinate system with Rolling Direction (1), Transverse Direction (2), and Sheet Normal (3).]

Figure 1: Ultrasonic velocity relation to elastic constants and material direction



Figure 2: Cross-section of UAM build (a), UAM Fracture Surface (b), Image threshold measurement using Fiji
(c)



[Plots: (a) wave velocity (km/s) vs. step height (mm) for V33 and V55; (b) stiffness (GPa) vs. bonded area (%) for C33, C44 and C55; (c) wave velocity (km/s) vs. bonded area (%) for V33, V44 and V55; (d) stiffness (GPa) vs. linear weld density (%) for C33, C44 and C55.]
Figure 3: Wave velocity vs. build height for the Step Build (a), Stiffness vs. Percentage of Bonded area (b), Wave
velocity vs. Percentage of Bonded area (c), Stiffness vs. LWD (d)
[Schematic: stacked Al foils separated by partially bonded interfaces, shown with no load and under load F; the loaded case shows excessive yielding in the interfacial regions.]

Figure 4: Schematic of UAM response to loading



                 |                    |              | Tacking Pass                          | Welding Pass
Sample Number    | Percentage of      | Linear Weld  | Force (N) | Amplitude | Weld Speed    | Force (N) | Amplitude | Weld Speed
                 | bonded area        | Density      |           | (μm)      | (mm/s)        |           | (μm)      | (mm/s)
1                | 37%                | 75%          | 200       | 9         | 51            | 1000      | 26        | 42
2                | 59%                | 91%          | 200       | 9         | 51            | 1000      | 26        | 42
3                | 65%                | 91%          | 350       | 12        | 33            | 1000      | 25        | 28
4                | 98%                | 98%          | Not Used  | Not Used  | Not Used      | 5500      | 26        | 35.5

Table 1: UAM Process parameters with respective bond quality


(a) Control sample vs. literature values

     | Literature Value Al 3003-H18 (GPa) | Al 3003-H14 (GPa) | % Difference between Al 3003-H14 and isotropic Al 3003
C11  | 102                                | 115.7             | 12%
C22  | 102                                | 112.6             | 9%
C33  | 102                                | 108.9             | 6%
C44  | 25.9                               | 26.1              | 1%
C55  | 25.9                               | 26.1              | 1%
C66  | 25.9                               | 25.2              | -3%
C12  | 50.2                               | 62.2              | 19%

(b) 37% bonded area UAM sample

     | Al 3003-H14 (GPa) | UT testing of 37% bonded area UAM sample (GPa) | Difference between Al 3003-H14 and UAM samples
C11  | 115.7             | 92                                             | -20%
C22  | 112.6             | 94.6                                           | -16%
C33  | 108.9             | 53.3                                           | -51%
C44  | 26.1              | 18.1                                           | -31%
C55  | 26.1              | 18.1                                           | -31%
C66  | 25.2              | 25                                             | -1%
C12  | 62.2              | 41.9                                           | -33%

(c) 59% bonded area UAM sample

     | Al 3003-H14 (GPa) | UT testing of 59% bonded area UAM sample (GPa) | Difference between Al 3003-H14 and UAM samples
C11  | 115.7             | 99.5                                           | -14%
C22  | 112.6             | 100.2                                          | -11%
C33  | 108.9             | 68.8                                           | -37%
C44  | 26.1              | 19.9                                           | -24%
C55  | 26.1              | 20.6                                           | -21%
C66  | 25.2              | 25.8                                           | 2%
C12  | 62.2              | 47.9                                           | -23%

(d) 65% bonded area UAM sample

     | Al 3003-H14 (GPa) | UT testing of 65% bonded area UAM sample (GPa) | Difference between Al 3003-H14 and UAM samples
C11  | 115.7             | 96.7                                           | -16%
C22  | 112.6             | 99.5                                           | -12%
C33  | 108.9             | 78.2                                           | -28%
C44  | 26.1              | 23.4                                           | -10%
C55  | 26.1              | 23.1                                           | -11%
C66  | 25.2              | 25                                             | -1%
C12  | 62.2              | 47.7                                           | -23%

(e) 98% bonded area VHP UAM sample

     | Al 3003-H14 (GPa) | UT testing of 98% bonded area VHP UAM sample (GPa) | Difference between Al 3003-H14 and VHP UAM sample
C33  | 108.7             | 109.2                                              | 0.5%
C44  | 26.1              | 28.1                                               | 7%
C55  | 26.1              | 28.1                                               | 7%

Table 2: Elastic constant comparison of control and UAM builds

Acknowledgements
The authors would like to thank Dr. Marcelo Dapino, Christopher Hopkins, Ryan Hahnlen, and Sriraman Ramanujam of The Ohio State University as well as Matt Short and Karl Graff of the Edison Welding Institute (EWI) for their input on the project. Financial support for this research was provided by the Ohio Space Grant Consortium (OSGC) and Ohio's Third Frontier Wright Project. This research is currently under review at Elsevier's peer-reviewed journal Ultrasonics.



References
[1] D. R. White, Ultrasonic Consolidation of Aluminum Tooling, Advanced Materials and Processes,
161 (2003) 64-65.
[2] R. L. O'Brien, Ultrasonic Welding, in: Welding Handbook, 1991, pp. 783-812.
[3] G. D. Janaki Ram, C. Robinson, Y. Yang, B. E. Stucker, Use of Ultrasonic Consolidation for
Fabrication of Multi-Material Structures, Rapid Prototyping Journal, 13 (2007) 226-235.
[4] C. J. Huang, E. Ghassemieh, 3D Coupled Thermomechanical Finite Element Analysis of Ultrasonic
Consolidation, Material Science Forum, 539-543 (2007) 2651-2656.
[5] C. S. Zhang, L. Li, Effect of Substrate Dimensions on Dynamics of Ultrasonic Consolidation,
Ultrasonics, 50 (2010) 811-823.
[6] C. D. Hopkins, Development and Characterization of Optimum Process Parameters for Metallic
Composites made by Ultrasonic Consolidation, in: Mechanical Engineering, The Ohio State
University, Columbus, 2010.
[7] R. M. Hahnlen, Development and Characterization of NiTi Joining Methods and Metal Matrix
Composite Transducers With Embedded NiTi by Ultrasonic Consolidation, in: Mechanical
Engineering, The Ohio State University, Columbus, 2010.
[8] W. Rasband, J. Schindelin, A. Cardona, FIJI, pacific.mpi-cbg.de, 2010.
[9] ASTM, Standard Practice for Measuring Ultrasonic Velocity in Materials, E494-10, ASTM
International, 2001.
[10] J. W. Bray, ASM Handbooks Online, 2 (1990) 2961.
Microgravity on the ISS

Student Researcher: Brea R. Furman

Advisor: Linda Plevyak

University of Cincinnati
Department of Early Childhood Education

Abstract
This series of activities has been compiled for a 3rd grade classroom, using the Microgravity and Me:
Family Activity Kit, as well as the lessons accompanying the Liftoff to Learning: Living in Space CD-
ROM and the Space School Musical DVD. The students will watch the clip At Home with Commander
Scott Kelley on NASA TV as an introduction to the activities. The students will then participate in four
different activities at three centers around the classroom that are about life on the ISS. The purpose of
these activities is to encourage students to begin thinking about living on the ISS in conditions of
microgravity. Students will begin to understand that life on the ISS is vastly different than what we
experience on Earth, and that adjustment and compromise are crucial to success in those conditions.

Center #1: The activities in this center are designed to help the students understand that life on the ISS is
more than just floating around. Not only do the astronauts spend time working and researching, but they
also experience some uncomfortable feelings. On Earth, we are used to the force of gravity and hardly
notice the effects. It is especially difficult for young children to remove themselves from their egocentric
understanding, and realize that not everyone experiences the same feelings or sensations.

Center #2: After watching Commander Scott Kelley's tour of his personal living quarters on the ISS, the
students are challenged to design their own living quarters, should they be traveling to the ISS. Again,
this forces the students to expand their views outside their personal experiences of living quarters. This
activity encouraged students to evaluate what small items are most important to them. Connecting to
microgravity, the students also have to think about how they would fasten their belongings to the walls.
This led many students to bring up the point that they might have to fasten themselves to something while
they were working or reading a book.

Center #3: The expedition patch design activity builds on the necessity of an astronaut to have the ability
to work cooperatively with others. This center's exercise allows students the chance to work with other
students with whom they may or may not have chosen to work. The students must collaborate to come up
with common answers to the questions, or agree to disagree. The students must then synthesize their
answers and create a pictorial representation of their collaborative efforts to portray themselves as a
cohesive team.

Ohio Academic Content Standards
Science, Grade 3
Physical Science, Forces and Motion, 3. Identify contact/noncontact forces that affect motion of an object
(e.g., gravity, magnetism and collision).

Science and Technology, Understanding Technology, 2. Describe ways that using technology can have
helpful and/or harmful results.

Science and Technology, Abilities To Do Technological Design, 5. Describe possible solutions to a design
problem (e.g., how to hold down paper in the wind).

Learning Objectives
Students will discuss and record their thoughts regarding completing work in microgravity while
experiencing dizziness and reduced gravity.

Students will apply knowledge of the living quarters on the ISS to design their own desired space on the
ISS.

Students will work in groups of three to analyze answers to questions, and design an ISS Expedition
Patch for their group based on their answers to the questions, using previous patches for inspiration.

Materials
Projector
Access to NASA TV video At Home with Commander Scott Kelley
Instructions for students at each center (see attached)
Notebook paper
Printer paper
Large, thick sponges (can be found at any hardware store)
Rubber bands
Expedition patch examples (either projected or printed)
Large writing surface (chart paper, white board, chalk board, Smart Board)
Markers (dry erase, paper markers, Smart Board pens)

Learning Theory
Vygotsky tells us that social interactions allow children to grow mentally. This is why it is crucial for
teachers to allow and encourage their students to work together. When students are engaged in centers,
they will be collaborating with each other to complete the tasks. This also encourages students to solve
problems on their own by communicating with their peers rather than seeking out the teacher.

Piaget talks about how important it is for children to learn through real life experiences. By connecting
this lesson to daily activities and experiences, such as sleeping, walking, completing assigned tasks,
arranging a living space, and the force of gravity, students will have a deeper understanding of the world
around them and beyond.

These activities are aligned with the constructivist approach to learning. The students are actively
involved in their own learning through interactive activities designed to guide the child's thinking from
the force of gravity they experience on Earth, to the significantly reduced gravity experienced on the ISS.
Students are encouraged and required to work with peers, which, as John Dewey stated in My Pedagogic
Creed in 1897 (Washington, DC: The Progressive Education Association, 1897), is the definition of true
education.

Procedure
Introduce the lesson by reviewing the definition of gravity, what gravity does on Earth, and how we
interact with gravity (e.g. throwing a ball straight into the air, our hair hangs down at our ears, if you drop
something it will break). Depending on the level of the students, discuss the prefix micro. Ask the
students what it means, what words do they know have that prefix, etc. Add the prefix micro to the word
gravity. Discuss the meaning. Relate this to life in space on the ISS. Discuss what the ISS is; what do the
students know about it? If time permits, a KWL chart could be used.

Begin instruction by showing the NASA TV video At Home with Commander Scott Kelley. Continue
by introducing the centers and demonstrating how each will be completed. Discuss safety rules and
procedures. Read each center's instructions aloud and address any confusion or questions. Demonstrate the
dizziness activity, or ask for a student volunteer. Demonstrate rubber banding the sponges to feet
properly. Explain the purpose of an expedition patch and remind students to use what they saw in the
video to complete the problem solving activity.

When students have completed the centers, or time is called, discuss some of the thoughts, struggles,
solutions, etc. that the students came up with at their centers. Discuss what the students wrote down at
center #1, and share drawings from stations #2 and #3. Complete the KWL chart.
Assessment
Students will be assessed using anecdotal notes based on their participation in these activities. Students
will be expected to work cooperatively with peers, including sharing materials and ensuring that each
participant plays an active role in decision making.

Critique and Conclusion
Overall, the lesson went smoothly thanks to the use of centers. This allowed all of the students to experience all of
the activities, and limited the amount of materials necessary. Combining two activities into center #1
required only enough sponges to cover the shoes of one-sixth of the class. However, many students had
difficulty managing their time at center #1, due to the two different activities and the writing portions of
each. If I were to teach this lesson again, I would set a silent timer at center #1 to help the students pace
themselves to complete both activities in the time allotted.

At center #2, many students wrote about or drew unrealistic items such as their household pet, or a video
game system. One student even drew his brother strapped to the wall. Ideally, with more time, it would
have benefited the students to discuss in more depth what Commander Scott Kelley had in his living
quarters and why he might have chosen those items, and not his pet rabbit or his Nintendo game.

Center #3 could also have benefited from a timer. Many students spent so much time trying to agree on
answers to the questions that there was not enough time to design the patch. I would consider limiting the
time allocated to discussing answers, and, if time permits, begin by showing an example patch and
discuss how the astronauts may have answered these questions based on the patch design.

Classroom Center Instructions
Center #1

Activity A: Dizziness
Many astronauts feel dizzy when they are in space.
Follow these instructions to make yourself feel dizzy:
1. Tilt your head to one side so your ear is almost touching your shoulder.
2. Spin half way around, so you are facing backward.
3. Quickly pick up your head to its normal position.

Imagine that you are an astronaut on the International Space Station and you must repair a piece of
machinery that is not working, but you are dizzy.

Discuss how feeling dizzy might affect your ability to repair the machinery.

On a piece of paper write down some of your thoughts.

Activity B: Moon Walk
The moon has less gravity than Earth, so walking on the moon feels different than walking on Earth.
Follow these instructions to experience what walking on the moon might feel like:
1. Rubber band a sponge to the bottom of each shoe.
2. Take long slow steps.

Imagine that you are an astronaut on the moon and you must collect rocks and conduct experiments on the
rocks you find.

On a piece of paper write about how you think finding and conducting experiments on rocks on the moon
might be different than here on Earth.

Center #2
Activity A: Problem Solving

As you saw on the video, on the International Space Station each astronaut has his or her own small
private living area. Some astronauts bring photographs or small items from home. They also have
computers and a place to sleep in their area.

Imagine that you are on your way to the International Space Station. What items from home would you
bring? Where would you sleep? Where would you do your work? Remember that in microgravity, all of
your belongings must be fastened to the walls.

On a piece of paper either draw a picture of how you would use your personal living area or write about
it.

Center #3
Activity A: Expedition Patch Design

Every group of astronauts who goes to the International Space Station has a patch that represents them as
a group. When designing the patches they consider four questions. Think about how you might answer
the questions:
1. What are some tools we might use on the space station?
2. What motivates us?
3. What do we do when we are faced with challenges?
4. What are the results of our hard work?

Working with one or two other people at your center, design an expedition patch. Think about your
answers to the four questions. Use the patches that astronauts have used in the past to inspire your patch.

Summary Web Service

Student Researcher: Martha E. Garasky

Advisor: Dr. Peter Jamieson

Miami University
Department of Computer Engineering

Abstract
The objective for this project is to create a service that the public can use to summarize documents. Using
this service will allow research to be more effective due to the summaries being more accurate than the
information found in an abstract. This also saves time by having the program scan the document quickly
and efficiently to determine whether the article is relevant for research purposes rather than a person
doing it manually. The ultimate goal for the project is to have this service available on the web as a
browser extension. As of now the program is not web ready. The program is being created as an
executable that will be available online for download for a user to run from their desktop offline. This
service will be executed through the implementation of Affinity Propagation. This algorithm takes a
collection of data points and through a series of real-time message passing between the points, creates
clusters that are each represented by one point called an exemplar. For the purpose of this project, the
sentences within a document are being considered as the data points. The summarizing sentences that the
user will receive will be the exemplar for each cluster.

Project Objectives
Affinity propagation is a new algorithm that was created in 2007. Unlike other algorithms, such as K-
clustering, it simultaneously considers all data points as potential exemplars. This algorithm has been
used to solve a variety of clustering problems and it uniformly found clusters with much lower error than
those found by other methods, and it did so in less than one-hundredth the amount of time. Because of its
simplicity, general applicability, and performance, it is believed that affinity propagation will prove to be
of broad value in science and engineering.
[1]


Affinity propagation has been used prior to this project as the clustering algorithm to find the
summarizing sentences within a document. This project's objective is to recreate this work and make
it available as a user friendly interface.

Methodology
The algorithm that is being utilized is the affinity propagation (affinity clustering) algorithm. Simplified,
the way this algorithm works is by passing messages between data points. In the case of this project, each
data point is a sentence in the analyzed text. Initially, before any messages can be passed, each data point
has to be compared to all other data points. These calculations are considered similarities. Once the
initial similarities are calculated, they do not change between each message pass. The action of passing
messages implies that each sentence is compared to the other and based on a two part comparison,
information is gathered. This eventually leads to the formation of clusters around specific points, called
exemplars. These clusters are formed based on the fact that all of the sentences are closely related in one
way or another. The values that change after each pass are called responsibilities and availabilities.
Responsibilities reflect the accumulated evidence for how well-suited one point is to serve as the center of
a cluster for the other point. Availabilities, on the other hand, reflect the accumulated evidence for how
appropriate it would be for a point to choose another point as the center of a cluster. This information gets
updated after every iteration and eventually will form clusters. Figure 1 demonstrates this visually. The
centers of these clusters are called exemplars. The summary will be an accumulation of these
exemplars.
[2]
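As a concrete illustration of these message-passing updates, the following is a minimal sketch of the standard affinity propagation loop (after Frey and Dueck [2]). The damping factor, iteration count, and the use of Python with NumPy are illustrative assumptions; the project itself was implemented in Java.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """S is the n x n similarity matrix; S[i, j] might be, e.g., the negative
    squared distance between sentence feature vectors. The diagonal of S (the
    "preference") controls how many exemplars emerge; the median similarity
    is a common choice."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities a(i, k)
    for _ in range(iters):
        # Responsibility update: r(i,k) = s(i,k) - max_{k' != k}[a(i,k') + s(i,k')]
        M = A + S
        top = np.argmax(M, axis=1)
        first = M[np.arange(n), top]
        M[np.arange(n), top] = -np.inf
        second = M.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # Availability: a(i,k) = min(0, r(k,k) + sum over i' not in {i,k} of max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())   # keep r(k,k) un-clipped in the column sums
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()         # self-availability is not capped at zero
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    # Each point's exemplar maximizes a(i,k) + r(i,k); the exemplar sentences form the summary.
    return np.argmax(A + R, axis=1)
```

Calling affinity_propagation on a sentence-similarity matrix returns, for each sentence, the index of its cluster's exemplar; the unique exemplar sentences are then concatenated into the summary.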


Before any sort of product could be created, and then tested for accuracy, the question of which
programming language to use had to be answered. Dr. Gannod, who works primarily with web-based
coding, was interviewed for his opinion on what language to use. He responded with three suggestions:
Java, Visual Studio .NET, and Groovy on Grails. Groovy on Grails was scrapped immediately due to its
steep learning curve and the inefficient use of time spent learning a new programming language.
[3]
Because Java allows an easy way to create a user interface, it seemed like the best option and was chosen
for use. Due to time constraints, the product was then chosen to be an executable available for download
that first displays a basic window where the user inputs information. The executable can take the user
input as copy and pasted article content. By pressing the summarize button, the window will produce the
appropriate summary.

After the programming language was determined, the separate pieces of the algorithm were created.
These pieces were similarities, responsibilities, availabilities, and determining exemplars. The value of
similarities depended on the material on which the algorithm is being used. All of these parts of the
program were written in Java using the software program Eclipse.

A graphical user interface (GUI) was created using a different software program, Netbeans, but was
still written using the Java programming language. The GUI, Figure 2, had to include an area where the
user could input the text of the document that they want to be analyzed, a button to press to begin
analysis, a button to press to clear the text area in case of wanting to start over, and an exit option.
Netbeans was used to create the GUI because it allowed for modeling it by dragging and dropping the
items needed on the window. This saved time to focus on other parts of the project because the coding
was done automatically after placing the item on the window.

Once the algorithm was written as well as the GUI, the pieces were placed together and the debugging
process began. Data from the example that this project is based on was used to determine if the results
being produced by the program were accurate and correct.

Results Obtained
As of now an offline executable of the given project has been created. Using Java, the user will be
presented with a popup window interface where they will be able to copy and paste the text they want to
analyze. After pasting the text, Figure 3, the user has the options of clearing the text from the window,
summarizing the text, or exiting altogether. If the clear button is pressed, all text within the window will
be deleted. If the summarize button is pressed, the text within the window will be replaced with the
summarizing exemplar sentences that the program determines, Figure 4. If the exit button is pressed, the
program will quit and the window will be shut down. This program is still being tweaked for optimal
functionality. It will, however, be available online for download before the end of the semester.

Future work will include converting the project into a web service format. This will include becoming
familiar with the scripting language chosen for this task and redrafting the program to that particular
language. Once these two tasks are accomplished, the objective for the project will be completed.




Figures and Charts


Figure 1. Illustrates how clusters gradually emerge
during the message-passing procedure.
Figure 2. Graphical user interface (GUI) seen by
user for the program


Figure 3. GUI after user copies and pastes text to
analyze
Figure 4. GUI with summarizing sentences
displayed
References
1. http://www.psi.toronto.edu/index.php?q=affinity%20propagation
2. http://www.sciencemag.org/content/315/5814/972.full
3. http://grails.org/

Acknowledgments
Dr. Jerry Gannod, Director, Miami University Mobile Learning Center
Professor, Dept. of Computer Science and Software Engineering
Affiliate, Armstrong Institute for Interactive Media Studies
Miami University, Oxford OH 45056
Analysis of a Passive Adaptive Hydrodynamic Seal

Student Researcher: Aubrey A. Garland

Advisor: Dr. Hazel Marie

Youngstown State University
Department of Mechanical Engineering
Abstract
This project involves the development and analysis of a finger seal assembly that acts as a contacting
labyrinth seal to replace the current seal in a dry compressor. The objective is to design the new seal
without having to redesign any of the other compressor parts. The current seal plates, elements and
springs will be replaced with the new design such that the new seal assembly acts as a contacting
labyrinth seal with a built-in pressure equalization port.

Project Objectives
The purpose of this research is to perform analysis of a passive adaptive hydrodynamic finger seal.
Through the use of computer simulation and data analysis the motion of a portion of an assembled seal is
to be examined in response to pressures representing the high to low pressure drop that would be
experienced by the seal when in use. Analysis of the two individual finger designs will be used to perform
a dynamic system model representing the motion of each finger in response to the fluctuating pressure on
the foot from the air between the seal and the rotor. Taking into consideration operating conditions, it is
expected that each finger will displace in response to an increase in pressure. When the system returns to
its initial conditions, the fingers will fully recover to their original position.

Through the years the seals used in modern gas turbine engines have morphed from rigid non-compliant
seals such as labyrinth seals, which utilize a number of labyrinths to create air flow restriction between high
and low pressure areas to modern adaptive seals such as the brush seal. The finger seal was initially
developed in 1991. Like the brush seal it is designed to adapt to the motion of a spinning rotor while
maintaining the seal integrity. The finger seal, however, is designed to be more responsive with a lower
likelihood of failure during dramatic pressure drops and it is less expensive to manufacture. (Marie, 2005)

Methodology
The process for this experiment included utilizing Autodesk ALGOR Simulation Professional 2011 to
analyze the relationship between the Modulus of Elasticity and the Equivalent Spring Constant associated
with the geometry of the finger seal. Several models of a portion of an assembled finger seal were
constructed within SolidWorks. (Figures 1 and 2)

These models were then analyzed using ANSYS 14.0 Mechanical Simulation software in order to
evaluate displacement along the y-axis and the subsequent recovery of the finger seal with established
pressures on the finger pads. Finally the finger seal was modeled as a mass-spring-damper system within
MATLAB. Using Phosphor Bronze as the material for the finger seal, the equivalent mass of the stick and
foot portion of the seal was determined using SolidWorks. The equivalent spring constant for the finger
seal was determined using the relationships obtained from the ALGOR simulations. The equivalent
damping and spring constants for the air were obtained from previous research.
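To make the structure of that dynamic model concrete, the following is a minimal sketch of a base-excited mass-spring-damper of the kind described, in Python rather than the MATLAB used in the project. All parameter values here are illustrative placeholders, not the equivalent mass, stiffness, or air damping actually obtained from SolidWorks, the ALGOR fits, and prior research.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder values only -- the project derived these from SolidWorks,
# the ALGOR stiffness fits, and prior research on the air film.
m = 1e-4                           # equivalent mass of the stick and foot
k = 100.0                          # equivalent spring constant
c = 0.05                           # equivalent damping constant
Y, omega = 1e-3, 2 * np.pi * 100   # rotor imbalance amplitude and frequency

def rhs(t, z):
    """m x'' + c (x' - y') + k (x - y) = 0, with base motion y(t) = Y sin(wt)
    representing the rotating imbalance in the rotor."""
    x, xd = z
    y = Y * np.sin(omega * t)
    yd = Y * omega * np.cos(omega * t)
    return [xd, (-c * (xd - yd) - k * (x - y)) / m]

sol = solve_ivp(rhs, (0.0, 0.2), [0.0, 0.0], max_step=1e-4)
# sol.y[0] is x(t), the finger response, to be compared against the rotor motion y(t).
```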

Results Obtained
By varying the seal geometry the equivalent stiffness of the seal finger can be altered to suit the seal
environment and performance requirements. Results for the variation of the number of fingers (equivalently, the repeat angle of the finger sticks) and of the diameter of the circle of centers, Dcc, show a power-equation relationship between stiffness and the varied geometric parameters. (Figures 3 and 4)

For a given geometry, there is a strong linear relationship between the Modulus of Elasticity and the equivalent spring constant, which is expected because the finger acts as a cantilever beam, and the $k_{equ}$ for a straight end-loaded cantilever beam is

k_{equ} = \frac{3EI}{L^{3}}

Solving the $k_{equ}$ equation for $L$, the finite element analysis results indicate an effective beam length of 1.234 in. The actual arc length of the finger is 1.222 in., which shows excellent correlation. (Figure 5)
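Written out, the inversion used to obtain that effective length is (with $E$, $I$, and $L$ as in the standard cantilever result above):

\[
k_{equ} = \frac{3EI}{L^{3}}
\quad\Longrightarrow\quad
L = \left(\frac{3EI}{k_{equ}}\right)^{1/3}
\]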

In the dynamic simulation, y(t) represents the displacement input to the seal that models a rotating
imbalance in the rotor. The x(t) represents the reaction/displacement of the seal to the imbalance input.
The results (Figure 6) indicate that the movement of the finger will mirror that of the rotor, which is the
desired result.
Significance of Results

The result of the dynamic model simulation indicates that the finger seal geometry provides excellent
passive-adaptive behavior of the seal to changes in the rotor motion. The dynamic model simulation,
however, is over-simplified because it does not account for the effect of the frictional contact between the
finger seal layers as damping. One of the problems that arose during the research process, which prevented the damping from being included, was that only a portion of the entire seal was being analyzed. The damping characteristics of the frictional contact are part of the ongoing work that will be added to the dynamic model once valid results are achieved from the ANSYS analysis. The model used for analysis in ANSYS Mechanical must also be improved to properly mimic the full alternated finger seal.

Future goals for this research would include developing a detailed procedure for the replication of this
experiment within ANSYS for future use within a classroom environment. Also planned is analysis of the full finger seal in ANSYS Fluent to study the flow of air from a high pressure area to a low pressure area in order to determine the effectiveness and reliability of the seal under working conditions.

Figures and Charts



Figure 1. SolidWorks model of assembled finger seal
used in ANSYS analysis.
Figure 2. Section view of model used in ANSYS
analysis.


Figure 3. $k_{equ}$ vs. repeat angle (Equivalent Spring Constant, lbf/in, vs. Repeat Angle, degrees). Power-law fit: $y = 0.0029x^{3.9709}$, $R^2 = 0.9997$.

Figure 4. $k_{equ}$ vs. $D_{cc}$ (Equivalent Spring Constant, lbf/in, vs. Diameter of the Circle of Centers, inches). Power-law fit: $y = 17.521x^{6.9045}$, $R^2 = 0.9994$.


Figure 5. Results from Autodesk ALGOR analysis of the Type 1 finger ($k_{equ}$ vs. Modulus of Elasticity, psi, for the Type 1 finger).


Figure 6. Results from the MATLAB Analysis of Dynamic System Model for Type 1 Finger

Acknowledgments
Many thanks go out to Dr. Hazel Marie, Mark Macali, Matthew Pierson, and Nick Matune.

References
1. Marie, H. (2005). A Study of Non-Contacting Passive Adaptive Turbine Finger Seal Performance.
Doctoral Dissertation. The University of Akron.
2. Palm III, William J. (2010). System Dynamics (2nd ed.). New York, NY: The McGraw-Hill
Companies, Inc.
Approximate Inverse Dynamics for Dynamically Constrained Path-Planning

Student Researcher: Adam R. Gerlach

Advisor: Dr. Bruce K. Walker

University of Cincinnati
School of Aerospace Systems

Abstract
The Defense Advanced Research Projects Agency (DARPA) is currently developing a new class of servicer
spacecraft to perform autonomous rendezvous and docking of target spacecraft. These spacecraft are
characterized by heightened levels of autonomy in both orbital and close proximity pose estimation. To be
successful, these spacecraft require technical advances in compliance control, machine vision, real-time
pose estimation, and path-planning.

Even though path-planning algorithms have been studied extensively in the fields of robotics, artificial
intelligence, and control theory, the maturity of planning algorithms that consider the dynamic constraints
of a system is relatively low. The proposed research is to develop a new dynamically constrained path-
planning algorithm based on radial basis functions. The use of radial basis functions may lead to a general
path-planning algorithm that is system- and dimension-independent. Although the development of this
algorithm is fueled by autonomous rendezvous and docking applications of spacecraft, its use can be
extended to any autonomous system.

This paper details a robust method for approximating the inverse dynamics of a system which will serve
as a foundation for the future development of a path-planning algorithm.

Project Objectives
Path-planning traditionally refers to the construction of a continuous or discrete path in the state space of
a system that achieves a specified goal. Ideally this path is optimal in some sense, i.e., distance, time, or
control effort. Figure 1 shows a simple example of the shortest path in a state space defined in

for
moving an autonomous ground vehicle from an initial state to a desired goal state.


Figure 1. An example path in $\mathbb{R}^2$ demonstrating the shortest path between an initial state and goal state [1].

Although all path-planning algorithms consider the state space of the system at hand, they typically ignore the natural dynamics and constraints of the system. This leads to optimal paths in the state space that may not be realizable by the physical system, or that require sophisticated feedback control laws or an excessive amount of control effort to follow. Because of this, the optimal path provided by the path-planning
algorithm may result in a highly sub-optimal path when considering the system dynamics. In order to
consider the differential constraints of the physical system, the path must be planned in the input space of
the system as opposed to the system's state space.[2] The path in the state space is then found via the
integration of the state transition equation


x(t) = x(t_0) + \int_{t_0}^{t} f(x(\tau), u(\tau))\, d\tau \qquad (1)

where the given system is expressed as

\dot{x} = f(x, u) \qquad (2)

with $x$ being the system states and $u$ being the system inputs.
In order to perform such planning for both holonomic and non-holonomic systems, an algorithm is needed that can map a specified initial state $x(t)$ at time $t$ and a final goal state $x(t + \Delta t)$ at time $t + \Delta t$ to the required input $u(t)$ at time $t$:

[\,x(t),\; x(t + \Delta t)\,] \mapsto u(t) \qquad (3)

Such a mapping is the inverse dynamics of the system and finding such a mapping is the objective of this
project.

Methodology Used
Radial Basis Functions
The radial basis function (RBF) algorithm is a meshfree method for solving scattered data
interpolation and approximation problems. Not only are RBFs easy to understand and implement in
software, they are useful in solving complex problems such as non-linear partial (PDE) and ordinary
differential equations (ODE). Because a majority of mechanical systems can be modeled by ODEs, we
look to take advantage of the properties of RBFs to approximate the inverse dynamics of mechanical
systems.

In order to use RBFs to approximate the inverse dynamics of the system we must first generate the
scattered data to approximate. This is achieved by sampling a range of inputs and initial conditions for a
given system and numerically integrating the system forward for each state-initial condition
combination. The integration then produces the system output for the given initial conditions and the
specified inputs. For scattered data interpolation, a data site is defined as the concatenation of the input
and initial condition


s_j = [\,u_j;\; x_j(t_0)\,]. \qquad (4)

The integration output is represented by $x_j(t_f)$. When approximating the system dynamics we assume we have a set of data $\{(s_j, y_j)\}$, where $j = 1, \ldots, N$, with data sites $s_j \in \mathbb{R}^{n+m}$ and function values $y_j = x_j(t_f) \in \mathbb{R}^{n}$, where $n$ is the number of system states and $m$ is the number of system inputs. A RBF interpolant

P(s) = \sum_{k=1}^{N} c_k\, \varphi(\lVert s - s_k \rVert) \qquad (5)

can then be used to match these data exactly, i.e., to satisfy the interpolation conditions [3]

P(s_j) = y_j, \quad j = 1, \ldots, N. \qquad (6)

The coefficients $c_k$ are found by solving the following linear system, thus enforcing the interpolation conditions:

\begin{bmatrix}
\varphi(\lVert s_1 - s_1 \rVert) & \cdots & \varphi(\lVert s_1 - s_N \rVert) \\
\vdots & \ddots & \vdots \\
\varphi(\lVert s_N - s_1 \rVert) & \cdots & \varphi(\lVert s_N - s_N \rVert)
\end{bmatrix}
\begin{bmatrix} c_1 \\ \vdots \\ c_N \end{bmatrix}
=
\begin{bmatrix} y_1 \\ \vdots \\ y_N \end{bmatrix}
\qquad (7)

In order to approximate the inverse dynamics of the system we simply switch the system output $x(t_f)$ with the input $u$ in Equation 4 and in our definition of our data set. Thus, Equation 4 becomes

s_j = [\,x_j(t_0);\; x_j(t_f)\,]. \qquad (8)

and our set of data becomes $\{(s_j, u_j)\}$, where $j = 1, \ldots, N$, with data sites $s_j \in \mathbb{R}^{2n}$ and function values $u_j \in \mathbb{R}^{m}$. Now the solution of the inverse RBF interpolation problem

P^{-1}(s) = \sum_{k=1}^{N} c_k\, \varphi(\lVert s - s_k \rVert) \qquad (9)

can be used to match the inverse data exactly, i.e., to satisfy the inverse interpolation conditions

P^{-1}(s_j) = u_j, \quad j = 1, \ldots, N. \qquad (10)

Basic Functions
By carefully selecting an appropriate basic function in Equation 7, the existence of the inverse of the interpolation matrix, and thus the solution to the linear system, is guaranteed. [3] The following list indicates a few of the more frequently used basic functions that result in interpolation matrices that are positive definite and thus invertible. The Gaussian and inverse multiquadric basic functions include a shape parameter $\varepsilon$ that is used to set the localness/globalness of the basic function.

Gaussian:

\varphi(r) = e^{-(\varepsilon r)^{2}} \qquad (11)

Inverse Multiquadric:

\varphi(r) = \frac{1}{\sqrt{1 + (\varepsilon r)^{2}}} \qquad (12)

Thin Plate Spline:

\varphi(r) = r^{2}\log(r) \qquad (13)

Wendland's Function:

\varphi(r) = \begin{cases} (1 - \varepsilon r)^{4}\,(4\varepsilon r + 1), & \varepsilon r \in [0, 1] \\ 0, & \varepsilon r > 1 \end{cases} \qquad (14)



Test Systems
In order to test our algorithm for approximating the inverse dynamics of a mechanical system, we
consider two systems: a planar pendulum and a planar elbow robot. These systems have one and two
degrees of freedom respectively.

Planar Pendulum
The first system we will consider is a planar pendulum with a massless rod as seen in Figure 2. This system has two states, the angular position $\theta$ and velocity $\dot{\theta}$. The equation of motion for this system is

\ddot{\theta} = -c\,\dot{\theta} - \frac{g}{L}\sin\theta + T \qquad (15)

where $c$ is a damping coefficient, $g$ is gravitational acceleration, $L$ is the length of the rod, and $T$ is the input torque.

Figure 2. Planar Pendulum.

In order to test our RBF method for approximating the inverse dynamics of this system we need a path in the state space to follow. The desired path in the state space for this pendulum on the earth's surface is the natural motion of the pendulum with no torques applied in the gravitational field of the moon, where $g_{moon} = 1.62\ \mathrm{m/s^2}$. In summary, we will use approximate inverse dynamics to simulate lunar pendulum motion on earth.

This problem has a closed form solution (Equation 16) which will be used as a benchmark for the approximate inverse dynamics solution.

T(t) = \frac{g_{earth} - g_{moon}}{L}\,\sin\theta(t) \qquad (16)

The steps used for simulating the pendulum motion using approximate inverse dynamics are as follows (a code sketch of these steps appears after the list):
1. Select a simulation time step $\Delta t$.
2. Integrate Equation 15 from a specified initial condition $x(0)$ with $g = g_{moon}$. This produces the desired path for the system as if it were on the moon. This path is represented by $x_d(t)$.
3. Select a range of input torques and initial conditions and create a set containing all of the possible torque-initial condition combinations.
4. Loop through each torque-initial condition combination and integrate Equation 15 forward with $g = g_{earth}$ from the selected initial condition with the selected input torque applied. Record the initial condition $x_j(0)$ and the system state output $x_j(\Delta t)$, where $j = 1, \ldots, N$.
5. Create a data set where the system outputs $x(\Delta t)$ and initial conditions $x(0)$ serve as the data sites and the system input torques serve as the function values. The data set then becomes

\{\,([\,x_j(0);\; x_j(\Delta t)\,],\; T_j)\,\}, \quad j = 1, \ldots, N \qquad (17)

6. Solve the system of linear equations in Equation 7.
7. Integrate Equation 15 from a specified initial condition $x(0)$ with $g = g_{earth}$. At each time step in the integration use the RBF expansion coefficients found in step 6, the current state of the system $x(t)$, and the desired state at the next time, $x_d(t + \Delta t)$, to determine the required input torque.
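A minimal sketch of these steps for the pendulum follows, in Python with NumPy rather than the MATLAB/ODE45 used in the paper. The Euler integrator, grid ranges, time step, and shape parameter are illustrative assumptions, not the paper's values.

```python
import numpy as np

g_earth, g_moon, L, c = 9.81, 1.62, 1.0, 0.1   # assumed model parameters
dt = 0.05                                       # step 1: simulation time step

def pendulum_step(x, T, g):
    """One Euler step of Eq. (15): theta_dd = -c*theta_d - (g/L)*sin(theta) + T."""
    th, thd = x
    thdd = -c * thd - (g / L) * np.sin(th) + T
    return np.array([th + dt * thd, thd + dt * thdd])

def gaussian(r, eps=1.0):
    """Gaussian basic function, Eq. (11); eps affects locality and conditioning."""
    return np.exp(-(eps * r) ** 2)

# Steps 3-5: sample torque/initial-condition combinations and build the data set,
# with sites [x(0); x(dt)] and function values T (the inverse-dynamics data).
sites, torques = [], []
for T in np.linspace(-2.0, 2.0, 9):
    for th0 in np.linspace(-1.0, 1.0, 9):
        for thd0 in np.linspace(-1.0, 1.0, 9):
            x0 = np.array([th0, thd0])
            sites.append(np.concatenate([x0, pendulum_step(x0, T, g_earth)]))
            torques.append(T)
S, u = np.array(sites), np.array(torques)

# Step 6: solve the linear system of Eq. (7) for the expansion coefficients.
dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)
coeffs = np.linalg.solve(gaussian(dist), u)

def approx_inverse_dynamics(x_now, x_next):
    """Step 7: evaluate the inverse interpolant of Eq. (9) at [x(t); x_d(t+dt)]."""
    q = np.concatenate([x_now, x_next])
    return coeffs @ gaussian(np.linalg.norm(S - q, axis=1))
```

In practice the interpolation matrix can be ill-conditioned for small shape parameters or closely spaced data sites, so the shape parameter (or a regularization term) may need tuning.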

Planar Elbow Robot
The second system we will consider is a planar elbow robot as in Figure 3. This system has four states, the angular positions and velocities of the two joints, represented by $\theta_1$, $\dot{\theta}_1$, $\theta_2$, and $\dot{\theta}_2$. The equations of motion for this system are of the form

M(\theta)\,\ddot{\theta} + C(\theta, \dot{\theta})\,\dot{\theta} + g(\theta) = \tau \qquad (18)


Figure 3. Planar Elbow Robot [4].

The test case for this system is to have the planar elbow robot mimic a single joint robot, i.e., $\theta_2(t) = 0$ and $\dot{\theta}_2(t) = 0$, going through a planned motion from $\theta_1(0)$ to $\theta_1(t_f)$ with the prescribed angular velocity profile $\dot{\theta}_1(t)$ of Equation (19).

The use of the approximate inverse dynamics algorithm for this planar elbow robot follows directly from
the steps listed for the planar pendulum above.

Results Obtained
Planar Pendulum
The planar pendulum was modeled with fixed values of the rod length and damping coefficient, and it was simulated in MATLAB [5] using ODE45 from specified initial conditions. Similar results are achievable with different initial conditions and model parameters.


Figure 4. $\theta$ error.

Figure 5. $\dot{\theta}$ error.


Figure 6. Input torque error relative to the closed form solution in Equation 16.

Planar Elbow Robot
The planar elbow robot was modeled with fixed link parameters. It was also simulated in MATLAB using ODE45.




(a) (b)
Figure 7. (a) $\theta_1$ error; (b) $\dot{\theta}_1$ error.


(a) (b)
Figure 8. (a) $\theta_2$ error; (b) $\dot{\theta}_2$ error.

Significance and Interpretation
The approximate inverse dynamics method presented in this paper has been shown to sufficiently map the
initial conditions and desired outputs of the two systems introduced to a desired input. Small state errors
are realized for both systems. However, in the planar elbow robot case, the errors are diverging. This can be attributed to the fact that the required torque becomes very large, much larger than the torque range used to approximate the inverse dynamics. This can most likely be alleviated by enlarging the torque range.

Although more research needs to be performed, these results lead the author to believe that approximate
inverse dynamics by use of RBFs can be extended to any system that can be modeled by a set of ODEs
regardless of the number of degrees of freedom. For systems with many degrees of freedom, more
efficient computational techniques will need to be investigated because the computational complexity of
solving the linear system grows exponentially with the number of state variables.

References
[1] R. J. Geraerts and M. H. Overmars, "Enhancing corridor maps for real-time path planning in virtual
environments," Computer Animation and Social Agents, pp. 64-71, 2008.
[2] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.
[3] G. E. Fasshauer, Meshfree approximation methods with MATLAB. World Scientific, 2007.
[4] M.W. Spong, S. Hutchinson, and M. Vidyasagar, Robot modeling and control. John Wiley & Sons,
2006.
[5] MATLAB, version R2011b. Natick, Massachusetts: The MathWorks Inc., 2011.
Spectroscopy: Explorations with Light

Student Researcher: Anna J. Gill

Advisor: Dr. Cathy Mowrer

Marietta College
Department of Education

Abstract
How do scientists know the chemical composition of stars, when they have never been to them? One way
is to use a spectroscope. When different elements are heated, they give off specific colors of light that can
be detected using a spectroscope. By dispersing the light given off by stars through a spectroscope, we
can look at images showing a unique pattern of dark and colored bands. These bands correspond to specific
chemicals in the burning star. Scientists can then compare their emission spectra to the spectra of known
elements from laboratory experiments like flame tests to determine the composition of stars.

Students can simulate this process through several methods, which will be explored over the course of
three days. On the first day, students will learn about spectroscopes and will practice recording the spectra
emitted by various light sources. The next day, students will construct a basic spectroscope using
materials like CDs, cardboard boxes, and tape. Finally, students will be shown a demonstration of a flame
test lab and will discuss how scientists pull all of this information together to determine the composition
of stars.

This series of lessons is designed for the 9th grade level and will take 3 days to complete. However, the
lesson could easily be modified for various grade levels and amounts of time.

Lesson Plan
Grade Level: 9 (can be modified for other grade levels)
Class: Physical/general science
Lesson Title: Spectroscopy
Unit Title: Light Energy and the Electromagnetic Spectrum

Essential Questions: How do scientists use spectroscopy to determine the chemical composition of stars?

Necessary Prior Knowledge: Students need to know that stars, like our sun, are giant bodies of burning
gases. Students must also understand that these stars are very far away, and thus cannot be studied
closely. This lesson is part of a larger unit on light energy and the electromagnetic spectrum.

Student Materials: diffraction grating slides, colored pencils, cardboard boxes at least 8" cube, toilet
paper tubes, old CDs, scissors, Exact-o knife, glue, duct tape, cardboard, black construction paper, ruler,
pencil, lab journal, Exploring Diffraction with Spectroscopes handout

Teacher Materials: same as above, as well as: Flinn Science flame test kit, large beakers, distilled water,
wooden stir sticks, Bunsen burner, safety goggles, lamp with fluorescent bulb; lamp with incandescent
bulb

Technology Usage: laptops for each student with internet access, Smart Board or Mimeo (nice to have
but not necessary)





Standards / Objectives / Learning Tasks / Assessment

Day 1:
- Standard: Demonstrate that electromagnetic radiation is a form of energy. Recognize that light acts as a wave. Show that visible light is a part of the electromagnetic spectrum (e.g., radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays). Ohio Academic Content Standards, Grade 9, Nature of Energy, 18.
- Objective: Students will use a spectroscope to determine the spectra of various light sources around the school.
- Learning Task: Students will draw the spectra of the various light sources they observe onto the "Exploring Diffraction with Spectroscopes" handout.
- Assessment: They will turn this handout in to the teacher to be graded.

Day 2:
- Standard: Same as above.
- Objective: Students will construct a basic spectroscope and test it in class.
- Learning Task: Students will work in lab groups to construct a basic spectroscope from provided materials. Then, they will test their spectroscopes on provided lamps and other light sources.
- Assessment: Observations during the construction of the spectroscope. Testing the spectroscope upon completion.

Day 3:
- Standard: Same as above.
- Objective: Students will be able to state how spectroscopes are used to determine the composition of stars.
- Learning Task: Students will watch a flame test demonstration. Then, they will practice online using the NASA website's "Spitzer Spectrometer Match Game" to connect the lessons from the past three days.
- Assessment: Discussion with students.

Lesson Procedures: Day 1
Lesson Introduction
Log into your Brain Pop account and show students the Electromagnetic Spectrum video.
(http://www.brainpop.com/science/energy/electromagneticspectrum/) Discuss which portions of the EM
spectrum we will be working with over the next three days (visible light).

Lesson Procedures
1. Explain what a spectroscope is, what it is used for, and how it works. Show students a diffraction
grating slide and demonstrate its use.
2. Pass out diffraction grating slides to the different groups. Then, explain to students why they will see
different results when they look at the classroom's fluorescent lights than when they look at an
incandescent bulb (light from heated solid vs. light from glowing gas).
3. Have students work in pairs to move about the school building (to approved destinations) observing
different light sources and their spectra. Students will follow directions from the Exploring
Diffraction with Spectroscopes handout from the NASA Optics Educator's Guide. Students will
also record their data on this handout.


Closure
Students will return to the classroom with 10 minutes remaining in class. Students will draw what they
saw using available resources (Smart Board technology would offer many colors to draw the spectra with.
Dry-erase boards, overhead projectors, and chalkboards may be used if a variety of colors are available
for writing with).

Homework/Extension Activities
No HW tonight, but offer students an extra credit opportunity: if they check out a diffraction grating slide
at the end of the day, they can go home and try to find other light sources besides those found in the
school. If they draw the spectra and record the location of these lights, they have the opportunity to earn
bonus points on the unit test. (Suggested light sources: candle, sunlight on a white wall, different types of
compact fluorescent or regular fluorescent light bulbs, mercury vapor lights, sodium lamps from parking
lots, LED lights, neon lights in stores, etc.)

Lesson Procedures: Day 2
Lesson Introduction
Begin class by sharing results of extra credit searching from the night before. Recollect any loaned
diffraction grating slides. Review with students what a spectroscope does. Then, explain that we will be
building a spectroscope in class today.

Lesson Procedures
1. Show an example of a basic spectroscope that you have already constructed. For instructions,
there are several sites you can use. Two good ones are
http://coolcosmos.ipac.caltech.edu/cosmic_games/spectra/makeGrating.htm and
http://sci-toys.com/scitoys/scitoys/light/cd_spectroscope/spectroscope.html . Give an overview of
how the spectroscope is constructed.
2. Pass out materials to students as well as printed instructions.
3. Students work in groups of 3-5 to construct the spectroscopes. Teacher circulates the room to help
them as they work.
4. As students finish their spectroscopes, they can test them using the provided lamps and other light
sources in the classroom.
Closure
Give students enough time to clean up from the activity before the end of the period.

Lesson Procedures: Day 3
Lesson Introduction
Review with students what they have learned the past two days about spectroscopy. Tell them that today
they will find out how to connect spectroscopy with how scientists discover the composition of stars.

Lesson Procedures
1. Teacher will explain the difference between emission line spectrum (what they have observed
these past few days) and the absorption line spectrum (what they will see during a flame test lab).
2. Teacher will demonstrate the flame test lab by: first, soaking wooden sticks in distilled water;
second, putting a small amount of the flame test chemicals, one at a time, onto the sticks; and
third, putting the stick with chemicals into the Bunsen burner flame. The students will make
observations in their lab journals as to the color of the flame. During this lab, the teacher will
discuss why the flames have different colors. During the lab, students will also observe the flame
using the diffraction grating slides and the spectroscopes they made.
3. After cleaning up from the flame lab, have students log onto the class set of laptops. Students
will locate the Spitzer Spectrometer game online
(http://coolcosmos.ipac.caltech.edu/cosmic_games/spectra/spectrometerMatch1.htm ). The
teacher will give instructions and students will play for about ten minutes. At the end of ten
minutes, students will log off the laptops and put them away.
Conclusion
Teacher will help student piece together the learning from the past three days by leading the class in a
guided discussion. The teacher will ask how the spectrum of an element is like the fingerprint of a person.
The teacher will help students connect observations from the flame test to observations through the Spitzer
Spectrometer online. Target: students can verbalize that elements give off different emission spectra when
heated. Each spectrum is unique to that element. When scientists carefully observe the spectra of stars,
they can thus deduce which elements the star is made of by comparing their emission spectra to known
emission spectra found in the lab.

Resources
The following resources were consulted in the creation of this unit and/or are resources used by students
during the unit.

1. "Building a Simple Spectroscope." Science Toys. Web. 5 Apr. 2012. <http://sci-
toys.com/scitoys/scitoys/light/cd_spectroscope/spectroscope.html>.
2. Electromagnetic Spectrum. Brain Pop. Web. 5 Apr. 2012.
<http://www.brainpop.com/science/energy/electromagneticspectrum/>.
3. Exploring Diffraction With a Spectroscope. Huntsville, AL: NASA Marshall Space Flight Center.
PDF.
4. "Flame Test and Spectroscopy." Academy Of Science. Web. 05 Apr. 2012.
<http://www.jpsaos.com/lumsden/flame_test_spectroscopy.htm>.
5. NASA. "Make Your Own Spectrometer." Cool Cosmos. NASA. Web. 05 Apr. 2012.
<http://coolcosmos.ipac.caltech.edu/cosmic_games/spectra/makeGrating.htm>.
6. Science Mission Directorate. "Visible Light" Mission: Science. 2010. National Aeronautics and
Space Administration. 07 Apr. 2012.
7. "Spitzer Spectrometer." Cool Cosmos. NASA. Web. 5 Apr. 2012.
<http://coolcosmos.ipac.caltech.edu/cosmic_games/spectra/spectrometerMatch1.htm>.
Where's Waldo?
A Design Paradigm in Pattern Recognition, Machine Learning and Image Processing

Student Researcher: Benita I. Gowker

Faculty Advisor: Fred Garber

Wright State University
Department of Electrical Engineering

Abstract
The illustrated series of Where's Waldo books provides an excellent set of challenge problems for
designing and testing hardware/algorithms to achieve human target recognition capability. Developing an
algorithm for finding Waldo in many varied environments will incorporate aspects of pattern recognition,
machine learning and image processing. This algorithm must be able to recognize Waldo in various
poses, scales and outfits in the different images. This leads to the creation of a series of Waldo filters.
These will be designed to match the image of Waldo, with any of the aforementioned characteristics, to
the image that will be searched.

While the current project applies to identifying a character in a very complicated image, this achievable
human capability has a broad spectrum of applications that could range from target segmentation and
automatic target recognition in synthetic aperture radar (SAR) images to cancer detection in medical
imaging. The algorithm will be tailored for specific problems, but the fundamentals of the algorithm will
be intact.

Introduction
In order to provide a machine with the capability to locate a specific character in an image, one must
develop a method for the machine to look at images. Creating a program to look at the image for a
target will incorporate algorithms in the fields of image processing, pattern recognition, and machine
learning.

Though the path to teaching a computer to find Waldo in any given image will be difficult, any
headway in this topic provides vast benefit. For instance, as mentioned previously, image processing,
pattern recognition, and machine learning for automatic target recognition are receiving significant
attention for their applications in the public and private sectors. This algorithm to find Waldo in an
image presents numerous applications in both of these sectors. In SAR, computer-aided target detection
could employ a similar algorithm to reduce the burden on analysts and provide confidence information for
target recognition estimates. In the public sector, as mentioned previously, a similar algorithm could be
used in computer aided cancer detection and geo-informatics.

The broad spectrum of applications with which this technology can be used presents potential profitability
for a wide range of clients. With slight modifications to the Where's Waldo algorithm, government
clientele as well as public business owners will have use for this technology.

Constraints
As in most sensor signal exploitation problems, constraints reduce to compute cycles per unit area, and
accuracy of the resulting decisions versus the complexity of the presented scene and variability and
variety of the class of the object of interest.

Project Objective
For this project to be deemed successful, a few options for the result of the algorithm are possible.
First, this algorithm could be used to analyze an image and provide a probability that Waldo is located
somewhere in the image. This seems to be a relatively simple idea, in that this algorithm could function
by locating areas in the image where color patterns appear to match Waldo's appearance.
Secondly, this algorithm could analyze a given image and spotlight areas on the image where a possible
Waldo sighting exists. This too, seems to be a relatively simple approach, with just the added extension of
an identifier. This method, while practical, promises to yield many false alarms due to the possibility of
similar color patterns dispersed over the whole image.

Lastly, this algorithm could function by analyzing an image, filtering the image with a series of designed
Waldo filters, which match Waldo in any orientation/pose to a potential Waldo sighting in the image. This
method could yield a score matrix which holds the probability that Waldo was in a particular
orientation/pose. This method would reduce the probability of false alarms, but could be significantly
more complicated.

Methodology
The research approach includes problem size reduction, characteristics exploitation, algorithm exploration, and algorithmic localization of gain and time versus success rate. The work done in this project is based on exploration and research of the different approaches needed to solve the problem. For problem size reduction, the image is split into cells, and each cell is checked to determine whether Waldo's image exists in it partially or entirely; if Waldo's image exists only partially in a cell, the surrounding cells are checked as well. Using characteristic exploitation, I tried to exploit Waldo's consistent image properties, such as his hat, glasses, shoes, and pants, and his inconsistent properties, which include his backpack, cane, and books. Because Waldo always wears blue pants and his pants are always below a red and white striped shirt, building a stripe filter that reduces the image to white and red helps find Waldo and rule out unwanted regions (a sketch of such a filter appears below).
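As an illustration of the color filtering and problem-size reduction just described, here is a minimal sketch in Python with NumPy rather than the Matlab used in the project. The palette values, tolerance, and cell size are illustrative assumptions, not the project's thresholds.

```python
import numpy as np

def waldo_color_mask(img, tol=60):
    """Keep only pixels close to Waldo's palette (red, white, blue); img is an
    H x W x 3 RGB array. Palette values and tolerance are assumptions."""
    palette = np.array([[220, 30, 30],    # red stripe
                        [245, 245, 245],  # white stripe
                        [40, 60, 180]])   # blue pants
    # Distance from every pixel to every palette color.
    d = np.linalg.norm(img[:, :, None, :].astype(float) - palette, axis=3)
    return d.min(axis=2) < tol

def candidate_cells(mask, cell=32):
    """Problem-size reduction: flag cells with enough palette pixels, also
    keeping cells whose vertical neighbor qualifies, since Waldo's shirt and
    pants may straddle two vertically adjacent cells."""
    H, W = mask.shape
    h, w = H // cell, W // cell
    counts = mask[:h * cell, :w * cell].reshape(h, cell, w, cell).sum(axis=(1, 3))
    keep = counts > 0.05 * cell * cell
    keep[:-1] |= keep[1:]   # propagate from the cell below
    return keep
```

Cells not flagged by candidate_cells can be discarded before any more expensive matched filtering runs, which is the source of the image-reduction percentages reported below.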

Results Obtained
Applying the problem size reduction strategy, the image was reduced by 52%. The time it takes the Matlab script to run was also reduced, to 3 seconds, and the pant-finder algorithm ran in 2570 milliseconds. The color filter code written was fast and simple, but the problems I encountered were that it was sensitive to color contrast and had no effective solution to color contrast estimation. The image segmentation program initially yielded a 26% reduction, which was later improved to a 52% reduction (twice as fast as the former) and then successfully increased to a 76% reduction.

Significance and Interpretation of Results
The color filter code in Matlab filters out impossibilities by blues, whites, and reds, which are the colors of Waldo's outfit. This code takes advantage of the fact that for Waldo to exist within any two vertically concatenated boxes, the two boxes between them must contain blue, white, and red. Overall, the 76% image reduction made the code faster. The code was scale invariant and the confusers were also highly reduced; the only remaining problem was contrast sensitivity. The pant filter code was also very fast and effective, but tests showed that it failed for two reasons: a color contrast problem and a scaling problem.

Figures/Charts

Figure 1. Past result with 26% reduction Figure 2. Past result with 56% reduction



Figure 3. Past result with 56% reduction Figure 4. Iterative image segmentation of Waldo's
whole image


Figure 5. Iterative image segmentation of Waldo's pants

Acknowledgments and References
Where's Waldo, Martin Handford, Candlewick, 2007.
Digital Image Processing Using MATLAB, 2nd ed. , Rafael C. Gonzalez and Richard E. Woods
and Steven L. Eddins, 2009, Gatesmark.

References
Waldo Images:
1. http://www.myperfectautomobile.com/uncategorized/waldo.html
2. http://construction.seattlechildrens.org/2011/03/where%E2%80%99s-waldo%E2%84%A2-comes-to-
seattle-children%E2%80%99s-hospital
3. http://resumeroasts.blogspot.com/2012/03/wheres-waldo.html
4. http://waldo.wikia.com/wiki/Where's_Waldo%3F
5. http://www.grmwebsite.com/blog/bid/66913/Get-Found-with-SEO-The-Where-s-Waldo-of-Web-
Page-Marketing
6. http://bokertov.typepad.com/btb/2011/08/obamanomics-quote-of-the-day.html
7. http://www.holytaco.com/holy-taco%E2%80%99s-pitch-for-the-where%E2%80%99s-waldo-
movie/wheres-waldo-in-chicago-2/
SAR Image:
1. http://www.thespacereview.com/article/790/1
Cancer Image:
1. http://www.cancersurvivalrates.net/brain-cancer-survival-rates.html
Geo-informatics Image:
1. http://www.scivee.tv/node/3095/video

Special Thanks to:
Tony Buchenroth and Stephen Hartzell
Battery Management for a Lunar Rover

Student Researcher: Courtney A. Gras

Advisor: Dr. Tom Hartley

The University of Akron
Department of Electrical and Computer Engineering

Abstract
There is no question that energy storage is an important aspect in the design of a lunar rover. Without
energy, no mobility would be possible, and no life support systems could function. It is therefore
important to consider the design of a system capable of storing energy and supplying power to a lunar
rover. One often considers a system containing a solar array coupled with an energy storage system such
as batteries. When large groups of series-connected batteries charge and discharge, they ideally do so as
one unit. However, as cells age, it is possible for one cell to become more charged or discharged than the
rest of the pack, essentially becoming a load on the rest of the cells. This can lead to overcharging or
over-discharging, which causes permanent damage to the cells and shortens the useful life of the pack.
This problem most frequently occurs due to differences in cell construction, temperatures, capacity,
internal resistance, and aging. When considering the harsh environment of the moon, in addition to the
integral role energy storage plays in a rover, safety and reliability are issues not to be overlooked. It is,
therefore, the goal of this project to propose a solution to the management of an energy storage system in
a lunar rover.

Project Objectives
The objective of this project was to design and test a battery management system capable of monitoring
the state of the individual cells and correcting imbalances, to extend the life of the battery pack utilized in
an extraterrestrial rover vehicle. To accomplish this objective, it was also desired that a battery pack be
characterized to meet the requirements of an extraterrestrial roving vehicle.

Methodology
To design a battery management system capable of protecting a battery pack employed in an
extraterrestrial rover vehicle, special consideration was given to the vehicle and environmental
characteristics, in addition to the general battery analysis. For this study, two assumptions were made:

1. The extraterrestrial rover vehicle is characterized as illustrated in the NASA Small
Pressurized Rover Fact Sheet [1].
2. The selected battery chemistry is Lithium Iron Phosphate (LiFePO4), because its
specifications fit the requirements of the proposed system and its qualities were well suited
for the application. A comparative analysis against the ABSL Sony 18650 cells was also
performed, but exceeds the scope of this paper.

The first part of this study involved the analysis of the rover vehicle and associated power requirements,
where the rover information was acquired from the NASA Rover Fact Sheet and the NASA LRV
Handbook [1][2]. Assuming a total rover weight of 1,333.33 kg (on the moon) with a required driving
distance of 240 km at 10 km/hr, with standard coefficients of friction, the total required power was
calculated to equal 2,000 W. Assuming an average speed of 6 km/hr, an upper limit on the required time
for rover operation was determined to be 40 hr. Following these general rover calculations, sizing of the
battery pack to be managed was done, where the required energy storage was found to be 80,000 Wh.
Choosing the nominal battery pack voltage to be 200 V, the capacity of the battery pack was calculated as
400 Ah. These figures are re-derived in the short sketch below.
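The sketch below, in Python, simply re-derives the pack-sizing figures from the stated assumptions; the
variable names are mine, and the values are those quoted in the text.

distance_km = 240        # required driving distance
avg_speed_kmh = 6        # assumed average speed
power_w = 2000           # total required power from the rover analysis

operating_time_h = distance_km / avg_speed_kmh   # 240 / 6 = 40 hr
energy_wh = power_w * operating_time_h           # 2,000 W * 40 hr = 80,000 Wh

pack_voltage_v = 200                             # chosen nominal pack voltage
capacity_ah = energy_wh / pack_voltage_v         # 80,000 / 200 = 400 Ah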

After the battery pack was sized, the battery management system was designed. To correct for imbalances
in battery cells, a battery balancing technique must be employed. There are a number of ways to
approach battery cell balancing, but in this study the dissipative approach was employed. In this method,
a cell bypass is enabled whenever a cell's voltage approaches a maximum threshold during the charging
process. This enables the rest of the cells in the battery pack to complete their charging while preventing
damage to the bypassed cell and bringing the pack into balance. The circuitry used to accomplish this task
is illustrated in Figure 1. In addition to the standard dissipative approach, an active approach to cell
balancing was also employed, as shown in Figure 2. In this method, a microcontroller was utilized to
monitor each cell's current and past performance so that intelligent adjustments could be made to the
bypass turn-on voltage. To oversee this process, a pack controller was employed with individual cell
monitoring units placed on each battery cell.
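As a rough illustration of this balancing logic, the Python fragment below sketches the bypass decision
and the active threshold adjustment; the turn-on voltage and the adjustment rule are illustrative
assumptions, not values from the actual boards.

BYPASS_ON_V = 3.60   # assumed per-cell turn-on voltage for LiFePO4 (illustrative)

def bypass_enabled(cell_voltage_v, turn_on_v=BYPASS_ON_V):
    # Divert charge current around a cell once its voltage reaches the
    # threshold, so the rest of the pack can finish charging safely.
    return cell_voltage_v >= turn_on_v

def adjusted_turn_on(early_bypass_count, base_v=BYPASS_ON_V, step_v=0.01):
    # Active variant: the pack controller nudges a cell's turn-on voltage
    # based on its past behavior (here, how often it reached bypass early).
    return base_v - step_v * min(early_bypass_count, 5)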

In addition to the balancing portion of the design, the proposed management system may also include the
ability to identify each cell and transmit real-time data to a master controller via radio frequency
technology. This communication of individual cell data allows for real-time monitoring and data
logging. Such a design allows multiple battery cell locations and reduces the risk associated with
running additional wires in a high-voltage environment. Finally, to ensure the system would comply with
standard fault tolerances, emergency cell cut-offs were also built into the system.

Results
Following the analysis, a battery management system printed circuit board (PCB) was designed and tested
on a pack of 49 90 Ah Thundersky LiFePO4 cells. This testing was accomplished in an electric vehicle
setting, to mimic the behavior of a rover vehicle, as shown in Figure 3. To test the versatility of the
management system, an additional test was performed on a battery pack of 8 Nickel Zinc cells, whereby
the pack was cycled at a 4.5 A constant-current charge rate and a 9 A discharge rate. Data from this test
illustrating the cell bypass
functionality is shown in Figure 4.

To complete this study, a feasibility analysis was performed of the digital cell bypass method:

Size - The management boards, being only 160 mm x 58 mm x 15 mm and conveniently fitting on
top of the batteries themselves, are a negligible addition to consumed space in the rover.
Weight - At 200 g per board, the boards are small in comparison to the weight of the batteries
themselves and add no major contribution to the total weight of the rover.
Wireless Design - Wireless communication enables ease of retrofitting an existing pack, allows
multiple battery locations, and reduces the risk associated with running additional wires in a
high-voltage environment.
Power Consumption - The wireless communication portion of the design requires a constant
15 mA (this can be reduced if the communication is put into a sleep state), while the rest of the
management system requires an average of 75 mA.
Cost - The inexpensive cost of the system, $2,676, is a very appealing factor.
Fault Tolerance - NPR 8705.2 requires that all batteries "shall be two-fault tolerant to
catastrophic failure." The proposed management system meets this requirement as now shown:
with the utilization of active/passive control, if the active system fails, the passive components
(rated significantly higher for this purpose) will continue to function as usual. Assuming the
active portion of the board has failed, the master controller will detect loss of communication
and, therefore, a failure; a sketch of this check follows this list.
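As a minimal sketch of the loss-of-communication check described in the fault-tolerance item above, the
fragment below shows one way a master controller could flag silent cell boards; the timeout value and
data structure are assumptions, not details of the actual design.

import time

COMM_TIMEOUT_S = 5.0      # assumed maximum silence before declaring a fault
last_heard = {}           # cell_id -> timestamp of the last received packet

def record_packet(cell_id):
    last_heard[cell_id] = time.time()

def failed_cells(now=None):
    # Boards that have gone silent are treated as failed active electronics;
    # the higher-rated passive bypass components keep operating regardless.
    now = time.time() if now is None else now
    return [cid for cid, t in last_heard.items() if now - t > COMM_TIMEOUT_S]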

Significance
Through the design and testing of a battery management system in bench testing, field testing, and even
testing with multiple battery chemistries, it was clear that a battery balancing technique, employed through
a collection of individual cell management units utilizing wireless communication, offered the protection
and reliability required for a battery pack in a lunar rover vehicle.
Figures

Figure 1. Passive Cell Bypass Circuitry.
Figure 2. Active Cell Bypass Circuitry.
Figure 3. Management system test in electric vehicle.
Figure 4. Test data from cycle #354 of a test with 8 NiZn cells.
Acknowledgments
Special thanks to Dr. Tom T. Hartley and Benjamin Magistro for technical assistance, Myers Motors for
providing test vehicles, and the NASA Space Technology Competition for the inspiration to study the
subject.

References
[1] NASA Small Pressurized Rover Fact Sheet. Retrieved from:
http://moontasks.larc.nasa.gov/docs/main_spr_factsheet_web.pdf.
[2] NASA LRV Handbook. (1971). LRV Operations Handbook Appendix A (Performance Data).
NASA-TM-X-66816.

Characterization and miRNA Analysis of Breast Cancer Cell-Secreted Microvesicles

Student Researcher: Nicole D. Guzman

Advisor: Michael E. Paulaitis

The Ohio State University
Department of Chemical and Biomolecular Engineering

Abstract
The discovery of stable levels of miRNAs in blood [1-4] has triggered research trying to identify their
role in circulation and implications in mediating cell signaling during the initiation and progression of
cancer. The finding in 2008 that epithelial tumor-cell-derived miRNAs are present in circulation [4] has
bolstered the study of circulating miRNAs as markers of epithelial-derived tumors as well as of blood
malignancies. In the last four years, more than 200 publications have identified circulating miRNAs as
biomarkers for a range of cancer types and other malignancies [5].

Besides epithelial tumor cell-derived miRNAs, blood contains a mixture of miRNAs originating from the
myriad of blood-cell types that are in circulation, including red blood cells, neutrophils, lymphocytes,
platelets, and monocytes [1, 5, 6]. Since cancer-related inflammation is likely to alter the miRNA
expression of the cellular constituents of blood, systemic changes in the profiles of secreted miRNAs in
the blood stream are confounded with tumor-specific changes until the effects of cancer on each of the
constituents of blood are quantified. Additionally, based on large-scale profiling studies comparing normal
and tumor tissue [7, 8], it is observed that the vast majority of miRNA species in a particular cell type
are expressed in both normal and diseased states, with only their abundances changing between cell
states.

A way to bypass the complexity of the still-to-be-characterized miRNA dynamics arising from blood-cell-
related changes in circulation during cancer initiation and progression is to analyze only the miRNAs that
originate specifically from the tumor. Since cell-secreted microvesicles are known to contain
miRNA [9-15], a more effective way of measuring tumor-induced changes in blood miRNAs is to
quantify changes in the abundance and composition of miRNA species originating only from the
subpopulation of vesicles released by epithelial cells.

Since microvesicles contain miRNA compositions distinct from those of the donor cells [13, 14, 16], it is vital
to first characterize the changes in microvesicle miRNA abundance and composition which accompany
changes in the state of the cells of origin. In this study we have obtained the abundance, on a per cell
basis, of the mature miRNA content in the cells and secreted microvesicles of the non-transformed
MCF10A and the transformed MCF7 breast cell lines. We have determined that both MCF10A and
MCF7 cells secrete less than 1% of their intracellular miRNA pool within 72 hours. Importantly, the most
abundant miRNAs contained within microvesicles do not parallel those in the donor cells.

Materials and Methods
Cell culture
MCF10A cells were grown as described previously [17]. Briefly, cells were grown in DMEM/F12
(Invitrogen; Carlsbad, CA) containing 5% Fetal Bovine Serum (Invitrogen; Carlsbad, CA) and
supplemented with 0.02 µg/ml Epidermal Growth Factor (PeproTech; Rocky Hill, NJ), 0.5 µg/ml
Hydrocortisone (Sigma-Aldrich; St. Louis, MO), 0.1 µg/ml Cholera Toxin (Sigma-Aldrich; St. Louis,
MO), 10 µg/ml Insulin (Sigma-Aldrich; St. Louis, MO), and 50 U/ml and 50 µg/ml penicillin-
streptomycin (Invitrogen; Carlsbad, CA). MCF7 cells were grown in RPMI media (Invitrogen; Carlsbad,
CA) containing 10% Fetal Bovine Serum (Invitrogen; Carlsbad, CA), 1% 100x Non-Essential Amino
Acids (NEAA) (Invitrogen; Carlsbad, CA), and 10 µg/ml Gentamicin (Sigma-Aldrich; St. Louis, MO).
Cells were allowed to reach 70-80% confluency in complete culture media (containing FBS) in 10-cm
cell culture dishes at 37°C in a 5% CO2 humidified incubator. These subconfluent cultures were then
washed twice in 1x Phosphate Buffered Saline (Invitrogen; Carlsbad, CA) and further incubated in complete
media or in serum-free media for 72 hrs. Visual inspection for detached cells confirmed no significant cell
death for either cell line grown in serum-free media over this period of time. Total cell number was
obtained by using a hemocytometer. For cell growth experiments, MCF10A and MCF7 cells were seeded
in 6-well plates. When cells reached 70-80% confluency, they were washed twice with PBS and medium
containing 0% or 10% FBS was added (time point 0). Cell number was obtained by Countess Automated
Cell Counter (Invitrogen; Carlsbad, CA) at time points 0 and 72 hr. Experiments were performed in
duplicate.

Microvesicle Isolation.
Secreted microvesicles (MVs) were isolated from the serum-free media after 72 hrs. On average, 40 10-
cm culture dishes of cells were required to obtain 100 ng of total RNA for analysis using the Nanostring
miRNA assay. Sequential centrifugation/ultracentrifugation was used to isolate the MVs, as described in
detail elsewhere [18]. Briefly, cells were removed from culture supernatant by two consecutive spins; first
at 300g for 5 min, and then at 2,000g for 20 min. Cell-free supernatant was transferred to 25 ml
polycarbonate ultracentrifuge tubes (Beckman Coulter; Brea, CA) and depleted of cell debris by a single
spin at 10,000g for 30 min using a Type 70 Ti rotor (Beckman Coulter; Brea, CA). Next, the supernatant
was transferred to clean polycarbonate tubes and ultracentrifuged at 100,000g for 70 min to pellet the
MVs. The supernatant from this spin was removed by aspiration, and the pellet was washed with sterile
1xPBS, resuspended, and spun a second time at 100,000g for 70 min. The washed pellet was re-
suspended in TRIzol reagent (Invitrogen; Carlsbad, CA). All spins were carried out at 4°C.

RNA extraction
Total RNA was extracted by TRIzol Reagent (Invitrogen, Carlsbad, CA) from confluent cells grown in
complete media, or after 72 hrs culture in serum-free media. Total RNA was also extracted from the MVs
using TRIzol Reagent. On average, 230 ng of total RNA per 100 million cells was recovered after TRIzol
extraction of the MVs. Since the enzymatic ligation step of the Nanostring miRNA assay is sensitive to
contaminants from the TRIzol extraction, the RNA samples were purified further by adding 3.5 volumes
of ice-cold 95% ethanol, 1/10 volume of 3M sodium acetate (pH 5.2), and 1 µl of glycogen at
1 µg/µl (Invitrogen; Carlsbad, CA) to the TRIzol-extracted RNA samples, and incubating on ice for a
minimum of 30 minutes. After incubation, the samples were centrifuged for 20 min, and the pellet washed
twice in cold 70% ethanol before re-suspending in ultrapure DEPC-treated water (Invitrogen; Carlsbad,
CA). On average, 45 ng of total RNA per 100 million cells was recovered after this purification. Thus, to
guarantee 100 ng of total RNA for the Nanostring miRNA assay, a minimum of 350 million cells were
cultured for every MV RNA sample. All samples were standardized to a concentration of 33 ng/µl total
RNA before storage at -80°C.
Total RNA was quantified using a NanoDrop 2000 spectrophotometer (Thermo Scientific, Wilmington,
DE). For all cellular RNA samples, the A260/A280 absorption ratio was greater than 1.9 and the
A260/A230 absorption ratio was greater than 2.0. In comparison, these absorption ratios for the MV RNA
samples were considerably lower: A260/A280 ratio of 1.6 and A260/A230 ratio of 0.7. Since the MV
RNA composition differs considerably from that of cellular RNA [13, 14, 19], these lower absorption
ratios do not necessarily reflect a lower quality of the RNA, but simply a lack of ribosomal RNA (rRNA).

Nanostring nCounter microRNA Assay.
Sample preparation and hybridization reactions were carried out according to the manufacturer's protocol.
RNA samples were prepared for hybridization in a series of annealing, ligation, and purification steps.
First, a 3 µl aliquot of RNA sample (33 ng/µl) was added to 3.5 µl of annealing master solution
containing miRNA tag reagents and miRNA assay controls. This solution was then placed for 13 min in
the thermal cycler for annealing. Next, 2.5 µl of ligation master solution and 1.0 µl of ligase were added
to this solution, which was then placed for 24 min in the thermal cycler for ligation. Finally, 1 µl of
ligation inactivating solution was added and the solution was placed for 2 hr in the thermal cycler for
purification. The total RNA concentration in this solution was adjusted to 2 ng/µl by adding 40 µl of
ultrapure DEPC-treated water.

The hybridization solution containing 10 ng of total RNA was prepared by adding 20 µl of reporter probe
and 5 µl of capture probe solutions to 5 µl of RNA sample solution. The RNA concentration in this 30 µl
hybridization solution was 0.33 ng/µl. This solution was incubated in the thermal cycler for a minimum of
16 hours at 65°C. After hybridization, the solution was placed directly in the nCounter Prep Station for
analysis. The complexes consisting of the miRNA and the capture and reporter probes were purified,
aligned, and counted using a CCD camera and a microscope objective lens. The nCounter Digital
Analyzer acquired images of each field of view using four different excitation wavelengths (480, 545,
580, and 622 nm). The processed images and raw barcode counts were tabulated and reported in a CSV
format. Subsequent data analysis was carried out in an Excel worksheet. Twelve RNA samples were
analyzed in a single Nanostring assay, and 654 human miRNAs were profiled in each sample. The 654
human miRNAs profiled are based on the known miRNA sequences released in version 14 of the Sanger
miRBase registry.

Nonspecific hybridization for the miRNA capture and reporter probes used in our assays was also
determined by running a no-template control assay. Correction factors were found for 66 out of the 654
human miRNAs that were then applied to the raw counts obtained for these specific miRNAs in our
subsequent assays.

Nanostring nCounter miRNA Data Analysis
The miRNA counts generated in the Nanostring assay after correcting for nonspecific hybridization were
analyzed to obtain miRNA copy numbers in each sample. The miRNA counts in each sample were first
normalized to account for sample-to-sample variations in a single Nanostring assay. The normalization
factors were within 1.00 ± 0.22, and were computed by summing the counts for the six positive controls
in each sample and dividing by the average of this sum for all 12 samples. The normalized miRNA counts
were then converted to counts above a background threshold. We defined the background threshold as
two standard deviations above the background for an assay that was computed by averaging the
normalized counts for the eight negative controls in each sample. The obtained background threshold
counts for each sample were subtracted from the normalized miRNA counts. Counts below the
background threshold, which varied between 50 and 100 counts, were considered not detectable.
The normalized, background-adjusted counts for each miRNA were converted to miRNA copy numbers
using an averaged standard curve generated from the dilution series of the six positive controls for the
twelve samples in the Nanostring Assay. We assumed that the ligation efficiency for this synthetic
miRNA represented the average efficiency for the ligation assays for all 654 miRNAs profiled. A
synthetic miRNA standard curve has been used previously to obtain absolute miRNA copy numbers from
qRT-PCR Ct values [20].
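The count processing just described can be summarized in the short Python sketch below (normalization
to the positive controls, a background threshold two standard deviations above the negative-control mean,
and background subtraction); the array names and structure are mine, not Nanostring code.

import numpy as np

def process_counts(raw, pos_ctrl, neg_ctrl):
    # raw: (n_samples, n_miRNAs) counts after nonspecific-hybridization correction;
    # pos_ctrl: (n_samples, 6) positive controls; neg_ctrl: (n_samples, 8) negatives.
    pos_sums = pos_ctrl.sum(axis=1)
    factors = pos_sums / pos_sums.mean()        # per-sample factors, near 1.00 +/- 0.22
    norm = raw / factors[:, None]               # sample-to-sample normalization
    neg = neg_ctrl / factors[:, None]
    threshold = neg.mean(axis=1) + 2 * neg.std(axis=1)   # background + 2 SD
    adjusted = norm - threshold[:, None]
    return np.where(adjusted > 0, adjusted, 0.0)         # below threshold: not detectable

Conversion of the adjusted counts to copy numbers via the positive-control dilution series would then
follow as a separate step.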

The miRNA copy numbers obtained from this standard curve are based on 10 ng of total RNA in a
sample. For the cellular miRNA, these copy numbers are converted to miRNA copy numbers per cell
using the total RNA measured in a sample after TRIzol extraction and ethanol purification and the total
number of cells from which this RNA was derived. An estimated efficiency of 75% for the TRIzol
extraction of cellular RNA was taken from the literature [21] and applied to carry out this conversion. An
efficiency of 100% was assumed for the ethanol purification following TRIzol extraction, since the
measured total RNA concentration did not change significantly before and after this purification step. On
average, we measured a total RNA content of 10 pg/cell and 16.1 pg/cell for MCF10A and MCF7 cells,
respectively.
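One plausible reading of this per-cell conversion is sketched below; the formula and names are my
reconstruction, not code from the study.

def copies_per_cell(copies_in_10ng, total_rna_ng, n_cells, trizol_eff=0.75):
    # Scale copies measured in the 10 ng assay input up to the whole extract,
    # correct for the assumed 75% TRIzol recovery, and divide by the cell count.
    total_copies = copies_in_10ng * (total_rna_ng / 10.0)
    return total_copies / trizol_eff / n_cells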

MV miRNA copy numbers per cell were similarly obtained from the total amount of RNA measured after
TRIzol extraction and the total number of cells from which the MVs were derived. For this conversion,
we assumed an efficiency of 15% in isolating MVs from the cells using sequential
centrifugation/ultracentrifugation based on previous work that reported this efficiency to be between 5
and 25% [22]. An efficiency of 10% for MV RNA recovery by TRIzol extraction was estimated by
comparing the total concentration of MV RNA measured after hypotonic lysis of the MVs to that
measured after TRIzol extraction. MV RNA recovery in the ethanol purification was estimated to be 19%
from direct measurements of the total RNA concentration before and after this step. We observed a
significant loss of the MV pellet, even with the addition of glycogen for this final clean-up procedure. The
low recovery of RNA from this step likely originates from difficulties in tracking the barely visible
precipitated pellet of microvesicle RNA during the multiple washing steps. In fact, it was not uncommon
to lose the microvesicle RNA pellet entirely during the last aspirations. Consequently, the overall
efficiency of MV RNA recovery was estimated to be 0.3%. It is highly important to account for the
efficiency of the MV isolation procedure, not only because of rupture of vesicles during the high-speed
ultracentrifugation spins, but also because of the loss of material during the transfer and washing of the
precipitated pellet.
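Multiplying the three efficiencies quoted above reproduces the stated overall recovery:

\eta_{\text{overall}} = \eta_{\text{isolation}} \times \eta_{\text{TRIzol}} \times \eta_{\text{ethanol}}
= 0.15 \times 0.10 \times 0.19 \approx 0.0029 \approx 0.3\%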

The Nanostring assays were also evaluated by computing the sampling efficiency, a lower detection limit,
and inter-assay reproducibility. An average sampling efficiency of 1% was obtained by comparing the
counts observed in the Nanostring assay to the theoretical number of molecules for each of the six
positive controls in each sample. The lower detection limit was defined as
the concentration above which the normalized counts for the dilution series of positive controls (128 fM,
32 fM, 8 fM, 2 fM, 0.5 fM, and 0.125 fM) were not significantly different from the average background
of an assay. This detection limit for our assays was determined to be 0.125 fM or 25 to 50 counts. Inter-
assay reproducibility was assessed by plotting miRNA copy numbers derived from independent assays on
a log-log graph. Linear fits to these plots generated correlation coefficients greater than 0.8, indicating
acceptable reproducibility.

Results and Discussion
A low number of known miRNAs are detected in breast cells and MVs.
We quantitatively profiled a total of 654 human mature miRNAs in MCF7 and MCF10A cells that were
serum-deprived for 72 hours, and in their secreted microvesicles. Setting a threshold of two standard
deviations above the average background, we detected a total of 187 miRNAs for the MCF10A cell line
and 168 miRNAs for the MCF7 cell line (Figure 1). Detected miRNAs were divided into three
categories: miRNAs detected only in the cells, miRNAs detected only in the MVs, and miRNAs detected
both in the cells and in the secreted microvesicles. Out of a total of 187 miRNAs detected in the MCF10A
cell line, 139 were present only in the cells, 2 only in the microvesicles, and 46 in both the cells and in
their secreted microvesicles. For the MCF7 cell line, out of a total of 168 miRNAs detected, 42 miRNA
species were present only in the cells, 5 only in the secreted microvesicles, and 121 in both the cells and
the secreted microvesicles. The detection of a large number of miRNA species present in both the cells
and the secreted microvesicles has also been observed in a previous study [16], which detected 78
miRNAs in both MCF7 cells and their secreted microvesicles. Of these 78 detected by Pigati et al.,
47 miRNA species were also detected in our results for this category. The detection of certain miRNA
species only within the cells, only within MVs, or in both cells and MVs suggests an active selection
by the cell. These observations support the hypothesis put forth by Wang et al. of the existence of an
unknown miRNA export process which identifies and selects some miRNAs for secretion.

The detection of miRNA species present only in the MCF10A and MCF7 microvesicles was intriguing. A
recent study by Wang et al. observed an initial burst of secreted miRNA within 2 hours after the switch to
serum-free media in human hepatocellular carcinoma HepG2 cells [23]. This observation led them to
hypothesize that secreted microRNAs could originate from a pre-packaged and pre-synthesized pool.
We hypothesized that as the cells were maintained in serum-free media, certain miRNAs stopped being
expressed, such that after 72 hrs we could not detect their presence in the cells. To test whether
microvesicle-only miRNAs originate from a cellular miRNA pool corresponding to the physiological state
prior to the switch to serum-free media, we proceeded to profile the miRNA pools from cells grown in
full media. We found that both MCF10A MV-only miRNAs, miR-635 and miR-1290, were not
detected in the cells grown in full media. Likewise, miR-376a, miR-664, miR-630, and miR-1290, four of
the MCF7 MV-only miRNAs, were not detected in the cellular MCF7 profile originating from cells
cultured in full media. Moreover, these 4 miRNAs found in MCF7 MVs were not detected in previously
published MCF7 cellular profiles from cells grown in full media [16, 24]. Although technical
explanations can be given for the detection of certain miRNAs only within the secreted microvesicles,
such as aberrant hybridization between reporter and capture probes for that specific miRNA in the MV
samples or the presence of these miRNAs below threshold levels, a more mechanistic explanation may
encompass this observation. That is, we can hypothesize that miRNA species found at low abundances
(below threshold levels) in the cells are being specifically targeted for secretion instead of being routed
for degradation in the lysosome.

The total miRNA pool detected is concentrated in a small number of miRNA species in both cells
and MVs.
We determined the absolute number of miRNA molecules on a per cell basis for the cells and the secreted
microvesicles of the non-transformed epithelial cell line MCF10A and the transformed cell line MCF7.
Once we determined the individual copy numbers per cell for all miRs detected in both cells and
microvesicles for the MCF7 and MCF10A cell lines, we calculated the percentage of each miRNA
relative to the total copy number of all miRNA species detected in the samples. For both MCF10A and
MCF7 cells, four miRNAs accounted for one third of the total
intracellular miRNA pool detected. These four miRNAs in MCF10A cells were miR-720, let-7a, miR-
205, and miR-92a (Figure 2) and for MCF7, miR-21, let-7a, miR-720, and miR-16 (Figure 3). In fact, for
both MCF7 and MCF10A cell lines, 50% of the total miRNA pool could be accounted for by less than 10
miRNA species. A previous study also observed the concentration of the total miRNA pool within a
subset of miRNA species [25]. They determined that almost half the miRNA pool in embryonic stem cells
could be accounted for by miR-21, miR 17-92 cluster, miR15b/16 cluster, and the miR 290-295 cluster.
Since these specific miRNAs have been associated with alterations in cell cycle, growth, and death, they
suggested that their high abundance could imply their involvement in control of cell division in ES cells.
Similarly, the most abundant miRNAs in MCF10A and MCF7 starved cells, except miR-720, have been
implicated in enhancing cell survival pathways and affecting cell growth. Specifically, up-regulation of
let-7a has been seen to induce growth arrest [26] and increased resistance to apoptosis [27]; up-regulation
of miR-16 induced growth arrest [28] but has also been observed to induce apoptosis [29]; up-regulation
of miR-205 significantly inhibited cell proliferation and growth [30]; and up-regulation of miR-21 has
been observed to block the expression of critical apoptotic related genes [31]. This could indicate that
only a small group of miRNAs are required to dynamically modulate a specific set of mRNAs involved in
the most relevant signaling pathways at any given state.

We observed a much more pronounced concentration of abundance in the miRNAs within microvesicles
as compared to the cells (Figures 2 and 3, bottom pie charts). In MCF10A microvesicles, three miRs
(miR-630, miR-720, and miR-205) accounted for almost 60% of the total miRNA pool secreted by
MCF10A cells. Surprisingly, eight miRNAs accounted for 75% of the total microRNA profile in MCF10A
microvesicles (miR-630, miR-720, miR-205, miR-92a, miR-1274a, miR-221, miR-100, miR-1290). In
MCF7 MVs, miR-720 alone accounted for 45% of the total microRNA pool. Similarly, seven miRNAs
accounted for 75% of the total microRNA pool within MCF7 microvesicles (miR-720, miR-1274b, miR-
1274a, miR-1260, miR-100, let-7a, and miR-125b). A recent study profiled the miRNA content of MCF7
MVs using microarrays and found that miR-923, miR-720, and miR-21 had the highest relative fluorescent
intensities of all 344 miRNAs detected [16]. In their study, miR-720 accounted for 8.3% of the total
intensity of all miRNAs detected and miR-21 accounted for 6.4%. We obtained a higher relative
percentage for miR-720 (45%) and a lower value for miR-21 (2.1%). Since relative percentages are
dependent on both the species and number of miRNAs assayed, differences between our relative
percentages and those obtained by Pigati et al could reflect that they profiled a larger and differential pool
of miRNAs than our study (852 vs. 654 miRNAs). Our study, for example, did not assay miR-923, the
most abundant miRNA detected in their study for MCF7 MVs.

We observed marked differences between the miRNA profiles obtained for the cells and for the
microvesicles of MCF7 cells. Three of the top four (miR-21, let-7a and miR-16) most abundant miRNAs
in MCF7 cells decrease in their relative percentages in the microvesicles. Inversely, three of the four most
abundant microRNAs in MCF7 microvesicles (miR-1274a, miR-1274b, miR-1260) have significantly
lower relative percentages in the cells. Differences between the MCF10A cells and their secreted
microvesicles were more heterogeneous. let-7a and miR-29a, the second and fifth most abundant
miRNAs in MCF10A cells, decreased significantly within the microvesicles. The relative percentage of
miR-205 increased in the MCF10A microvesicles relative to the cells. For both MCF10A and MCF7
MVs, miR-720 increased in its relative percentage compared to the cells. Our results further support the
idea that certain miRNAs are targeted for secretion, as passive release would yield a population of
miRNAs in a ratio that paralleled the species and abundances of miRNAs detected in the cell.

The overrepresentation of certain miRNA species in MVs may be explained by recent discoveries of an
association between the miRISC and the MVB secretion pathway. Two independent studies identified
that GW-bodies, which are cytoplasmic aggregations of GW182-rich miRISC and mRNA complexes,
are physically and functionally coupled to MVB membranes [10, 32]. Although AGO2 is found at lower
levels in exosomes than in whole cell lysates [15], exosomes are markedly enriched in GW182, a protein
thought to bring multiple Argonaute proteins together to form the regulatory complex [10]. The
hypothesis proposed is that the GW182 proteins, with
a fraction of miRNA-loaded Ago2 complexes, are specifically sorted into MVBs for secretion. The
differential relative percentages of miRNAs observed in our study within MVs could indicate that these
species are preferentially stabilized within these GW-182 rich-complexes. In fact, the differential
proportions observed between the secreted miRNAs relative to the intracellular compartment can be
supported by previous experiments which observed that the co-localization of let-7a-repressible mRNA
transcripts with MVBs in the cytoplasm did not correlate with their levels in the vesicles [10]. These
observations support the idea that only certain protein-miRNA-mRNA complexes are targeted for
secretion.

The abundance of miRNAs in secreted MVs relative to cells
We determined the absolute number of intracellular and secreted miRNA transcripts on a per cell basis to
evaluate whether the secretion of miRNAs impacted the abundance of the intracellular miRNA pool. The
total mature microRNA pool detected in a single MCF10A cell added up to 13,300 miRNA molecules,
while a single MCF7 cell contained a total of 24,850 miRNA molecules. We found that the total
number of miRNA molecules secreted by a single cell during a period of 72 hours was extremely low
relative to the intracellular miRNA pool. A total of 295 miRNA molecules per cell were secreted in
MCF7 MVs. The most abundant miRNA in MCF7 MVs, miR-720, had an abundance of 133 molecules
secreted per cell during 72 hours (Table 1). Likewise, a total of 66 miRNA molecules were detected in
the microvesicles secreted by a single MCF10A cell during a period of 72 hours. The most abundant
miRNA in MCF10A MVs, miR-630, had an abundance of 18 molecules secreted per cell during 72 hours
(Table 1).

The first study to reveal the presence of miRNAs in exosomes, conducted by Valadi et al. [14], used
an array-based platform to compare exosomal miRNA levels to cellular levels. Equal amounts of
total RNA from cells and microvesicles were compared. Based on this comparison, they concluded that
certain miRNAs are expressed at higher levels in exosomes. Two subsequent studies validated this
observation and likewise concluded that some miRNAs are expressed at higher levels or enriched in
exosomes [16, 19]. When comparing the absolute number of miRNA molecules secreted on a per cell
basis within microvesicles relative to the absolute number of miRNA molecules expressed in a single cell,
we observe that not a single miRNA species had a higher abundance in the accumulated 3-day population
of secreted microvesicles than in the cell.
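A back-of-the-envelope calculation, using the per-cell RNA yields reported in this study for MCF7 cells,
illustrates why equal-RNA comparisons can suggest enrichment even when per-cell MV copies are low;
the comparison itself is my illustration, not a computation from the cited studies.

cell_rna_pg_per_cell = 16.1            # MCF7 cellular RNA per cell (from this study)
mv_rna_pg_per_cell = 248e3 / 1e6       # 248 ng MV RNA per 10^6 MCF7 cells = 0.248 pg/cell

cells_per_10ng_cellular = 10e3 / cell_rna_pg_per_cell   # ~621 cells' worth of RNA
cells_per_10ng_mv = 10e3 / mv_rna_pg_per_cell           # ~40,300 cells' worth of secretion

# Equal miRNA counts in 10 ng of cellular RNA and 10 ng of MV RNA therefore
# correspond to roughly 65-fold fewer copies per cell on the MV side.
print(cells_per_10ng_mv / cells_per_10ng_cellular)      # ~65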

Using these absolute values, we quantified the percentage of the total cellular microRNA pool that was
secreted within microvesicles. We found that 0.33% and 0.73% of the total cellular miRNA pool in
MCF10A and MCF7 cells, respectively, were secreted within microvesicles during 72 hours of serum
deprivation. For
MCF7 cells, the individual secretion of miRNAs varied from 8.0% for miR-1274a down to 0.05% for
miR-26a (Table 2). Certain miRNAs, like miR-1290, miR-630, and miR-1246 were detected only in the
microvesicles and thus were considered to be 100% secreted. For MCF10A cells, the individual secretion
varied from a high of 38.4% for miR-630 down to 0.03% for miR-21. A previous study using qRT-PCR,
determined that 2% of the intracellular levels of miR-720 were detected within MCF7 microvesicles
during a 5-day period [16]. We found that 3.8% of the cellular abundance of miR-720 was secreted within
MCF7 microvesicles within a 72-hour period, further confirming the stark differences in the absolute
abundance of miRNA transcripts between cells and MVs.
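The percent-secretion figures can be spot-checked directly from the per-cell copy numbers in Table 1; the
two values below are transcribed from that table.

table1_mcf7 = {          # miRNA: (copies per MCF7 cell, MV copies per MCF7 cell)
    "miR-16": (1475, 2.8),
    "miR-21": (3569, 6.1),
}
for mir, (cell, mv) in table1_mcf7.items():
    print(f"{mir}: {100 * mv / cell:.2f}% secreted")
# miR-16: 0.19% secreted; miR-21: 0.17% secreted, matching Table 2.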

Since our study found such a low number of miRNAs being secreted on a per-cell basis, we next
wanted to check our low values against quantitative data that has been published on exosomal miRNAs.
Considering efficiencies in ultracentrifugation, TRIzol extraction, and ethanol purification, we isolated on
average 137 ng of total microvesicle RNA per million MCF10A cells cultured in serum-free media after
72 hours. For MCF7, we obtained on average 248 ng of total microvesicle RNA per million cells
cultured in serum-free media after 72 hours. In comparison, on average for our two cell lines we
obtained a total of 13 µg of total RNA per million cells. That is, less than 2% (137/13,000 to 248/13,000)
of the total RNA in a cell is secreted within microvesicles in 72 hr. Based on our profiling assays, of this
secreted RNA only 0.001% was detected to be mature canonical miRNAs. Similarly low yields in total
exosomal
RNA were recently reported by Balaj et al and Pegtel et al [12, 33]. Estimations of the total secreted
microRNA pool from the data published by Balaj et al and Pegtel et al give us a better understanding of
the quantities secreted by non-transformed and transformed cell lines. Taken together, both our results
and calculations based on published data suggest that a very low number of miRNAs are being released
on a per cell basis within secreted microvesicles.


Figure 1. Comparison of detected and undetected miRNAs in MCF10A (left) and MCF7 (right) cells and
secreted microvesicles (MVs). A total of 654 human mature microRNAs were analyzed in the Nanostring
miRNA assay. A microRNA transcript was considered detected if the counts obtained were greater than
two standard deviations above the average background for the specific assay, as described in the
Experimental Methods. Detected miRNAs were divided into three categories: (1) miRNA detected only in
the cells; (2) miRNA detected only in the MVs; (3) miRNA detected in both the cells and MVs secreted from
those cells. The number of detected miRNA transcripts in each category, and the number of undetected
miRNA transcripts are given in the charts.


Figure 2. Compositions of the most abundant miRNAs detected in MCF10A cells (top) after 72 hrs of
serum depletion, and in microvesicles (MVs) secreted by MCF10A cells (bottom) over 72 hrs of serum
depletion. Selected miRNAs with copy numbers that differ significantly between the donor cells and
secreted microvesicles (miR-630, miR-1274a, miR-221, miR-29a), or between the two cell lines are also
included. Percentages are based on specific miRNA copy numbers/cell relative to the total copy
number/cell for all miRNAs detected.

Figure 3. Composition of the most abundant miRNAs detected in MCF7 cells (top) after 72 hrs of serum
depletion, and the microvesicles (MVs) secreted by MCF7 cells (bottom) over 72 hours of serum
depletion. Selected miRNAs with copy numbers that differ significantly between the donor cells and
secreted MVs (i.e. miR-1274a, miR-1274b, miR-1260, let-7a), or between the two cell lines are also
included. Percentages are based on specific miRNA copy numbers/cell relative to the total copy
number/cell for all miRNAs detected.

Table 1. miRNA copy numbers per cell detected in MCF7 and MCF10A cells after 72 hrs of serum
depletion, and in microvesicles (MVs) secreted from MCF7 and MCF10A cells over 72 hrs of serum
depletion. The miRNAs include those shown in Figure 2 and 3. The percentages in parentheses are
computed based on specific miRNA copy numbers/cell relative to the total copy number/cell for all
miRNAs detected.
miRNA    miR copy numbers per MCF10A cell    MV miR copy number per MCF10A cell    miR copy numbers per MCF7 cell    MV miR copy number per MCF7 cell
let-7a 1,132 (8.5%) 1.0 (1.6%) 2,458 (8.7%) 9.0 (3.0%)
miR-16 372 (2.8%) 0.5 (0.8%) 1,475 (5.2%) 2.8 (1.0%)
miR-19b 430 (3.2%) 1.0 (1.5%) 191 (0.7%) 0.2 (0.1%)
miR-21 196 (1.5%) 0.1 (0.2%) 3,569 (12.6%) 6.1 (2.1%)
miR-24 320 (2.4%) 1.4 (2.1%) 570 (2.0%) 2.6 (0.9%)
miR-26a 101 (0.8%) ND 1,270 (4.5%) 0.4 (0.1%)
miR-29a 436 (3.3%) 0.8 (1.2%) 382 (1.3%) 1.1 (0.4%)
miR-92a 801 (6%) 3 (4.5%) 388 (1.4%) 6.1 (2.1%)
miR-100 350 (2.6%) 1.7 (2.5%) 1,104 (3.9%) 14.5 (4.9%)
miR-125b 254 (1.9%) 0.5 (0.8%) 597 (2.1%) 8.1 (2.8%)
miR-200c 337 (2.5%) 0.3 (0.4%) 883 (3.1%) 2.1 (0.7%)
miR-205 1,127 (8.5%) 8.2 (12.5%) 37 (0.1%) 0.1 (0.03%)
miR-221 257 (1.9%) 2.2 (3.3%) 198 (0.7%) 2.1 (0.7%)
miR-222 41 (0.3%) 0.1 (0.1%) 60 (0.2%) 0.9 (0.3%)
miR-423-3p 48 (0.4%) 0.2 (0.3%) 68 (0.2%) 0.5 (0.2%)
miR-423-5p 37 (0.3%) 0.4 (0.6%) 77 (0.3%) 1.1 (0.4%)
miR-630 28 (0.2%) 18.3 (27.9%) ND 7.7 (2.6%)
miR-720 1,684 (12.7%) 11.9 (18%) 2,427 (8.5%) 132.7 (45%)
miR-1246 25 (0.19%) 0.9 (1.3%) ND 0.3 (0.1%)
miR-1260 394 (3%) 1.3 (2%) 577 (2%) 17.6 (6.0%)
miR-1274a 282 (2.1%) 2.6 (4%) 171 (0.6%) 19.7 (6.7%)
miR-1274b 247 (1.9%) 1.5 (2.4%) 244 (0.9%) 21.3 (7.2%)
miR-1290 ND 1.6 (2.5%) ND 0.3 (0.1%)
All other miRNAs (36.8%) (9.62%) (41.1%) (12.7%)

Table 2. Comparison of the percent secretion of selected miRNA species for MCF7 and MCF10A
cells during 72 hours of incubation. Percent secretion is calculated as the number of miRNA molecules
detected in microvesicles secreted on a per-cell basis divided by the number of miRNA molecules
detected in a single cell. *miRNAs detected only in the microvesicles.

miRNA    PERCENT SECRETED MCF10A    PERCENT SECRETED MCF7 CELLS
let-7a 0.09% 0.36%
miR-16 0.15% 0.19%
miR-19b 0.23% 0.08%
miR-21 0.05% 0.17%
miR-24 0.43% 0.46%
miR-26a 0.00% 0.03%
miR-29a 0.18% 0.28%
miR-92a 0.37% 1.58%
miR-100 0.48% 1.32%
miR-125b 0.20% 1.36%
miR-200c 0.08% 0.24%
miR-205 0.73% 0.29%
miR-221 0.85% 1.04%
miR-222 0.13% 1.54%
miR-423-3p 0.40% 0.69%
miR-423-5p 0.98% 1.38%
miR-630 64.38% 100%*
miR-720 0.70% 5.47%
miR-1246 3.53% 100%*
miR-1260 0.33% 3.06%
miR-1274a 0.92% 11.56%
miR-1274b 0.63% 8.74%
miR-1290 100%* 100%*

References
1. Chen, X., et al., Characterization of microRNAs in serum: a novel class of biomarkers for diagnosis
of cancer and other diseases. Cell Res, 2008. 18(10): p. 997-1006.
2. Gilad, S., et al., Serum microRNAs are promising novel biomarkers. PLoS One, 2008. 3(9): p. e3148.
3. Lawrie, C.H., et al., Detection of elevated levels of tumour-associated microRNAs in serum of
patients with diffuse large B-cell lymphoma. Br J Haematol, 2008. 141(5): p. 672-5.
4. Mitchell, P.S., et al., Circulating microRNAs as stable blood-based markers for cancer detection.
Proc Natl Acad Sci U S A, 2008. 105(30): p. 10513-8.
5. Pritchard, C.C., et al., Blood Cell Origin of Circulating MicroRNAs: A Cautionary Note for Cancer
Biomarker Studies. Cancer Prev Res (Phila), 2012.
6. Kirschner, M.B., et al., Haemolysis during sample preparation alters microRNA content of plasma.
PLoS One, 2011. 6(9): p. e24145.
7. Farazi, T.A., et al., MicroRNA sequence and expression analysis in breast tumors by deep
sequencing. Cancer Res, 2011. 71(13): p. 4443-53.
8. Volinia, S., et al., Breast cancer signatures for invasiveness and prognosis defined by deep
sequencing of microRNA. Proc Natl Acad Sci U S A, 2012. 109(8): p. 3024-9.
9. Arroyo, J.D., et al., Argonaute2 complexes carry a population of circulating microRNAs independent
of vesicles in human plasma. Proc Natl Acad Sci U S A, 2011. 108(12): p. 5003-8.
10. Gibbings, D.J., et al., Multivesicular bodies associate with components of miRNA effector complexes
and modulate miRNA activity. Nat Cell Biol, 2009. 11(9): p. 1143-9.
11. Mittelbrunn, M., et al., Unidirectional transfer of microRNA-loaded exosomes from T cells to
antigen-presenting cells. Nat Commun, 2011. 2: p. 282.
12. Pegtel, D.M., et al., Functional delivery of viral miRNAs via exosomes. Proc Natl Acad Sci U S A,
2010. 107(14): p. 6328-33.
13. Skog, J., et al., Glioblastoma microvesicles transport RNA and proteins that promote tumour growth
and provide diagnostic biomarkers. Nat Cell Biol, 2008. 10(12): p. 1470-6.
14. Valadi, H., et al., Exosome-mediated transfer of mRNAs and microRNAs is a novel mechanism of
genetic exchange between cells. Nat Cell Biol, 2007. 9(6): p. 654-9.
15. Zhang, Y., et al., Secreted monocytic miR-150 enhances targeted endothelial cell migration. Mol
Cell, 2010. 39(1): p. 133-44.
16. Pigati, L., et al., Selective release of microRNA species from normal and malignant mammary
epithelial cells. PLoS One, 2010. 5(10): p. e13515.
17. Debnath, J., S.K. Muthuswamy, and J.S. Brugge, Morphogenesis and oncogenesis of MCF-10A
mammary epithelial acini grown in three-dimensional basement membrane cultures. Methods, 2003.
30(3): p. 256-68.
18. Thery, C., et al., Isolation and characterization of exosomes from cell culture supernatants and
biological fluids. Curr Protoc Cell Biol, 2006. Chapter 3: p. Unit 3 22.
19. Taylor, D.D. and C. Gercel-Taylor, MicroRNA signatures of tumor-derived exosomes as diagnostic
biomarkers of ovarian cancer. Gynecol Oncol, 2008. 110(1): p. 13-21.
20. Chen, C., et al., Real-time quantification of microRNAs by stem-loop RT-PCR. Nucleic Acids Res,
2005. 33(20): p. e179.
21. Neilson, J.R., et al., Dynamic regulation of miRNA expression in ordered stages of cellular
development. Genes Dev, 2007. 21(5): p. 578-89.
22. Lamparski, H.G., et al., Production and characterization of clinical grade exosomes derived from
dendritic cells. J Immunol Methods, 2002. 270(2): p. 211-26.
23. Wang, K., et al., Export of microRNAs and microRNA-protective protein by mammalian cells.
Nucleic Acids Res, 2010. 38(20): p. 7248-59.
24. Fix, L.N., et al., MicroRNA expression profile of MCF-7 human breast cancer cells and the effect of
green tea polyphenon-60. Cancer Genomics Proteomics, 2010. 7(5): p. 261-77.
25. Calabrese, J.M., et al., RNA sequence analysis defines Dicer's role in mouse embryonic stem cells.
Proc Natl Acad Sci U S A, 2007. 104(46): p. 18097-102.
26. Sampson, V.B., et al., MicroRNA let-7a down-regulates MYC and reverts MYC-induced growth in
Burkitt lymphoma cells. Cancer Res, 2007. 67(20): p. 9762-70.
27. Tsang, W.P. and T.T. Kwok, Let-7a microRNA suppresses therapeutics-induced cancer cell death by
targeting caspase-3. Apoptosis, 2008. 13(10): p. 1215-22.
28. Liu, Q., et al., miR-16 family induces cell cycle arrest by regulating multiple cell cycle genes. Nucleic
Acids Res, 2008. 36(16): p. 5391-404.
29. Cimmino, A., et al., miR-15 and miR-16 induce apoptosis by targeting BCL2. Proc Natl Acad Sci U S
A, 2005. 102(39): p. 13944-9.
30. Wu, H., S. Zhu, and Y.Y. Mo, Suppression of cell growth and invasion by miR-205 in breast cancer.
Cell Res, 2009. 19(4): p. 439-48.
31. Chan, J.A., A.M. Krichevsky, and K.S. Kosik, MicroRNA-21 Is an Antiapoptotic Factor in Human
Glioblastoma Cells. Cancer Research, 2005. 65(14): p. 6029-6033.
32. Lee, Y.S., et al., Silencing by small RNAs is linked to endosomal trafficking. Nat Cell Biol, 2009.
11(9): p. 1150-6.
33. Balaj, L., et al., Tumour microvesicles contain retrotransposon elements and amplified oncogene
sequences. Nat Commun, 2011. 2: p. 180.

Battery Technology

Student Researcher: Pierre A. Hall

Advisor: Dr. Tom Hartley

The University of Akron
Department of Engineering, Electrical Engineering

Abstract
The Tropica roadster is an all-electric vehicle manufactured in the 1990s. The Tropica used in this
research was the official pace car of the Cleveland Grand Prix. After that, the vehicle was bought by First
Energy Corp. and later donated to The University of Akron. From sitting idle, the lead acid batteries
expanded and the electrolyte inside dried out. The goal of this research was to test new lithium battery
chemistries by placing them in an electric vehicle and monitoring the performance of the batteries. First,
the old lead acid batteries were removed from the Tropica. Then a second, test set of lead acid batteries
was connected to power the vehicle. With the Tropica powered, we were able to see which electrical
components were damaged and which worked properly. After running some initial observation tests,
everything seemed to be in complete working order, and then the battery charger was tested. The battery
charger is the only component on the Tropica that is not working. The next step in the process is to install
the new batteries in parallel strings and then conduct tests with the Tropica. However, at this time the new
batteries have not been installed into the Tropica, and different battery placement and connection options
are being discussed.

Methods
The process for this particular research has two different parts that are dependent on each other. The first
aspect of the research is the Tropica. The Tropica will be the medium in which the second aspect will be
used and tested. The second part of the research is the batteries, which is the main focus.

In order to complete the whole task, the Tropica must be completed and in working order. Several steps
have been taken to ensure that the Tropica will be ready for the batteries. The first action taken was to
remove the batteries that were currently in the Tropica. This task was difficult to achieve. There were 12
lead acid batteries that came with the Tropica from the factory. The batteries were arranged in the shape
of a T, with three batteries in the front lying horizontally and the other nine in the center, stretching the
whole length of the vehicle. To get to the actual batteries, the front clip had to come off first. After the
front clip, the steering mechanism, which was attached to the front metal cage of the batteries, had to
come off. Once the front of the battery frame was off, I attempted to pull the batteries out from the frame,
but they did not move. The twelve batteries add up to roughly 300 lbs, and the high-friction surface they
were on also increased the amount of work and energy needed to move them. To move the batteries just
3-4 inches, I had to go to the back of the car and push while my research partner pulled from the front;
even then the batteries would not move any further. To remove the whole rack of batteries, I needed to
get four other people and we all pulled together. With the six of us, we got the batteries out roughly 3
feet. With some batteries now in the open, I removed the easily accessible batteries to lighten the load.
After that, the batteries were easy to pull out.

With the batteries out, we need to look at all the electrical components of the vehicle and figure out how
it works. Through some research and observation, I figured out what most of the electronics do. The
next step is to get power to the vehicle to see what is still working and to take some measurements of how
much current is drawn when the accelerator is pressed.

Once the Tropica is all figured out and ready for the new batteries, the next step is to put the new batteries
in. When the new batteries are in the Tropica, the testing will begin. The testing will consist of driving the
vehicle to test the range of the batteries and their discharge rate. Then, when the batteries are depleted, I
plan on charging the vehicle and observing how long it takes to recharge the batteries. The information
from the testing will be used to construct a Ragone plot. A Ragone plot is used for performance
comparison of various energy storage devices by plotting energy density vs. power density, with the
stored energy computed from the measured voltage V (V), current I (A), and time t (s), and normalized
by the mass m (kg).
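As a sketch of how the planned drive-test logs could be reduced to a Ragone point, the Python fragment
below integrates logged voltage and current over time; the trace values are placeholders, not measured
results.

import numpy as np
import matplotlib.pyplot as plt

def ragone_point(voltage_v, current_a, dt_s, pack_mass_kg):
    # Specific energy (Wh/kg) and average specific power (W/kg) from sampled
    # voltage/current traces: energy is the time integral of V*I.
    power_w = voltage_v * current_a
    energy_wh = np.trapz(power_w, dx=dt_s) / 3600.0
    total_time_h = dt_s * (len(power_w) - 1) / 3600.0
    return energy_wh / pack_mass_kg, (energy_wh / total_time_h) / pack_mass_kg

# Placeholder trace: a constant 120 V, 50 A discharge for one hour, 100 kg pack.
v = np.full(3601, 120.0)
i = np.full(3601, 50.0)
e_density, p_density = ragone_point(v, i, dt_s=1.0, pack_mass_kg=100.0)
plt.loglog([p_density], [e_density], "o")   # Ragone axes are conventionally log-log
plt.xlabel("Specific power (W/kg)")
plt.ylabel("Specific energy (Wh/kg)")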




In order to talk about batteries, the first thing that needs to be known is what types of batteries there are
and what each type is for. The first type of battery is called an SLI battery, which stands for Starting,
Lighting, and Ignition. The purpose of an SLI battery is to deliver a lot of current in a short period of
time. The main application of these batteries is automotive: to power the starter motor, lights, and
ignition system of a vehicle's engine. The most common chemistry is lead acid. The next type of battery
is Deep Cycle. Deep Cycle batteries are designed to be discharged down as much as 80% time after time,
and have much thicker plates. The major difference between a true deep cycle battery and others is that
the plates are solid and not sponge-like. This gives less surface area, and thus less of the instant power
that SLI batteries need. The most common applications for Deep Cycle batteries are in electric vehicles.

Just as batteries have different purposes, there are many types of battery chemistries. The most common
automotive chemistry is lead acid. Lead acid batteries were used initially in electric vehicles, but their
cycle life is low (150-300 cycles), and the range they offer is low as well, 15-20 miles. When lithium-ion
batteries were introduced, the cycle life increased dramatically, to roughly 3,000 cycles, and the range
increased to 60-70 miles. As more research was conducted on lithium battery chemistries, more lithium
chemistries were created. Below is a list describing the different types of lithium battery chemistries and
the benefits of each mixture.

Lithium / Sulfur Dioxide - Lithium / sulfur dioxide cells (Li/SO2) are used almost exclusively in
military / aerospace applications and have somewhat lower energy density than manganese dioxide-
lithium or poly (carbon monofluoride) lithium cells. Their service life and energy density are less than
half that of thionyl chloride-lithium cells. For safety, an "emergency" vent structure is required in the
hermetically welded case.
Lithium / Thionyl Chloride - Lithium / thionyl chloride batteries (Li/SOCl2) have a nominal cell voltage
of 3.6V, a very high energy density, and a good temperature range. Typical applications are computer-
memory backup power, instruments, and small electronics. The widespread use of these batteries for
consumer applications is limited because of their high cost and concerns about safety and handling. After
extended shelf storage times, the lithium anode may start to form a protective film. This film will initially
cause a voltage delay when first placed into service. The lithium / thionyl chloride cell also is produced
with BrCl additives to yield an increase in cell voltage to 3.9V, and an increase in energy density to about
1000Wh/L with overall safer operation.
Lithium / Manganese Dioxide - Lithium / manganese dioxide batteries (LiMnO2) are available as
cylindrical cells or as button cells. Advantages include high gravimetric and volumetric energy densities,
high pulse capability, long shelf life, and operation over a wide temperature range.
Lithium / Carbon Monofluoride - Lithium / carbon monofluoride cells (Li/(CF)n) are high energy
density batteries used in low weight, long life applications. These batteries use a solid cathode
and organic electrolyte, and have an unparalleled safety record. Typical usages include aerospace
(qualified for space use since 1976), terrestrial, marine, and missiles.
Lithium / Copper Oxide - Lithium / copper oxide (LI/CuO) and lithium-copper oxyphosphate cells were
produced until the mid-1990s. Currently, the use of this technology is limited.
Lithium / Iodine - Lithium iodine cells (LiI2) exhibit outstanding performance (very low self-discharge
rate, excellent reliability, very high hermeticity, etc.) but for a limited field of application, in particular for
cardiac pacemakers.
Lithium / Silver Vanadium Oxide - Lithium silver vanadium oxide batteries (Li/SVO) are designed for
medically implantable devices such as cardioverter defibrillators, neurostimulators, atrial defibrillators,
and drug infusers.
Lithium Anode Reserve Batteries - Ambient-temperature lithium anode reserve batteries are
available in three major types: lithium/vanadium pentoxide, lithium/thionyl chloride, and lithium/sulfur
dioxide. Ambient-temperature lithium anode reserve batteries have undiminished power output even after
storage periods over fourteen years.
Lithium / Iron Sulfide - Lithium / iron sulfide batteries (LiFeSx) are under development for use in
electric vehicles. Operating at a temperature range of 375-500°C, the lithium / iron sulfide battery
possesses high power, works well in many environments, and is safe, but requires a thermal management
system to maintain a proper temperature.
Lithium / Iron Phosphate - Lithium iron phosphate batteries are made using LiFePO4 as the cathode
material. Batteries made with this material have better safety characteristics than lithium-ion batteries
made with other cathode materials.
Lithium / Manganese Titanium - Lithium / manganese titanium rechargeable batteries (LiMnTi) are
compact rechargeable batteries developed for rechargeable watches, and backup power supplies for
pagers and timers. The batteries employ lithium-manganese complex oxide as cathode material, and
lithium-titanium oxide as the anode material. The batteries provide a capacity that is more than 10
times that of capacitors of the same size.
Lithium / Polymer - The lithium-polymer battery differs from other battery systems in the type of
electrolyte used. The original design, which dates back to the 1970s, uses a polymer electrolyte. The
polymer electrolyte replaces the traditional porous separator, which is soaked with electrolytes. The dry
polymer design offers simplifications with respect to fabrication, ruggedness, safety and thin-profile.
There is no danger of flammability because no liquid or gelled electrolyte is used. Theoretically, it is
possible to create designs that form part of a protective housing, are in the shape of a mat that can be
rolled up, or are even embedded into a carrying case or a piece of clothing.
Lithium-ion - Also referred to as lithium-carbon. The lithium-ion rechargeable cell (Li-ion) refers to a
cell whose negative active material is carbon to which lithium cations are intercalated or deintercalated
during the charge-discharge process. Li-ion is one of the newer rechargeable battery technologies. These
batteries can deliver 40% more capacity than comparably sized NiCd batteries and are one of the lightest
rechargeable batteries available today. Li-ion batteries are the batteries of choice in notebook computers,
wireless telephones and many camcorder models. They are also one of the more expensive rechargeable
technologies.
Lithium / cobalt oxide - The lithium / cobalt oxide cathode battery (LiCoO2) is very light and has an
energy density about three times higher than that of the conventional rechargeable batteries. It is now
indispensable for the power source of various portable or mobile IT instruments which need reduction of
weight and volume. This lithium ion battery will play a significant role from the viewpoint of the
environment, as it is replacing rechargeable batteries, which contain harmful elements such as lead and
cadmium.
Lithium / nickel oxide - The lithium / nickel oxide positive electrode (LiNiO2) has a capacity almost
40% higher than currently mass-produced batteries. Many can withstand about 500 charge cycles.
Lithium / manganese oxide - Lithium / manganese oxide batteries (LiMn2O4) are characterized by high
energy density, high power density, and good storage life and discharge performance; they offer a big
advantage over alkaline batteries, and their cost is modest.
Lithium-vanadium - Vanadium pentoxide (Li/V2O5) is a solid cathode material into which lithium ions
are inserted. Discharge plateaus are observed at 3.3 V and 2.4 V. The system is low pressure, so low rate
cells do not need to have a safety vent. Vanadium pentoxide is mainly used in reserve batteries but it is
likely to be of more importance in rechargeable lithium batteries in the future.
Lithium / Manganese Dioxide - The lithium / manganese dioxide system (Li/MnO2) offers the best
balance of performance and safety for consumer replaceable battery applications. These cells contain a
liquid organic electrolyte and a solid cathode.
Lithium / Titanium Disulfide - The reaction in lithium / titanium disulfide batteries (Li/TiS2) occurs
very rapidly and in a highly reversible manner at ambient temperatures as a result of structural retention.
Titanium disulfide is a solid cathode material. It has a relatively flat discharge and high energy density.

Electric vehicles are now using lithium battery chemistries. Tesla uses lithium ion batteries in their
vehicles, Myers Motors uses a lithium manganese mixture, Lightning Car Company uses lithium titanate
in the Lightning GT, and Proterra uses lithium titanate batteries. Lithium titanate batteries are fairly new
and are projected to have an 80-year cycling life and a greater range.

Results
At this time there are no results because the lithium titanate batteries have not been placed into the
Tropica yet. However, I expect the results to outperform lead acid tremendously. For cycle life, the
lithium titanate batteries have over 800 times as many cycles as lead acid batteries; the Tropica can go
more than 3 times the distance on a single charge than it did with lead acid batteries; reliability is better;
and maintenance is lower. When I receive the results from the battery tests, I plan to make a Ragone plot
from the results and perhaps add another dimension to the plot.
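As a hedged illustration of how such a plot could be produced once the test data are in, the following Python (matplotlib) sketch plots specific power against specific energy; every number below is a placeholder for illustration, not a measured result:

    # Illustrative Ragone plot; the data points are placeholders, not
    # measured results from the Tropica battery tests.
    import matplotlib.pyplot as plt

    # (specific energy in Wh/kg, specific power in W/kg) -- assumed values
    chemistries = {
        "Lead acid":        (35, 180),
        "Lithium titanate": (70, 750),
        "Lithium ion":      (150, 340),
    }

    for name, (energy, power) in chemistries.items():
        plt.loglog(energy, power, "o", label=name)

    plt.xlabel("Specific energy (Wh/kg)")
    plt.ylabel("Specific power (W/kg)")
    plt.title("Ragone plot (illustrative placeholder data)")
    plt.legend()
    plt.grid(True, which="both")
    plt.show()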

Conclusion
There aren't many things I can conclude definitively at this time, other than that batteries arranged in
parallel strings are more reliable. An analogy is buying Christmas tree lights: it is better to buy lights
wired in parallel, because if one light blows out, the rest of the lights will not go out, even though the
blown light has created an open circuit. The same thinking applies in this research: if one of the strings of
batteries fails, the other will be there to keep the vehicle in working order and not leave the driver
stranded. The statistical definition of reliability is the probability that a randomly selected sample will
still be operational after a specific time has elapsed. This differs from the definition which deals with
failure rate during the specific period of useful life. Replacing the lead acid batteries with lithium-titanate
batteries will increase the reliability of the system. Placing them into 2 parallel strings of 3 cells increases
the reliability further. Adding the redundant string increases reliability; however, these particular cells
are very expensive, so the more strings, the more the battery system will cost.
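The series/parallel arithmetic behind this conclusion can be sketched in a few lines of Python, assuming independent cell failures; the per-cell reliability value below is an assumed placeholder, not a measured one:

    # Reliability of m parallel strings of n series cells, assuming independent
    # cell failures. The per-cell reliability is a placeholder, not test data.
    def string_reliability(r_cell: float, n_cells: int) -> float:
        # A series string works only if every cell in it works.
        return r_cell ** n_cells

    def system_reliability(r_cell: float, n_cells: int, m_strings: int) -> float:
        # A parallel system fails only if every string fails.
        r_string = string_reliability(r_cell, n_cells)
        return 1.0 - (1.0 - r_string) ** m_strings

    r = 0.95  # assumed per-cell reliability over the period of interest
    print(system_reliability(r, n_cells=3, m_strings=1))  # one string of 3
    print(system_reliability(r, n_cells=3, m_strings=2))  # 2 parallel strings of 3

With these placeholder numbers, the single string is about 86% reliable while the redundant two-string system is about 98% reliable, which is the trade against cost described above.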



Acknowledgments
Ohio Space Grant Consortium
Department of Energy
First Energy
Ronald McNair Foundation
University of Akron College of Engineering

References
1. Abuelsamid, Sam. "Lightning Car Co to Use Altairnano Batteries in New Sports Car Autoblog
Green." Autoblog Green We Obsessively Cover The Green Scene. 12 June 2007. Web. 27 June
2011. <http://green.autoblog.com/2007/06/12/lightning-car-co-to-use-altairnano-batteries-in-new-
sports-car/>.
2. Brad. "Tropica." GetMSM.com Main. 2008. Web. 27 June 2011. <http://www.getmsm.com/default.htm>.
3. "Lithium Batteries Specifications." GlobalSpec - Engineering Search & Industrial Supplier Catalogs. Web. 27
June 2011.
<http://www.globalspec.com/Specifications/Electrical_Electronic_Components/Batteries/Lithium_Batteries>.
4. Moorth, Dr. MuMu. "Lithium Titanate Based Batteries for High Rate and High Cycle Life
Applications." NEI Corporation: 1-8. Print.
5. Power, Max. "Deep Cycle vs Starting (SLI) Batteries." Lead-acid Batteries Maintenance &
Restoration. Battery Power, 27 Dec. 2007. Web. 27 June 2011.
<http://blog.batterypower.co.nz/2007/12/deep-cycle-vs-starting-sli-batteries.html>.
6. Tesla Motors. The Tesla Roadster Battery System. Teslamotors.com. 19 Dec. 2007. Web. 25 June
2011.
Design Analysis and Modeling of Hazardous Waste Containment Vessels in Support of
Safe Nuclear Waste Transport and Storage

Student Researcher: Malcolm X. Haraway

Advisor: Dr. Edward Asikele

Wilberforce University
Computer Engineering

Abstract
Nuclear energy is a growing source of energy in America today. To produce nuclear energy, however, you
need a nuclear power plant, and with a nuclear power plant, nuclear waste is produced. With that being said,
the questions are "What happens to this nuclear waste?" and "How is it transported?" I set out to design a
containment waste transport vessel to safely transport nuclear waste to a destination. I will design this
vessel based on the Federal Regulatory Commission's safety requirements, basing my design
standards on Section 50. I will use SolidWorks to design and test the vessel. Some concepts I will
need to incorporate are finite element analysis, boundary conditions, meshing algorithms, material
properties, Poisson's ratio, and geometric material, loading, and deformation characteristics. I will also
calculate fatigue, stress, dynamics, inertia, load, and impact to determine the stability of the vessel.

Project Objectives
As nuclear energy production grows, nuclear waste is produced. The nuclear waste that is produced has
high levels of radiation and is fatal. My objective is to design a vessel that can transport this highly
radioactive nuclear waste safely, using materials that can block the radiation and withstand impact.

Methodology Used
The containment vessel designed is based on the ability to hold 2,244 gallons of highly radioactive
nuclear waste. Its dimensions consist of a height of 5 feet, a width of 4 feet, and a length of 15 feet, not
counting the extra thickness added by the shielding materials used. The containment vessel also
needed to withstand a 2-kilogram steel weight, 2.5 centimeters in diameter, dropped from a height of 1
meter, and a 50-gram weight and pin, with a 0.3-centimeter pin diameter, dropped from a height of 1 meter.
Research found that the three main types of radiation emitted are alpha (α), beta (β), and gamma (γ)
rays, of which gamma is the most penetrating and harmful. In order to ensure proper shielding, based on the
Federal Regulatory Commission, a material had to be able to withstand all three of the rays. Research
showed that lead had the highest shielding ability; concrete and steel were also found to be good shielding
agents. Aware that the containment vessel needed to withstand collision and shield radiation, all three
materials were implemented in the design. To determine the thickness of each shielding material, I needed
to take into consideration the weight, shielding ability, and durability.

Equation for shielding ability based on absorption:

I = I0 * (1/2)^(x / x_1/2)

where I is the transmitted intensity, I0 the incident intensity, x the shield thickness, and x_1/2 the halving
thickness of the material. Each material had its own halving thickness, which determined how thick the
material needed to be to reduce the gamma radiation by 50%. For example, you need 1 cm (0.4 in.) of lead
to reduce intensity by 50%. The effectiveness of the shielding material increases as the density increases.
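A minimal Python sketch of this layered halving-thickness calculation, using the halving thicknesses from the results table below and the design layer thicknesses quoted later in this section (illustrative only):

    # Gamma attenuation through layered shielding using halving thicknesses.
    # Halving thicknesses (cm) come from the results table; layer thicknesses
    # are the design values quoted in the text, converted from inches to cm.
    HALVING_CM = {"lead": 1.0, "steel": 2.5, "concrete": 6.1}

    def transmitted_fraction(layers_cm: dict) -> float:
        """Fraction of gamma intensity transmitted through the stacked layers."""
        fraction = 1.0
        for material, thickness in layers_cm.items():
            fraction *= 0.5 ** (thickness / HALVING_CM[material])
        return fraction

    design = {"lead": 10 * 2.54, "concrete": 6 * 2.54, "steel": 3 * 2.54}
    print(f"Transmitted fraction: {transmitted_fraction(design):.2e}")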

Next I determined the weight of each material to ensure that the containment vessel would be as light as
possible. The weight of lead is 4.86 lbs. per cubic foot, concrete 130 lbs. per cubic foot, and steel 490
lbs. per cubic foot. I made the lead 10 inches thick, the concrete 6 inches thick, and the steel 3 inches thick.

This was enough to ensure that enough radiation was shielded for safe transport and a lightweight
containment vessel.
Results Obtained

Material   Halving Thickness (in.)   Halving Thickness (cm)   Density (g/cm3)   Halving Mass (g/cm2)
Lead       0.4                       1.0                      11.3              12
Steel      0.99                      2.5                      7.86              20
Concrete   2.4                       6.1                      3.33              20
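As a quick consistency check on the table, the halving mass should be approximately the density multiplied by the halving thickness; a short Python sketch verifying this:

    # Consistency check: halving mass (g/cm^2) should approximately equal
    # density (g/cm^3) times halving thickness (cm), per the table above.
    table = {  # material: (halving thickness cm, density g/cm^3, halving mass g/cm^2)
        "Lead":     (1.0, 11.3, 12),
        "Steel":    (2.5, 7.86, 20),
        "Concrete": (6.1, 3.33, 20),
    }
    for material, (x_half, density, mass_half) in table.items():
        print(material, round(x_half * density, 1), "vs tabulated", mass_half)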

Significance and Interpretation of Results
I found that it was best to use lead as the inner material because it was the least durable but gave the most
shielding from the gamma rays. As for the outer materials, next came concrete, then steel. Based on
the results, the more durable a material was, the less shielding it provided. The results from the impact and
durability tests were the basis for the thicknesses of the materials I used.

Acknowledgments
Dr. Edward Asikele
Wilberforce University
Dr. Abayomi Ajayi-Majebi
Ohio Space Grant Consortium
Neck and Head Injury Criteria and How It Relates to Pilot Ejection

Student Researcher: Kevin M. Hatcher

Advisor: Dr. Tarun Goswami

Wright State University
Biomedical, Industrial and Human Factors Engineering

Abstract
In this research the criteria of neck and head injuries are examined to see if the forces experienced in car
crashes can relate to the forces that fighter jet pilots experience. The data that the National Highway
Traffic Safety Administration (NHTSA) has presented on injuries and the Abbreviated Injury Scale (AIS) will be
used in classifying the types of injuries that pilots could experience during ejection. With the NHTSA
data and the AIS the research should show that pilots will experience AIS values that affect the lower
cervical spine rather than the upper cervical spine.

Project Objectives
The main objective of the project was to examine the injury criteria adopted by the NHTSA and analyze
them under certain scenarios. These scenarios would involve the neck injury criteria and the head injury
criteria and they would produce an injury threshold. Examining cervical injuries in car accidents gives a
good prediction of seat ejection injuries and with this prediction and injury criteria, a relationship to AIS
codes can be seen. The injury criteria would have predetermined values based on previous data from the
Hybrid III crash dummy.

Methodology Used
The forces examined were axial forces of the neck that included: flexion, extension, tension and
compression. Representative values were used for the neck injury criteria (Nij) and the head injury
criteria (HIC) [2]:

Nij = Fz / Fint + My / Mint

HIC = max over (t1, t2) of { [ (1 / (t2 - t1)) * integral from t1 to t2 of a(t) dt ]^2.5 * (t2 - t1) }

where a(t) is the resultant head acceleration. The denominators in the Nij represented the critical
intercept axial forces (Fint) and critical intercept bending moments (Mint) that were determined based on
Hybrid III dummy testing using 50th percentile male and 5th percentile female data, while the numerators
were the axial forces (Fz) and bending moments (My) that the neck could or could not tolerate [2]. The
head injury variables t1 and t2 bound the critical time frame of either 0 milliseconds (ms) to 15
milliseconds (ms) or 0 milliseconds (ms) to 36 milliseconds (ms) [3]. The scenarios that were tested with
the injury criteria involved a range of G-forces and would show whether the spine could handle such a
force and what the comparison was between automobile accidents and seat ejection.
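To make the evaluation concrete, the Python sketch below computes both criteria from sampled data; the intercept values and the acceleration pulse are placeholders for illustration, not the Hybrid III values used in this study:

    # Illustrative evaluation of Nij and HIC. The intercept values and the
    # acceleration pulse are placeholders, not the study's Hybrid III data.
    import numpy as np

    def nij(f_z, m_y, f_int, m_int):
        # Axial force and bending moment normalized by their critical intercepts.
        return f_z / f_int + m_y / m_int

    def hic(accel_g, t, window=0.015):
        # Maximize [mean acceleration over (t1, t2)]^2.5 * (t2 - t1) over all
        # windows no longer than `window` seconds (uniform sampling assumed).
        best = 0.0
        for i in range(len(t)):
            for j in range(i + 1, len(t)):
                dt = t[j] - t[i]
                if dt > window:
                    break
                avg = accel_g[i:j + 1].mean()
                best = max(best, avg ** 2.5 * dt)
        return best

    t = np.linspace(0.0, 0.05, 251)                    # 50 ms of data
    pulse = 80.0 * np.exp(-((t - 0.02) / 0.005) ** 2)  # assumed 80 g pulse
    print(f"Nij   = {nij(4000.0, 100.0, 6806.0, 310.0):.2f}")  # placeholders
    print(f"HIC15 = {hic(pulse, t):.0f}")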

Results and Data
The figure below shows the neck injury criteria slope previously stated with a compression-flexion
example, along with the threshold value of 1, which is the value for other cases such as compression-
extension, tension-flexion and tension-extension.

[Figure: Nij for the 50th percentile male (compression-flexion); x-axis: acceleration in G-force (60 to
150); y-axis: neck injury criteria threshold (0.5 to 1.2), with the injury threshold marked at 1.]

The figure shown represents the 50th percentile male data; the 5th percentile female data was also tested.
The figure represents the compression-flexion slope, and it is seen as a positive slope because flexion is
the forward bending of the neck and will have positive values, while extension is the backwards bending
of the neck and will have negative values [1].

The HIC curve below follows a similar pattern and shows the time of impact between 15 ms and 36 ms.
The 36 ms interval shows that at around a scenario of 85 g's there is a 50% chance of a skull fracture.



The relationship of the injury criteria to seat ejection can be inferred
from viewing the given results and comparing the results to G-forces
experienced by seat ejection; however, other considerations still need
to be applied. The launch of seat ejection is given in a time frame that
differs from the tested time frame of 0-36ms. Other forces must also
be considered such as the drag force and the windblast that occurs
upon ejection. Other criteria may be examined in these cases as well as
the reduction of HIC thresholds due to pilot helmets. Although the
criteria can be used to analyze if the lower cervical spine is affected
other considerations must be made and thus more information is
needed.








References
1. Schmitt, Kai-Uwe; Niederer, Peter F.; Muser, Markus H.; Walz, Felix. Trauma Biomechanics:
Accidental Injury in Traffic and Sports. 2010. pp. 96-115.
2. Simms, Ciaran; Wood, Denis. Pedestrian and Cyclist Impact: A Biomechanical Perspective 2009.
pp. 83-84.
3. Nahum, A. M. Accidental Injury: Biomechanics and Prevention. New York: Springer, 2002.
pp. 330-338.
[Figure: HIC curve; x-axis: acceleration in G-force (60 to 150); y-axis: HIC threshold (0 to 10,000);
separate curves for HIC at 15 ms and at 36 ms, with the HIC score for a 50% chance of skull fracture
marked.]

Proposed HIC Tolerance Levels Related to Brain Injury

Tolerance Level   G-Force   AIS Equivalence
<150              <55       0-1
150-500           55-90     2
500-1800          90-150    3-4
>1800             >150      5

AIS Code   Description
1          Skin, muscle abrasions
2          Cervical dislocation without fracture
3          Cervical spine: multiple nerve lacerations
4          Spinal cord contusion
5          Spinal cord laceration without fracture
6          Spinal cord laceration at C3 or higher with fracture
Can Another Planet Support the Human Lifestyle?

Student Researcher: Emily S. Horrigan

Advisor: Robert L. Ferguson, Ph.D.

Cleveland State University
Department of Education, CSUTeach

Abstract
The human population of Earth is around 7 billion people. Scientists argue that the population is reaching
the Earth's carrying capacity because goods are being consumed at a greater rate than resources can renew
themselves. In a project-based learning experience, students will determine if any planet in our solar
system could support human life as we know it. The students would need to explore what composes our
lithosphere, biosphere, hydrosphere, and atmosphere here on Earth. Once that is understood, the students
could explore the planets in the solar system, using NASA materials, to see if there is an option for human
emigration.

Objectives
Students will be able to define the criteria necessary for an organism to be classified as living.
Students will be able to analyze the properties of soils and their ability to sustain living organisms.
Students will be able to explain how energy is absorbed and reflected by the Earth's surface.
Students will be able to determine the potential impact population has on climate.
Students will be able to analyze data and images to determine if there is liquid water on Mars.
Students will be able to propose where water can be found on a planet.

Lesson
As a culminating experience for an Earth and Space Science course, the students will refer to lessons they
had encountered throughout the curriculum. They will draw on what they learned in the Earth portion of
the lesson and compare that knowledge to other planets, especially Mars in the Space portion of the class.
The NASA education materials that would be most helpful would be from Destination Mars, Getting
Dirty on Mars, Investigating the Climate System-Energy: a Balancing Act and Mars Exploration: Is There
Water on Mars? These materials would allow students to see the important features that have made it
possible for life to evolve on Earth.

The first lesson utilized from NASA materials attempts to create a working definition of life. Activity 2 of
Lesson 5, Searching for Life on Mars, in the Destination Mars packet is a two-part activity that attempts
to make an inference about the possibility of life on Mars. In part one of the activity, the students will
utilize critical thinking to determine the criteria for life. In the second part students will test three soil
samples to determine if there is life in the samples according to their definition.

Getting Dirty on Mars is the next lesson that also looks into the soil of a planet. It attempts to define the
properties of soil that are capable of supporting living organisms. This hands-on activity looks at soil
moisture, soil color, soil consistency, biomarkers, soil texture, chemical analysis, and pH. The students
will compare a sample of soil from their region and a Mars Soil Simulant that can be obtained through
the Johnson Space Center. Once the properties of the soil are measured, the students will determine what
might need to be screened for on other planets to determine habitability of the soil.

Climate is another important factor that leads to life being sustained here on Earth. Investigating the Climate
System-Energy: A Balancing Act is a three-part lesson that explores how energy is absorbed and reflected
by the Earth's surface. The questions the lessons attempt to answer are whether the type of ground surface
influences its temperature, how evaporation cools the Earth's surface, and how population impacts an
area's temperature. The extension of these lessons could include looking at the surface temperatures of
the different planets in the solar system.
Lastly, water is explored. Water can indicate that life was once, or still is, present on a planet. Water also
contributes to the climate, which is explored in the previous lessons. It is also a fundamental resource
needed if colonization occurs on another planet. Activities 6 and 7 of Mars Exploration: Is There Water
on Mars? look at images and data to determine if and where water could be found on Mars. By
learning the geographical features that indicate where water is or once was, students will be able to apply
that knowledge to any planet in the solar system.

Assessment
The final artifact of the Earth and Space Science curriculum will be for students to collaborate with one
another to create a fictitious colony on another planet. They will need to propose the planet (real or
fictional) that will be inhabited by humans, describing its location and the properties the planet possesses
that can support human life. These properties include the soil properties, climate, and the location and
amount of water on the proposed planet. In some circumstances, the students may need to describe the
modifications that our race would need to make in our lifestyle so that we can indeed live on such a
planet. For example, if the atmosphere is not conducive for breathing, we may need to live indoors at all
times. The students can choose to present their proposal using whatever media type they would prefer.

Pedagogy
The use of a Project-based Learning (PBL) Experience to conclude an Earth and Space Science
curriculum is an effective way to encourage student engagement and ensure the dissemination of
knowledge. By having inquiry-based lessons spaced throughout a unit, students are able to accomplish
more than rote learning. The students will be able to utilize critical thinking and problem solving skills
while applying their knowledge to an open-ended, real-world problem. This PBL allows for students to
essentially create a planet that can be colonized by humans. There is no expected answer; as long as the
students support their reasoning with evidence they learned throughout the unit, they will succeed
and leave with a valuable educational experience.

Standards
The National Science Standards addressed in the lesson for the high school level include, but are not
limited to, the following:
H.A.1 Abilities necessary to do scientific inquiry: A-F
H.D.1 Energy in the earth system: C-D
H.D.3 Origin and evolution of the earth system: C

NASA Lesson Resources
Destination: Mars; Lesson 5, Activity 2: Looking for Life
Getting Dirty on Mars
Investigating the Climate System- Energy: A Balancing Act
Mars Exploration: Is There Water on Mars?

Diversity of Shale

Student Researcher: James P. Houck

Advisor: Dr. Ben Ebenhack

Marietta College
Department of Petroleum Engineering

Abstract
Shale rock has always served a purpose in the oil and gas industry. It has been recognized as a cap rock,
and it can contain substantial hydrocarbon reservoirs. However, with new interest in newly exploited
shale reservoirs, the question must be asked: are all shale rocks created equal? Shale can vary in its
physical and chemical properties, giving it varying value as a practical hydrocarbon reservoir.

Project Objectives
This project deals with the different characteristics of shale formations and reservoirs in the United
States. Physical properties are some of the more easily quantified aspects of a reservoir rock, so this paper
will focus on the physical characteristics. The goal of this paper is to compare shale rocks and their
physical properties so that conclusions can be drawn about the effectiveness of shale reservoirs.

Methodology Used
For the purpose of comparing shale reservoirs, it is best to select three characteristic shale plays in the
United States that have proven to be important producers of hydrocarbons. This paper takes a look at the
Antrim Shale in the Michigan Basin, the Barnett Shale in Texas, and the Haynesville Shale in Louisiana
and Texas. These plays have proven to be historically significant and strong producers of hydrocarbons
(Modern Shale, 2009).

The physical properties that were chosen to examine in each reservoir were permeability, porosity, total
organic carbon (TOC), and original gas-in-place. These properties, although somewhat limited by their
variability, can lead to important insight about the quality of the rock reservoir.

Results Obtained
From the research, data were found that did not meet the expected results. Below are charts that represent
the results of the research.

TOC did not correspond with gas-in-place or gas-in-place per square mile as expected.


[Chart: permeability (md), scale 0 to 16, for the shale plays studied.]

[Chart: porosity (%) and TOC (weight %), scale 0 to 16, for the shale plays studied.]



Reference
1. United States of America. Department of Energy. Modern Shale Gas Development in the United
States: A Primer. 2009. Print.
[Chart: gas-in-place (tcf), scale 0 to 800, and gas per area (bcf/sq mi) for the shale plays studied.]
Feeling Down? The Effect of Gravity On Our Weight

Student Researcher: Christopher J. Iliff

Advisor: Dr. Sandra Schroeder

Ohio Northern University
Mathematics Department

Abstract
For my lesson I will be discussing how each planet's gravitational force affects the weight which
we feel on that planet. This lesson will demonstrate to students how to apply the use of decimals,
fractions, ratios and proportions. They will also be able to use this knowledge and apply it to given
information intuitively. My lesson will begin with a review of what we have learned in the past
with ratios, rates, and percentages. After this, I will provide students with background knowledge
about Isaac Newton and his revolutionary concept of gravity. Students will be able to practice
their skills with work involving proportions and different weights on different planets. After
review of this work the next day, students will be given a project where they will create their
own universe. This activity will allow them to use creativity while practicing proportions and
cross products.

Lesson
So many times we take for granted our own knowledge of ratios that we often don't even take
time to consider how hard it was to initially learn them. When real-life examples are
given to students, the math becomes much more real and sinks in better. The knowledge of ratios, rates,
and proportions is one of the foundations of math and knowledge needed once students leave
high school. The first day, students will review the term ratio, unit rates, cross multiplication, and
percentages. This review is necessary after being introduced to help solidify its understanding
before introducing later projects that will allow students to apply their knowledge.

After review of these terms, I will start to talk about gravity. A reading will be given to students
about Sir Isaac Newton to engage them with the history behind gravity. I will encourage students
to take a Double Entry Journal, listing quotes they like on the left side and their reflections about
them on the right. After pairing and sharing these thoughts, students will be shown a video of the
moon walk from YouTube and asked to discuss with a partner what they have just watched. This
will lead into a discussion on proportionality and gravity due to the different sizes of bodies in our
solar system. At this point I will hand out the assignment, which they will work on at home.
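As an illustration of the proportion at the heart of the assignment, a short Python sketch using commonly quoted approximate surface-gravity ratios (the ratios are illustrative assumptions, not lesson materials):

    # Weight on another planet via proportion: weight scales with surface
    # gravity. The ratios below are commonly quoted approximations.
    SURFACE_GRAVITY_RATIO = {  # relative to Earth = 1.0
        "Mercury": 0.38, "Venus": 0.90, "Moon": 0.17,
        "Mars": 0.38, "Jupiter": 2.53,
    }

    def weight_on(body: str, earth_weight_lb: float) -> float:
        # Cross multiplication: w_body / w_earth = g_body / g_earth
        return earth_weight_lb * SURFACE_GRAVITY_RATIO[body]

    for body in SURFACE_GRAVITY_RATIO:
        print(f"A 100 lb student would weigh {weight_on(body, 100):.0f} lb on {body}")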

The following day I will go over the homework and what they thought about it. Then I will
introduce their solar system project in which students will be able to create their own solar
systems with correct proportionality and weight projections. Students will be asked to present
their project in front of the class and show off their creativity.

Objectives
1. The student will be able to use decimals, fractions, ratios, and proportions to solve for
unknowns.
2. The student will be able to discover their weight on other planets given specific
information about those planets.
3. The student will be able to create their own planets with correct proportionalities.

Benchmarks
From the Grade 7 Model Curriculum for Mathematics:
Domain: Ratios and Proportional Relationships
Cluster: Analyze proportional relationships and use them to solve real-world and
mathematical problems.
3. Use proportional relationships to solve multistep ratio and percent problems.

Methodology
The pedagogy of this lesson plan is one that revolves around the activation of prior knowledge
and then making mathematical connections. I would argue that all students know about gravity
and how it may affect them in certain circumstances. What is not common knowledge is that this
force can change how much one actually weighs, and this happens all around us. When
looking for real-life examples of the use of ratios and proportions, we need only look at the
information astronomy gives us to explore an infinite number of opportunities to apply
this knowledge. Many times in a math classroom, the realities of the mathematics that
students need to learn are lost or forgotten. Through YouTube videos and other examples,
students are brought into a world where knowledge of proportions and ratios is used every day
for extraordinary things. Even though comparing weights on other planets to ours here on Earth
only scratches the surface, I believe that this will create a great introduction to astronomy and its
applications to math.

Assessment
There will be several forms of assessment used during and after the lesson. Different strategies of
formative assessment during the lesson will be used to quickly and efficiently check for
understanding with all the students. These quick checks include think, pair, share sessions where
students are asked to think about a topic or problem and reflect on it, and then pair with a partner
to discuss each others process. After given time to discuss the topic, I will ask each group their
process and check to see if other groups have done the same thing. If different I will explore their
process and if incorrect, I will go help them to achieve the right answer. Also, I will authentically
assess them with a project that requires them to create their own solar system. This solar system
will be plotted out on a poster board, and students will be given the opportunity to present their
solar systems and how much he or she might weigh on their created planets. Unfortunately, given
the time frame, I was not able to implement this in a classroom, although I look forward to using
it to let students exercise creativity in exploring proportions.

Conclusion
In conclusion, I greatly enjoyed implementing the things I learned during the workshop and other
information attained from the NASA site. I can see my future students enjoying this lesson plan
as it gets the students out of the textbook and makes the mathematics real for them. It is my firm
belief that all students are interested in something and the oddities of outer space and our own
solar system could very well be the spark for many students. I like the opportunity for students to
incorporate their creativity in the authentic assessment because it is in conjunction with the
mathematics skills they will learn from the lesson.

Interaction Between Osteoblasts Cells and Fuzzy Fibers for Possible Medical Use

Student Researcher: Hadil R. Issa

Advisor: Dr. Khalid Lafdi

University of Dayton
Department of Chemical and Materials Engineering

Abstract
Research in the nanotechnology field is crucial to explore and understand as the prevalence and uses of
nanotechnology are expanding in everyday life. Applications of nanomaterials are already common in the
areas of electronics, cosmetics, drug delivery and pharmaceuticals. Due to their prevalent functions, their
health effects are important to understand in order to continue expanding on their uses. Health effects
associated with nanomaterials vary with the variety of chemical natures they can display and ways they
can be functionalized. Nanomaterials are manufactured in many different ways and forms; among those
forms, carbon nanotubes are one of the most popular in current fields of research (Figure 1). In order for
the carbon nanotubes to be used safely and effectively, their toxicologies or health effects must be
assessed. One of the ways to assess their toxicologies is by studying their interactions with a variety of
cell lines from origin tissues such as bone marrow, ovarian and lung tissues.

Project Objectives
For this particular research, the viability of the osteoblast cells from the bone tissues will be assessed after
introducing fuzzy fibers in their living environment. The fuzzy fibers contain carbon nanotubes grown on
glass and carbon fibers (Figure 2 and Figure 3). In order to have a base line reading, the biocompatibility
of the glass and carbon fibers with respect to the osteoblast cells will be evaluated and the cells viability
using the glass and carbon fibers will be compared against that of the fuzzy fibers. Establishing data for
the toxicology of fuzzy fibers will allow for creating a kinetic predictive model of the cells' reaction to
the fuzzy fibers. This predictive model can be used in determining the interaction of osteoblast cells with
fuzzy fibers in a medical setting, as some potential uses of fuzzy fibers may be in surgical tools or implant
materials such as bone scaffolds.

Methodology Used
Preparation of carbon and glass fiber
The carbon and glass fibers were cut, weighed, and put in a 24-well plate. The carbon and glass fibers
contained in the plates were sterilized by exposing the well plates, along with the fibers, to ultraviolet
light prior to seeding of the cells.

Cell Culture
Human fetal osteoblast cells (hFOB) were cultured in 75-cm2 rectangular canted neck cell culture flasks
at 37°C in a humidified atmosphere of 5% CO2 in air. Dulbecco's Modified Eagle's Medium (DMEM)/
Ham's F-12 medium was supplemented with 10% fetal bovine serum (FBS), 0.5 mM sodium pyruvate,
2.5 mM L-glutamine, antibiotic/antimycotic, and 20 units/mL of G418 disulfate salt. Fresh media was
added about every two to three days.

Cell Viability/Cell Function
Cell viability was measured using the CellTiter 96 AQueous One Solution assay from Promega. The
solution contains a tetrazolium compound [3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-
(4-sulfophenyl)-2H-tetrazolium, inner salt; MTS] and an electron coupling reagent (phenazine
ethosulfate; PES) [1]. After the cells were grown for 0 and 72 hours, the MTS solution was added to each
well. They were then incubated for two hours, during which the media turned a reddish color. The plate
was then removed, placed in a 24-well plate, and positioned in a Wallac 1420 Manager plate reader to
record the absorbance (Figure 4). The quantity of formazan product was measured at a wavelength of 490
nm; this reading is directly proportional to the number of living cells in culture.
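As an illustration of how such absorbance readings reduce to relative viability, a short Python sketch with placeholder well values (not the study's data):

    # Relative viability from MTS absorbance at 490 nm. Absorbance is taken
    # as proportional to living cells; all numbers below are placeholders.
    def relative_viability(sample_abs, control_abs, blank_abs):
        # Subtract the background (media/fibers only), then normalize to control.
        return (sample_abs - blank_abs) / (control_abs - blank_abs)

    blank = 0.12    # assumed: media with fibers, no cells
    control = 0.95  # assumed: cells on the bare well surface
    sample = 0.71   # assumed: cells seeded on carbon fibers
    print(f"Relative viability: {relative_viability(sample, control, blank):.0%}")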
Cell Morphology
To test ingrowth on the glass and carbon fibers, cells were stained with CFDA-SE (Invitrogen), a vital dye
that only viable cells take up. DMSO was added to the CFDA-SE vial to create the stock solution.
For the working solution, 15 mL of PBS (phosphate buffer solution) was added to 3 µL of the stock
solution, creating a 1:5000 dilution ratio. After removing the media from the flask, 10 mL of the working
solution was added to the flasks and incubated at 37°C for 15 minutes. The working solution was then
removed and replaced with fresh media. These cells were then seeded onto the well and fiber surfaces and
were imaged using dark imaging microscopy after an incubation period of 0-72 hours. At the 0 hour time
point, the cells are still round and have not yet expanded; by the 72 hour time point the cells had started to
align along the fibers (Figure 5).

Results Obtained
For the cells seeded on the carbon fibers, the data was interpreted to indicate initial cell growth over a
certain period of time. As for the cells seeded on the glass fibers, the data could not be accurately
interpreted due to interference with the absorbance reading performed by the plate reader.

Significance and Interpretation of Results
When comparing the data for the two time points (Figure 6), it appears that cell growth is
decreasing with time. However, in the morphology images, the cells are expanding along the fibers,
indicating an increase in proliferation of the cells at the 72 hour time point, as seen in Figure 5, panels C
and D. An explanation for the inconsistency could be the transformation of the osteoblast cells into more
mature bone cells. To account for the transformation of the cells, an alkaline phosphatase assay
could be used to test for the formation of the osteoblasts into osteocytes, for example. On the other hand,
under the assumption that the cells are not differentiating, a reason for the decrease in
proliferation between the 0 and 72 hour time points could be the toxic effects of the fibers on the
cells. Another reason for the decrease in proliferation is that the cells may have reached their maximum
expansion on the matrix, hence cell death.

Figures/Charts

Figure 1. A: Carbon nanotube [2]; B: Carbon nanotubes synthesized via CVD [3].

Figure 2. A: Glass fiber [4]; B: Carbon fiber mesh [5].


Figure 3. Fuzzy fibers [6].


Figure 4. Samples Absorbance Readings; (A1-A6: Osteoblast cells at varying concentrations; B1-B3:
Carbon fibers only; B4-B6: Seeded carbon fibers with osteoblasts)


Figure 5. Cells Seeded on Carbon Fibers. A and C) Dark image microscopy of osteoblast cells at 0 and
72 hour time point respectively. B and D) Clarified modifications to the dark image microscopy of
osteoblast cells at 0 and 72 hour time points respectively.

In the above figure, the circular cells indicate that the cells have not yet started to expand along the fibers,
as mostly seen in images A and B, the 0 hour time point. In the 72 hour time points, images C and D, the
cells have started to expand along the fibers, indicating the compatibility of the osteoblasts with the
fibers.


Figure 6. 0 to 72 Hour Carbon Fiber Measurement

Acknowledgments
I would like to acknowledge and thank C. Browning, K. Lafdi, M. Cofield, A. Hoffmann, and N. Jones at
the University of Dayton for their continuous support and time. Also, my thanks are extended to SOCHE
and the members involved in making this program successful.

References
1. Promega Co. "CellTiter 96 AQueous One Solution Cell Proliferation Assay."Promega.com. June
2009. Web. Mar. 2012.
<http://www.promega.com/~/media/Files/Resources/Protocols/Technical%20Bulletins/0/CellTiter%2
096%20AQueous%20One%20Solution%20Cell%20Proliferation%20Assay%20System%20Protocol.
pdf>.
2. "CpE - Carbon Nanotube Research / Developing a Low Cost Solar Voltaic Paint." Web. Apr. 2012.
http://www.cpeproto.com/Nanotubes.html
3. "Devices and Nanotechnology." Devices and Nanotechnology. Web. 05 Mar. 2012.
<http://www.ipt.arc.nasa.gov/finnfigures.html>.
4. "Short-cut Glass Fiber." B2BFOO. Web. May 2012. <http://www.b2bfoo.com/products/60595/Short-
cut-glass-fiber.html>.
5. "Platinum RC Products - Carbon Fiber Panels." Platinum RC Products - Carbon Fiber Panels. Web.
May 2012. <http://www.platinumrcproducts.com/carbonsheeting.html>.
6. "Fuzzy Fiber" Nanomaterial May Revolutionize Composite Parts : Composites World." Fuzzy Fiber"
Nanomaterial May Revolutionize Composite Parts : Composites World. Web. Mar. 2012.
<http://www.compositesworld.com/news/fuzzy-fiber-nanomaterial-may-revolutionize-composite-
parts>.
Optimal Inverse Functions Created via Population Based Optimization

Student Researcher: Alan L. Jennings

Advisor: Dr. Raúl Ordóñez

University of Dayton
Department of Electrical and Computer Engineering

Abstract
Finding optimal inputs for a multiple input, single output system is taxing for a system operator. This
work presents a population-based optimization to create sets of functions to approximate a locally optimal
input that an operator could use in real time. A population of agents minimizes the cost for the agent's
current output using the cost and output gradients. When an agent reaches an optimal input for its current
output, additional agents are generated to step in the output gradient directions. The new agents then settle
to the local optimum for the new output value. The set of associated optimal points forms an inverse
function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple
locally optimal functions can be created. These functions are naturally clustered in input and output
spaces allowing for a continuous optimal function. The operator selects the best cluster over the
anticipated range of desired outputs and adjusts the set point (desired output) while maintaining
optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point
with no loss in performance. Results are tested on a set of functions and on a robot control problem.

Project Objectives
The problem addressed regards many-input, single-output (MISO) systems that can also be optimized.
Examples include pneumatic systems where pressure, temperature and number of gas molecules all affect
the final volume according to the ideal gas law. On this example, an operator could adjust any one of
these parameters to obtain a desired volume (output). But if each input had an associated cost, then it
would be very taxing for the operator to maintain optimality while controlling the volume. Besides
simplifying things for an operator, it also simplifies the task for a higher level controller, such as an
automated planning system, without sacrificing optimality.
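A minimal Python sketch of this forward MISO function through the ideal gas law; the two input settings below produce the same volume, which is exactly the one-to-many inverse situation described:

    # Forward MISO function from the ideal gas law: V = nRT / P.
    # Illustrative only; a cost on the inputs would be application-specific.
    R = 8.314  # J/(mol K)

    def volume(pressure_pa: float, temperature_k: float, moles: float) -> float:
        return moles * R * temperature_k / pressure_pa

    # Two different input settings producing the same output volume:
    print(volume(101325.0, 300.0, 1.0))  # ~0.0246 m^3
    print(volume(202650.0, 300.0, 2.0))  # same volume, different inputs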

This paper addresses the challenge by pre-computing optimal inverse functions using a general
optimization technique. Typical functions map inputs to outputs. Inverse functions find inputs that
produce an output. The inverse function problem has some specific challenges. Whereas functions can
have multiple inputs producing the same output (many to one), the inverse problem does not
automatically create functions since the problem is one to many. In addition, even though a forward
function could be continuous, the inverse problem could have discontinuous solutions. As an example,
consider following driving directions. If you follow a set of directions (from a given starting point), then
you will end up at a specific location. This is the forward function. The inverse function is finding a route
based on a destination, but multiple routes may exist. In other words, once a destination is chosen, there is
not a single set of obvious directions. In addition, some routes may differ drastically for small changes;
meaning the inverse function is discontinuous.

An inverse function that could be used by an operator should follow the locally optimal solution and be
continuous. Local optimality for a specific output can be computed via constrained gradient descent. An
inverse function can then be built by repeating the optimization process after steps that change the current
output. The set of optimized points can be interpolated to form a continuous inverse function. If the
problem is multimodal, multiple locally optimal inverse functions would be expected; and multiple
solutions can exist even for unimodal functions. The final objective is a set of inverse functions, from
which the operator could select to use in real time.

The formal problem statement is to produce K functions, h_k(y_d) = x*, with x* in a set A, which is a
closed subset of R^n, and y_d in a set B_k, which is an open subset of R, such that all x in a neighborhood
of x* that satisfy g(x) = y_d have a larger or equal cost, f(x*) <= f(x). The functions h_k(y_d) must be
continuous over their domain, B_k, which is non-empty. The set A is defined as a hypercube:
a_i,l <= x_i <= a_i,u for each dimension i. The output and cost functions should have a bounded,
continuous first derivative to guarantee the inverse function has a bounded derivative.

There are some limitations to the scope of the problem that are worth mentioning. Multiple objectives are
not addressed. To support multiple objectives, either for outputs or costs, the candidates must be
organized in the higher dimension space. Not every combination of outputs may be feasible, requiring
demarcations. The hypercube assumption on the set A simplifies steps involving the boundary of A.
Nonlinear boundaries would require additional considerations to deal with curvature. These are not
fundamental or insurmountable problems, but are outside the scope of this work. Analysis of the proof
shows that for a given cluster and maximum rate of input change, the maximum output rate of change can
be calculated. However, the relation depends on the problem topology, so requirements on response times
cannot directly be implemented using this method. In addition, the system only deals with disturbances
that can be compensated via the operator changing the desired output; and in that sense is similar to an
open loop system. The method cannot address other disturbances due to the reduction of dimensions of
control.

Methodology
The method consists of a preprocessing step where the inverse functions are found, and then an
execution phase where the inverse functions are used in real time, as shown in Figure 1. Full details of the
method can be found in [1]; here an Armijo condition on the step length is also included.

Some examples of population based optimization are genetic algorithms, particle swarm optimization and
ant colony optimization. The common principle is that each agent represents a possible solution, interacts
with each other and the environment to find better solutions. Genetic algorithms exchange data between
solutions. Particle swarm agents are attracted to the best known solution. Ant colony optimization
modifies an artificial environment to guide agents to good solutions.

Global optima are typically desired, but a local minimum is desired here. Numerical methods, such as
gradient descent, are more efficient in terms of accuracy and computation for local solutions on
well-behaved functions. The advantage of using a population with the gradient descent step is that a very
broad area is searched efficiently. Rather than traversing the space sequentially, agents search the space in
parallel. Each local optimum requires only one agent in its watershed. So relatively few agents are needed
in practice, but the method only provides a likelihood of finding every cluster. If two agents approach
each other or an optimal point (within a set distance); they are assumed to converge to the same cluster
and one is removed. This creates a taboo condition preventing excessive repetition of calculations.

The inverse function is based on optimality for a given output. Therefore, the agents step perpendicular to
the output function gradient, resulting in no output change (for a linear approximation). To find optimal
solutions, they step opposite the cost gradient within the remaining directions. To justify locally linear
assumptions, the steps are bounded. As either gradient goes to zero, the direction becomes ill defined. If
the cost gradient is going to zero, the location is approaching an optimality condition and the step size is
reduced. If the output gradient is going to zero, the arbitrary directions will result in a minor output
change, so the requirement to step orthogonal to the output gradient is relaxed and the direction orients
closer to the maximum cost descent direction. An additional requirement is placed on the step length so
that the cost is reduced by a percentage of the expected amount (the Armijo condition). This ensures that
the steps are not so long that they continue to overshoot the optimal point and never settle.
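As a sketch of the step just described, assuming cost and output gradients are available as functions, the following Python follows the projected-descent-with-Armijo idea from the text; it is not the exact implementation of [1]:

    # One constrained gradient-descent step: move opposite the cost gradient
    # within the subspace orthogonal to the output gradient, shrinking the
    # step until an Armijo sufficient-decrease condition holds.
    import numpy as np

    def constrained_step(x, cost_grad, out_grad, f, step=0.1, beta=0.5, c=0.1):
        g = out_grad(x)
        d = -cost_grad(x)
        if np.linalg.norm(g) > 1e-8:
            d = d - (d @ g) / (g @ g) * g   # project out the output direction
        if np.linalg.norm(d) < 1e-12:
            return x                        # optimality condition nearly met
        while step > 1e-10:
            x_new = x + step * d
            # Armijo: realize at least a fraction c of the predicted decrease
            if f(x_new) <= f(x) + c * step * (-(d @ d)):
                return x_new
            step *= beta
        return x

    # Example: minimize f while holding the output g(x) = x0 + x1 constant.
    f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 0.5) ** 2
    cost_grad = lambda x: np.array([2 * (x[0] - 1.0), 8.0 * (x[1] + 0.5)])
    out_grad = lambda x: np.array([1.0, 1.0])
    x = np.array([0.0, 0.0])
    for _ in range(50):
        x = constrained_step(x, cost_grad, out_grad, f)
    print(x, f(x))  # settles at the constrained optimum along x0 + x1 = 0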

The proofs are based on the following reasoning. The Armijo condition guarantees a reduction in cost for
every step, otherwise the step length would be reduced. If the step size approaches zero, then the
component of the cost gradient orthogonal to the output gradient is near zero. Therefore any point in the
vicinity will have an equal cost to a linear approximation. Because of the successive reduction in cost,
these points are unlikely to be maxima. Because the step is in a direction more than 90 degrees from the
cost gradient, there will always exist a step length that will reduce the cost by the required percentage of
the predicted amount. Additionally, the bound on step length shows that the change in output due to the
linear approximation is bounded.

Once an agent is settled, additional agents are formed in the output gradient directions and they are
optimized. The string of optimized points from the original agent forms a cluster. A few checks ensure
that the additional agents belong to the same cluster, such as producing an output larger than the
current tip (or smaller for the other side) and being close enough, but not too close. The clustering of
points guarantees the optimal inverse function is continuous. Ordering the points by output value, a
continuous function is formed by one dimensional interpolation, which can easily be done in real time.
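A sketch of turning a settled cluster into a usable inverse function by one-dimensional interpolation; np.interp gives piecewise-linear interpolation, whereas the text suggests monotonic cubic interpolation for lower error:

    # Build a continuous inverse function from a cluster of optimized points
    # by interpolating each input coordinate against the output value.
    import numpy as np

    def make_inverse(outputs, inputs):
        """outputs: (m,) output values; inputs: (m, n) optimal inputs."""
        order = np.argsort(outputs)
        y, x = np.asarray(outputs)[order], np.asarray(inputs)[order]
        def h(y_desired):
            # Interpolate each input dimension independently over the output.
            return np.array([np.interp(y_desired, y, x[:, i])
                             for i in range(x.shape[1])])
        return h

    # Example cluster for g(x) = x0^2 + x1^2 with optimal points on a ray:
    ys = np.array([0.0, 0.25, 1.0, 2.25, 4.0])
    xs = np.array([[0.0, 0.0], [0.3, 0.4], [0.6, 0.8], [0.9, 1.2], [1.2, 1.6]])
    h = make_inverse(ys, xs)
    print(h(1.0))  # -> approximately [0.6, 0.8]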

The set of clusters can easily be tested for accuracy and optimality. The difference of the output function
evaluated at interpolated points and the value used for interpolation measures the error in accuracy. This
error can be reduced by finer spacing of the cluster points and changing interpolation methods (monotonic
cubic interpolation is suggested). Optimality is tested by evaluating neighboring points. The cost for
neighboring points should be equal or greater than the cost of the inverse function evaluated at the output
of the neighboring point.

The operator would select one of the inverse functions, based on the range of expected output desired and
the cost over that range. The operator could also have other considerations they could use for judging
which inverse function to select. Comparison of two or more clusters could be organized by the output
and compared in each input dimension and the cost. During operation, the operator, or a higher level
planning program, would change the desired output in response to changes in the environment.

Results Obtained
This method has been applied on two sets of problems. Pure mathematical functions allow for verification
of the results and testing challenging configurations, such as saddle points. For a practical application, the
method was applied to optimizing a robot's pose for highest precision.

A set of two-dimensional mathematical functions was taken: a quadratic function, a linear-quadratic
combination, a mixture of Gaussians and a linear-sinusoidal function. These functions were combined in
all permutations as cost and output functions. The functions were scaled so that the magnitude of the
gradient ranged up to one unit. The same parameters were used on the function combinations and gave
good performance.

One example of the set of inverse functions is shown in Figure 2. The output function is a quadratic, so
the clusters should trace from the minimum to the outer edge. The cost function is a mixture of Gaussians,
so the functions should follow negative peaks or go along valleys between positive peaks. This is what
happened. Results can also be seen in Figure 3, as ordered by output value. Tests of one cluster are shown
in Figure 4, and they do establish the accuracy and optimality of the inverse function.

Some principles were established through the tests of these functions. All extrema in the output
should be in a cluster: since there is no neighboring point with the same output, such a point is by
definition optimal. Additional clusters are formed along the boundary when the cost gradient points away
from it and at minima in the cost. Other clusters can be formed without either of these situations due to
changing curvature of contours.

The method was also implemented to optimize the radial distance of a robotic arm. Robotic arms have a
series of joints. The problem of finding the tip location from joint angles is simple, but the problem of
finding joint angles that produce the desired tip location is complicated due to the nonlinear functions. It
would be very error prone to put an operator in charge of all joints to try to achieve a desired path. The
nonlinear equations also mean that the sensitivity of the distance to the tip is not constant. A large
sensitivity results in low precision due to limitations on the resolution of the joint sensors. The distance to
the tip, as a function of joint angles, was chosen as the output function, while the magnitude of its
gradient was chosen as the cost function. A 3-degree-of-freedom robot (Motoman HP5) and a
7-degree-of-freedom robot (Motoman IA20) were used for optimization in 2 and 3 dimensions,
respectively. The additional degrees of freedom did not influence either function and were removed from
the optimization.
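For intuition, the chosen output and cost reduce to a short computation for a planar two-link arm; the link lengths below are illustrative assumptions, not the HP5's geometry:

    # Radial distance of a planar two-link arm tip and the magnitude of its
    # gradient with respect to the joint angles (the chosen cost). Link
    # lengths are illustrative, not the Motoman HP5 geometry.
    import numpy as np

    L1, L2 = 0.31, 0.36  # assumed link lengths (m)

    def radial_distance(q):
        q1, q2 = q
        x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
        y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
        return np.hypot(x, y)

    def sensitivity(q, eps=1e-6):
        # Cost: magnitude of the output gradient, by central finite differences.
        grad = np.array([
            (radial_distance(q + dq) - radial_distance(q - dq)) / (2 * eps)
            for dq in (np.array([eps, 0.0]), np.array([0.0, eps]))])
        return np.linalg.norm(grad)

    q = np.array([0.4, 1.1])
    print(radial_distance(q), sensitivity(q))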

The geometry for the HP5 is shown in Figure 5. The cost and output functions are shown in Figure 6. The
clusters extend along the major axis of the ellipses in the output function and along the upper and lower
θ2 boundary. An additional cluster is also formed along the upper boundary of the θ1 axis. A cluster was
selected and used to draw a circle for comparison to a standard method of keeping the tool vertical.
Results showed the expected cost reduced by half. Examples of optimal poses are shown in Figure 7.

Significance and Interpretation of Results
The goal of this work was to develop an algorithm for creating optimal inverse functions and results show
that practical accuracy and optimality can be obtained. These functions will substantially reduce the
cognitive burden on operators without a reduction in performance and can be used for real time
applications. Traditional path based optimization could not be used in real time since the future path is
unknown. The method uses a population of agents to efficiently search the input space using constrained
gradient descent techniques. Results on mathematical test problems and robotic applications have shown
favorable success.

Figures and Charts







Figure 1. The algorithm has a pre-processing phase
and an execution phase. In the preprocessing phase,
a population-based optimization forms a set of
inverse functions. The operator, or a higher-level
controller, then uses a cluster to control the
system in real time.
Figure 2. A set of example clusters (shown by the
four lines of dots) for a quadratic output function
(shown by the circular contours) and mixture of
Gaussian cost function.


Figure 3. The clusters for Figure 2 can also be
viewed in the output space. One of the four clusters
lies very close to a mixture of two others.
Figure 4. An inverse function is tested by
evaluating the cost and output for points produced
from the inverse function (black) and neighboring
points (gray). Accuracy is 1/100,000th of the
output range and neighboring points often are 5%
of the output range higher.


Figure 5. The Motoman HP5 was used to
demonstrate optimization of pose for precision.
Figure 6. The output (radial distance) and cost
(radial sensitivity) functions for the HP5.

Figure 7. Examples of poses from the optimal inverse function for tracing a circle.

Acknowledgments
The author is very grateful to the Ohio Space Grant Consortium for financial support and organization of
informative gatherings, including the Great Midwestern Space Grant Regional Conference and the NASA
Futures Forum. Professor Ordóñez is also deserving of thanks for his guidance and equipment, as is
Temesguen Messay-Kebede for assisting with implementation on the robotic arm.

References
1. A. L. Jennings and R. Ordonez, "Population Based Optimization for Variable Operating Points,"
Evolutionary Computation (CEC), 2011 IEEE Congress on, pp. 145-151, 5-8 June 2011.
doi: 10.1109/CEC.2011.5949611
Medicinal Plants and Bacteria Growth

Student Researcher: Candace A. Johnson

Advisor: Dr. Cadance Lowell

Central State University
College of Natural Sciences

Abstract
The primary focus of this research is to explore the secondary metabolites of common plants. Secondary
compounds are chemicals produced by a plant that are not required for its survival. These compounds
have physiological effects that have proven to be beneficial in the field of medicine. Many plants have
been used as a form of holistic or alternative medicine. It has already been proven that a few plants found
in Ohio and India have the ability to decrease the growth of different bacteria. The makeup of these
various plants will be analyzed to find how they affect different strains of bacteria. Our ultimate goal is to
establish a connection between medicinal plants and human infections by showing that secondary
metabolites have the ability to cure or treat these bacterial infections.

Project Objectives
To establish and classify various plants as a medicinal alternative to the more traditional antibiotics used
against bacterial infections commonly found in humans. The four strains of bacteria tested can be
found in public areas ranging from gym equipment to hospital beds. Each strain of bacteria has its
own degree of severity and could potentially be lethal. Other countries and cultures practice holistic
treatment as a primary source of health care. Studies have shown that incorporating alternative medicine
can lead to a healthier style of living as well as being cost efficient. The probability of side effects from
pharmaceutical drugs is decreased due to the lack of chemicals that react with the organic compounds of
the human body. Once the specific secondary metabolites involved are pinpointed, they can be extracted
from medicinal plants to be administered for several ailments.

Methodology
Each of the four bacteria was cultured in liquid nutrient agar media for 24 hours at 35°C. Combined in a
2.0 mL vial were 0.7 grams of fresh-cut leaves, 1.2 grams of silicon beads (0.1 mm), and 1 mL of dimethyl
sulfoxide. The vials were placed in the Bead Beater for 5 minutes to further grind the leaves into smaller
pieces, then centrifuged for 5 minutes at 13,000 rpm. Plates were streaked with the 4 strains of bacteria.
Discs dipped in dimethyl sulfoxide served as the control. The remaining discs were submerged in their
respective plant extracts before being placed on the streaked plates. All plates were incubated inverted at
37°C for 24 hours. Studies of the possible outcomes are still in progress.
Harmful Effects of 35mm Film vs. Digital Formats

Student Researcher: Phillip E. Johnson

Advisor: Dr. Donna Moore Ramsey

Cuyahoga Community College
Department of Visual Communications

Abstract
Cellulose nitrate film is an extremely dangerous, highly flammable product. Once it catches fire it is very difficult to put out, and the flames are intense, filling the air with toxic smoke and poisonous gases. The older cellulose nitrate film gets, the higher the risk becomes.

Cinematic film and negatives, as well as x-ray film, might be made from nitrate. It is the plastic that was commonly used in photographic materials such as movie stills and x-rays, and production of these materials continued into the early 1950s. Modern film is acetate- or polyester-based; it is the cellulose nitrate base that makes older films so harmful to store and use. Nitrate film begins to decompose and become fragile at temperatures as low as 38 degrees Celsius, giving off large quantities of poisonous gas that could create an explosion, with warmth and moisture accelerating the process. The flames are so intense and hot that they are hard to put out. Unlike other flammable materials, burning nitrate does not need oxygen from the air to keep burning, so dousing it with water is no guarantee and may in fact just make the smoke it produces worse.

As film ages, and if it is not stored properly in good conditions, the danger of the material becomes overwhelmingly serious. Anyone who has this type of film stock should keep it far away from any heat source, such as light bulbs, radiators, or matches. It can catch fire from the friction of film passing through a projector, or from placing negatives on a large, hot machine.

Nitrate was used for 35mm film and photographic sheets into the 1940s, and continued to be used for amateur and sky photography until about the 1950s. Most 35mm gauge film was nitrate up to about 1951 in the United Kingdom, though old stock may have been used after that time. It was also a very common practice for 35mm cinematic films to be cut down and used for still work.

Silent-era film, where the scenes occupy the whole width between the edge holes, is almost certainly cellulose nitrate. To identify this kind of stock, look for a star printed on the edge of the film; if sound was attached, with the soundtrack between the holes, it might be etched NITRATE FILM along the sides or have dashes every fourth hole. Degrading negatives can usually be identified by smell and appearance, with signs such as brown or yellowish discoloration of the images, affecting either a patch or the whole film. The film may feel soft and stick together, and can be ruined when peeled apart; a bubbling or blister-like presence may appear on the negative, possibly with yellow froth. A foul odor, ranging from faint to strongly irritating fumes, can make a person sick, and eventually the negative decomposes into a small residue.

Films stored in metal cans might be covered in a brown, powder-like residue caused by a reaction between the can and the film print. A ring of rust may form on the inside of the can, which can be a sign of deterioration and possible degrading of the film stock. Safety film is polyester-based film and marked as such, but it should still be handled with caution because early film might have been copied onto nitrate film. 16mm and 8mm amateur films are also supposed to be safety film, but if you have any doubt about the material, treat it as hazardous anyway and contact a professional for advice.

Here is a handy list of additional precautions when handling this type of outdated film. Do not allow smoking near it, and do not simply freeze it in cold storage, because freezing will not eliminate all the fume build-up. Do not dispose of it in the trash, and do not house the material in plastic. Do not forget to wash your work surface with a mix of 1 teaspoon of baking soda to one pint of water to neutralize the acid build-up, and of course stay away from any type of sparks or stuffy rooms. If you must keep this type of film, store it in a cold, dark storage vault at 40 degrees F, and warn your local fire department about the nitrate. Store it with acid-free separation sheets, clearly labeled with its new location, then place it in an archival box within a Ziploc bag in a frost-free freezer. When examining a film, handle it as little as possible, keep the cans closed otherwise, and work in a well-ventilated area where you can get out quickly in case of an emergency. Avoid contact with the eyes and skin, and wear some type of goggles to be on the safe side. Also, if inspecting film with an old-style camera or viewer, use a wind-up, hand-operated model, not a motorized unit.

If your film still has some good viewable images, contact an archive company for advice on restoring the film. If it has reached the point where everything is sticky and tacky, it might not be restorable, but a historical organization can assess the film and its importance once it sees it for itself. If the National Museum of Photography does not want the damaged footage, they can refer you to a laboratory that will safely transfer the images into a good restoration for you.

If the film has rotted so much that it needs to be thrown away, it must be discarded safely. Contact your local environmental health agency for the proper handling procedure, since it is dangerous waste. You may ask yourself what to do in the meantime while waiting for the film to be picked up: because of the high risk of fire and toxic gases, the safest course of action is not to store the nitrate film at all, as doing so might void the insurance on your property.

When you shoot movies and video with products made by Canon, Sony, Red One, Arri, etc., you move at blazing speeds while shooting, the formats are not flammable, and you can discard takes as often as you like and re-record. Billions of dollars are saved and can be reinvested, and the new formats are creating younger filmmakers than ever before, opening doors on a global platform that never used to exist. Now whatever idea comes to light can be manifested on screen without creating hazards for your crew or yourself, and you do not have to worry about an explosion or fire caused by an outdated medium.

Long ago the United States Navy shot a short film for professional projectionists using footage of the controlled ignition of nitrate, which kept burning even when fully submerged in water, and the London Underground forbade the transport of movie film until safety stock was adopted. Cinema fires caused the 1926 tragedy in County Limerick, which took 48 lives, and the 1929 disaster which killed 69 children. Projection rooms are required to have metal coverings for their windows in case a fire tries to spread into the auditorium. Three layers of protection are needed for storage: putting films in sleeves, the sleeves in a box, and the box on shelves or in a cabinet. Microfilm should be kept in unsealed containers, and acid-free paper will slow the deterioration that is caused by the formation of acids.

Healthy nitrate smells sweet, whereas rotting nitrate smells like rotting bananas or stinky feet. Film stored with camphor smells like mothballs, and decomposing acetate film smells like vinegar. Preservation is very important, but how film is saved matters even more, because we cannot afford more deaths and health problems simply because a few individuals want to keep a certain medium around.

The Effects of Cg Shifting During Flight

Student Researcher: Nicholas S. Jones

Advisor: Jed E. Marquart, Ph.D., P.E.

Ohio Northern University
Mechanical Engineering

Abstract
During this research project, the effects of shifting the center of gravity of an aircraft during flight were investigated. The overall flight characteristics of the aircraft are relevant during a perching maneuver, when the aircraft must decrease elevation and pitch up quickly in order to land the way a bird would on a wire or fence. If a weight inside the aircraft were shifted quickly to the rear during the process of pitching up, it could greatly increase the effectiveness of the overall perching maneuver: moving this weight toward the rear of the aircraft shifts the Cg backwards. This Cg shift can also be coupled with an elevator deflection to achieve greater results, and the distance the Cg travels can be computed easily through simple calculations. Tests were run to examine the effectiveness of the Cg shift using an educational-size wind tunnel, and forces and pitching moments were recorded for different variations of the test. A basic wind tunnel model of an airfoil was used as the testing device; to simulate the Cg shift, quarters were attached to the wing at various positions. The tests were run at a range of speeds and angles of attack, and the wind tunnel model's adjustable elevator was tested in different positions. The data were then analyzed through graphs to determine the effectiveness of the Cg shift for the various runs.

Project Objectives
The objective of this experiment is to examine different variations of Cg shifting and to compare the overall effectiveness of each Cg shift against the baseline tests. Of the 45 different Cg shifting variations, those producing the greatest deviation in the model's coefficient of moment from the baseline are presented as the most effective shifts. The data are helpful in studying the behavior of birds during flight and can be used in air vehicle development: micro air vehicle programs working to recreate the perching maneuver can use the data to predict how an aircraft might react under certain perching conditions. Dimensionless parameters were used throughout the data analysis so that the information could be used to calculate predictions for different models under different conditions.

Methodology Used
The wind tunnel model was tested at five different angles of attack, three different elevator deflections, and three different added-weight positions with two different weights, all at air speeds varying from 40 to 80 mph. The testing was done in an Aerolab open-circuit wind tunnel and all data were collected in Microsoft Excel. Once testing was completed for all of the different configurations of the wing, the data were paired against the baseline tests and graphed to show the variation of the coefficient of moment on the model. The graphs show the coefficient of moment versus the corresponding Reynolds number.
The weights used to shift the Cg of the model were attached to the wing in three different places: the quarter chord, the half chord, and the three-quarter chord of the model. The weights were U.S. quarters weighing approximately 5.7 grams each. The first run of tests used one quarter on each end of the airfoil, tested at the different locations; another quarter was then added to each side of the model and the testing was run again. The overall Cg of the model for the different weight positions can be calculated using Equation 1:

X̄ = Σ(mᵢxᵢ) / Σmᵢ     (1)

X̄ is the distance of the Cg from the leading edge of the model, mᵢ is each of the masses, and xᵢ is the distance of the corresponding mass from the leading edge. The model itself must also be included in the equation, with its preliminary Cg as its x distance.
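
A minimal Python sketch of Equation 1, using the quoted quarter mass; the bare-model mass and Cg location are hypothetical values for illustration only.

def cg_from_leading_edge(components):
    # components: list of (mass in g, distance x in mm from the leading edge)
    total_mass = sum(m for m, _ in components)
    return sum(m * x for m, x in components) / total_mass

chord = 200.0                          # model chord in mm (hypothetical)
model = [(250.0, 0.30 * chord)]        # bare model: 250 g with its Cg at 30% chord (assumed)
quarters = [(5.7, 0.75 * chord)] * 2   # one 5.7 g quarter per side at the three-quarter chord

print(f"Cg = {cg_from_leading_edge(model + quarters):.1f} mm aft of the leading edge")
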
Results Obtained
After reviewing the data in the graphs, it can be seen in the most basic form of the testing, shown in Figure 1, that there is about a 12% increase in the coefficient of moment at a Reynolds number of 140,000, where the values start to level off. Comparing the remaining graphs of coefficient of moment versus Reynolds number shows that the most effective Cg shifts occurred at negative ten degrees angle of attack. At ten degrees angle of attack and thirty degrees elevator deflection the greatest Cg shifting effect is seen: the coefficient of moment increased by almost 800% at a Reynolds number of 140,000, as shown in Figure 2.


Figure 1. Zero degrees angle of attack.
Figure 2. Ten degrees negative angle of attack.

Reference
1. Hibbeler, R. C. "Center of Mass of a Body." Engineering Mechanics: Statics. Pearson Prentice Hall, 2010. p. 449.
Oncology Therapeutics: Hyperthermia Using Self-Heating Nanoparticles

Student Researcher: Carmen Z. Kakish

Advisor: Dr. Alexis Abramson

Case Western Reserve University
Mechanical and Aerospace Engineering and Biomedical Engineering

Abstract
Micro-particles composed of Poly-Lactic-Co-Glycolic Acid (PLGA) coated calcium chloride (CaCl₂) salt
are being created for the development of an innovative form of cancer treatment. The proposed treatment
involves the in situ delivery of the micro-particles into the intracellular space of a cancerous cell. After
extended exposure to the intracellular space of the cancerous cell, the PLGA coating of the micro-
particles will dissolve allowing for the exothermic reaction of calcium chloride with water. The release of
heat from the reaction will raise the temperature of the cell beyond viable means; thus, causing death of
the cancerous cell. Baseline tests measuring heat release and temperature change of the dissolution of
calcium chloride with water were conducted using the method of calorimetry in order to understand the
thermal behavior of particles. Further analysis of these results should give insight into the amount of
calcium chloride embedded in the particle. The next step in the research is to test the thermal behavior of
various formulas of particles in order to determine the ideal concentration and type of particle for the
project to proceed.

Project Objectives
Current cancer treatments, such as chemotherapy, target rapidly dividing cells in the body, including both cancer cells and normal cells [1]. Side effects such as hair loss, nausea, vomiting, diarrhea, constipation, and a decrease in blood cell count are caused by the treatment's attack on normal cells [1]. Another form of cancer treatment involves elevating the temperature of an individual in order to kill or hinder cancer cells; this is known as hyperthermia treatment [2]. Compared to the side effects of chemotherapy, hyperthermia treatments have the benefit of causing minimal injury to a patient [2]. As of now, hyperthermia treatment can only be administered by external methods [2]. For example, external applicators placed around the area of the tumor and regional blood perfusion techniques are used to raise the temperature of a tumor site [2]. In an attempt to create an internal hyperthermia treatment, the research groups operating under Dr. Alexis Abramson and Dr. Agata Exner at Case Western Reserve University are developing a drug that releases heat upon contact with a cancer cell in the body. Note that due to concerns regarding intellectual property, certain details of this project have been omitted from this report.

Preliminary research must be conducted to determine the feasibility of such a drug. To demonstrate proof of concept, micro-particles were synthesized by encapsulating CaCl₂ in PLGA (Figure 1). The PLGA essentially acts as protection for the CaCl₂, enabling a controlled exothermic reaction. By further controlling the size and physical characteristics of these particles, future formulations can be specifically designed for applications such as cancer treatment. To test the initial particle formulations, they were placed in a 2M sodium hydroxide (NaOH) solution to record the anticipated temperature rise. This experimental procedure will help determine whether the particles dissolve in an aqueous solution, whether the dissolution of the particles causes a temperature rise, and the probable number of grams of CaCl₂ encapsulated in the particles, corresponding to the resulting temperature rise.

Methodology
The principle of calorimetry was used for all of the temperature experiments. Initially, baseline measurements of the temperature change caused by the dissolution of CaCl₂ in water were obtained, to be used as a means of determining how much CaCl₂ is in the particles. The same experimental setup was then used to determine the temperature increase resulting from dissolution of the micro-particles. The apparatus used for these experiments is shown in Figure 2, and Figure 3 shows the theoretical and experimental results. The baseline measurements were conducted in a vacuum oven set at 37 °C, in order to mimic human body temperature, and also at room temperature for comparison. Four different concentrations of CaCl₂ in water were tested to give a broad range of temperature change data. Two K-type thermocouples were inserted in the plastic cup, and temperature measurements were taken at a sampling rate of 1 kHz by the Labview 8.5 computer program. The temperature change for each experiment was calculated by subtracting the equilibrated water temperature before the addition of the CaCl₂ from the resulting peak temperature after the addition.
The experimental data were compared to a theoretical calculation using the following equations:

Q = m_CaCl₂ · ΔH     (1)

Q = ΔU = m_total · c_p · ΔT     (2)

Q is the heat generated by the salt during the exothermic reaction with the aqueous environment, ΔU is the internal energy gained by the entire solution, ΔH is the known heat of dissolution of CaCl₂ (732.5 J/g), c_p is the specific heat capacity, by weight percent, of the entire solution, ΔT is the temperature change, and m_total is the mass of CaCl₂ and water together. Since all quantities in Equation (2) are known, ΔT, the theoretically expected temperature rise, can be calculated and compared with the experimental data. The difference in temperature change between the theoretical and experimental data is due to heat losses incurred during the experiment; the experimental data match the theoretical data closely enough to serve as baseline data for later comparison with the particles.
Two batches of particles were tested; Batch A and Batch B used the same amount of PLGA, but Batch A had twice the amount of CaCl₂. The particles were dissolved in 2M NaOH rather than water because this allowed for quicker dissolution; the fast dissolution time results in a more measurable and easily discernible temperature change. Future experiments will be conducted in water or saline to more closely mimic the environment in the body. The temperature tests were conducted at three different concentrations for each batch. To reduce measurement uncertainties that arise during use of the vacuum oven, these initial measurements were conducted at room temperature. Figure 4 shows the average temperature change of the various trials at each concentration of Batch A versus Batch B.

Results Obtained
Figure 3 shows the baseline measurements of CaCl₂ dissolved in water as a function of concentration (grams of CaCl₂ per milliliter of water). As expected, the temperature rise increases linearly with concentration, and the experimental data fall within 23.78% of the theoretical calculation. This discrepancy is due to some extraneous heat loss to the environment.

Figure 4 shows that appreciable temperature changes were obtained by dissolving both Batch A and Batch B particles in NaOH. Batch A, as expected, had a greater average temperature change than Batch B, since the concentration of CaCl₂ was higher in Batch A. The concentration of CaCl₂ in the particles can be estimated by comparing the temperature change of the particles at each concentration (Figure 4) to the experimental baseline temperature change of CaCl₂ in water (Figure 3). Table 1 shows the approximate amount of CaCl₂ for the Batch A and Batch B trials at each concentration. Preliminary Ca²⁺ assay experiments indicate that in actuality there appears to be less CaCl₂ in the particles than indicated by this analysis; further investigation is ongoing to explore what might be occurring.

Conclusions
These preliminary investigations reveal that CaCl₂ encapsulated in PLGA micro-particles can be formed and that, when introduced into a solution of NaOH, an appreciable temperature rise results, which is a function of concentration. Further research must be conducted to determine the precise amount of CaCl₂ actually encapsulated in the PLGA, how to control the dissolution process and the size of the particles, and how to maximize the resulting temperature. Through further refinement of the particle synthesis, a new mechanism for hyperthermic cancer treatment may be realized.




Figures




Figure 1. Scanning electron microscope image of CaCl₂ encapsulated in PLGA.

Figure 2. Calorimeter apparatus used for
conducting temperature measurement tests. The
cup is where the reaction takes place. It is
surrounded by insulation to reduce heat loss to the
environment.


Figure 3. Theoretical ΔT vs. experimental ΔT of CaCl₂ in water. The difference is due to heat losses during the experiment, but the experimental data match the theoretical data closely.


Figure 4. Average temperature change of particles at various concentrations. Batch A particles are shown
to have a higher average temperature change at each concentration than Batch B.
[Figure 3 chart: Temperature Change (°C) vs. Concentration (g/ml); theoretical and experimental series.]
[Figure 4 chart: Change in Temperature (°C) vs. Concentration (g/ml H₂O); Batch A and Batch B data with linear fits.]
[Figure 2 apparatus labels: computer, insulation, cup, thermocouples, vacuum oven, glass beaker.]
Table 1. Estimate of the amount of CaCl₂ embedded in PLGA for each Batch A and Batch B concentration. The values were found by comparing the temperature change of each trial (Figure 4) to the experimental baseline temperature change of CaCl₂ in water (Figure 3).

Conc. of Batch A (g particles/ml H₂O) | ΔT of Batch A trial (°C) | Est. conc. of CaCl₂ in Batch A trial (g/ml) | Conc. of Batch B (g particles/ml H₂O) | ΔT of Batch B trial (°C) | Est. conc. of CaCl₂ in Batch B trial (g/ml)
0.0136 | 1.61 | 0.012 | 0.0133 | 1.086 | 0.007
0.0196 | 2.52 | 0.019 | 0.0196 | 1.30 | 0.011
0.0265 | 3.012 | 0.023 | 0.0261 | 2.064 | 0.015
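
A plausible sketch of how the Table 1 estimates can be produced (the authors' actual procedure is not spelled out): invert the baseline concentration-to-ΔT curve by interpolation. The baseline points below are illustrative, not the measured Figure 3 data.

import numpy as np

# baseline calibration: (concentration g/ml, temperature change deg C) -- illustrative
baseline_conc = np.array([0.01, 0.02, 0.03, 0.04])
baseline_dt = np.array([1.3, 2.6, 4.0, 5.3])

def estimate_cacl2_concentration(particle_dt):
    # invert the monotonic baseline curve by linear interpolation
    return float(np.interp(particle_dt, baseline_dt, baseline_conc))

# e.g., the Batch A trial with a 1.61 deg C rise from Table 1
print(f"estimated CaCl2 concentration: {estimate_cacl2_concentration(1.61):.3f} g/ml")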

Acknowledgments
I would like to thank Dr. Alexis Abramson and Dr. Ananth Iyengar for their mentorship throughout this
project. I would also like to extend my thanks to the Exner lab, which includes Dr. Agata Exner, Dr. Luis
Solorio, and Ashlei Beiswenger, for providing the particles used to conduct this research as well as for
their support and ideas.

References
1. Gerber, David E. (2008). Targeted Therapies: A New Generation of Cancer Treatments.
American Family Physician. 77 (3), 311-319.
2. Hyperthermia in Cancer Treatment. (2011). National Cancer Institute. Retrieved from:
http://www.cancer.gov/cancertopics/factsheet/therapy/hyperthermia#R1
Design And Testing Of An Adaptive Magnetorheological Elastomers Vibration Isolation System

Student Researcher: Michael P. Karnes

Advisor: Dr. Jeong-Hoi Koo

Miami University
Department of Mechanical Engineering

Abstract
Unwanted vibrations cause many problems, from loss of precision in a laboratory setting to loss of life in the event of earthquakes. Various vibration isolation techniques and devices have been studied to mitigate such harmful vibrations. In the case of earthquake-induced vibrations, passive base isolation systems have been implemented in buildings to protect the structures and their contents. In an effort to enhance the passive base isolation system, this project investigated an adaptive base isolation system based on magnetorheological elastomers (MREs). The primary objectives of this project were to design, construct, and test a prototype adaptive base isolation system. The focus of the project has been the design of the magnetic actuation device for the MRE isolation system and the validation tests of the system. The results show the feasibility of varying the dynamic characteristics of an MRE-based isolation system using external magnetic fields generated by the magnetic actuation device (i.e., a hybrid of electromagnets and permanent magnets).

Project Objectives
Precision is only becoming more important as technology moves to smaller scales, and vibrations limit the achievable precision. This project proposes an adaptive base isolation system that would protect against precision-limiting vibrations, improving laboratory capabilities to work at smaller scales. Vibrations created by machinery and the ground continually reverberate through buildings. Steps can be taken to reduce the magnitude of these vibrations in certain areas of operation of a building: base isolation systems can be installed under a building's foundation to reduce the effect of ground vibrations, and machinery can be altered to reduce the vibrations it produces. These methods of vibration reduction are not practical in most cases, since only certain pieces of equipment or areas of operation need protection, so a solution that addresses these specific areas would be much more effective. An example of such a solution that currently exists is a passive base isolation system, which decouples the equipment from the ground and reduces the transmissibility of the vibration. These base isolation systems are rubber pads placed between the ground and the isolated structure. However, if the stiffness of the rubber is not properly tuned, the transmissibility of the vibrations can become even worse. The proposed adaptive base isolation system would have a tunable stiffness that could be quickly set to most effectively reduce the transmissibility of the vibrations, even if the frequency of the vibrational excitation shifts.

In many countries, base isolation systems are currently implemented to protect structures from ground vibrations. As seen in Figure 1, these systems operate by isolating, or decoupling, the structure from the ground to reduce its natural frequency, rendering it less susceptible to excitation.


Figure 1. Diagram of how a base isolation system decouples a structure from the ground
One example of a base isolation system currently in use is the lead rubber bearing, or LRB, as shown in Figure 2. LRBs are columns consisting of layers of rubber and steel around a lead core, placed between the foundation and the ground, and designed to operate in shear, deforming horizontally. This type of base isolation system is considered passive, meaning that the behavior of the system cannot change. As a result, a unique base isolation system has to be designed for each scenario, and if the vibrational excitation changed, a new base isolation system would have to be implemented. With an adaptive system, a single system could be used in various vibrational scenarios and remain effective when the scenario changes.

Figure 2. Diagram of a lead rubber bearing and the pattern of deformation

Previous simulation work (Usman et al.) shows that by adjusting the stiffness of the base isolation system, the displacement and acceleration of the structure can be minimized. One way of achieving a variable-stiffness base isolation is to use magnetorheological elastomers (MREs). MREs are composed of silicone rubber, carbonyl iron particles, and a small amount of silicone oil, and have the ability to quickly and reversibly change stiffness with the application of magnetic fields (Kallio 2005). The stiffness of the MRE material is described by Equation 1, where G is the shear modulus, A is the shear area, h is the thickness, and ΔG is the change in the shear modulus caused by the magnetic field:

k = (G + ΔG) · A / h     Equation 1
This MRE system is considered adaptive because the stiffness is changed with an electromagnet and a control system. It can also be considered semi-active because it would still operate like an LRB in the case of a power outage.

Because MRE materials have structural characteristics like natural rubbers, this project created a base isolation system similar to an LRB using MRE material. The MRE material was placed between two electromagnets, so that the magnetic field experienced by the MRE could be controlled through the current in the electromagnets. The electromagnets were designed and modeled using FEMM simulations. Two electromagnet pairs were fabricated and tested on a shake table excited by a shaker with a frequency-generator-controlled output. The accelerations of the shake table surface and of the platform were measured with accelerometers to determine the transmissibility of the system and to test the adaptive capability of the MRE material.

Methodology Used
Magnetic Actuation System Design
Two components needed to be designed for this project: the magnetic field applicator and the MRE sample. The magnetic field applicator had to be strong enough to affect the stiffness of the MRE and easily controlled. The required magnetic flux density was determined to be 0 to 0.25 T (Kallio 2005), because above this range the effects on stiffness diminish. In order to reduce the amount of current required, a permanent magnet would be used to create a nominal flux density of 0.15 T.

Two design directions were considered: first, an electromagnet, using current to control magnetic field strength; second, a system that controls magnetic field strength by controlling the distance between two permanent magnets. An electromagnet was selected as the more viable path because it could be more accurately controlled through current than permanent magnets could be through distance.

Magnetic flux travels in a circuit-like pattern, and the more permeable that circuit is, the higher the magnitude of the magnetic field. Two options were considered with this factor in mind: a c-shaped electromagnet and a tubular electromagnet, as seen in Figure 3.

Figure 3. Diagram showing tubular and c-shaped electromagnets and the difference in support methods

A tubular design was selected because it provides direct vertical support to the platform, whereas the c-shape would cause a bending moment on the MRE material; the tubular design is also the lighter option. As seen in Figure 4, the tubular electromagnet has four pieces: the base, the core, the shell, and the wire coil. The goal of the electromagnet design was to create the lightest magnet that could provide an adequate flux density. This required balancing the permeability of the circuit path against the weight of the magnet, which constrained the dimensions of the base, core, and tube. The dimensions of the wire coil were constrained by the core outer diameter and the tubing inner diameter, and the overall size of the electromagnet was constrained by the shake table weight limit of 50 lbs.

Figure 4. Diagram of tubular electromagnet components

Using FEMM, multiple designs were tried; experimenting with each dimension gave insight into its effect. A design was produced that met the given requirements, as shown in Figure 5. The plot shows the magnetic flux density at the middle of the gap from the center of the magnet to the outer edge (i.e., across the radius), and the distribution of the magnetic flux is shown by the contour lines in the figure.


Figure 5. Diagram showing the FEMM model and a plot showing strength of designed magnet at the
center of the gap, across the radius

The design used a tube with an outer diameter of 2 in. and an inner diameter of 1.5 in. Each electromagnet set, top and bottom, weighed 8 lbs. The bottom had a height of 3 in. and the top a height of 1.5 in. The shorter height of the top portion decreased the bending moment on the MRE material, as shown in Figure 6. This was possible because the magnetic flux density was found to depend heavily on the total number of turns in the electromagnet, not on where the turns were located, so the height of the top portion could be reduced by increasing the height of the bottom portion.

Figure 6. Diagram showing the effect of shortening the top tubular electromagnet

As seen in Figure 7, the diameter of the core was reduced in the area where the coils would be wrapped, to increase the number of turns that could fit. Neodymium 32 permanent magnets, 0.25 in. in diameter by 0.25 in. thick, were inserted in the slot in the surface of the core. The remaining 0.25 in. of the slot was used to secure the aluminum plate to which the sample was glued.

Figure 7. Diagram showing the geometry of the designed tubular electromagnet

MRE Bearing Design
Two options were considered for the MRE sample: a sample across the full diameter of the electromagnet, and a sample across only the diameter of the core, as seen in Figure 8. Having the sample across only the core allowed a more uniform magnetic field to pass through the sample, but did not provide the stability necessary for testing. FEMM simulations showed that having an MRE sample across the entire diameter of the electromagnet did not significantly diminish the magnitude of the flux density, so it was decided that this configuration would be used.

Figure 8. Diagram showing possible MRE sample designs

In order to ensure a complete and secure connection, the MRE sample was glued to a thin aluminum plate with a 0.25 in. long, 0.25 in. diameter circular protrusion that fits into the slot in the core, as seen in Figure 9. These plates were created by facing 2 in. aluminum rod on the lathe and then turning a 0.25 in. length down to a 0.25 in. diameter.

Figure 9. Diagram showing the geometry of the aluminum plate used to attach the sample to the
electromagnet
Magnetic Actuation System Fabrication
The electromagnets were manufactured in multiple stages. First, the base of each magnet was cut into 4 in. by 4 in. squares from 0.25 in. steel plate, and 9/32 in. holes were drilled at each corner of each plate so the bases could be attached to the shake table and the platform. The core was then created by cutting and facing 0.5 in. and 1 in. diameter steel rod. The cut steel rods were welded to each other and to the base. The top of each core was then faced using a milling process and reduced to match the height of the corresponding part, to ensure that each face was level and of equal height. The slots in each core were then drilled on a mill using a 17/32 in. bit to a depth of 0.5 in. Each core was wrapped in electrical tape to protect the wire from being damaged, and 18 AWG wire was then wrapped around the core. This was done on a lathe set at a consistent speed, and the time was recorded for the process, ensuring that a uniform number of coils was applied. Next, the 2 in. diameter steel tubing was cut 0.125 in. longer than the desired length, then faced and brought to its desired length using a turning process. Using the mill, a 0.25 in. wide slot was cut 0.25 in. deep into one side of each tube, to give room to thread the wire between the base and the tube so that a power source could be connected. The tube was then placed over the core and glued to the base using welder's glue, with the two ends of the wire coming through the slot in the tube described above.

MRE Fabrication
The mold for the MRE sample was created by cutting a 2 in. diameter through-hole in a 2 in. thick polyurethane block with a hole saw. This mold was clamped to a table surface, with a 0.25 in. sheet of polyurethane as its bottom. 120 g of MRE material with 30% iron particles was prepared using HIII silicone. After thorough mixing, the still-liquid mixture was poured into the mold, which had been prepped with a mold release spray. The MRE material was allowed to cure for 24 hours. Then, using the mold as a guide, the MRE material was cut into 0.5 in. thick disks using a fully extended utility knife blade; this took several attempts to achieve an even surface, due to the elasticity of the MRE material. The MRE disks were then glued to the aluminum plates, and the aluminum plates were glued to the electromagnets.

Test Setup
The adaptive base isolation system was tested on the shake table. The shake table and shaker were clamped to the counter surface, as seen in Figure 10, and the shake table was connected to the shaker using a thin threaded metal rod called a stinger. The shaker was controlled by a function generator, with an amplifier used to increase the signal strength for the shaker. The accelerations of the shake table surface and of the platform mounted on top of the base isolation were measured using accelerometers. The accelerometer signals were recorded using Labview Signal Express, which created a TDMS file with the time and the acceleration from each accelerometer. The TDMS file was then exported to Excel, from which point the data could be analyzed.
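
The transmissibility computation itself is simple; the Python sketch below (a stand-in for the Excel analysis, using synthetic signals) shows one way the exported accelerometer data could be reduced.

import numpy as np

def rms(signal):
    return np.sqrt(np.mean(np.square(signal)))

def transmissibility(base_accel, platform_accel):
    # ratio of isolated-platform response to shake-table input
    return rms(platform_accel) / rms(base_accel)

# synthetic 25 Hz signals standing in for the two accelerometer channels
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
base = np.sin(2 * np.pi * 25 * t)             # shake table surface
platform = 0.6 * np.sin(2 * np.pi * 25 * t)   # attenuated platform response
print(f"T = {transmissibility(base, platform):.2f}")  # prints T = 0.60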

Figure 10. Picture showing test setup with shake table, shaker, function generator, amplifier, and DAQ
system
The adaptive base isolation system was tested using a manual frequency sweep with 5 Hz intervals. This was done at currents of -5 A (meaning 5 A opposing the permanent magnets), 0 A, and 5 A. Based on the results, 32.5 Hz was also included in the data because it was seen as a crucial point.

Results Obtained
Validation Results for Electromagnet Design
Each electromagnet was labeled with a letter: A and B are bottom electromagnets, and C and D are top electromagnets. To validate the electromagnets, each pair was tested at 0 A and 5 A. This was done by creating a 0.5 in. gap between the pair of magnets; the gaussmeter probe was then attached to a stand so that it sat in the middle of the gap. By moving the gaussmeter until a maximum flux value was found near the center, the test position was determined, and this position was kept for the remainder of the test. When characterizing the electromagnets, it was found that the experimental values were much lower than those of the FEMM model, as seen in Table 1. This was most likely caused by differences between the manufactured electromagnets and the theoretical model: an electromagnet's strength is highly dependent upon the number of wire turns and their alignment, and in the theoretical model each turn was perfectly aligned, allowing for more turns and more effective turns. Future electromagnet designs will have to include a factor accounting for the difference between theoretical and practical models.

Table 1. Modeled and measured magnetic flux density for the fabricated magnet pairs.

Current (A) | FEMM (T) | Pair A-C (T) | Pair B-D (T)
-5 | 0.078 | 0.015 | 0.015
0 | 0.0000007 | 0.005 | 0.005
5 | 0.08 | 0.026 | 0.026

Preliminary Evaluation Results
To evaluate the dynamic characteristics of the prototype, a series of harmonic vibration tests was performed with discretely varying input loading frequency and current. Figure 11 shows the transmissibility plots (the ratio of the base input accelerations to the isolated platform accelerations) for seven frequencies (15 Hz, 20 Hz, 25 Hz, 26 Hz, 30 Hz, 35 Hz, and 40 Hz) and for the applied currents (0 A, the nominal magnetic field, and 5 A, the maximum magnetic field considered in this study). The figure shows that the resonant peaks occur around 25 Hz, but the peak location varies with the current, indicating that the dynamic characteristics (i.e., stiffness) have been altered. Moreover, the magnitude of the peak changes between 0 A and 5 A; the change in damping is thought to contribute to the peak magnitude changes. Further testing will be conducted to better assess the dynamic characteristics of the MRE system and to evaluate the effectiveness of the system in reducing structural vibrations by adding a scaled structure on top of the isolated platform.

Figure 11. Plot showing the transmissibility of the adaptive base isolation system with varying current

Conclusion
The primary objectives of this project were to design and construct a prototype adaptive base isolation system using MRE material and a magnetic actuation device. By being adaptive, the base isolation system would be better able to protect a structure from unwanted vibrations. Using the ability of MREs to change stiffness, a system modeled after a lead rubber bearing was designed. A magnetic actuation device was iteratively designed using FEMM models, balancing strength and weight; the strength of the fabricated electromagnet was experimentally found to be much weaker than the numerical model predicted. A preliminary evaluation of the designed base isolation system was conducted, and the results showed that the resonant peaks changed location with varying current. From these results it can be concluded that the characteristics of the MRE material were successfully altered, suggesting that an adaptive base isolation system is feasible.

Works Cited
1. Dong, Xiao-min, Miao Yu, Chang-rong Liao, and Wei-min Chen. "A New Variable Stiffness
Absorber Based on Magneto-rheological Elastomer." Transactions of Nonferrous Metals Society of
China 19 (2009): S611-615. Print
2. Kallio, Marke. The elastic and damping properties of magnetorheological elastomers. Espoo 2005.
VTT Publications 565. 146 p.
3. Usman, M., S. H. Sung, D. D. Jang, H. J. Jung, and J. H. Koo. "Numerical Investigation of Smart
Base Isolation System Employing MR Elastomer." Journal of Physics: Conference Series 149 (2009):
012099. Print.

Doping of Phthalocyanines for Use As Low Temperature Thermoelectric Materials

Student Researcher: Evan R. Kemp¹,²

Advisor: Dr. Douglas S. Dudis¹

¹Air Force Research Laboratory
Materials and Manufacturing Directorate
Thermal Sciences and Materials Branch
Wright-Patterson Air Force Base, OH 45433

²University of Dayton
Department of Chemical Engineering

Abstract
Thermoelectric materials have found their way into space applications, most notably in radioisotope thermoelectric generators (RTGs) providing upwards of 480 W of power. RTGs have proven to be both reliable and long-lasting power sources, with current operation times of 40 years. The reliability of RTGs is representative of thermoelectric materials in general, where the lack of moving parts reduces the need for maintenance and the opportunity for failure. Iodine-doped phthalocyanines are promising low temperature thermoelectric materials due to their high electrical conductivity of ~500 Ω⁻¹ cm⁻¹ at room temperature, which increases up to 2 times that value as it reaches a weak maximum near 30 K. This material has the potential to be used for sensor cooling on military and scientific space missions, which require cryogenic cooling for IR, γ-ray, and x-ray sensors. The purpose of this project is to increase the electrical conductivity of phthalocyanine through a combination of doping and purification of the material, while attempting to minimize the effects on the Seebeck coefficient and thermal conductivity.

The thermoelectric potential of undoped copper phthalocyanine is hindered greatly by its low electrical conductivity of ~10⁻⁷ Ω⁻¹ cm⁻¹ at room temperature. Iodine doping alone has been shown to increase the electrical conductivity of phthalocyanine by 10 to 11 orders of magnitude, and a decrease in the overall thermal conductivity has been observed with doping. Further analysis of the doping of the various metal phthalocyanines will be explored, along with the amount of iodine doping which occurs.

Project Objectives
Cryogenic cooling for IR, γ-ray, and x-ray sensors is required for many military applications and scientific space missions. Further development of efficient, low temperature thermoelectric materials could lead to a revolution in cryogenic cooling for such applications. Thermoelectric materials are known for their reliability because thermoelectric devices are solid state devices that are vibration-free, reconfigurable, and independent of both orientation and g-forces. Thermoelectric devices could greatly increase scientific capability by replacing bulky, unreliable dewar cooling and magnetic refrigeration, with the potential to save both space and weight in space applications. The dimensionless ZT value is the figure of merit for thermoelectric materials, where:

ZT = σS²T / k     (1)

Each variable in Equation 1 corresponds to a different property of the thermoelectric material: σ is the electrical conductivity, S is the Seebeck coefficient, T is the absolute temperature, and k is the thermal conductivity. The value of ZT corresponds to the overall Carnot efficiency of the thermoelectric material; the higher the ZT, the more efficient the energy conversion. The difficulty in optimizing these parameters is that most are interrelated in most materials. State of the art thermoelectric materials have set the bar at a ZT of around 1; however, quasi-one-dimensional organic crystals are predicted to reach ZT~20.
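
As a quick worked example of Equation 1 (a sketch, not a reported result), the figures quoted later in this report for CuPc, a Seebeck coefficient of ~300 µV/K and a thermal conductivity of ~0.25 W/m·K, give the following ZT values at the undoped and doped conductivities:

def zt(sigma_s_per_m, seebeck_v_per_k, temp_k, kappa_w_per_m_k):
    # ZT = sigma * S^2 * T / k, all in SI units
    return sigma_s_per_m * seebeck_v_per_k ** 2 * temp_k / kappa_w_per_m_k

S = 300e-6   # Seebeck coefficient, V/K
T = 300.0    # room temperature, K
k = 0.25     # thermal conductivity, W/(m*K)
for sigma_ohm_cm in (1e-7, 100.0):       # undoped vs. iodine-doped, in ohm^-1 cm^-1
    sigma_si = sigma_ohm_cm * 100.0      # convert to S/m
    print(f"sigma = {sigma_ohm_cm:g} ohm^-1 cm^-1  ->  ZT = {zt(sigma_si, S, T, k):.2g}")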


Metallophthalocyanines have the potential to fit the quasi-one-dimensional organic crystal prediction because phthalocyanines stack along a single axis. However, copper phthalocyanine (CuPc) thin films, as shown in Figure 1 and Figure 2, have a Seebeck coefficient of ~300 µV/K and an electrical resistivity of ~10^6.75 Ω·cm at room temperature. The Seebeck coefficient of CuPc has a temperature dependence with a maximum around 400 K, while the electrical resistivity increases as temperature decreases [1,2]. Along with a thermal conductivity of ~0.25 W/mK, copper phthalocyanine has an extremely low ZT at room temperature due to its low electrical conductivity. Improving the overall electrical conductivity while minimizing the effects on the Seebeck coefficient and thermal conductivity could increase the ZT of CuPc to around 1 and match the state of the art materials. In Figure 3, it can be seen that doping CuPc with iodine has the potential to increase the electrical conductivity to around 100 Ω⁻¹ cm⁻¹, an increase of 10 orders of magnitude from the base material, with the potential to reach upwards of 1000 Ω⁻¹ cm⁻¹ by increasing the purity and crystallinity of the material [1,2].


Methodology Used
The purpose of this project is to develop a purification and doping method for a CuPc thin film, since a thin film is more likely than bulk pressed powder to resemble single crystal characteristics, and to characterize the resulting increase in electrical conductivity.

CuPc was prepared fresh, since the commercial material was contaminated with ~7% chlorine. Fresh CuPc was prepared by reacting 4 molar equivalents of dicyanobenzene with 1 molar equivalent of copper (II) acetate monohydrate in refluxing dimethylformamide at 165 °C for 48 hours. The resulting product was vacuum filtered, then Soxhlet extracted with water for 24 hours to remove any unreacted copper acetate. Further purification and removal of excess dicyanobenzene was accomplished by several sublimations in a three zone tube furnace at a pressure of <10 mTorr and a temperature of 500 °C, until no excess dicyanobenzene was recovered.

CuPc thin film deposition and doping were conducted in a three zone tube furnace. The film was deposited using thermal evaporation at a pressure of ~100 mTorr; the center sublimation zone was at 450 °C and the copper substrate at 350 °C while argon was passed through the tube furnace. The total deposition time was around 2 hours before the entire system was cooled down. Solid iodine was then introduced in a ceramic boat at the end opposite the vacuum line while the substrate was held at a constant temperature; the iodine doping took place at 200 °C with a total doping time of 5 minutes. The short doping time was due to the formation of copper iodide on the substrate from the reaction between the copper substrate and the iodine atmosphere; an electrically and thermally conducting substrate was necessary for electrical conductivity measurements of the film on a PSM system.

In the PSM measurement of electrical conductivity, the sample is part of an AC circuit, as shown in Figure 4. The supplied electrical current is measured across a series resistor, and the voltage between the tip and the reference electrode is measured; this voltage is determined by the electrical conductivity and the dimensions of the current path. Since the geometry of the sample and current path is known, the resistivity can be calculated from Equation 2:

ρ = (U / I) · (A / l)     (2)

where U is the measured voltage, I the measured current, A the cross-sectional area of the current path, and l its length.
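
A small sketch of the Equation 2 bookkeeping; the voltage, current, and current-path geometry here are hypothetical numbers, not PSM readings from this work.

def resistivity(voltage_v, current_a, area_m2, length_m):
    # rho = (U / I) * (A / l) for a uniform current path
    return (voltage_v / current_a) * (area_m2 / length_m)

rho = resistivity(voltage_v=0.10, current_a=1.0e-6,
                  area_m2=1.0e-8, length_m=1.0e-4)   # hypothetical film geometry
print(f"rho = {rho:.3g} ohm*m, sigma = {1.0 / rho:.3g} S/m")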

Results Obtained
The synthesis and purification produced a bright purple mixture of powder and mm-scale single crystals. The resulting material was verified through elemental analysis to be within 0.03% of the theoretical values before the thin films were prepared.

Thermal evaporation of the CuPc resulted in a thin, light blue film, with only a slight color change to an opaque blue upon iodine doping, before testing with the PSM system; the results are shown in Figure 5.


Significance and Interpretation of Results
The resulting electrical conductivity across the film shows a large amount of heterogeneity, which can clearly be seen in the variation of the electrical conductivity from ~1 to 10⁻⁵ Ω⁻¹ cm⁻¹. The variation in results could be due to the formation of a copper iodide film under the CuPc film, since the electrical conductivity of CuI is on the order of 10⁻⁵-10⁻⁶ Ω⁻¹ cm⁻¹ and would be the largest resistance in the circuit [3]. The iodine doping of the CuPc film could also be the cause of the low electrical conductivity and the heterogeneity, since the doping time was kept short because of the dopant's reaction with the substrate. Even so, clear evidence of the effect of the iodine dopant on the CuPc can be seen in the 2-7 order of magnitude increase in the electrical conductivity.

Figures


Figure 1. Resistivity as a function of reciprocal temperature for CuPc film.
Figure 2. Seebeck coefficient as a function of temperature for CuPc film.

Figure 3. Electrical conductivity as a function of temperature for CuPcI.
Figure 4. PSM AC circuit diagram.

Figure 5. Electrical conductivity as a function of the position on the thin film product.

References
1. El-Nahass, M. M.; F. S. Bahabri, A. A. Al-Ghamdi, S. R. Al-Harbi. Structural and Transport Properties of Copper Phthalocyanine (CuPc) Thin Films. Egypt. J. Sol. 2002, 25, 2, 307-321.
2. Quirion, G.; M. Poirier, C. Ayanche, K. K. Liou, B. M. Hoffman. Transport properties of the isostructural organic conductors Cu(L)I with itinerant charge carriers strongly coupled to Cu²⁺ localized spins. J. Phys. I France 1992, 2, 741-751.
3. Villain, S.; J. Caban, D. Roux, L. Roussel, P. Knauth. Electrical properties of CuI and the phase boundary Cu/CuI. Solid State Ionics 1995, 76, 3-4, 229-235.
[Figure 5 surface plot: electrical conductivity, (Ω·cm)⁻¹ on a log scale from 10⁻⁵ to 10¹, vs. position on the film, distance (mm).]
NFL Concussions: The Effect on the Brain

Student Researcher: Isiah A. Kendall

Advisor: Dr. Tarun Goswami

Wright State University
Department of Biomedical, Industrial and Human Factors Engineering

Abstract
Concussions due to athletic activities are a persistently growing problem, particularly in football. When two players running toward each other in opposite directions collide, they experience conservation of momentum. When the momentum is transferred, a force is also applied to the head through a series of mechanical events. This process exerts a certain amount of G-force on the skull, which may lead to indications of a concussion. The objective of this research project is not only to understand the full development of an impact leading to a concussion (specifically in the NFL), but also what is currently being done to moderate such impacts. Data will be obtained by analyzing different collision scenarios and how much G-force is accumulated and transferred from the helmet to the skull and finally to the brain. Different models of football helmets will be examined to determine which brand provides the best protection against the G-force threshold leading to a concussion.

Project Objectives
NFL players are at risk of receiving a concussion every time they go onto the field of play. Whether it be from two linemen thrusting all their weight onto one another or from the quarterback getting blindsided, concussions can occur when least expected. What is troubling is that football physicians currently do not have a standardized method for dealing with players who develop concussions; many see it fit for a player to return to the field after a concussion as long as the player has no memory or cognitive problems (3). The current designs of football helmets aim to reduce the G-forces felt by the head during an impact, in the hope of lowering the concussion rate. Understanding how the G-forces are transferred from the helmet to the skull and then to the brain is a vital piece of information when considering concussions.

This research project analyzes the beginning of a football collision and identifies how both the head and the neck are affected. From there, an understanding of G-forces can be developed, including how many Gs are transferred from the helmet to the skull and finally to the brain. This will allow later investigation into building better helmets.

Methodology
To understand how the G-forces are distributed between the helmet, skull, and brain, we must first look at how the impact occurs. When two football players move toward each other and collide, they experience what is called conservation of momentum. This law states that the total momentum in any closed system remains constant: even if two objects traveling at different speeds collide, the total momentum of the system before and after the collision is the same. Various tests were done to demonstrate this statement. The following equation was used to calculate how much momentum was transferred between football players in different scenarios.
M₁V₁ + M₂V₂ = M₁V₁′ + M₂V₂′

M₁ and M₂ represent the masses of the colliding objects, and V₁ and V₂ (V₁′ and V₂′) represent the velocities of the two objects before (after) the collision. Understanding this concept led to the conclusion that an increase or decrease in mass or velocity has a direct relationship with the total momentum produced. That is, increasing or decreasing the mass of one player, or changing his speed, directly affects his angular acceleration, angular velocity, and every component preceding the impact.

Further analysis was divided between the head and the neck to obtain quantitative results. Research on human skulls, helmet sizes, and weights allowed several impact cases to be analyzed. It was also found that the padding inside the helmet absorbs incoming force through its deformation properties. The following relation (generalized Hooke's law) allows the stress absorbed by the padding to be calculated:

εₓ = [σₓ − ν(σᵧ + σ_z)] / E

where ε is the strain in the x, y, z components, E represents the elasticity constant of the material, ν is Poisson's ratio, and σ is the stress.

After identifying the important head criteria, similar research was done on the neck. Different helmet
sizes were selected so that the C.G. (center of gravity) could be found and helmet design
principles could be better understood. The C.G. is important because systems such as helmets add more
weight to the head, therefore shifting the C.G. If the shift is anterior, it creates larger
moments on the neck and requires more opposing force (4). Identifying the limits of the neck was also
crucial. It was found that the neck's range of motion is 45°-70° in extension, 60°-80° in rotation, and 45°
laterally (4). The following formula for the moment of a rotating object was used to quantify the effects
felt by the neck:

M = F·D·sin(θ)

where M is the moment, F is the force, D is the length from the point of impact to the rotation axis, and θ is
the angle between the force and the moment arm.
Through these calculations it was found that larger face masks lead to larger moments felt by the neck.
The average rotational acceleration is 5022 rad/s² (2). According to one study, rotational forces can
cause the brain to scrape against the skull lining during impact. The skull is smooth on the outside, but
its interior has jagged edges that can cause severe damage to the brain if scraped against (1).
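As a quick numerical illustration of the moment relation, the sketch below evaluates M = F·D·sin(θ) in Python; the force, lever arm, and angle are hypothetical example values, not data from the study.

import math

# Illustrative sketch of M = F * D * sin(theta).
# F, D, and theta below are hypothetical example values.
F = 1500.0                  # impact force, N
D = 0.12                    # distance from point of impact to rotation axis, m
theta = math.radians(60.0)  # angle between force and moment arm

M = F * D * math.sin(theta)
print(f"Moment on the neck: {M:.1f} N*m")
# A longer face mask increases D, which increases M for the same force.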

With a general understanding of what should be considered when researching collisions, the study of G-
forces and how much is translated to the brain could commence. To find the transmitted forces, the
Young's modulus equation was rearranged so that the force a given material could absorb was known.


E = (F/A_0) / (ΔL/L_0)

E is the Young's modulus (modulus of elasticity). F is the force exerted on an object under tension. A_0 is
the original cross-sectional area through which the force is applied. ΔL is the amount by which the length of
the object changes. L_0 is the original length of the object.

With this formula, calculating how much G-force each material absorbed was straightforward. Three different
materials were analyzed: the football helmet with no padding, the skull, and lastly a fully padded helmet.
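The sketch below illustrates this calculation in Python: the rearranged Young's modulus relation gives the force a material absorbs for a given deflection, which is then expressed in Gs. The material properties, deflection, and player mass are hypothetical placeholders, not the values used in this paper.

# Illustrative sketch: force absorbed from the rearranged Young's modulus
# relation F = E * A0 * dL / L0, converted to an equivalent G-force.
# All material properties and the player mass below are hypothetical examples.

G = 9.81  # m/s^2

def absorbed_force(E, A0, dL, L0):
    """Force (N) a material absorbs for a given deflection dL."""
    return E * A0 * dL / L0

def to_g_force(force, mass):
    """Express a force as an equivalent acceleration in Gs for a given mass."""
    return force / (mass * G)

E = 5.0e6      # elastic modulus of a soft padding material, Pa (hypothetical)
A0 = 0.01      # loaded cross-sectional area, m^2
L0 = 0.03      # original padding thickness, m
dL = 0.0003    # compression during impact, m

F_abs = absorbed_force(E, A0, dL, L0)
print(f"Force absorbed by padding: {F_abs:.0f} N "
      f"(~{to_g_force(F_abs, 31.75):.1f} G for a 31.75 kg player)")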

Results Obtained
By applying the Young's modulus equation, it was found that a fully padded football helmet reduced the
G-force of an impact by about 10 Gs. The figures below display this conclusion. The data were based on an age
group of 10-15 years old. This was done to show that if children are experiencing G-forces this
high, then something needs to be done not only in professional football, but also from middle school up.

Figures and Tables
It's important to note that the data obtained are with respect to the 10-year-old; that is, each football player
collided with the 10-year-old. It's plausible that this could happen because the 50th-percentile body-weight
mean of the 15-year-old was used, and therefore the 90th-percentile body-weight mean of the 10-year-old,
the 80th-percentile mean for the 11-year-old, and so on. The two columns to pay attention to are
the body-weight column and the reduced G-force column.

Age   Weight (kg)   Velocity (m/s)   Force     G-force   Reduced force   Reduced G-force
10    31.75         6                1444      49.1      1394            47.43
11    34.93         6.5              1513.18   51.41     1463            49.3
12    38.1          7                2144.57   72.87     2094.57         70.55
13    43.09         7.5              2598.67   88.3      2548.67         85.8
14    49.9          7.3              2613.21   88.79     2563.21         86.1
15    54.43         7.7              3004.68   102.1     2954            99.9

Figure 1. Skull data.

Age   Weight (kg)   Velocity (m/s)   Force     G-force   Reduced force   Reduced G-force
10    31.75         6                1444      49.1      1343            43.3
11    34.93         6.5              1513.18   51.41     1412.18         45.5
12    38.1          7                2144.57   72.87     2043.57         66.45
13    43.09         7.5              2598.67   88.3      2497.67         82.3
14    49.9          7.3              2613.21   88.79     2512.21         82.5
15    54.43         7.7              3004.68   102.1     2903.68         95.2

Figure 2. Helmet liner data.

Age   Weight (kg)   Velocity (m/s)   Force     G-force   Reduced force   Reduced G-force
10    31.75         6                1444      49.1      1394            39.5
11    34.93         6.5              1513.18   51.41     1463            41.2
12    38.1          7                2144.57   72.87     2094.57         62.33
13    43.09         7.5              2598.67   88.3      2548.67         77.1
14    49.9          7.3              2613.21   88.79     2563.21         77.3
15    54.43         7.7              3004.68   102.1     2954            89.5

Figure 3. Fully Padded Helmet.

The following graph shows how much the G-force is reduced by each material. It was found that the
fully padded helmet reduced the G-forces by about 10 Gs compared to the original incoming G-force.

Figure 4. Combined data: reduced G-force versus weight (kg) for the skull, helmet liner, and fully padded helmet.


Acknowledgments
The author of this paper would like to thank project advisor Dr. Tarun Goswami for the guidance on this
cutting-edge topic. The author would also like to thank OSGC for the opportunity.

References
1. Barth, Jeffrey T. "Acceleration-Deceleration Sport-Related Concussion: The Gravity of It All."
PubMed Central (2001). <http://www.ncbi.nlm.nih.gov/pmc/articles/PMC155415/?tool=pubmed>.
2. Rowson, S., Duma, S. M., Beckwith, J. G., Chu, J. J., Greenwald, R. M., Crisco, J. J., Brolinson,
P. G., Duhaime, A. C., et al. "Rotational head kinematics in football impacts: an injury risk function
for concussion." PubMed.gov (2011): 1-13. <http://www.ncbi.nlm.nih.gov/pubmed/22012081>.
3. Pellman, E. J., Viano, D. C., Casson, I. R., Arfken, C., and Feuer, H. "Concussion in professional
football: players returning to the same game." PubMed.gov (2005): 79-90. Web. 13 Apr. 2012.
<http://www.ncbi.nlm.nih.gov/pubmed?term=Concussion in professional football:players returning to
the same game>.
4. Coakwell, Mark R., Bloswick, Donald S., and Moser, Royce. "High-Risk Head and Neck Movements
at High G and Interventions to Reduce Associated Neck Injury." Aerospace Medical Association,
2004. 75(1): 68-80. <http://www.ingentaconnect.com/content/asma/asem/2004/00000075/00000001/art00011>.
Airfoil Investigation

Student Researcher: Logan M. Kingen

Advisor: Jed E. Marquart

Ohio Northern University
Department of Mechanical Engineering

Abstract
When reading about aviation as a child, I was always fascinated by how wings were able to lift giant
planes. During my education I learned more about physics, lift, and drag, and participating in AIAA
piqued my curiosity. For my research I wanted to learn more about drag effects on airfoils.

When considering airfoils for a wing, there are many variables to consider. One of the most important
variables is lift. Aircraft designs constantly fight to optimize lift and minimize weight. On the Ohio
Northern SAE Aero design RC aircraft, one weight-savings solution was to run lightening holes through
the wing. Each hole left a slight depression in the wing surface. I planned to research the lift reduction due
to these depressions on isolated airfoils: to study the lift of an airfoil resembling the one described and
compare it to a solid airfoil.

Project Objectives
My primary objective throughout this project was to learn the various procedures of CFD simulations. I
wanted to be able to take a wing and simulate several different variables for Ohio Northern University's
SAE Aero team. This would be useful for selecting the best wing for competition, amongst other things, for
our designs year after year. I also planned to use the research to gain a better understanding of airfoils and
wing designs. Although this knowledge can be obtained through reading, there is no better learning tool
than performing real-world simulations.

A second objective of my research was to attempt to show the shortcomings of last year's SAE Aero
design competition plane, the "Black Swan 2." The 2011 competition plane was predicted to carry 30 lbs
of payload. The plane was barely able to take off within 200 ft with 22 lbs of payload and crashed on
approach. There was much discussion as to why the plane's performance came up short of expectations.
One theory is the one described above: that the laminar flow over the airfoil was disrupted by the
sunken geometry where the lightening holes were cut. I wanted to show that the wing's performance was
hindered by this flaw when compared to a solid airfoil.

Methodology Used
To achieve my objectives, I recruited the help of a colleague as well as my advisor to walk me through the
CFD simulation process. The CFD process that I ran used the Navier-Stokes equations, a complex set
of differential equations. I began the process by first building a model to test. A portion of the Black
Swan 2 was used to create the geometry. I then removed the lightening holes from the first sample
geometry and created a solid airfoil section; this model is shown in Figure 1. For the second, I filled the
holes, leaving a depression in the top of the wing, mimicking the actual condition of the Black Swan
wing; this model is shown in Figure 2.

Once these models were created, the CFD process began. My colleague assisted me with creating a
suitable grid structure. After the grid was created, the simulation parameters were set. With the help of
my advisor, the simulation was allowed to run until convergence. I then used the coefficient of lift (cl) as
well as a simple lift equation to compare the two airfoils. The logic for the calculation is shown below in
Figure 5. Velocity is symbolized by v, density by ρ, lift by L, and area by A.
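As an illustration of that comparison logic, the Python sketch below evaluates the simple lift equation L = ½ρv²·A·cl for the two airfoils using the coefficients and areas reported in the results section; the freestream density and velocity are placeholder values and cancel out of the lift ratio.

# Minimal sketch of the lift comparison using L = 0.5 * rho * v^2 * A * cl.
# Freestream velocity and density are assumed identical for both airfoils,
# so they cancel when the two lift values are compared as a ratio.

def lift(cl, area_in2, rho=1.225, v=15.0):
    """Lift from the simple lift equation; area converted from in^2 to m^2.
    rho (kg/m^3) and v (m/s) are placeholder freestream values."""
    area_m2 = area_in2 * 0.00064516
    return 0.5 * rho * v**2 * area_m2 * cl

L_solid = lift(cl=1.9350, area_in2=385.98)    # solid airfoil (Figure 1)
L_dimpled = lift(cl=1.8552, area_in2=385.59)  # airfoil with depression (Figure 2)

print(f"Lift ratio between the two cases: {L_solid / L_dimpled:.3f}")  # ~1.04, i.e. ~4-5%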

Results Obtained
During the post-processing stage I obtained Figure 3 and Figure 4, which show the velocities
over each respective airfoil. These velocities give a clue to the relative lift of the airfoils. As can be seen,
the air is moving much faster over the top of the airfoil that I created. Figure 3 represents the solid airfoil
shown in Figure 1; Figure 4 shows the model I created, shown in Figure 2.

Using the equation mentioned above, and shown below, I calculated a 5% increase in lift
produced by the model I had created. This result was obtained using a coefficient of lift of 1.9350 for the
solid airfoil and 1.8552 for the second. The areas of the airfoils were 385.98 in² and 385.59 in²,
respectively.

Significance and Interpretation of Results
The results above were not what I was expecting. I was expecting a small decrease in lift from the solid
airfoil when compared to the model in Figure 2. I believed that the laminar flow over the wing would be
disrupted, leading to flow separation or separation drag, which would decrease the pressure difference
over the airfoil.

I believe the error lies in the grid quality. The grid quality for the model in Figure 1 was 91.56 out of 100
and 92.74 out of 100 for the model in Figure 2. This means that the minute changes in the airfoil may not
have been resolved accurately over the airfoil.

Although the results were not definitive, I gained greater knowledge of airfoils and lift from my
research. Furthermore, I learned how to conduct CFD simulations and learned from the
mistakes made in this research; despite the errors in this simulation, I have developed a greater
understanding of how to obtain accurate results.

Figures/Charts

Figure 1. Figure 2.

Figure 3. Figure 4.


Figure 5.

Acknowledgments and References
I would like to thank Matt Smith as well as Dr. Jed E. Marquart for their support.

Wind Turbine Design Studies

Student Researcher: Krista M. Kirievich

Advisor: Dr. Mark Turner

The University of Cincinnati
College of Engineering and Applied Science

Abstract
Wind turbines are growing in utility for sustainable energy. Their efficiency is good, but improvements
can still be made. Turbines interact with each other in wind farms; they must operate under very different
operating conditions due to changing wind speed and direction. This research used existing methods to
design turbines, and used optimization approaches. Axi-symmetric analyses of a GE 1.5 MW wind
turbine were utilized to design a scaled-down CAD model using design methods created at the Gas
Turbine Simulation Laboratory. The turbine design was 3D printed at different pitch angles. Then, this
preliminary design was explored using Computational Fluid Dynamics (CFD) post-processing (Asgard).
From this CFD analysis, aerodynamic properties of the wind turbine were observed. It was seen that the
mid-span can be modeled from 2D considerations, but the hub and tip regions need 3D considerations to
be taken into account. Future work will include wind tunnel testing of these wind turbine models.

Introduction
Today, as humans seek greater understanding of how technology and power generation affect the
environment, alternative sources of energy to traditional coal and fossil fuels are being researched.
One promising technology is wind turbines. Many mechanical, aerospace, and other engineers are
interested in learning more about these technologies and how to improve them. This project sought to
design a wind turbine using the Turbomachinery Axisymmetric Solver (T-AXI) and to 3D print scaled-down
versions of the wind turbine at different pitch angles to test in the freshman Aerospace Engineering wind
tunnel.

Background Information
The main purpose of a wind turbine is to convert kinetic energy from wind into mechanical energy for
electricity generation. Wind turbines come in many shapes and sizes, from industrial-sized for wind farms
to small backyard home-use wind turbines. Most have 3 blades on a horizontal rotation axis, but others
may have 4 or more blades (some are even curved) and may rotate on a vertical axis. (Wind Power)

Wind turbines are usually classified by their rotating axis, either horizontal or vertical. Horizontal axis
wind turbines are the traditional kind. They are usually very large and have a gearbox in order to increase the
rotation seen by their generators. Usually the blades are placed upwind of the generator and tower in order
to reduce turbulence; if the blades were placed behind the tower and generator, they would experience
more stress and strain and would be susceptible to material fatigue. Vertical axis wind turbines have some
advantages over horizontal axis wind turbines, such as not needing to point into the wind. They are also
good for sites with highly variable wind and are frequently used for home power generation. Their control
systems are also close to the ground, which makes maintenance easier. Because vertical axis wind turbines
have a lower rotational speed than horizontal axis types, they need more torque to run. Some other cons of
vertical axis turbines are that they have higher dynamic loading, are more difficult to model
aerodynamically, experience less wind due to being low to the ground, and are susceptible to
more turbulence, noise, vibrations, and wind shear. (Wind Power)
Thus, most commercial wind turbines use the horizontal axis design with 3 blades pointed into the wind
using motors and sensors. Tip speeds for commercial wind turbines are around 200 mph, and they are
very efficient. Blades are usually painted light grey in order to blend in with the clouds. A gearbox is used
to step up the generator speed. Use of variable rotor speed allows for greater energy collection. Along
with this, though, brakes and blade feathering into the wind are used for safety and to prevent blades from
detaching from the hub or tower. Typical commercial wind turbine characteristics include blade lengths
around 66-130 feet, tower heights around 200-300 feet, and rotational speeds around 10-22 RPM. The
cost is around $2 million per MW installed. (Size)
An example of the kind of benefits that a wind farm can provide: a 100 MW wind farm produces
energy equal to 2.9 million tons of coal or 63 billion ft³ of natural gas over 20 years (GE Wind Energy
Basics).


Figure 1. Schematic of Wind Turbine Mechanical Structure [9]


Figure 1 shows the basic mechanical structure of a commercial horizontal wind turbine, which consists of
three main components. First is the rotor (blades), which accounts for 20% of the cost of the wind
turbine. Second is the generator component, including the generator, gearbox, and yaw/pitch controls; this
accounts for 34% of the wind turbine cost. The structural support accounts for 15% of the cost and
includes the tower and yaw motor. The other 31% of the cost is due to transportation, assembly, etc. Heaters
are used to prevent ice from forming on the structures and causing damage. The yaw control faces the
blades into the wind. The pitch control is used to optimize the lift and drag properties of the blades under the
winds experienced. Light materials are used for constructing the blades in order to improve the tip speed.
However, the blades still need to be strong enough to withstand aerodynamic forces and centripetal loads.
(Wind Energy Basics)
The aerodynamics of a wind turbine are critical to its overall efficiency. The goal is to extract as much
energy as possible from the aerodynamic forces acting on the blades. This energy can be extracted from
either lift or drag and is then converted to torque for energy production. Blade airfoils and yaw/pitch
controls need to optimize the amount of aerodynamic force captured. In order to do this, the blades must
move in the direction of the force. However, due to Newton's third law (for every action there is an equal
and opposite reaction), the creation of torque on the blade rotor also results in wake rotation in the flow.
This causes a loss in the amount of wind energy that can be captured by the wind turbine. The wake
rotation can be combated, while still producing the same amount of power, by allowing quicker blade
rotation, causing more angular velocity but less torque, where power is:

P = τω   (1)

According to Betz's Law, for an ideal wind turbine the maximum possible efficiency of a wind turbine's
ability to extract energy from the wind is 59.3%. Today's turbines achieve about 47% efficiency in this
sense. (Wind Energy Basics)

Figure 2. GE 1.5 MW Wind Turbine [2]

For this project the GE 1.5 MW wind turbine was chosen as the baseline model for designing the blades
and rotor. This model can be seen in Fig. 2 and consists of three fiberglass blades developed by GE along
with the US Department of Energy. It has a horizontal axis, active electric yaw and pitch controls, and a
gearbox and generator located in a nacelle to reduce noise. In a favorable wind area, one GE 1.5 MW
wind turbine can produce enough energy for 400 average US homes' annual needs. Its rated capacity is
1,500 kW and its rated speed is 12 m/s; this is its design point. Its dimensions are as follows: 70.5 m
turbine diameter, 3,904 m² sweep area, and a tower height of 80 m. The GE 1.5 MW variable rotor speed
is between 11.1-22.2 RPM, and it can withstand temperature ranges of -22 to 104 degrees F. Its cut-in and
cut-out speeds are 4 m/s and 22 m/s, respectively; these are the wind speeds at which the wind turbine
begins and ceases to operate, respectively, for safety considerations. It also has a lightning protection
system. (GE 1.5 MW SLE Wind Turbine Specifications)
Goals and Objectives
The first goal of this research project was to use existing methods from the Gas Turbine Simulation
Laboratory to design wind turbines. To do this the Turbomachinery Axi-symmetric Solver (T-AXI) was
used. This solver allowed optimization approaches to be performed. The GE 1.5 MW Wind Turbine was
used as a baseline design. The second goal of this project was to generate a CAD model design to 3D
print at different pitch angles. After these rapid prototypes were created, the next goal was to calculate
and design the experimental setup in order to test the model wind turbines in the Aerospace department
freshmen wind tunnel. The experiment would measure lift, drag, RPM, and energy production from the
scaled-down wind turbines. Along with the wind turbine experiment, another objective of this research
project was to address issues of aerodynamics, structures, aeromechanics, and cost for wind turbines.
Structures, aeromechanics, and wind turbine cost were addressed through student research. Wind turbine
aerodynamics was studied by using Asgard, an in-house Computational Fluid Dynamics (CFD) post-
processor. The original CFD processing was performed by Soumitr Dey using FINE/Turbo. Static and
absolute pressures, temperatures, radial velocity, and the angle φ (arctan(V_r/V_x)) were evaluated using
Asgard.

Research Study Details
T-AXI Methods
This section describes the methods used in the research project (the codes T-AXI and Asgard) as well as the
design of the experimental setup. During fall quarter 2010, the design process was completed using the
Turbomachinery Axi-symmetric Solver (T-AXI). During this time the student researcher learned how to
run T-AXI codes in both Windows and Linux operating systems. T-AXI (Turbomachinery Axisymmetric
Solver) is a robust and efficient axisymmetric code for turbomachinery design developed by Dr. Mark
Turner of the University of Cincinnati, Ali Merchant of the Massachusetts Institute of Technology, and
Dario Bruna of the University of Genoa, Italy. T-AXI uses gas dynamics and thermodynamics in order to
determine how the flow path area varies with blade/rotor geometries. This is key for blade/rotor geometry
optimization. T-AXI also accounts for end wall boundary layers, wakes, and other circumferential non-
uniformities. The program also uses Euler's turbomachinery equation to account for the enthalpy rise due to
work performed by/on a blade row. T-AXI makes the free vortex assumption: that rV_θ is constant.

Next, the T-AXI code was used to model GE's 1.5 MW wind turbine as a rotor with 3 blades. Figure 3
below shows how T-AXI uses geometry and design conditions to ultimately create the CAD files for
rapid prototyping. From T-AXI, output information files were used to create CAD models of the scaled-
down wind turbine models using Dr. Turner's rapid prototyping machine at UC's Gas Turbine Simulation
Laboratory. The printed models are shown in Fig. 4 below; they are approximately 7 inches in diameter.

Figure 3. Flow diagram for using T-AXI

Figure 4. Three-blade horizontal wind turbine designs printed at 2 different pitch angles: 7 degrees in red
and 14.2 degrees in blue.

[Figure 3 diagram contents: Ttdes; initial.xxx, stage.xxx, stack.xxx, walls.xxx (boundaries/design space
coordinates); Txset; TAXI (runs axisymmetric solver); Bladedata-xxx.dat; 3dgbinput.1.dat]
Asgard Methods
During spring quarter 2011, the UC in-house Asgard CFD post-processor code was used to better
understand the aerodynamics of wind turbines. The CFD flow solver used for the wind turbine design was
FINE/Turbo using steady calculations. FINE/Turbo is a CFD solver designed for the simulation of
internal, rotating, and turbomachinery flows. FINE/Turbo is used for many applications such as multi-
stage axial, radial, and mixed-flow compressors, turbines, fans, heat exchangers, and pumps. Asgard CFD
post-processor reads in the .cgns file from FINE/Turbo and then allows the user to view the CFD results
such as absolute and relative pressures and temperatures as well as velocity components. Asgard was used
to view the flow properties for the wind turbine case.

Experimental Design
In addition to using T-AXI and Asgard to design and look at the aerodynamics of the wind turbine, the
graduate and undergraduate student researchers and Dr. Turner started to develop a wind tunnel test setup
and to determine instrumentation needed. As a part of this, the estimated power output and RPM for the
model wind turbines were calculated as functions of wind speed in order to aid in the generator selection
process. This was done by using the following two equations:

P = ½ ρ A V_w³   (2)

ω = λ V_w / r   (3)

respectively, where ρ is the density of air, A is the sweep area of the wind turbine, V_w is the wind
velocity, r is the radius of the wind turbine, and λ is the design tip-speed ratio.
This helped provide information for choosing a generator to connect to the wind turbine during the
experiment. However, because of the model wind turbines' small sizes, they will only produce small power
outputs, and at very high RPMs (around 13,300 RPM). These parameters make it difficult to find an off-
the-shelf generator compatible with the models. Thus, a gearbox will need to be used in order to
reduce the RPM seen by the generator. The research group is still in the process of refining the
experimental setup for the wind turbine wind tunnel testing.
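The Python sketch below illustrates this kind of generator-sizing estimate. The air density, the rotor radius (taken from the roughly 7-inch model diameter), and the tip-speed ratio are assumptions made here for illustration, so the printed values only approximate Table 1.

import math

# Rough sketch of the generator-sizing estimates (Eqs. 2 and 3).
# The air density, rotor radius, and tip-speed ratio below are assumptions
# for illustration; the paper's exact coefficients may differ, so Table 1
# values are only approximated.

rho = 1.225                 # air density, kg/m^3
r = 0.5 * 7.0 * 0.0254      # rotor radius from ~7 in model diameter, m
A = math.pi * r**2          # swept area, m^2
tsr = 6.2                   # assumed design tip-speed ratio (lambda)

for V in [3, 6, 7, 9, 12, 15, 20, 25]:
    power = 0.5 * rho * A * V**3          # wind power through the rotor disk
    omega = tsr * V / r                   # rotor angular velocity, rad/s
    rpm = omega * 60.0 / (2.0 * math.pi)
    print(f"V = {V:2d} m/s: power ~ {power:7.2f} W, ~ {rpm:8.0f} RPM")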
Analysis Research Results
Significant results from using T-AXI and Asgard will be presented in this section.
T-AXI Results
Figure 5 below shows the CAD model generated by using T-AXI to design the wind turbine rotor. This
model has a pitch angle of 7 degrees.

Figure 5. CAD model of Wind Turbine Pitch angle: 7 deg.

Figure 6. Velocity Triangle for Flow enter Wind Turbine Blade

Figure 6 above shows the velocity triangle notation used in T-AXI to represent the flow components of wind
coming into the wind turbine blade. W is the relative velocity vector, and V is the absolute velocity
vector. U is the rotational wheel speed, and V_x is the absolute velocity component in the axial direction, x.
Angle α is the absolute velocity angle, and β is the relative velocity angle. Figure 7 below shows how
angle β changes along the non-dimensionalized radius. Note that while β is a function of r, r was placed
on the vertical axis to demonstrate visually how β changes with r. This convention was also used for Figs.
8 and 9. Near the hub, β is around 20 degrees, but it increases to 80 degrees at around 60% of the radius
and remains at 80 degrees from 60-100% radius. This shows how the hub, along with the blade geometry,
affects the relative velocity angle entering the blade.


Figure 7. Relative Flow Angle as a Function of Wind Turbine Blade Radius


Figure 8. Angle φ (arctan(V_r/V_x)) as a Function of Wind Turbine Blade Radius
[Figures 7 and 8 plot non-dimensionalized radius against β (relative flow angle, degrees) and φ (degrees),
respectively; each shows inlet and exit distributions.]
Figure 8 above shows how the angle φ changes along the radius of the blade. The angle φ is the arctangent
of the ratio V_r/V_x. Note that from around 0.4 to 0.7 of the radius φ is close to zero, but from 0 to 0.4 and
0.7 to 1 the angle ranges from -2 to 1.5 degrees. The angle φ is indicative of whether or not the flow can be
treated as 2D or 3D. Thus, in regions where φ is zero, the flow can be treated as 2D, but where the angle
is non-zero, 3D flow interactions are taking place that must be considered for accurate flow
analysis. Thus, according to Fig. 8, the T-AXI model can be treated as 2D from 0.4 to 0.7 of its radius, but
the flow near the hub and blade tip must be considered 3D.

Figure 9. Adiabatic Efficiency as a Function of Wind Turbine Blade Radius

Figure 9 shows the adiabatic efficiency of the wind turbine model in T-AXI. From 0.2 to 0.8 radii, the
adiabatic efficiency is fairly constant around a value of 0.88, but near the hub and tip the adiabatic
efficiencies are much lower. Like Fig. 8, Fig. 9 also demonstrates the differences between the 2D and
3D flow regions of the wind turbine blade. Greater flow losses occur near the hub and tip, analogous to the
flow losses near the fuselage and wing tip of an airplane.

Asgard Results
Figures 10 and 11 below show the streamlines near the wind turbine blade tip. The streamlines are
colored by the values of φ in the flow field. In both figures, it is important to note that for the flow
entering the blade and the flow away from the blade, the streamlines are green, indicating φ is zero.
Immediately downstream of the blade, the angle φ becomes blue and red, which are negative and positive
angles, respectively. Similar to the T-AXI results in Fig. 8, both Fig. 10 and 11 demonstrate the 3D
nature of the flow near the wind turbine blade tip.


Figure 10. Streamlines Near Blade Tip and in Flow Field Downstream


Figure 11. Streamlines near the tip of the wind turbine blade. Colors indicate the value of the angle φ,
which is the arctangent of V_r/V_x.

Figure 12. Angle φ in the flow field surrounding the wind turbine. Note that φ is zero near the center
of the blade, meaning 2D analysis in this region would be sufficient.

Figure 12 above shows the angle φ gradient in the flow field surrounding the wind turbine blades. Note
that Fig. 12 shows only a 2-bladed rotor. Near the hub there are blue regions of negative angle φ, and
near the tip there are red/orange/yellow regions of positive angle φ. Everywhere else in the flow field the
angle φ is zero (green shade). This visual image of the flow field angle again demonstrates the 3D
characteristics near the hub and tip of the wind turbine blades.

Figure 13: Total Temperature Gradient on Wind Turbine Blade
Figure 13 shows the total temperature gradient on the wind turbine blade. Note that the range of
temperatures shown is from 288.13 to 289.6 K. The flow on the blade slightly increases in total
temperature going from the hub to the tip, indicating greater thermal energy at the blade tip.

Figure 14: Static Pressure Gradient in Wind Turbine Blade Flow Field
Figure 14 above depicts the static pressure gradient in the wind turbine blade flow field. This figure
slightly resembles Fig. 12. Near the tip, on the leading and trailing edges of the blade, there is an increase
in static pressure (orange/yellow regions), and on the trailing edge of the blade from around 0-0.9 radius
there is a decrease in static pressure (blue regions). In the other regions of the flow the static pressure is
around 1.013×10⁵ Pa.

Both the static pressure and the angle φ illustrate which regions of the wind turbine
blade can be considered 2D and 3D.
Wind Tunnel Testing Design
For the wind tunnel testing, the UC Freshmen wind tunnel will be used to measure RPM, lift, drag, and
power generation of the wind turbine. Due to the size limitations of the wind tunnel, the model wind
turbine rotors were designed at a diameter of approximately 7 in. A tower with a generator and horizontal
rotating rod will be used to extract energy from the wind turbines. In order to select an appropriate
generator for the testing, power and RPM calculations were performed using equations 2 and 3 in section
4. Table 1 below shows the results of these calculations. Since our two model wind turbines, blue and red,
are designed for 7 and 20 m/s, respectively. The estimated power and RPM values are 4 W at 4,652 RPM
and 93 W at 13,291 RPM, respectively. Note the small power outputs and high RPMs because of wind
turbine being scaled down. The combination of small power outputs and high RPMs has caused finding
an off-the-shelf generator for the experiment to be difficult. Thus, most likely a gearbox will be needed to
reduce the RPM seen by the generator. The experimental set up design is still in progress as of June.

Table 1. Estimated Power and RPM for Model Wind Turbine

Wind Speed (m/s)   Power Output (W)   RPM
3                  0.315              1993.709
6                  2.523              3987.418
7                  4.006              4651.988
9                  8.515              5981.127
12                 20.184             7974.836
15                 39.422             9968.545
20                 93.445             13291.393
25                 182.510            16614.242

Conclusions
From the Asgard CFD post-processing results, it can be seen that the mid-span (25-75%)
region of the wind turbine blade can be approximated by 2D aerodynamics, because the angle φ is equal
to zero in this region of the blade, as can be seen in Fig. 12. A φ angle of zero is indicative of 2D
aerodynamics. However, near the wind turbine hub and tip, the angle φ is non-zero, meaning that 3D
effects are at play and must be considered for accurate lift, drag, and energy production predictions. For
the experimental design for the wind tunnel testing, a gearbox will most likely need to be used in order to
reduce the RPM entering the generator. This is because the scaled-down model will spin at a
much higher RPM than its larger counterpart, the GE 1.5 MW wind turbine.


Recommendations/Future Work
This project has shown that an axi-symmetric solver such as T-AXI can be used to design wind turbines.
When using CFD to analyze wind turbines, the 25-75% span of the wind turbine blade can be analyzed
using 2D considerations, but the 0-25% and 75-100% spans must be treated as 3D in order to
account for the effects of the hub and tip flow interactions.
The experimental portion of this research project is not yet complete. Thus, future work on this project
will include running wind tunnel tests measuring lift, drag, RPM, and power generation of the model wind
turbines; reviewing the results; comparing these wind tunnel results with the GE 1.5 MW model; and, if
differences exist, determining their possible causes, whether due to the experimental design or the effects
of the size of the scaled-down model. As a part of this project, the research group will also be collaborating
with Tom Benson at NASA Glenn to demonstrate wind turbines in mobile wind tunnels.
Acknowledgments
I would like to thank my faculty advisor, Dr. Mark Turner, for all his support and guidance during this
project. The graduate students who I worked with also provided me with the training and resources
required for this research. Thus, I would like to express my gratitude to Soumitr Dey, Kedharnath Sairam,
and Kiran Siddappaji.

Lastly, I would like to thank the National Science Foundation for their financial support of this project
through the Research Experiences for Undergraduates Program for NSF Type 1 STEP Grant DUE-
0756921.

Bibliography
1. M.S. Thesis: Dey, Soumitr. "Wind Turbine Blade Design System - Aerodynamic and Structural
Analysis." Master's Thesis. University of Cincinnati. 2011.
2. Website: "GE 1.5 MW SLE Wind Turbine Specifications." GE Energy's website.
http://gepower.com/prod_serv/products/wind_turbines/en/15mw/specs.htm
3. Brochure: GE Energy 1.5 MW Wind Turbine Brochure. http://www.ge-energy.com/wind
4. Research Paper: Griffin, Dayton A. "Blade System Design Studies Volume II: Preliminary Blade
Designs and Recommended Test Matrix." Global Energy Concepts, LLC. Kirkland, Washington:
June 2004.
5. Book: Manwell, J. F. Wind Energy Explained. Chichester, U.K.: Wiley, 2009. 2nd ed.
6. Code: Turbomachinery AXIsymmetric Design System. Mark G. Turner, Ali Merchant, and Dario
Bruna. May 2006.
7. Website: "Wind Energy Basics." GE Wind's website. http://www.gepower.com/businesses/
ge_wind_energy/en/downloads/wind_energy_basics.pdf
8. Website: "Wind Energy Basics." American Wind Energy Association.
http://www.awea.org/faq/wwt_basics.html
9. Website: "Wind Power." http://www.windpower.org/en/tour/wtrb/comp/index.htm
10. Website: "Size specifications of common industrial wind turbines."
http://www.aweo.org/windmodels.html


Appendix I: Nomenclature

A = Cross-sectional area
C = Absolute velocity downstream
C_a = Axial component of velocity
C_θ = Tangential component of velocity
C_d = Coefficient of drag
C_f = Skin friction coefficient
C_l = Coefficient of lift
C_p = Coefficient of pressure
C_p = Specific heat at constant pressure = 1.005 kJ/kg·K
D = Diameter
ṁ = Mass flow rate
n = Number of blades
P = Power
P_T = Total pressure
P_s = Static pressure
p = Pitch
R = Specific gas constant
Re = Reynolds number
r = Radius
rV_θ = Angular momentum
S = Span
T-AXI = Turbomachinery Axi-Symmetric Solver
T_s = Static temperature
T_T = Total temperature
U = Rotor wheel speed
u = Axial velocity
V_x = Axial velocity
V_θ = Tangential velocity
V_r = Radial velocity
V_rot = Rotational velocity
V_w = Wind velocity
W = Relative velocity
x = Axial direction

Greek Symbols

α = Absolute flow angle, also airfoil angle of attack
β = Relative flow angle
θ = Tangential direction
γ = Ratio of specific heats
η = Efficiency
ρ = Density
τ = Torque
φ = arctan(V_r/V_x)
ω = Rotor angular velocity

Appendix II: Research Schedule

September 2010-December 2010: Modeling of Wind Turbine Using Turbomachinery Axisymmetric
Solver (T-AXI)
January 2011-March 2011: Break (student on co-op)
March 2011-June 2011: Wind Tunnel Testing Design of Wind Turbine Model and Post-process
Results from Computational Fluid Dynamics (CFD) Analysis
Newton's Law in Action
Student Researcher: Charles D. Kish
Advisor: Mrs. Cynthia A. Smotzer
Youngstown State University
College of Education
Abstract
This lesson is designed to explore Newton's Three Laws of Motion using hands-on, guided-inquiry-based
activities that keep students interested and engaged throughout the learning process. Using simple but
effective experiments that show concrete, real-world applications of Newton's Laws, students will break
down these laws and gain an understanding of how they impact us in everyday life.

Learning Objectives
1. Students will be able to identify and compare each of Newton's Three Laws of Motion.
2. Students will hypothesize as to why and how Newton's Three Laws of Motion work.
3. Students will experiment in order to answer why and how Newton's Three Laws of Motion work.
4. Students will discuss, check, and/or critique one another's experimental results and understanding of
Newton's Three Laws of Motion.

Ohio Academic Content Standards
Subject: Science
Standard: Physical Sciences
Grade Level: Grade Nine
Benchmark D: Explain the movement of objects by applying Newton's three laws of
motion.
Organizer: Forces and Motion
Grade Level Indicator(s):
21. Demonstrate that motion is a measurable quantity that depends on the observer's
frame of reference and describe the object's motion in terms of position, velocity,
acceleration and time.
22. Demonstrate that any object does not accelerate (remains at rest or maintains a
constant speed and direction of motion) unless an unbalanced (net) force acts on it.
23. Explain the change in motion (acceleration) of an object. Demonstrate that the
acceleration is proportional to the net force acting on the object and inversely
proportional to the mass of the object. (F_net = ma. Note that weight is the gravitational
force on a mass.)
24. Demonstrate that whenever one object exerts a force on another, an equal amount
of force is exerted back on the first object.
25. Demonstrate the ways in which frictional forces constrain the motion of objects
(e.g., a car traveling around a curve, a block on an inclined plane, a person running, an
airplane in flight).
Standard: Scientific Inquiry
Grade Level: Grade Nine
Benchmark A: Participate in and apply the processes of scientific investigation to create
models and to design, conduct, evaluate and communicate the results of these
investigations.
Organizer: Doing Scientific Inquiry
Grade Level Indicator(s):
3. Construct, interpret and apply physical and conceptual models that represent or
explain systems, objects, events or concepts.
6. Draw logical conclusions based on scientific knowledge and evidence from
investigations.

Underlying Learning Theory
This lesson takes the constructivist approach in which students develop and construct their own
knowledge on the subject. Students will work in groups of four in order to encourage cooperative
learning. By working together students can bounce ideas and information off of each other in order to
come to a better understanding of the concept. The guided inquiry process in itself promotes scientific
inquiry in which students conduct experiments and draw their own, logical conclusions about what they
observe. The lesson incorporates use of a wide variety of multiple intelligences. Students strengthen their
interpersonal and linguistic intelligences from working in groups. Logical/mathematical and visual/spatial
intelligences are also strengthened by performing the experiment, making observations, and doing
calculations. Finally, intrapersonal intelligence will also be utilized through the guided inquiry process
during the lesson. This is due to the fact that students are required to come up with explanations to
summarize the experiment and think critically in order to identify why they think the experiment turned
out the way it did.

Type and Level of Student Engagement
Students will take a guided inquiry approach to studying and understanding Newton's Three Laws
of Motion. With the exception of the few demonstrations performed after the Rocket Races activity,
students will do hands-on experiments in groups of four in order to study Newton's Three Laws of
Motion and attempt to explain why things happen the way they do.

Resources Required
For Instructor's Use:
Throughout Lesson
o Computer (if available)
o LCD Projector (if available)
o Dry-Erase/Chalkboard
o Rocket Races Activity Worksheet
o Newton's Three Laws Worksheets
o Newton's Three Laws Worksheets Answer Keys
o Newton's Three Laws Homework
o Newton's Three Laws Homework Answer Key
o Newton's Three Laws Review PowerPoint
Newton's Third Law Demonstrations
o 2 Force Spring Scales
o 2 Bathroom Scales
o 2 Wooden/Plastic Carts (large enough for a person to sit on)
o 1-2m Rope (cut to length as needed)
Review Videos
o The Law of Inertia: Newton's First Law Video Clip
o Force Equals Mass Times Acceleration: Newton's Second Law Video Clip
o The Law of Action and Reaction: Newton's Third Law Video Clip

For Each Group of Students:
Throughout Lesson
o Newton's Three Laws Worksheets
o Newton's Three Laws Homework
o Newton's Three Laws Review PowerPoint
Newton's Third Law Activity
o Rocket Races Activity Worksheet
Newton's Second Law Activity
o 1 Stopwatch
o 1 Meter stick
o 1 Non-Motorized Cart
o 1 Cart Track
o 1 Pulley (attachable to cart track)
o 3 500g Blocks
o 1 20g Hanging Mass
o 1-1.5m String (cut to length as needed)
Newton's First Law Activity
o 1 Stopwatch
o 1 Meter stick
o 1 Block of wood
o 1 Crash test dummy
o 1 Motorized cart


Lesson Plan
All work done by students during this lesson will be done in groups of four. Each student will be required
to hand in his or her own copy of any work that is completed, for assessment purposes. First, students will
examine Newton's Third Law. Students will do this by conducting the Rocket Races activity contained
within the NASA Rocket Educator's Guide. After the instructor explains the activity and how to construct
a rocket racer, students will construct their racers using their own designs. Afterwards,
the students will release their racers down the track and measure the distance each car travels, regardless
of how much the racer curves. After each trial the distance of each group's cart will be posted, and
students will try to modify their carts to set new records. After each racer runs three times, have students
complete their data sheets and sketch their final design on the design sheet. Students will then discuss
Newton's Third Law and how it relates to their rocket cars.

After the Rocket Races activity, students will watch a few demonstrations, still pertaining to Newton's
Third Law. They will first watch a pair of students pull on a single spring scale, followed by two spring
scales connected to one another. Then the pair of students will push on two bathroom scales placed back
to back. Students will record what they observe and answer a few questions. The final
demonstration dealing with Newton's Third Law is one in which two students of roughly equal mass sit
on two wooden carts. They are pulled toward one another, collide, and then bounce off in opposing
directions. Students will write predictions about what they think will happen before witnessing this
demonstration. Afterwards, students will reflect on what happened, discussing what they predicted and why
they think their predictions were right or wrong. This same demonstration is then repeated with a student and
the instructor (differing masses) in order to lead into the next topic, Newton's Second Law. However, a short
guided inquiry activity on how to draw free body diagrams, and on the differences between weight and mass,
must be covered before students begin the hands-on activity about Newton's Second Law.

During the Newton's Second Law guided inquiry activity, students will record the time it takes three carts
of different masses to travel a uniform distance of one meter. To do this, they will use the same cart, which
has a mass of five hundred grams, for all twelve trials. After three trials of only the cart, a five-hundred-
gram mass will be added to the cart for the next three trials. This cycle will continue until the cart has
three five-hundred-gram masses added to it. The cart will be pulled along a track by the same force
each time, provided by a twenty-gram hanging mass suspended by a string which is run over a pulley
and connected to the cart. Students will calculate the acceleration of the cart during each trial using the
distance it travels and the time it takes to travel that distance. Once they have calculated the
acceleration, they can calculate the force applied to it. Students will then see that objects of varying mass,
subjected to the same applied force, have different rates of acceleration, demonstrating the main
point of Newton's Second Law.
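For instructors who want the expected numbers in advance, the short Python sketch below computes the ideal accelerations for this activity under the assumption of a frictionless track and massless pulley; measured values will be somewhat lower.

# Minimal sketch of the expected (ideal, frictionless) accelerations in the
# Newton's Second Law activity: a 20 g hanging mass pulls carts of increasing
# mass through a pulley. Friction and pulley inertia are neglected here.

g = 9.81          # m/s^2
m_hang = 0.020    # hanging mass, kg

for n_blocks in range(4):                    # 0 to 3 added 500 g blocks
    m_cart = 0.500 + 0.500 * n_blocks        # cart plus added blocks, kg
    m_total = m_cart + m_hang                # everything the string accelerates
    a = m_hang * g / m_total                 # same net force, growing mass
    print(f"{n_blocks} blocks: system mass {m_total:.3f} kg -> a = {a:.3f} m/s^2")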

Lastly, students will examine Newton's First Law. We cover this one last, as it is generally the easiest
law for students to comprehend because it is the law most easily identifiable in everyday life.
Students will examine Newton's First Law by first using a stopwatch to time a motorized cart as it travels
a uniform distance along a tabletop. This cart travels at a constant velocity, and after a few time trials the
students should be able to accurately determine the velocity of the cart. Students will then place a
small crash test dummy/figure on top of the cart. By then placing a block of wood in the path of the cart,
at the edge of the table, the cart will stop but the dummy will continue moving. The dummy will continue
moving in the horizontal direction, covering the same distance over the same amount of time as if
it were still on top of the motorized cart. By measuring the vertical distance between the tabletop and the
floor, students can calculate the time it will take the dummy to reach the floor after it is
ejected from the cart. Using this time in combination with the velocity of whichever cart they use,
students will then be able to determine where on the floor the dummy will land, as long as Newton's First
Law holds true.
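The landing-point prediction can be checked with the short Python sketch below; the cart velocity and table height are example values that students would replace with their own measurements.

import math

# Sketch of the landing-point prediction in the Newton's First Law activity.
# The cart velocity and table height are hypothetical example values.

g = 9.81        # m/s^2
v_cart = 0.40   # measured constant cart velocity, m/s (example)
h_table = 0.90  # table height, m (example)

t_fall = math.sqrt(2.0 * h_table / g)  # time for the dummy to fall to the floor
d_land = v_cart * t_fall               # horizontal distance if inertia holds

print(f"Fall time: {t_fall:.2f} s, predicted landing distance: {d_land:.2f} m")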

A review session will take place after all activities have been completed. During this review the students
will watch three videos found on the NASA CORE website pertaining to Newton's Three Laws. These
three videos (listed in the Resources Required section) provide more real-world examples of how
Newton's Three Laws affect us in everyday life, and really help drive home the main points of the lesson.
Afterwards, students will answer some review questions dealing with the material we've covered and
discuss as a class how they came to those answers.

Lesson Implementation Results
The students really enjoyed the activities covered throughout this lesson. The Rocket Races activity
was a really big hit with the students. It helped get the point across while encouraging some competition
among students to get their cars to go farther. However, it took some skill to smooth out the wheels
enough to get a good measurable distance; using sandpaper on the wheels seemed to produce the
best result.

Having students set up their own track apparatus to study Newton's Second Law is a little too time-
intensive, because students do not fully understand how best to set it up. Therefore, the instructor
should set up the apparatus for each group prior to class in order to save time.

During the activity dealing with Newton's First Law, it took some practice for students to get a good,
measurable distance from their crash test dummy. Students found that the dummy was most easily
propelled when sitting on the front of the cart rather than in it. Also, because the carts are battery-powered,
fresher batteries give more power and thus a faster cart, making it that much easier to propel the crash
test dummy. As the instructor, you must always make sure to have fresh batteries in each cart before
students perform this experiment so that the students will get the most out of their carts and this
experiment.

Description of Assessment
Formative assessment will be performed throughout the lesson through observation, making sure that
each group member is putting forth individual effort and contributing to the group. The
worksheets will also be collected at the end of class to be checked mainly for completion, but also to
make sure each student is staying on track. Summative assessment will come in the form of a short
homework assignment for each student to complete and turn in the next day, as well as a quiz at the end of
the unit.

Critique and Conclusion of the Project
This lesson gave students real-world, concrete examples of Newton's Three Laws of Motion. While
guided in part by the instructor, students created their own knowledge and understanding of Newton's
Three Laws through the cooperative learning experienced while doing the experiments that they set up
and conducted. After completing each experiment, students could see how each of Newton's Three Laws
applies in everyday life.
Harmonic Balance and Conjugate Heat Transfer Methods
for Unsteady Turbomachinery Simulations

Student Researcher: Robert D. Knapke

Advisor: Dr. Mark G. Turner

University of Cincinnati
School of Aerospace Systems

Abstract
The flow within gas turbine engines is inherently unsteady. However, due to the computational cost of
time accurate computational fluid dynamics (CFD) simulations, steady simulations are the norm for
turbomachinery design. With steady turbomachinery simulations, the downstream running wakes and
upstream running pressure waves are circumferentially averaged at the interface plane between two
blades. This allows the designer to consider only one blade passage per blade row in the computational
domain and solve the simpler steady equations. At the same time, significant flow physics is lost. Time
accurate simulations capture the unsteady flow physics by passing the time varying information across the
interface plane. To maintain periodicity in each blade row, often more than one blade passage must be
included in the computational domain. Depending on the blade counts, simulation of the whole annulus
could be required. This drastically increases the computational cost. In addition, some low speed regions
of the domain (such as cavities and cooling passages) require more time steps to resolve, thus increasing
the computation time. With these extra costs, the time accurate methods require orders of magnitude
more time and computer resources than a steady simulation. Another common simplification of
turbomachinery simulations during the design process is the use of adiabatic walls. The temperature of
the walls is rarely known, so the absence of heat transfer is assumed. However, in the high temperature
regions of the engine, such as the high pressure turbine (HPT), this assumption is not valid. To reduce the
computation cost of unsteady simulations and accurately model the fluid-solid heat transfer, software
including the Harmonic Balance (HB) and Conjugate Heat Transfer (CHT) methods is under
development.

Project Objectives
The purpose of this effort is to reduce the computational cost and increase the accuracy of unsteady
turbomachinery simulations. The HB method reduces the cost of an unsteady simulation by reducing the
spatial domain to only one passage per blade row and by solving a series of steady type equations instead
of the time accurate equations. Simultaneous HB simulations of the fluid and solid heat transfer will be
performed and the two domains will be coupled with the CHT method. Several test cases will be used to
validate the implementation of these methods. Eventually, an unsteady simulation of the EEE HPT
including cooling holes and cooling passages will be performed.

Methodology Used
The HB method assumes a time periodicity and a given set of temporal frequencies to solve. HB can
be extended to aperiodic problems, but the derivation shown here is specifically for time periodic cases.
In general, turbomachinery simulations are periodic and the majority of the unsteadiness in the flow field
is dominated by frequencies of the neighboring blade rows. Therefore the assumptions of HB are
generally valid.

Consider the governing equations for three-dimensional fluid flow in the following form:

∂U/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0   (1)

where U is the solution vector and F, G, and H are the fluxes in the x, y, and z directions. Assume now
that the solution is a periodic function in time for N frequencies ω_n. The solution can be written as a
Fourier series:

U = Û_0 + Σ_{n=1..N} [ Û_{a,n} cos(ω_n t) + Û_{b,n} sin(ω_n t) ]   (2)

The Fourier coefficients (Û_0, Û_{a,n}, Û_{b,n}) in Eq. 2 are only functions of space. A series of 2N+1 time
steps can be generated from Eq. 2 and written in matrix form.
[ U(t_0)  U(t_1)  ...  U(t_2N) ]^T = E [ Û_0  Û_{a,1}  Û_{b,1}  ...  Û_{a,N}  Û_{b,N} ]^T   (3)

where row i of the (2N+1)×(2N+1) matrix E is [ 1  cos(ω_1 t_i)  sin(ω_1 t_i)  ...  cos(ω_N t_i)  sin(ω_N t_i) ].
Equation 3 can be re-written as shown below:

U* = E Û   (4)

The * superscript indicates a series of solutions and the hat indicates the Fourier coefficients. By taking the
inverse of the E matrix, the Fourier transform operation can be determined:

Û = E^-1 U*   (5)
Substituting the series of solutions into Eq. 1, with the corresponding series of fluxes, the set of governing
equations is now the following:

∂U*/∂t + ∂F*/∂x + ∂G*/∂y + ∂H*/∂z = 0   (6)

By noting that the Fourier coefficients (Û) are not dependent on time, the time derivative term of Eq. 6
can be modified as follows:

∂U*/∂t = ∂(E Û)/∂t = (∂E/∂t) Û = (∂E/∂t) E^-1 U* = S U*   (7)

Substituting the results of Eq. 7 into Eq. 6 gives the final governing equation:

S U* + ∂F*/∂x + ∂G*/∂y + ∂H*/∂z = 0   (8)

where S = (∂E/∂t) E^-1.
The result of the transformations is that a series of steady-type equations is solved, instead of a time
accurate equation. The E matrix is analytically differentiable, row i of ∂E/∂t being:

[ 0  -ω_1 sin(ω_1 t_i)  ω_1 cos(ω_1 t_i)  ...  -ω_N sin(ω_N t_i)  ω_N cos(ω_N t_i) ]   (9)

The S matrix is then calculated from Eq. 9 and the E^-1 matrix. Although the HB derivation shown above
involves the differential form of the 3D fluid flow equations, other spatial schemes could be used. An
integral form of the equations could also be derived.

Additional derivations of the HB and the Time Spectral (TS) methods are shown by Hall et al. [1, 2] and
Gopinath et al. [3, 4, 5]. The TS method can be thought of as a subset of the HB method.
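As a concrete illustration of Eqs. 3-9, the following Python sketch builds the HB time-derivative operator S = (∂E/∂t)E^-1 for a fundamental frequency and its harmonics, and verifies that it differentiates a resolved sinusoid exactly. This is an independent sketch of the standard construction, not the software under development.

import numpy as np

# Minimal sketch of the Harmonic Balance operator S = (dE/dt) E^{-1}
# for a single fundamental frequency omega1 and its first N harmonics.

def hb_operator(omega1, N):
    """Return (times, S) for N harmonics of fundamental frequency omega1."""
    K = 2 * N + 1                            # number of sampled time levels
    T = 2.0 * np.pi / omega1                 # period of the fundamental
    t = np.arange(K) * T / K                 # equally spaced time instances
    E = np.ones((K, K))                      # maps Fourier coeffs -> time levels
    dEdt = np.zeros((K, K))                  # column 0 (constant mode) stays zero
    for n in range(1, N + 1):
        w = n * omega1
        E[:, 2 * n - 1] = np.cos(w * t)
        E[:, 2 * n] = np.sin(w * t)
        dEdt[:, 2 * n - 1] = -w * np.sin(w * t)
        dEdt[:, 2 * n] = w * np.cos(w * t)
    return t, dEdt @ np.linalg.inv(E)

# Check: applying S to a resolved sinusoid should return its exact derivative.
t, S = hb_operator(omega1=2.0 * np.pi, N=2)
u = np.sin(2.0 * np.pi * t)
print(np.allclose(S @ u, 2.0 * np.pi * np.cos(2.0 * np.pi * t)))  # True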

Accurately capturing the fluid and solid heat transfer requires solution of the two sets of equations
simultaneously. At the boundary between the two domains, continuity of the wall temperature and heat
transfer must be enforced [6, 7, 8]. This involves communicating the heat transfer from the fluid to the
solid domain and transferring the surface temperature to the fluid domain. For unsteady turbomachinery
CHT simulations, the fluid flow time scales are several orders of magnitude shorter than the solid heat
transfer time scales. For this reason, He et al. suggest a frequency domain approach [7]. This makes HB
a great candidate for unsteady CHT simulations as it is a frequency domain approach. As mentioned
previously, the HB method can be applied to other spatial schemes, such as the heat equation.

Results Obtained
The HB method has been applied to the 1D linear convection (linearized Burgers) equation and 1D heat
equation spatial schemes. The unsteady and HB versions of the governing equations for these two schemes
are shown below, where c is the convection speed and α is the diffusivity.

Linear Convection Equation:
∂u/∂t + c ∂u/∂x = 0   (10)

HB Linear Convection Equation:
S u* + c ∂u*/∂x = 0   (11)

Heat Equation:
∂u/∂t - α ∂²u/∂x² = 0   (12)

HB Heat Equation:
S u* - α ∂²u*/∂x² = 0   (13)

For both the linear convection and heat equations, a single-frequency sinusoidal variation of u was specified
at the left boundary. The right boundary of the heat equation was set to zero.

Figures 1a and 2a show the results for the time accurate simulations. Several time steps are included in
each plot. The three solutions of the HB equations are shown in Figures 1b and 2b.

Significance and Interpretation of Results
The results of the linear convection and heat equation simulations show that the HB method can represent
the single frequency exactly. Therefore, the HB solutions incorporate all of the unsteadiness in the
system and represent as much information as the 1600 time steps calculated for the time accurate
simulation. Using the three HB solutions and the Fourier Transform operation (Eq. 5), any time step can
be reconstructed.

Despite the simplicity of these simulations, they indicate that the implemented version of HB is
applicable to fluid flow and solid heat transfer problems. However, potential performance gains achieved
with the HB method cannot be determined at this time.

Several steps must be completed before a fully cooled 3D unsteady turbomachinery simulation is
attempted. The communication between the fluid and solid domains as described by the CHT method
must be developed. In addition the HB method must be coupled with a 3D turbulent Navier-Stokes
spatial scheme. The turbomachinery interface and periodic boundary conditions for HB are also required.

Figures

(a) Time Accurate (b) Harmonic Balance
Figure 1. Linear convection solutions.

(a) Time Accurate (b) Harmonic Balance
Figure 2. Heat equation solutions.

Acknowledgments
The author would like to thank Dr. Mark Turner for his guidance. Thanks to Marshall Galbraith for his
assistance with the application of the Harmonic Balance method within his software. The author would
also like to acknowledge and thank the Ohio Space Grant Consortium for their financial support.

References
1. K. C. Hall, J. P. Thomas, and W. S. Clark. Computation of Unsteady Nonlinear Flows in
Cascades Using a Harmonic Balance Technique. AIAA Journal, 40(5):879-886, 2002.
2. K. Ekici and K. C. Hall. Nonlinear Analysis of Unsteady Flows in Multistage Turbomachines Using
Harmonic Balance. AIAA Journal, 45, May 2007.
3. A. K. Gopinath and A. Jameson. Time Spectral Method for Periodic Unsteady Computations
Over Two- and Three- Dimensional Bodies. In AIAA Aerospace Sciences Meeting, 2005. AIAA
2005-1220.
4. A. K. Gopinath, E. van der Weide, J. J. Alonso, A. Jameson, K. Ekici, and K. C. Hall. Three-
Dimensional Unsteady Multi-Stage Turbomachinery Simulations Using the Harmonic Balance
Technique. 2007. AIAA 2007-892.
5. A. K. Gopinath. Efficient Fourier-Based Algorithms for Time-Periodic Unsteady Problems. PhD
thesis, Stanford University, Palo Alto, CA, April 2007.
6. Z. Han, B. H. Dennis, G. S. Dulikravich. Simultaneous Prediction of External Flow-Field and
Temperature in Internally Cooled 3-D Turbine Blade Material. ASME Turbo Expo, May 2000.
2000-GT-253
7. L. He and M. L. Oldfield. Unsteady Conjugate Heat Transfer Modelling. ASME Turbo Expo,
June 2009. GT2009-59174.
8. R. H. Ni, W. Humber, G. Fan, P. D. Johnson, J. Downs, J. P. Clark, P. J. Koch. Conjugate Heat
Transfer Analysis of a Film-Cooled Turbine Vane. ASME Turbo Expo, June 2011. GT2011-
45920.
Experimental Framework for Analysis of Curved Structures Under Random Excitation
Effect of Boundary Conditions and Geometric Imperfections

Student Researcher: Stephen E. Lai

Advisor: Dr. Amit Shukla

Miami University
Department of Mechanical and Manufacturing Engineering

Abstract
The Air Force Research Laboratory's (AFRL) Structural Sciences Center is interested in formulating,
developing and implementing physics-based, computational-analytical-experimental methods and models
for the non-linear structural response of aerospace structures subjected to combined extreme
environments (coupled thermo-acoustic-mechanical loading). The finite element modeling program,
Abaqus, computes the response (stresses, displacements, energy, etc.) of the material specimen. The
maximum displacements and natural frequencies of the specimen, corresponding to the first six modes, at
three different points of interest on the model were gathered. This data is a first step in predicting the
probabilistic structure life of materials in aircraft using real world flight profiles.

Project Objectives
Computational research is necessary to meet USAF needs for long-term, reliable assessments of the
durability and continued safe operation of designs proposed and/or to be developed for hypersonic
air-breathing vehicles, space operating vehicles, and reusable single-stage-to-orbit space vehicles.
Many technology gaps exist in the development of hypersonic, reusable aircraft structures. An essential
piece of this puzzle is understanding the sources of variability in the simulation models developed for
response prediction. The goal of this research is to develop a combined experimental framework to
understand the effect of geometric imperfections and variability in the boundary conditions on the
nonlinear response of a curved beam structure.

The design and development of the next generation of hypersonic aircraft structures will require an ability to
predict the nonlinear response of the structure under the given operating conditions with a reliable
estimate of accuracy of the predicted results. It has been observed that boundary conditions and the
geometric imperfections are the two prominent sources of uncertainty in the response. In this research, a
clamped-clamped curved beam will be used to illustrate the effect of geometric imperfections and the
boundary conditions on the model development and hence the response. The overarching goal of this
research is to illustrate an approach for validating models for hypersonic aircraft applications. This
proposal will advance our understanding of the inherent variability and nonlinearities in the response of
structures and propose a framework for validation of the models for future use in design trade-off studies.

Methodology
The model was developed from actual geometric measurements of the curved beam specimen using the
3-D Digital Image Correlation (DIC) technique. The setup for 3-D DIC is shown in Figure 1.
Figure 1.

Figure 1. 3-D DIC Setup for Geometry and Deformation Measurement

The data from the 3-D renderings were transferred into finite element software to create a virtual beam
model, as shown in Figure 2. Boundary conditions of the beam were also incorporated into the model.


Figure 2. Digitally measured geometry of the curved beam in fixture (axes: length, width, and thickness in mm); bolt torque = 90 in-lbs

Figure 3. Finite element model of curved beam in clamped-clamped configuration

The first finite element model included the apparatus the beam would be clamped to as well as the surface
of the shaker the apparatus would attach to. This model was simplified to a finite element model where
the beam itself was only subjected to end boundary conditions, as shown in Figure 3. This model was
then studied for its nonlinear response under forced excitation. To generate a forcing function of
relevance, the beam's natural frequencies under non-idealized geometry and boundary conditions were first
estimated. The loading profile was then used to study the effect of increasing excitation amplitude.
The simulation was conducted in the Abaqus/CAE environment, this time directly observing the
displacements at three specific nodes in the mesh of the beam model. The peak-to-peak displacement
amplitudes for the first six natural frequencies, tested against varying applied force amplitudes, were
gathered and the results plotted.
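To make the post-processing concrete, the short sketch below extracts a peak-to-peak amplitude from a
nodal displacement history; the function and signal here are illustrative assumptions, not the author's
script.

    # Peak-to-peak displacement of a nodal time history, as tabulated in
    # Appendix B for each node, displacement component, and excitation level.
    import numpy as np

    def peak_to_peak(history):
        """Peak-to-peak amplitude of a 1-D displacement time history."""
        history = np.asarray(history, dtype=float)
        return history.max() - history.min()

    # Example: an assumed steady-state z-axis (U3) response sampled in time.
    t = np.linspace(0.0, 1.0, 2000)
    u3 = 0.02 * np.sin(2.0 * np.pi * 50.0 * t)   # displacement in mm
    print(peak_to_peak(u3))                      # ~0.04 mm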

Results
The results from the Abaqus simulations were analyzed to obtain the out-of-plane motion of the beam and
the peak-to-peak displacement at the selected locations. This is shown in Figure 4. As expected, it
was found that the vertical motion of the beam (in this case, the z-axis) experienced the largest
peak-to-peak displacement, followed by the lateral direction (x-axis) and the transverse direction
(y-axis). This is indicative of nonlinear modal coupling among the various mode shapes of the beam, as
shown in the appendix.



Figure 4. z-axis displacement: peak-to-peak amplitude response (U3 at Node 2) versus applied amplitude for modes M2–M6

Figure 4 shows that the peak-to-peak displacement amplitudes rise exponentially as the excitation
amplitude is increased. These results are replicated for the two other nodes located on the beam. In
future work, these results will be compared against the experimental data gathered from the shaker table
tests. In those experiments, the beam will be clamped using the same boundary conditions as those in the
finite element analysis program and excited to the same frequencies.

This research is a first step in developing a reliable modeling and simulation approach to estimate the
non-linear structural response of aerospace structures subjected to various types of loading in an
integrated computational and experimental approach. The corresponding data for all three nodes and
displacement responses to the three axes can be found in Appendix B.

Appendix A: Abaqus renderings


Figure 5. Tested beam specimen


Figure 6. First Mode


Figure 7. Second Mode


Figure 8. Third Mode


Figure 9. Fourth Mode


Figure 10. Fifth Mode

Figure 11. Sixth Mode


Figure 12. Applied end boundary condition

Figure 13. Full assembly (not tested)


Figure 14. Full assembly (not tested)


Figure 15. Full assembly, transparent view

Figure 16. Full assembly, mesh view


Appendix B: Excel charts


Figure 17. Node 1, x-axis


Figure 18. Node 1, y-axis

Figure 19. Node 1, z-axis

[Chart data for Figures 17–19: peak-to-peak amplitude response (U1, U2, U3 at Node 1) versus applied amplitude (0.0001–0.01) for modes M2–M6.]

Figure 20. Node 2, x-axis

Figure 21. Node 2, y-axis

Figure 22. Node 2, z-axis

[Chart data for Figures 20–22: peak-to-peak amplitude response (U1, U2, U3 at Node 2) versus applied amplitude (0.0001–0.01) for modes M2–M6.]

Figure 23. Node 3, x-axis

Figure 24. Node 3, y-axis

Figure 25. Node 3, z-axis

[Chart data for Figures 23–25: peak-to-peak amplitude response (U1, U2, U3 at Node 3) versus applied amplitude (0.0001–0.01) for modes M2–M6.]
References
1. Inman, Daniel J. Engineering Vibration. 3rd Ed. New Jersey: Prentice Hall, 2008. Print.
2. Abaqus/CAE. Vers. 6.10-1. Providence, RI: Dassault Systemes, 2010. Computer software.
3. Luo P. F., Chao Y. J., Sutton M. A., Peters III W. H., Accurate measurement of three-dimensional
deformation in deformable and rigid bodies using Computer Vision, Experimental Mechanics 33:
123–132, 1993.
4. Mignolet, M.P., Soize, C., Stochastic reduced order models for uncertain geometrically nonlinear
dynamical systems. Computer Methods in Applied Mechanics and Engineering, 197, 3951-3963,
2008.
5. Murthy, R., Wang, X. Q., Perez, R. and Mignolet, M. P., Uncertainty-based experimental validation
of nonlinear reduced order models, Proceedings of X Conference on Recent Advances in Structural
Mechanics, Southampton, UK, July 2010.
6. Parks, A. K., Eason, T. G., Abanto-Bueno, J. L., Dynamic response of curved beams using 3D digital
image correlation, Proceedings of the annual SEM conference, Indianapolis, June 2010.
7. Przekop, A., Rizzi, S. A., Nonlinear reduced order random response analysis of structures with
shallow curvature. AIAA Journal, 44, 1767-1778, 2006.
8. Soper, H. J., Shukla, A. and Spottswood, S. M., Stochastic response of a curved beam: a comparison
of Fokker-Planck equation approach with Monte Carlo simulations of reduced order models,
Proceedings of X Conference on Recent Advances in Structural Mechanics, Southampton, UK, July
2010.
9. Spottswood, S. M. Eason, T. G., Wang, X. Q., Mignolet, M. P., Nonlinear reduced order modeling of
curved beams: a comparison of methods. Proceedings of the 50th Structures, Structural Dynamics,
and Materials Conference, Palm Springs, California, 2009. AIAA Paper AIAA-2009-2433.
10. Zulli D., Alaggio R., Benedettini F., Non-linear dynamics of curved beams. Part 2, numerical analysis
and experiments, International Journal of Non-Linear Mechanics 44: 630–643, 2009.
Mathematics of Rockets

Student Researcher: Jennifer J. Lyon

Advisor: Dr. Jennifer Hutchison

Cedarville University
Department of Science and Mathematics, and Education

Abstract
This project will focus on geometry and algebra. The project will follow a unit on trigonometry
and trajectory. Then we will spend several class days focusing on applying the concepts we
learned specifically to rockets. This will include ideas of angle of trajectory, height and distance
a rocket will reach with known information, and more specifically the basic design. The students
will then be given this final project as a culmination of the unit. They will be given instructions
on how to build their rockets similar to those found in the "High-Power Paper Rockets" activity in
NASA's Rockets Educator's Guide. Then they will have a few days to design and build their
rockets before Launch Day where they will have a competition to see who can launch their
rocket the farthest.

Objectives

To see the effects of the angle of trajectory
Use trigonometry to make predictions
See the effects of designs on flight

Lesson
The students will be following the instructions on how to build their
rockets given in NASA's Rockets Educator's Guide. They will be
given a week to design, build, and predict the outcomes for their
rockets.

The first day will focus on them designing their rockets. This will
include fin design and placement, length of the rocket, and the nose
design. The next few days they will be supplied the materials to begin
creating their rockets based on their design. I would encourage them to use time at home to
construct their rockets so that class time can be used to test their designs. They will be given time
in class to use the rocket launcher to test their design so that they can remodel if they would like.

Ideally the building and designing of their rockets would be done during the week and then they
would be given the weekend to finalize their project. Then when they return there will be a
Launch Day (this may be more than one day depending on the size of the class and length of
the class period). The class will begin with a competition to see whose rocket will launch the
farthest. The rocket launcher angle will be set to 45° and each student will launch their
rocket to see whose goes the farthest (it is important to pump an equal amount of pressure for
each launch as well).
Next the students will get to launch their rockets choosing their own angle to launch. They will
record this angle and the distance their rocket launches to use in answering questions on a
worksheet.
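For reference, the worksheet predictions can be checked with the ideal (drag-free) projectile relations;
the sketch below is an illustrative aid with assumed numbers, not an activity from the NASA guide.

    # Ideal projectile relations students can apply to their recorded launch
    # angle and measured distance: range R = v^2 * sin(2*theta) / g, and peak
    # height H = R * tan(theta) / 4 for a launch and landing at the same height.
    import math

    def launch_speed(range_m, angle_deg, g=9.81):
        """Launch speed (m/s) implied by a measured range and angle."""
        theta = math.radians(angle_deg)
        return math.sqrt(range_m * g / math.sin(2.0 * theta))

    def max_height(range_m, angle_deg):
        """Peak height (m) of the ideal trajectory."""
        return range_m * math.tan(math.radians(angle_deg)) / 4.0

    # Example: a rocket launched at 45 degrees lands 30 m away.
    print(launch_speed(30.0, 45.0))  # ~17.2 m/s
    print(max_height(30.0, 45.0))    # 7.5 m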

Alignment
Geometry and Spatial Sense

Benchmark A Use trigonometric relationships to verify and determine solutions in problem
situations.

Benchmark G Describe and use properties of triangles
to solve problems involving angle measures and side
lengths of right triangles.

Assessment/Results
Students are able to visualize the geometry concepts better
after this project. It takes a little extra explanation but
when they have a reference they are a lot more excited to
do the work. Having their own personal data also makes
doing problems more enjoyable; rather than just having
problems from a textbook.

Interpretation
This project allows students a hands-on opportunity to see
how geometry is used in both the design and results of
rockets. It also gives them a different picture for
trigonometric ideas and their own personal data to solve
problems with.

Resources
1. "Ohio's Academic Content Standards in Mathematics." Ohio Resource Center. Web. 15 Apr.
2012. ohiorc.org/standards/ohio/search/mathematics/benchmark/118.aspx?&page=1
2. Shearer, Deborah A., and Gregory Vogt. "High-Power Paper Rockets." Rockets: educator's
guide with activities in science, technology, engineering and mathematics. Washington, DC:
National Aeronautics and Space Administration, 2008. 91-96. Print.
Analytical Modeling for Waterfloods in the Appalachian Basin

Student Researcher: Jennifer E. Masters

Advisor: David C. Freeman

Marietta College
Department of Petroleum Engineering

Introduction
The production of oil and gas has been a major part of the Appalachian Basin's industry since the first
well was drilled in 1859. More recently, the production of natural gas from unconventional shale
reservoirs has overshadowed oil production in the area. With the current price differential between oil
and gas, it is potentially lucrative to reexamine the oil wells that used to dominate the industry. Most of
these wells were produced solely on primary production, meaning just a small fraction of the original-oil-
in-place has been recovered. One secondary recovery option that has been used successfully in the Basin
is waterflooding. Implementing a well-designed waterflood will likely lead to a greater percentage of the
original-oil-in-place being recovered. An issue that a producer may encounter when considering a
waterflood is the lack of easy-to-use analytical models to predict waterflood performance. It is often not
feasible to spend the time and money it would take to have a waterflood simulation performed.

Waterflood Model
In order to provide a user-friendly way to analytically model waterflood performance, two popular
waterflood models were integrated into an Excel workbook. The two models provided are the Craig-
Geffen-Morse model and the Modified Dykstra-Parsons model.

The Craig-Geffen-Morse (CGM) model takes into consideration displacement and areal sweep efficiency
while vertical sweep efficiency is assumed to be equal to 1. This popular model is an effective way to
predict waterflood performance for reservoirs with good vertical homogeneity. The CGM model breaks
the waterflood into the following four stages:

1. Start to Interference
2. Interference to Fill-Up
3. Fill-Up to Water Breakthrough
4. Water Breakthrough to End-of-Project.

Each stage of the model is clearly outlined in the workbook and performance curves are automatically
generated. For layered reservoirs, this model can be used by calculating each layer's performance
separately and summing the results.

In order to incorporate vertical heterogeneity into the waterflood model, the Modified Dykstra-Parsons
method should be used. This model uses the permeability variation factor, V, to include the reservoir's
degree of heterogeneity. A V equal to 0 indicates a completely homogeneous reservoir, while a V equal
to 1 represents a completely heterogeneous reservoir. There are five assumptions in this model that must
be considered:

1. No cross-flow between layers
2. Immiscible flow
3. Linear flow
4. The distance water travels through each layer is proportional to the permeability of the layer
5. Piston-like displacement

Steps for calculating the permeability variation factor are included in the workbook and production plots
for the waterflood are automatically generated.
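As an illustration of those steps, the sketch below estimates V from a set of core permeabilities by
fitting a log-normal distribution; the function name and sample data are assumptions for illustration,
not contents of the workbook.

    # Dykstra-Parsons permeability variation factor: V = (k50 - k84.1) / k50,
    # where k50 and k84.1 are read from a log-normal fit of permeability versus
    # the "portion of samples with higher permeability".
    import numpy as np
    from scipy import stats

    def dykstra_parsons_v(perms_md):
        """Permeability variation factor V from core permeabilities (md)."""
        k = np.sort(np.asarray(perms_md, dtype=float))[::-1]   # descending
        n = len(k)
        frac_greater = (np.arange(1, n + 1) - 0.5) / n         # plotting position
        z = stats.norm.ppf(frac_greater)                       # normal scores
        fit = stats.linregress(z, np.log(k))                   # log-normal fit
        k50 = np.exp(fit.intercept)                            # median, z = 0
        k841 = np.exp(fit.intercept + fit.slope * stats.norm.ppf(0.841))
        return (k50 - k841) / k50

    # Example with assumed core data (md): a fairly heterogeneous layer.
    print(dykstra_parsons_v([120, 86, 60, 43, 30, 21, 15, 10, 7, 5]))  # ~0.6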

The reservoir and fluid properties that must be manually input are clearly indicated. Because some of this
information may be unavailable, a fluid properties calculator is provided at the beginning of the
workbook. Also, the equations¹ that are built into the model, along with all tables and graphs that are
required, are provided in the workbook for the user's reference, making the model completely
self-contained.

Examples
In order to illustrate the usefulness of the workbook, an example using each model is provided. The
CGM model is illustrated using data from a potential waterflood candidate in Harrison County, Ohio.²
Data from an ongoing waterflood in Calhoun and Roane Counties in West Virginia³ is used to illustrate
the Modified Dykstra-Parsons model.

References
1. Ahmed, Tarek. Reservoir Engineering Handbook. 3rd Ed. Burlington: Gulf Professional Publishing, 2006. Print.
2. Freeman, D. C., Marietta College.
3. Freeman, D. C. "A Practical Methodology for Evaluating Appalachian Basin Waterfloods." SPE. (2009): Print.

Thermal Conductivities of Heat-Treated and Non-Heat-Treated Carbon Nanofibers

Student Researcher: Eric K. Mayhew

Advisor: Dr. Vikas Prakash

Case Western Reserve University
Department of Mechanical and Aerospace Engineering

Abstract
The thermal conductivities of commercially available, chemical vapor deposition-grown, heat-treated and
non-heat treated carbon nanofibers are reported. The thermal conductivity of the individual samples is
measured using a three-omega-based T-type probe, composed of a suspended platinum wire. The
experiments are conducted at room temperature inside of a scanning electron microscope. The results
show a substantial increase in thermal conductivity when the samples are heat-treated at 3000 °C for 20
hours. The highest measured thermal conductivity of a heat-treated sample is 196 ± 100.2 W/m-K, and the
highest measured thermal conductivity of a non-heat-treated sample is 9.7 ± 1.6 W/m-K. The heat-treated
samples appear to show strong length-dependence, but further experiments must be conducted to establish
this relation. The high thermal conductivities of these samples show their potential for use in energy and
thermal management applications.

Project Objectives
Carbon nanostructures such as carbon nanotubes, carbon nanofibers, and the graphene sheets of which
these structures are made have remarkable mechanical, thermal, and electrical transport properties.
These properties give the structures potential for use in energy and thermal management
applications. However, there are many obstacles to overcome before this potential can be realized. The
ability to extend these properties beyond the dimensions of the individual structures is limited by their
small size and even smaller out-of-plane dimensions. Thus, these individual nanostructures have limited
use in practical applications. One solution to this problem is to develop nanostructure composites that
take advantage of the properties of the individual components.

While it is already possible to grow such structures, modeling them is an important step in understanding
the transport properties of these three-dimensional nanostructures. One step in modeling the components
that make up the structure is to characterize the individual elements of the structures. This particular
project examines the thermal transport properties of carbon nanofibers, specifically the thermal
conductivities of both heat-treated and non-heat-treated carbon nanofibers. The primary objective for this
project is to experimentally determine whether there is a difference in thermal conductivities between
heat-treated and non-heat-treated carbon nanofibers.

Methodology
The carbon nanofiber sample groups are composed of a batch purchased from US Nanomaterials
Research. The non-heat-treated samples (US4450) are termed non-graphitized, and the heat-treated
samples are termed graphitized. US Nanomaterials Research uses chemical vapor deposition (CVD) to
grow the nanofibers. The samples in each batch ranged in diameter from 200 to 600 nm. The graphitized
samples are heat-treated by US Nanomaterials Research at a temperature of 3000 °C for 20 hours. A total
of ten measurements of thermal conductivity are performed, five on each of the graphitized and
non-graphitized batches.

Dames [1,2] established a method for measuring the thermal conductivities of nanowires and nanotubes
using a one-dimensional heat transfer model. The governing equation for one-dimensional steady-state
heat transfer is:

$$\kappa A \frac{d^{2}T}{dx^{2}} + \frac{Q}{L} = 0$$

where $\kappa$ is the thermal conductivity of the wire, $T$ is the temperature of the hot wire at position
$x$, $Q$ is the Joule heating dissipated in the wire, $L$ is the length of the wire, and $A$ is the wire
cross-sectional area. Convection and radiation are ignored in the analytical model because the experiment
is conducted under a high vacuum in a scanning electron microscope and uses only small temperature
differences.

The sample is placed in contact with the center of the wire, creating a heat sink boundary condition at
$x = 0$. The model is based on the use of a Wollaston wire with the silver coating etched away in the
center, leaving the platinum core exposed. The silver coating is electrically and thermally conductive, and
it has a much larger diameter than the platinum core. Therefore, it is a safe assumption that the ends of the
platinum wire are at the ambient or room temperature. Figure 1 shows the temperature distribution across
the platinum wire with and without the sample attached.

The method that is used for these experiments is the three-omega method used by Bifano [3]. In these
experiments, the platinum wire is heated using a current source which provides a small alternating
current, 35 to 65 A RMS. A lock-in amplifier detects the three-omega voltage signal, and it measures the
RMS voltage across the platinum wire. It is essential that the current oscillates at a frequency that is low
enough to maintain the steady-state approximation. For the purpose of this experiment, 1 Hz is an
appropriate frequency to use.

The platinum wire's resistance changes as a function of temperature above the ambient temperature
according to:

$$R = R_{0}\left(1 + \alpha\,\overline{\Delta T}\right)$$

where $R$ is the electrical resistance, $R_{0}$ is the electrical resistance when the wire is at the
ambient temperature, $\alpha$ is the temperature coefficient of resistance, and $\overline{\Delta T}$ is
the average temperature rise across the wire.



The lock-in amplifier detects the change in the three-omega RMS voltage and therefore the change in
resistance of the wire, given the known driving current. The three-omega electrical resistance of the wire
is given by:

$$R_{3\omega} = \frac{V_{3\omega}}{I_{\mathrm{rms}}}$$

The change in resistance directly gives the average temperature rise in the wire for the cases when the
sample is and is not contacting the wire. Since the driving current and original wire resistance are known,
the amount of RMS Joule heating, $Q_{\mathrm{rms}}$, is given by:

$$Q_{\mathrm{rms}} = I_{\mathrm{rms}}^{2} R_{0}$$

where $I_{\mathrm{rms}}$ is the RMS value of the driving current. The average temperature rise
$\overline{\Delta T}$ is plotted as a function of $Q_{\mathrm{rms}}$ for the range of heating currents.
The thermal conductivity is determined from the drop in the slope of the line between the cases when the
sample is not touching the heated wire and when the sample is touching the heated wire.
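A brief sketch of this data reduction follows; the helper names and numbers are assumptions, and the
final conversion from the slope drop to the sample conductivity, which depends on the probe geometry,
follows the model of Dames [1,2] and is not reproduced here.

    # Recover the average temperature rise from measured resistance and fit the
    # slope of mean temperature rise versus RMS Joule heating.
    import numpy as np

    def avg_temperature_rise(R, R0, alpha):
        """Mean wire temperature rise from R = R0 * (1 + alpha * dT_avg)."""
        return (np.asarray(R) - R0) / (alpha * R0)

    def slope_dT_vs_Q(I_rms, R, R0, alpha):
        """Least-squares slope of mean temperature rise vs. RMS Joule heating."""
        Q = np.asarray(I_rms) ** 2 * R0          # Q_rms = I_rms^2 * R0, in W
        dT = avg_temperature_rise(R, R0, alpha)
        return np.polyfit(Q, dT, 1)[0]           # slope in K/W

    # Assumed example: simulated measurements before and after sample contact;
    # the drop in slope is the input to the fin model of refs [1,2].
    I = np.array([35e-6, 45e-6, 55e-6, 65e-6])   # driving currents, A RMS
    R_before = 120.0 * (1.0 + 0.0024 * 8.0e5 * I**2)
    R_after = 120.0 * (1.0 + 0.0024 * 5.0e5 * I**2)
    print(slope_dT_vs_Q(I, R_before, 120.0, 0.0024)
          - slope_dT_vs_Q(I, R_after, 120.0, 0.0024))   # slope drop, K/W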

Results Obtained
The non-graphitized samples had thermal conductivities ranging from 2.2 ± 0.4 W/m-K up to 9.7 ± 1.6
W/m-K. Two of the non-graphitized samples are shown in Figure 2. The graphitized samples had thermal
conductivities ranging from 24.5 ± 9.8 W/m-K up to 196 ± 100.2 W/m-K. Two of the graphitized samples
are shown in Figure 3. Figure 4 plots the thermal conductivities as a function of heat-treatment and
length.

Significance and Interpretation
The graphitized samples have noticeably higher thermal conductivities than the non-graphitized samples.
Figure 4 clearly shows this, as the minimum thermal conductivity measured for the heat-treated samples
exceeds the maximum measured for the non-heat-treated samples.

One factor that may play a role in the distinct differences between the samples' thermal conductivities is
the length. The lengths of the non-graphitized samples are generally shorter than those of the graphitized
samples. All of the measured non-graphitized samples have lengths less than 20 µm, while three of the
graphitized samples measured have lengths over 100 µm. Due to the large differences in lengths between
the two sets of samples, it is difficult to say how much of the thermal conductivity differences are due to
the length as opposed to the heat treatment. Further experiments are required to determine the length-
dependence versus heat-treatment-dependence. The thermal conductivities of the graphitized samples
seem to show strong length dependence. While it is not as evident, the non-graphitized samples may also
have length dependent thermal conductivity. The two highest measured thermal conductivities in the non-
graphitized samples are measured in the two longest samples. Further experiments need to be conducted
to determine the extent to which length corresponds to higher thermal conductivities.

Figures and Charts

Figure 1. Schematic of the temperature distribution across the platinum probe wire.

The platinum probe wire is represented by the horizontal line. The analytical model uses a constant
ambient temperature boundary condition at each end of the wire and on the manipulator. A heat flux
boundary condition at the center ($x = 0$) of the wire is used to model the sample. The dashed parabolic
curve represents the temperature distribution above ambient temperature due to a uniformly applied RMS
heat flux before the sample is touched to the wire. The solid curve represents the temperature
distribution after the sample is touched. The figure is reprinted with the permission of Bifano [3].


Figure 2. Non-graphitized carbon nanofibers sample numbers 3 and 5.

Sample 3 had a length of 12.7 µm, an average diameter of 390 nm, and a thermal conductivity of 2.5 ± 0.3
W/m-K. Sample 5 had a length of 19.9 µm, an average diameter of 273 nm, and a thermal conductivity of
7.2 ± 1.0 W/m-K.


Figure 3. Graphitized carbon nanofibers sample numbers 2 and 5.

Sample 2 had a length of 492 µm, an average diameter of 536 nm, and a thermal conductivity of 196 ± 100
W/m-K. Sample 5 had a length of 102 µm, an average diameter of 536 nm, and a thermal conductivity of
66.8 ± 5.6 W/m-K.


Figure 4. Measured thermal conductivities of the graphitized and non-graphitized samples

Acknowledgments
The author would like to thank Mike Bifano for all of the assistance and preparation that he provided in
setting up and executing the experiments. The author would also like to thank Dr. Vikas Prakash for
providing the materials and equipment necessary to conduct the research. The author would like to
acknowledge the support of the Ohio Space Grant Consortium. The author would finally like to
acknowledge the support of the Air Force Office of Scientific Research (AFOSR) grant FA9550-08-1-
0372 and the National Science Foundation MRI grant CMMI-0922968.

References
1. Dames, C., S. Chen, C.T. Harris, J.Y. Huang, Z.F. Ren, M.S. Dresselhaus, and G. Chen, Review of
Scientific Instruments 78, (2007).
2. Dames, C. and G. Chen, Review of Scientific Instruments 76, 124902 (2005).
3. Bifano, M.P.F., J. Park, P. Kaul, A.K. Roy, and V. Prakash, Journal of Applied Physics 111, 054321
(2012).
The Planets and Me: An Exploration Webquest

Student Researcher: Caroline I. Miller

Advisor: Dr. Ann MacKenzie

Miami University
Department of Education, Health and Society

Abstract
In my lesson, students will become engaged by utilizing many of the resources that NASA offers online
through an educational webquest that I created. The students will explore all of the planets in our solar
system and create a research page. From this research page, the students will make a model that is
roughly to scale for the classroom. The planets will then be hung around the room to make a miniature
scale model of the solar system.

Lessons
The basis of this activity came from exploring the many resources that NASA has to offer on their
websites. First, the students are separated into teams of three or four. Once they are in their teams, they
will be assigned a planet, which they will research. Once they have their planet, they will all then begin
the webquest. A webquest is a lesson where most, if not all of the information the students learn is from
the Internet, and from websites that the teacher has previously found for that specific lesson. Teachers
create a page with instructions for the students on what they must learn from each website, then guide the
students through the websites through links that they
provide. Webquests also have educational videos on them,
as well as animations. The children will then get on the
webquest, which will give them directions on what they
need to research. They will fill out a worksheet of
information, which will include the size, distance, and
special characteristics of that planet. To make sure
everyone in the team benefits from the research, have the
students split the questions evenly within the team.

Once the students have researched their planets with the webquest, they will construct a model of the
planet they studied. They will do this by using a set of supplies the teacher will provide them, such as
foam balls, modeling clay, and paint. They will create a roughly scaled model of their planet by
comparing it to a model of the Earth the teacher will have previously created. Once the students create
models of their planets, the class as a whole places the models around the room at approximately scaled
distances from each other. Once finished, there will be a scale version of the solar system around the
classroom.
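The scaling arithmetic can be sketched as follows; the Earth-model size is an illustrative value a teacher
might pick, and the sketch itself is a teacher-side aid, not part of the webquest.

    # Convert real planet diameters to classroom scale, in the spirit of
    # standard 5.MD.1. The scale is set so Earth matches the teacher's model.
    EARTH_DIAMETER_KM = 12_742
    EARTH_MODEL_CM = 5.0              # assumed size of the teacher's Earth model

    planets_km = {                    # approximate equatorial diameters, km
        "Mercury": 4_879, "Venus": 12_104, "Earth": 12_742, "Mars": 6_779,
        "Jupiter": 139_820, "Saturn": 116_460, "Uranus": 50_724, "Neptune": 49_244,
    }

    scale = EARTH_MODEL_CM / EARTH_DIAMETER_KM   # cm of model per km of planet
    for name, d_km in planets_km.items():
        print(f"{name}: {d_km * scale:.1f} cm")  # e.g., Jupiter ~54.9 cm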

Objectives
- To explain the characteristics of the solar system in comparison to earth
- To convert the size of a planet to a realistic size to fit in the classroom
- Create a model of a planet

Alignment
Grade 5 Science Standards:
- Cycles and Patterns in the Solar System: this topic focuses on the characteristics, cycles and patterns
in the solar system and within the universe.
- Develop descriptions, models, explanations and predictions of patterns in the solar system and within
the universe.
Grade 5 Math Standards: 5.MD.1. Convert among different-sized standard measurement units within a
given measurement system (e.g., convert 5 cm to 0.05 m), and use these conversions in solving multi-
step, real world problems.
Underlying Theory
This lesson is mainly inquiry based. Webquests are designed to
have children learn through an inquiry approach. Although there
are certain things that they should gain from exploring the
websites, the children are not limited, and are encouraged to
discover whatever they can about their planet on this webquest by
themselves. It is largely student directed with questions that they
are asked. The model building part of the lesson uses a
constructivist approach where the students then take the
knowledge that they previously learned from the webquest and apply it to the problem of creating a
classroom-sized version of their planet.

Student Engagement
In this lesson, the students are engaged from the beginning. They research their planets through many
resources online, which are both interactive and engaging. The webquest also offers much variety by
giving them resources such as videos, games, photos, and articles. This combined with building a model
gives them a hands-on experience allowing the students to be engaged and apply what they have learned.

Resources
This project is strongly based on technology. The students must have access to computers. Every student
can have his/her own, or each group can share a computer. For the model-building portion of this activity,
the students all need appropriate supplies to create their models. This would include foam balls, paint,
modeling clay, toothpicks, craft supplies and string to hang them up with when they are completed.

Results
Unfortunately, due to certain circumstances I did not get to use an entire classroom to test this lesson. I
did, however, send it to a few students to have them try the webquest. I received great reports back that
it was easy to navigate. They also told me that it was easy to use at home with their laptops, where
their parents/guardians could help them complete the webquest.

Assessment
This activity can be assessed by the forty-point rubric that I created. The rubric is also
placed on the webquest so that teachers can have easy accessibility to it. The rubric shows the assessment
for the research sheet, drawing of the planet, the draft of the model, as well as the planet model. For the
students, there is a checklist on the Webquest so that they make sure that they are on track with the
activity.

Conclusion
This activity allows students to explore the planets in our solar system. It shows them how beneficial
online resources can be in researching a topic. Creating a model also gives them a visual of what our
planets would look like if the classroom were the solar system. Lastly, the Webquest created an easy
interactive research tool that could be used in the classroom or at home. Although I geared the lesson
toward 5th graders, it could be easily adapted for older or younger students.
Regulation of Genes By Ets Family Transcription Factors

Student Researcher: Michelle M. Mitchener

Advisor: Dr. Alicia E. Schaffner

Cedarville University
Department of Science and Mathematics

Abstract
A cell's potential to proliferate, differentiate, and respond to its environment is based on its ability to alter
its gene expression. Transcription can be regulated by DNA-binding proteins called transcription factors.
These factors bind various promoter/enhancer elements leading to the activation or repression of specific
target genes. The Ets family of transcription factors has been linked to tumor progression in several types
of cancers, as they control genes regulating the cell cycle, apoptosis, extracellular matrix remodeling, and
cell migration. Therefore, Ets transcription factors are thought to play a key role in tumor invasion and
metastasis. This study sought to elucidate chromatin modification states and Ets transcription factor
locations both prior to and following signal transduction pathway activation.

Project Objectives
In order for the Ets family members to exert their effects, they first must be phosphorylated. In serum-
stimulated fibroblast cells, specific growth factors bind a receptor tyrosine-kinase which then activates
Ras. A kinase cascade ensues, as Ras phosphorylates Raf, which phosphorylates Mek, which
phosphorylates the MAP kinase Erk. Phosphorylated Erk translocates to the nucleus where it
phosphorylates Ets, which then promotes target gene expression. However, it is currently not known
whether Ets is bound to the promoters of these target genes prior to phosphorylation. The first goal of our
research is to elucidate whether Ets is bound to its target gene promoters constitutively or only when
phosphorylated by the activated pathway.

Another area of interest involves the state of the chromatin near these target genes both prior to and
during transcription. Transcription factors exert their effects by interacting with other proteins such as
chromatin remodeling complexes and histone modification enzymes. Ets2 is known to interact with the
histone acetyltransferase CBP. Therefore we hypothesize that there is increased acetylation of histones in
the target gene area upon activation of the pathway, which leads to the opening up of the chromatin to
allow for target gene transcription. The second goal of our research is to compare chromatin acetylation
states before and during Ets2 target gene expression. We are investigating this by performing ChIP
(chromatin immunoprecipitation) analysis on known Ets2 target genes c-myc, MMP-9, and miR17-92.

Methodology
Two different mouse embryonic fibroblast (MEF) cell lines were utilized in these studies. One MEF cell
line was wild-type, while the other MEF line contained constitutively active Ras (due to a glycine to
valine mutation at residue 12 which renders GAP activity null) and thus constitutively expressed Ets.
Cells were cultured in DMEM + 4.5 g/L D-glucose + L-glutamine at 37 °C and 7.0% CO₂. To confirm
expression of Ets in the mutant cell line but not in the wild-type line, nuclear extracts were obtained from
both cell lines and subjected to SDS-PAGE followed by Western analysis using α-Ets1 primary
antibodies (polyclonal rabbit IgG, Santa Cruz Biotech). The positive control consisted of nuclear extracts
from MCF7 cell lines, while the negative control contained extracts from 293T cells. Figure 1 displays
the Western blot results, revealing expression of Ets1 in the mutant line but not in the wild-type line.

In order to perform a successful ChIP, the process of breaking strands of cross-linked DNA into pieces of
lengths 100-300 bp must first be perfected. The first technique used consisted of sonicating the
formaldehyde cross-linked cells for various lengths of time in order to shear the DNA into small pieces.
Cells were cross-linked for five minutes and sonicated on ice using a Model 150 V/T Ultrasonic
Homogenizer (BioLogics, Inc.) for 0, 6, 9, 12, 15, and 18 sets of 15-second pulses at 40% power.
Crosslinking was then reversed using NaCl and RNase at 65 °C overnight, and the DNA was
ethanol-precipitated, redissolved in distilled water, and purified using a QIAquick PCR Purification Kit.
Resultant DNA was run on a 1.5% agarose gel and the fragments were observed upon ethidium bromide
staining (Figure 2).

The second technique utilized to fragment DNA was a micrococcal nuclease (MNase) digestion. The
MNase digestion was optimized using a standardizing protocol with 0, 1, 2, 4, and 6 µL MNase. Resultant
DNA was run on a 1.5% agarose gel and the fragments were observed upon ethidium bromide staining
(Figure 3).

After optimization of the MNase digestion, a preliminary ChIP was performed using α-RNA Pol II and
α-IgG, the positive and negative controls, respectively, given in the ChIP kit (Chromatin Prep Module
#26158, ThermoScientific). Following DNA PCR-cleanup, the isolated DNA was analyzed using
quantitative real-time polymerase chain reaction (qRT-PCR) and GAPDH primers (denaturing at 95 °C for
5 seconds; annealing/elongating at 60 °C for 30 seconds; 50 cycles). The results of qRT-PCR are depicted
in graphical form in Figure 4.

Results Obtained
Results of the Western blot revealed expression of Ets1 in the constitutively active Ras cell line, but not in
the wild-type line, as expected. Decreased fragment sizes of DNA were observed with increasing
sonication time; however, the fragments remained inconsistently sized with not all fragments falling in
the desired range of 100-300 bp. Consequently, MNase digestions were pursued. These proved much
more effective, as evidenced by the decreased length of DNA fragments obtained post-digestion. The
qRT-PCR results indicate that the ChIP needs to be perfected. Given that the fluorescence obtained from
the positive and negative controls was nearly identical and fluorescence only occurred after numerous
cycles, DNA concentrations were probably low and more extensive washes during the ChIP protocol are
likely necessary to prevent the nonspecific binding observed in this study.
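Once the washes are improved, a standard way to quantify such ChIP-qPCR data is the percent-input
method, sketched below with hypothetical Ct values; this analysis is illustrative and was not performed
in this study.

    # Percent-input calculation for ChIP-qPCR: compare the immunoprecipitated
    # (IP) sample against the total-input control after correcting for the
    # fraction of chromatin reserved as input.
    import math

    def percent_input(ct_ip, ct_input, input_fraction=0.10):
        """Percent of input chromatin recovered by the immunoprecipitation."""
        adjusted_input_ct = ct_input - math.log2(1.0 / input_fraction)
        return 100.0 * 2.0 ** (adjusted_input_ct - ct_ip)

    # Example with assumed Ct values: a specific pulldown versus IgG background.
    print(percent_input(ct_ip=28.5, ct_input=30.0))   # e.g., Pol II at GAPDH
    print(percent_input(ct_ip=33.0, ct_input=30.0))   # e.g., IgG control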

Figures









Figure 1. Western blot analysis showing expression of phosphorylated Ets1 in cells with constitutively
active Ras and unphosphorylated Ets1 in MCF7 cells (+ control). Wild-type cells revealed no Ets1
expression.

(Lanes: WT, Ras, + Control. Bands: Ets1-p, Ets1.)

Figure 2. Ethidium-bromide visualized fragmentation of DNA by sonication. The numbers above the
wells indicate the number of sets of 15-second pulses each sample received. The first two lanes consist of
a 1kb marker and a PCR marker.



Figure 3. Ethidium-bromide visualized digestion of DNA by MNase. Following the two markers (which
are the same ones depicted in Figure 2), the numbers above the lanes represent microliters of MNase used
in each digestion.






Figure 4. SYBR green fluorescence measurements as a function of PCR cycles in RT-PCR analysis of
ChIP control DNA. Positive (+), negative (-), and total input (TI) controls are shown.

Acknowledgments
I would like to thank my advisor, Dr. Alicia E. Schaffner, for her guidance and support in this research, as
well as her molecular biology students who often helped with various experiments. I also am much
indebted to Cedarville University's Science and Mathematics Department for purchasing an X-ray
developer (SRX-101A, Konica Minolta Medical & Graphic, Inc.) for the Western blot analysis. Thank
you also to Cedarville's School of Pharmacy for the use of their real-time PCR machine (Bio-Rad CFX96
Real-Time System, C1000 Thermal Cycler).
Synthesis of Polymeric Ionic Liquid Particles and Iron Nano-Particles

Student Researcher: Joseph P. Montion

Advisor: Dr. Maria Coleman

University of Toledo
Department of Chemical Engineering

Abstract
1-Vinyl-3-butylimidazolium bromide, a polymerizable ionic liquid, is hygroscopic. This
makes it ideal for the formation of polymeric ionic liquid (PIL) particles. Using emulsion
polymerization, microparticles can be formed which will have iron nanoparticles grown inside of
them to produce novel reactive materials. This material's main purpose will be to break down
heavy organic pollutants. This project is still in progress; so far the particles have been
successfully synthesized, and the method for the iron formation is still being researched.

Project Objectives
The project's main goal will be to synthesize polymer particles from ionic liquids. Nanoparticles
of zero valent iron will be grown inside the polymeric ionic liquid particles to produce novel
reactive materials. The particles will eventually be used in waterways to break down heavy
organic pollutants. Emulsion polymerization will be used to produce PIL particles. Zero valent
iron is very reactive with organic compounds and is normally injected into the water. The
advantage of using iron grown in PIL particles is that it keeps the iron from agglomerating,
which would reduce its activity. The PIL might act as a catalyst for the reaction, and the ionic
liquid can be hydrophobic, which will help the particle disperse in the organic phase and react
to form less harmful compounds. Another approach that could be taken to forming a polymeric ionic
liquid nanomaterial with iron particles would be electrospinning. Electrospinning would yield a PIL
that, instead of being a small particle, would be a long thin strand, which could possibly be more
effective in reacting with the organic pollutants.

Methodology Used
The methodology is standard emulsion polymerization. The ionic liquid is suspended in a water
phase. Then an oil phase is added to it with a surfactant. The water suspends itself in the
oil phase, creating small drops of water in the oil. To ensure that the water is evenly distributed
throughout the oil phase, the water solution is added dropwise to the oil phase while it is being
stirred. After the two phases have been completely mixed, an initiator is added and the
polymerization begins in the water phase. [1]

Forming Iron inside of the particles has proven more difficult. There have been several different
methods studied. The first is an emulsion polymerization that uses the previously made particles
to hold the water solution then submerging them into the oil phase so that the iron is formed
inside the particles. The other method is to run the iron synthesis alongside the polymerization
reaction.

Using the ionic liquid 1-Vinyl-3-butylimidazolium bromide, I was able to successfully create
particles using the methods above. These particles look like a white paste and are very
hygroscopic. The iron formation reaction was also used on them by using aqueous ferrous
chloride and sodium borohydride. This had an effect on the particles although it was not
determined whether the iron had formed inside the particles or just on the surface.
So far, all we can determine from these results is that it is possible to form the particles and
grow the iron alongside them. There is a lot of room for further study in this area, and more
testing needs to be completed to conclude whether the goals of the project were fulfilled.

Acknowledgments
The University of Toledo Department of Chemical Engineering
Dr. Maria Coleman

References
1. Berge, N. D. and Ramsburg, C. A. (2010) Iron-mediated trichloroethene reduction within
nonaqueous phase liquid. Journal of Contaminant Hydrology, 118, 105-116.
The Study of Wildfires Using Remote Sensing

Student Researcher: Nathaniel J. Morris

Advisor: Dr. Augustus Morris

Central State University
Department of Manufacturing Engineering

Abstract
In order to determine the fire effects on vegetation and forestry, remote sensing is the best technique to
use. The image sensor for remote sensing should be at an altitude of 36 km to capture a sufficient amount
of affected areas. The payload will be mounted on the High Altitude Student Payload (HASP). The
payload must conform to HASP's interface requirements and regulations. As HASP ascends to 36 km
above sea level, the payload and its internal components must survive extremely low temperatures and
near-vacuum conditions. Since HASP is lifted by a small-volume zero-pressure balloon, the platform is
subject to unpredictable rotation and tilting about its axes. This movement will be compensated for by
the electronics inside the payload to guarantee an effective, high-quality remote sensing experiment.

Project Objectives
The scope of the project is to develop a payload that fits within the HASP interface requirements and
regulations while performing effective remote sensing on fire affected areas. The remote sensing will
determine how high-intensity fires affect the health of vegetation and the restoration of forests. In order
to gather data effectively, a variety of sensors for the payloads health and high resolution image sensors
are integrated into the payload. Consequently, the payload is mounted onto HASP and interfaced for
telemetry communication. HASP and the remote sensing payload will ascend to 36 km above New
Mexico and stay afloat for approximately 18 hours. During the flight operation, the ground station will
monitor the health status of the payload and verifying if the payload is functioning correctly. At 36 km the
payload will record image data from areas that have been affected by wildfires. For each image taken of
the fire affected area, the orientation and the geographical center of the image must be recorded during the
flight and extracted to be used in a geospatial analysis.

Methodology Used
The two areas of concern in this project are the remote sensing and the geospatial analysis. The remote
sensing portion of this project includes the development of the physical payload and electronics. The
geospatial analysis is a post flight study of the image data recorded by the payload at an altitude of 36 km.
The project outcome is to determine the effects of wildfires on vegetation and forestry.

Due to HASP interface regulations, the external section of the payload is restricted to no larger
than 15 cm x 15 cm x 30 cm. Additionally, the payload is required to handle a 10g vertical and 5g
horizontal shock and have a mass less than or equal to 3 kg. Consequently, the sensors and electronics
must be designed to have a footprint small enough to fit inside a payload with foam insulation and be
mounted in a way to absorb substantial shock. The payload is designed to be robust enough to handle
environmental factors at an altitude of 36 km. These environmental factors include pressures below 3
mbar and temperatures ranging from −40 °C to 30 °C. The payload side panels are constructed of a
fiberglass-balsa wood composite, framed by angled aluminum for additional support. Also, the
composite panels are coated with appliance white to reduce the amount of infrared heating. The layer
behind the composite panels is made up of a polystyrene foam insulator. This polystyrene foam will
prevent any excessive freezing during the ascension to 36 km. The design mentioned above for the
payload is sufficient to withstand the harsh environmental conditions at 36 km and satisfy HASP interface
requirements.

The electronic design is essential in collecting practical information for a thorough and effective
experiment. The electronics include two image sensors; temperature, humidity, and pressure sensors; a
3-axis accelerometer; and a GPS receiver. The two image sensors operate in different wavelength bands.
One image sensor
is sensitive to the visible wavelength and the other is sensitive to the near-infrared wavelength band. This
combination of wavelength bands provides insightful information about vegetation health. The
temperature, humidity, and pressure sensors are primarily for the payload's internal health during the
flight. Most importantly, the 3-axis accelerometer and GPS provide information about the orientation of
the image data that is being recorded. Hence, these sensors are used to control the quality of the
experiment when it is ascending to 36 km. In addition to the sensors, there is a BASIC Stamp
microcontroller that controls the rate of image recording and how often data is transmitted to the
ground station during the 18-hour flight. The transmitted data is packaged into 10-kilobyte files that
are stored on an online server and can be accessed from the server via the internet.

The remote sensing is the most important part of this project. The image sensor at an altitude of 36 km
points directly down towards the earth's surface, capturing fire-affected areas. To verify healthy
vegetation and restoration of fire-affected areas, the remote sensing has to be in specific wavelength
bands. The two wavelength bands are the visible and near infrared. The wavelength range for the visible
is 400 nm to 700 nm and for the near infrared is 750 nm to 1400 nm. These two wavelength bands were
chosen because healthy vegetation reflectance varies dramatically from the visible to the near infrared.
Variation in
reflectance defines the reflectance profile for particular objects on the ground. Therefore, the two
wavelength bands are a way to help differentiate between other objects and vegetation, as depicted in
Figure 1.


Figure 1. Reflectance with respect to wavelength.

Since images are going to be collected after the fire has affected an area, there has to be a way the images
can be compared to another source of images before the fire. One of these sources is the USGS satellite
image database. The Landsat 5 satellite does remote sensing at 30m resolution in visible and near
infrared. Hence, the Landsat 5 satellite images are a perfect source that can be used to compare with the
payloads remote sensing images. Once the images from both sources are obtained, geospatial software
ENVI 2011 can be used to create overlays of where the healthy vegetation is located after the fire and
where it is located before the fire.

Results Obtained
The results obtained from this project were impressive in terms of the amount of area captured by the
image sensors. However, due to the requirement to capture a significant amount of area, the wide angle
lenses on the image sensors caused a considerable amount of distortion. Therefore, this distortion
provided a challenge during the geospatial analysis. In order for the geospatial analysis to be effective, the
centers of the images in each wavelength band must correspond. The image datas centers from this
experiment were close enough to at least perform the analysis with reasonable outcomes. The number of
images taken during the 18 hour flight was 1510 images, covering approximately 482 km of ground. The
sensor data established the payloads status with respect to time and altitude. Hence, the image datas
geographical location was identified and compared to Landsat 5 satellite images effortlessly.



Figure 2. NDVI image of Albuquerque, New Mexico.

Figure 2 depicts the Normalized Difference Vegetation Index (NDVI) of a set of pictures above
Albuquerque, New Mexico. The NDVI provides an image in gray scale based on the reflectance of the
material. The brighter areas of the image have a high reflectance based on the wavelength band
combination, and the darker areas have a low reflectance. In Figure 1, the reflectance of healthy
vegetation is highest for the selected wavelength band combination. Therefore, healthy vegetation can be
identified on a NDVI image by locating the brightest section of the image. The red arrows located on
Figure 2, points out some of the areas that are identified as healthy vegetation. When comparing the
NDVI image to Landsat 5 satellite (Figure 3) image, the geospatial analysis is correctly identifying the
areas in New Mexico that consist of healthy vegetation.


Figure 3. Landsat 5 satellite image of Albuquerque, New Mexico.

Figure 3 is a Landsat 5 satellite image with red arrows locating the same landmarks as the NDVI image
in Figure 2. However, the geospatial analysis was not able to assist in determining the difference in
healthy vegetation after a wildfire. This limitation is due to the lack of additional wavelength bands
and the high distortion in the recorded images. In order to improve this experiment, the payload would
be required to record images in more than two wavelength bands and to reduce the image distortion by
selecting a narrower-angle lens. In conclusion, this research project has demonstrated that a small
payload is capable of supporting geospatial analysis at an altitude of 36 km without degrading the image
quality.

Acknowledgments
The author of this paper would like to thank the Ohio Space Grant Consortium for providing support in
finalizing this project. The author would also like to thank Ms. Denae Bullard, Mr. Vincent Rono, and
Mr. Kimenyi Joram. Finally, the author would like to thank Dr. Augustus Morris and Dr. Gregory Guzik
for project organization and guidance.

References
1. Arnold, L., Gillet, S., Lardiere, P., Schneider, J., A test for the search for life on extrasolar planets,
Astronomy & Astrophysics, September 2, 2002.
2. IPAC, Near, Mid & Far Infrared, http://www.ipac.caltech.edu/outreach/Edu/Regions/irregions.html
Accessed 05-01-2011.
3. Landsat 5 Satellite Image- Albuquerque, New Mexico Accessed 04-10-2012.
How Ailerons Generate a Rolling Motion for an Aircraft

Student Researcher: Gerald A. Munguia

Advisor: Jeremy Gallogy

Sinclair Community College
Department of Aviation

Abstract
The word Aileron is French for "little wing". Ailerons are used to generate a rolling motion for
an aircraft. They are small hinged sections, usually on the outboard portion of a wing (an "outboard
aileron"), but they may sometimes be situated nearer the wing root (an "inboard aileron"). They usually
work in opposition: as the right aileron is deflected upward, the left is deflected downward and
vice versa.

The ailerons are used to bank the aircraft by causing one wing tip to move up and the other tip to
move down. This creates an unbalanced side force component of the large wing lift force which
causes the aircraft's flight path to curve. Airplanes turn because of banking created by the ailerons,
not because of rudder input.

Project Objective
To give people a better understanding of how ailerons create a rolling motion for an aircraft.

Methodology
In order to understand how ailerons work we have to first take a look at the forces that act upon an
airplane in flight.

There are four aerodynamic forces acting on a plane in flight:

1) Lift- the upward acting force
2) Weight/gravity- the downward acting force
3) Drag- the air resistance or backward acting force
4) Thrust- the forward acting force

These four forces are continuously battling each other while the plane is in flight. Gravity opposes
lift, thrust opposes drag. To take off, the aircraft's thrust and lift must be sufficient to overcome its
weight and drag. To land, an aircraft's thrust must be reduced safely below its drag, as its lift is
reduced to levels less than its weight. In flight at constant speed the forces equal each other: thrust
equals drag and lift equals the pull of gravity.

Results Obtained
The ailerons on an airplane's wings control roll around the longitudinal axis (the line from the nose
of the plane to the tail). They are tied to the control wheel, or stick, in the cockpit. When the
control wheel is turned left, the aileron on the left wing goes up and the one on the right goes down
and vice versa.

Ailerons alter the lifting ability of the wings slightly. When an aileron is lowered, the lift on the
outer portion of that wing increases, causing that wing to rise a little. When an aileron is raised, the
lift on the outer portion of that wing is decreased slightly, causing that wing to drop a little. Since
the ailerons work together, simultaneously tied to the control wheel, their action causes the airplane
to roll.
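As a rough illustration of this differential-lift mechanism (not taken from the original paper), the
following Python sketch estimates the net rolling moment when one wing gains, and the other loses, an
equal increment of lift; the numbers are hypothetical.

def rolling_moment(delta_lift_newtons, moment_arm_m):
    # The down-aileron wing gains delta_lift and the up-aileron wing loses the
    # same amount; both act at the aileron's spanwise arm in the same
    # rotational sense, so the two contributions add.
    return 2 * delta_lift_newtons * moment_arm_m

# e.g., +150 N on one wing and -150 N on the other, 4 m from the roll axis:
print(rolling_moment(150.0, 4.0), "N*m")  # 1200.0 N*m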

Significance and Interpretation of Results
The Wright Brothers used wing warping instead of ailerons for roll control. As aileron designs were
refined, and aircraft became larger and heavier, it was clear that ailerons were much more effective
and practical for most aircraft for roll control. With ailerons in the neutral position, the wing on the
outside of the turn develops more lift than the opposite wing due to the variation in airspeed across
the wing span, which will cause the aircraft to continue to roll. Once the desired angle of bank
(degree of rotation on the longitudinal axis) is obtained, the pilot uses the opposite aileron to prevent
the angle of bank from increasing due to this variation in lift across the wing span. This opposite use
of the control must be maintained throughout the turn. The pilot also uses a slight amount of rudder
in the same direction as the turn to counteract adverse yaw (a yawing movement in the opposite
direction to roll) and produce a coordinated turn where the fuselage is parallel to the flight path. A
slip indicator, the ball", in the cockpit lets the pilot know when this is achieved.

Figures and Charts











References
1. Chanute Air Museum: How Does an Airplane Fly?
2. NASA Glenn Research Center: Ailerons
3. Wikipedia
4. Introduction to Aircraft Maintenance by Avotek
5. U. S. Centennial of Flight Commission: Ailerons
ACTH Signaling in Tetrahymena Thermophila
Student Researcher: Justin S. Nichols
Advisor: Heather Kuruvilla
Cedarville University
Department of Science and Mathematics

Abstract
Adrenocorticotropic hormone (ACTH) is an avoidance-inducing chemorepellent in the single-celled
eukaryote Tetrahymena thermophila. Upon introduction of a chemorepellent into their environment,
Tetrahymena will exhibit avoidance through ciliary reversal and abnormal swimming patterns, both of
which can be observed under a dissecting microscope. ACTH 6-24 (charge of +8) is a positively charged
fragment of ACTH that causes high levels of avoidance in Tetrahymena. We hypothesized that ACTH 6-
24 would signal in Tetrahymena through a pathway analogous to pathways used by other polyatomic
cations in Tetrahymena or to the human ACTH signaling pathway. Other positively charged
chemorepellents in the ciliate have been shown to signal for avoidance through a G-protein receptor and
Ca2+ dependent depolarization (Kuruvilla et al., 2008). In humans, ACTH acts on zona fasciculata cells
in the adrenal glands to induce cortisol secretion. ACTH signals through a pathway involving a G-protein
receptor, the secondary messenger cAMP, inhibition of bTREK-1 K+ channels, and Ca2+-based
depolarization (Enyeart et al., 1996; Kimoto et al., 1996; Liu et al., 2008).

Project Objectives
By using pharmacological agents to selectively inhibit hypothetical pathway intermediates, avoidance to
ACTH in Tetrahymena can be blocked if its pathway involves one of the inhibited intermediates. Our
goal in this project was to either verify or reject our hypothetical ACTH 6-24 signaling pathways by
blocking potential intermediates in the pathway in order to gradually map out the signaling cascade of the
molecule.

Methods
In order to test our hypothesis, behavioral assays were performed on Tetrahymena thermophila. 300 µl of
cells were washed with Tris buffer and transferred to the first well of a three-well microtiter plate. A
fraction of the cells was transferred to the second well, which contained 300 µl of buffer as well as the
inhibitor of interest diluted to various concentrations. After an adaptation period of about 15 minutes, cells
from the second well were transferred to the third well, which contained 300 µl of buffer along with the
diluted inhibitor and a 5 µM concentration of ACTH 6-24.

The cells were observed for the first 1-5 seconds of entry into the third well and were scored as either
showing avoidance (as indicated by ciliary reversal and circular swimming patterns) or as having been
inhibited (indicated by normal helical swimming patterns). Within groups of 10 cells, percent avoidance
was calculated and the percentages of all the groups of 10 tested were then averaged.
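For illustration, the scoring described above can be expressed as a short Python sketch; the per-group
counts shown are hypothetical, not data from this study.

import statistics

def percent_avoidance(avoiding_counts_per_group, group_size=10):
    # Convert each group's count of avoiding cells to a percentage,
    # then report the mean and standard deviation across groups.
    percents = [100.0 * count / group_size for count in avoiding_counts_per_group]
    return statistics.mean(percents), statistics.stdev(percents)

mean, sd = percent_avoidance([9, 10, 9, 10, 8])  # hypothetical counts
print(f"{mean:.2f} +/- {sd:.2f}% avoidance")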

Results and Discussion
Contrary to our hypothesis, ACTH 6-24 does not seem to be signaling through pathways similar to
previously researched pathways in Tetrahymena thermophila or through a pathway analogous to the
human ACTH signaling pathway. Chelating intracellular and extracellular Ca2+ stores did not affect
avoidance, which is peculiar since previous avoidance responses in Tetrahymena have almost always
involved Ca2+ dependent membrane depolarization. Depolarization does not seem to be taking place
through Cl- or K+ channels either, since blocking these channels did not block avoidance. Also, the
receptor used to bind ACTH does not appear to be a G-protein linked receptor or a tyrosine kinase
receptor, since Rp-cAMPs and Genistein did not block the avoidance response.

One interesting finding is that nociceptin appears to cross-adapt at some level with ACTH 6-24, but it is
not at the receptor level or the depolarization level. Perhaps they share some unknown second messenger.
ACTH 6-24 has not signaled through any prototypical Tetrahymena signaling pathways and may be
signaling through a yet-undiscovered mechanism.

Electrophysiology studies would be helpful to determine if the Tetrahymena are indeed signaling for
avoidance through a change in membrane potential or through an entirely different mechanism. Since
ACTH 6-24 seems to cross-adapt at some level with nociceptin, perhaps nociceptin's signaling pathway
can be elucidated to provide clues about the method of ACTH 6-24 signaling.

Charts












References
1. Enyeart, J. J., Mlinar, B., & Enyeart, J.A. (1996). Adrenocorticotropic hormone and cAMP inhibit
noninactivating K+ current in adrenocortical cells. Journal of General Physiology, 108(4), 251.
2. Liu, H., Enyeart, J. A., & Enyeart, J. J. (2008). ACTH inhibits bTREK-1 K+ channels through
multiple cAMP-dependent signaling pathways. Journal of General Physiology, 132(2), 279-294.
doi:10.1085/jgp.200810003
3. Kimoto, T., Ohta, Y., & Kawato, S. (1996). Adrenocorticotropin induces calcium oscillations in
adrenal fasciculata cells: Single cell imaging. Biochemical and Biophysical Research
Communications, 221(0538), 25-30. http://glia.c.u-tokyo.ac.jp/pdf/Kimoto1996.pdf.
4. Robinette, E. D., Gulley, K. T., Cassity, K. J., King, E. E., Nielsen, A. J., Rozelle, C. L., & ...
Kuruvilla, H. G. (2008). A Comparison of the polycation receptors of Paramecium tetraurelia and
Tetrahymena thermophila. Journal of Eukaryotic Microbiology, 55(2), 86-90. doi:10.1111/j.1550-
7408.2008.00310.x
Table 1. ACTH 6-24 signaling in response to Ca2+ chelators. EGTA, an extracellular calcium chelator,
and thapsigargin, which acts on the ER to deplete intracellular calcium stores, both failed to block ACTH
at viable concentrations.

Inhibitor                 % Avoidance (Mean ± SD)
1 mM EGTA                 Cells killed
100 µM EGTA               95.85 ± 6.18
100 mM Thapsigargin       99.23 ± 2.77

Table 2. ACTH 6-24 signaling in the presence of specific ion channel blockers. NPPB, a Cl- channel
blocker, cilnidipine and amlodipine, Ca2+ channel blockers, and TEA, a K+ channel blocker, all failed to
inhibit avoidance.

Inhibitor                                           % Avoidance (Mean ± SD)
100 µM NPPB, 100 µM Cilnidipine, 1 mM Amlodipine    Lethal
10 µM NPPB                                          94.00 ± 5.16
10 mM TEA                                           98.18 ± 4.05
1 mM TEA                                            98.18 ± 4.05

Table 3. ACTH 6-24 cross-adaptation with nociceptin. Cells treated with J113397, a nociceptin receptor
antagonist, still showed avoidance to ACTH 6-24, while those cross-adapted with nociceptin itself showed
near-baseline levels of avoidance, indicating a shared downstream secondary messenger.

Cross-Adaptation          % Avoidance (Mean ± SD)
50 µM J113397             92.86 ± 7.56
Nociceptin into ACTH      18.57 ± 6.90
ACTH into Nociceptin      12.00 ± 4.08

Table 4. ACTH 6-24 signaling in response to G-protein and tyrosine kinase inhibitors. Rp-cAMPs, a
G-protein antagonist and cAMP inhibitor, and genistein, an RTK inhibitor, failed to block avoidance to
ACTH 6-24.

Inhibitor                    % Avoidance (Mean ± SD)
ACTH 6-24, 5 µM (Control)    93.75 ± 5.17
50 µM Rp-cAMPs               90.00 ± 7.56
Genistein, 100 µg/ml         95.83 ± 6.69

Green Roofing vs. Traditional Roofing

Student Researcher: Leah M. Oty

Advisor: Dean Bortz

Columbus State Community College
Department of Architecture/Construction Management

Abstract
Roofs rank near the top of the list of places where homes can see big benefits from going green. After all,
the roof is a perfect candidate for improving a home's overall energy efficiency because of its exposure to
heat, cold, and sunlight. Also, since the roof is a home's most important line of defense when it comes to
protecting you and your family from the elements, it's easy to see why homeowners are weighing their
options on roofing materials and methods. The goal of this project is to determine whether green roofing
would be a better roofing method than the more popular traditional roofing.

Project Objective
My objective is to research different types of roofing materials and methods to determine what would be
the best choice for homeowners, whether they're remodeling or building a new home.

Methodology Used
I used information from research involving green roofing and traditional roofing. I also used research
from case studies and tests on materials to determine what I think would be the best material to use on a
home today.

Results Obtained
Results obtained from the combined research show that lighter-colored cool roofing is better at reflecting
heat. This results in the home staying cooler, which leads to using less energy for heating and cooling.
These cool roofing materials come in different shades that result in different amounts of reflectivity.
When comparing green roofing, metal roofing, and traditional asphalt roofing an important factor to
consider is the life-cycle of the products. Some roofing materials may cost more but when you consider
their life-cycle they can be a better choice. Green roofing has its advantages, but when considering this
product it is imperative that you make sure your roof can hold the weight load of the substrate and
vegetation.
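As a rough illustration of the life-cycle argument (using hypothetical costs, not figures from the cited
sources), a simple annualized-cost comparison can be sketched as follows.

def annualized_cost(install_cost, lifespan_years, annual_energy_savings=0.0):
    # Average yearly cost of ownership over the roof's service life
    return install_cost / lifespan_years - annual_energy_savings

asphalt = annualized_cost(install_cost=8000, lifespan_years=20)
metal = annualized_cost(install_cost=16000, lifespan_years=50,
                        annual_energy_savings=150)
print(f"asphalt: ${asphalt:.0f}/yr, cool metal: ${metal:.0f}/yr")

Here the roof that costs twice as much up front works out cheaper per year of service once its longer life
and energy savings are counted.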

Significance and Interpretation of Results
Based on the research that I have compiled, both green vegetation roofs and metal roofs have more
advantages than traditional roofing materials. Both green vegetation roofing and metal roofing have an
increased upfront cost, but they will pay for themselves through savings on energy costs, and they also
have a longer life cycle. For homeowners who cannot use green roofing, I believe that cool metal roofing
would be the best choice. It comes in many colors and thicknesses, and it can also be installed over the
current roofing material, saving material that would otherwise go to the landfill.

References
1. Cool Metal Roofing. 2006. Cool Metal roofing Coalition. 10 April 2012.
< http://www.coolmetalroofing.org/content/case_studies/>
2. Asphalt Roofing Manufacturers Association. 2012. Asphalt roofing Manufacturers Association. 10
April 2012. < http://www.asphaltroofing.org/about_history.html>
3. Roof Genius. 2012. Roof Genius. 20 March 2012. <http://roofgenius.com/roofmaterialchoices.htm>
4. U.S. Department of Energy. 2011. U.S. Department of Energy. 20 March 2012. <
http://www1.eere.energy.gov/buildings/cool_roofs.html>
NASA Weightless Wonder: C-9 Jet and Equations of Motion

Student Researcher: Kathryn R. Reilly

Advisor: Dr. Brian Boyd, Dr. Sachiko Tosa

Wright State University
Department of Science and Math, and Education

Abstract
This lesson will be designed for a high school level AP physics student, learning kinematic equations of
motion. A multi-step problem will be used to illustrate how to use the equations of motion to find relevant
quantities, such as velocity and displacement. Along with utilizing these physics equations to achieve
numerical answers for analysis, this lesson will encompass ideas of microgravity in space. The C-9 jet in
the problem will be introduced along with its uses for microgravity training for astronauts and other
microgravity experiments. By using the physics problem concerning the C-9 jet and its travel in addition
to actual applications of the C-9 jet for microgravity training and research, the students will be able to
understand how microgravity can be simulated for training and testing. Placing an emphasis on applying
the physics and math, they should also be able to better grasp the importance of equations of motion.
NASA videos will be utilized to illustrate the C-9 functions, motion, and maneuvers. By applying physics
and math to this NASA application, students will see how useful questions concerning distance, velocity,
acceleration, time, and shape of the motion of C-9 jets can be answered through physics, while also
learning about microgravity and how to collaborate with groups in designing a microgravity experiment.
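To give a sense of the multi-step kinematics the lesson exercises, the following Python sketch (with an
assumed entry speed, not a figure from the Weightless Wonder packet) applies v = v0 - g*t and
v^2 = v0^2 - 2*g*h to the ballistic arc of one parabola.

g = 9.81             # m/s^2, gravitational acceleration
v0_vertical = 100.0  # m/s, assumed vertical speed entering the parabola

# With gravity alone acting, v(t) = v0 - g*t, so the apex (v = 0) is reached at:
t_to_apex = v0_vertical / g               # about 10.2 s
weightless_time = 2 * t_to_apex           # up plus down, about 20.4 s
# From v^2 = v0^2 - 2*g*h with v = 0 at the apex:
height_gained = v0_vertical**2 / (2 * g)  # about 510 m

print(f"{weightless_time:.1f} s of weightlessness, {height_gained:.0f} m gained")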
Objectives
To use equations of motion to solve a multi-step AP-level physics problem concerning the C-9 jet
To introduce microgravity and its effects
To explain the role of the NASA C-9 jet and its applications as related to microgravity training
To design and predict what will happen in a microgravity experiment

Standards
Grade Twelve Physical Science Benchmark D: Apply principles of forces and motion to mathematically
analyze, describe and predict the net effects on objects or systems.

Grade Twelve Physical Science: Forces and Motion indicator #5: Use and apply the laws of motion to
analyze, describe and predict the effects of forces on the motions of objects mathematically.

Methodology Used
Content, creativity, and communication are three extremely important aspects of science. In this three to
five day lesson, all of these aspects are explored. Students will build their concept skills, focusing on
equations of motion and how to use them to solve a real-life problem. Students will be able to express
their creativity in creating a microgravity experiment with their group. Communicating ideas is an
extremely important skill for these students to gain and by explaining their hypothesis and predictions
about what is going to happen in the experiment, their communication skills will improve. They will learn
how important it is to support predictions with facts that they learned in the previous day's lesson and
how to present their experiment, as part of a group, to their peers.

Lesson
Day 1: Introduction to Microgravity, including NASA supplemental videos
Day 2: Introduction to C-9 Jet and its microgravity applications, including NASA supplemental videos
Day 3: AP physics test curriculum problem using equations of motion to analyze the C-9 jet's travel
Day 4-5: Group microgravity experimental design and presentations

Engagement
Students will of course be engaged when learning about microgravity, a fascinating physical phenomenon.
The videos will help draw students in and hold their attention. When working as a group creating their
own microgravity experiment they will be free to communicate, but also driven to make a reasonable
experiment and explain what they think will happen using supporting facts about microgravity. The
presentation they are required to give motivates them to work with their group to present a cohesive,
thought-out experiment and their supported predictions.

Resources
Each student will have a copy of the Weightless Wonder packet, containing the motion problem as well
as a brief explanation of the C-9 jet and its applications related to microgravity.

Access to a television-connected computer will be utilized for showing microgravity clips.

Results
After working through the AP problem in groups and then together as a class, the answers with full
solutions will be made available to students so they have a chance to study for similar AP-style problems.

Results of the group design microgravity experiment will vary based on the experiments the students
design. The students will act as scientists here and will be expected to communicate, discuss, and come to
some reasonable agreement about what the outcome of the experiments will be.

Conclusion
Students will get a good idea of how a typical AP-style equation of motion problem will be set up. They
will get used to doing multi-step problems with practice like this, allowing for preparation for the AP test.

Students will also have learned what microgravity is and how it affects NASA missions. They will learn
about the C-9 jet and how it is used related to microgravity. Group work will be emphasized when
designing their microgravity experiments and group presentations will enhance communication skills.

References
1. "NASA Reduced Gravity Student Flight Opportunity." NASA. Jim Wilson. Web. 9 Apr. 2012.
<http://www.nasa.gov/multimedia/videogallery/index.html?media_id=65696581>.
2. "Weightless Wonder - Reduced Gravity." NASA. Shelley Canright. Web. 10 Apr. 2012.
<http://www.nasa.gov/audience/foreducators/mathandscience/exploration/Prob_WeightlessWonder_d
etail.html>.
3. "What Is Microgravity?" NASA. Ed. Kathleen Zona. Web. 12 Apr. 2012.
<http://www.nasa.gov/centers/glenn/shuttlestation/station/microgex.html>.
The Electromagnetic Spectrum: Exploring the Chemical Composition of Our Solar System

Student Researcher: Stephanie A. Rischar

Advisor: Dr. Robert Ferguson

Cleveland State University
Department of Education CSUTeach Science

Abstract
When atoms are excited they emit light of certain wavelengths which correspond to different colors. The
emitted light can be observed as a series of colored lines with dark spaces in between; this series of
colored lines is called a line or atomic spectrum. Each element produces a unique set of spectral lines.
Since no two elements emit the same spectral lines, elements can be identified by their line
spectrum. When we are able to identify elements by their unique spectral lines, we can use this
information to determine the chemical composition of various light sources.

In this lab, students must first use a flame test and spectroscopes to determine the unique spectra for
different elements. The students will then apply this knowledge to other sources of lights throughout the
classroom (incandescent, fluorescent, sunlight, etc.) and also to images of spectra of various stars and
objects in our solar system. Students will then have the opportunity to look at various spectra and
images of different celestial bodies in our solar system.

Lesson
Part 1: Introduction to the Electromagnetic Spectrum
This lesson and lab activity will serve as an introduction to the electromagnetic spectrum. To begin the
lesson, students will be taken on a video tour of the electromagnetic spectrum via NASAs Mission:
Science web page. These video segments not only introduce the spectrum as a whole, but also go into
depth for each of the parts of the spectrum (Radio Waves, Microwaves, Infrared, Visible Light,
Ultraviolet, X-Rays, and Gamma Rays).

Part 2: Exploring Visible Light Emission Spectra
After learning about each part of the electromagnetic spectrum students will be given the opportunity to
use spectroscopes to look at the spectra produced from visible light sources (sunlight, incandescent light,
and fluorescent light). It will then be discussed that different light sources produce different colored
bands due to the different chemical composition. Students will then look at the spectra of various
elements (Sodium, Strontium, Lithium, Helium, Hydrogen, Calcium, etc.) to see the unique emission
spectra produced for each element. This can be done by using the spectroscopes to look at the elements
through gas discharge tubes (if available) or flame tests of chemicals containing the elements.
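As one concrete example of the unique line spectra discussed above, the following Python sketch computes
hydrogen's visible Balmer lines from the Rydberg formula, 1/lambda = R(1/2^2 - 1/n^2); it is
supplementary to the lesson, not part of the original lab handout.

R = 1.097e7  # Rydberg constant, m^-1

for n in range(3, 7):  # transitions n -> 2 fall in the visible range
    wavelength_nm = 1e9 / (R * (1 / 2**2 - 1 / n**2))
    print(f"n={n} -> 2: {wavelength_nm:.0f} nm")
# Prints approximately 656, 486, 434, and 410 nm -- the red, blue-green, and
# violet lines students can see through a spectroscope with a hydrogen tube.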

Part 3: Exploring the Solar System Using the Electromagnetic Spectrum
Students will then be given images of various celestial objects from our solar system. These pictures will
show images of planets and stars through Ultraviolet, Infrared, Radio Waves, Microwaves, X-Rays, and
Gamma Rays. This will allow the students the opportunity to see examples of every type of wave in the
electromagnetic spectrum.

Objectives
Students will

- Compare and contrast the regions of the electromagnetic spectrum
- Explain how a photon of light is emitted from excited atoms.
- Distinguish between emission spectrum of different light sources and elements


Alignment
This lesson is for a high school chemistry class and covers the following Ohio Standards.

- Demonstrate that waves (e.g., sound, seismic, water and light) have energy and waves can
transfer energy when they interact with matter.
- Explain how scientists obtain information about the universe by using technology to detect
electromagnetic radiation that is emitted, reflected or absorbed by stars and other objects
- Demonstrate that electromagnetic radiation is a form of energy. Recognize that light acts as a
wave. Show that visible light is a part of the electromagnetic spectrum (e.g., radio waves,
microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays).

Underlying Theory
This lesson was developed using the belief that students learn science best when exposed to meaningful
hands-on activities and real-life connections. The electromagnetic spectrum is an important but difficult
concept for students to grasp, especially since most of it cannot be seen by the naked eye. By providing
students with a means to view the emission spectra of different elements the students can get a visual of
the different wavelengths of light emitted. Furthermore, by providing students with images of celestial
bodies taken at different wavelengths, the students can connect the theory of the electromagnetic
spectrum with real-life imagery.

Resources
- Video Tour of the Electromagnetic Spectrum
- http://missionscience.nasa.gov/ems/emsVideo_01intro.html
- Tour of the Electromagnetic Spectrum Booklet
- Spectroscopes
- Chemicals for Flame Tests (Sodium Chloride, Strontium Chloride, Barium Chloride, Lithium
Chloride, etc.)
OR
- Gas Discharge Tubes (Helium, Sodium, Hydrogen, Lithium, Strontium, etc.)

Conclusion
During this lesson, students will have the opportunity to explore the electromagnetic spectrum through a
number of different methods. Students will be able to gain a stronger understanding of the abstract
concept of the electromagnetic spectrum, and see that each element has unique spectra which can be used
as identification.
Investigation and Design of a Powered Parafoil and its Applications in Remote Sensing

Student Researcher: Dominique N. Roberts

Advisor: Dr. Augustus Morris

Central State University
Department of Manufacturing Engineering

Abstract
Central State University created a ballooning satellite program in order to expose its students to the field
of aerospace engineering. Over the years the program has evolved to create research opportunities for the
students and offer them hands on experience in innovation and manufacturing. The objective of this
project is to create a high altitude payload that will use a motorized parafoil to replace the parachute and
be able to control the descent of the apparatus. Another important element is tracking the apparatus. APRS
is the tracking system used in our design. APRS is a digital communications information channel for ham
radio. As a single national channel, it gives the mobile ham a place in any area at any time to capture what
is happening in ham radio in the surrounding area. The APRS information is transmitted on a fixed
frequency (144.390 MHz).
parafoil capable of carrying a lightweight camera system able to collect images in the visible light and
near infrared range for later processing. Such a system could prove to be ideal for many of the local
farmers in the Central State vicinity. This investigation will consider the challenges of carrying the
appropriate sensors, keeping the total mass to a minimum, and controlling its position through remote
control. Initial results from this investigation will form the basis for future work in the area of precision
agriculture.

Project Objective
The objective of this project is to create and test a ballooning apparatus that would be controlled upon
descent to allow the option of aerial photos to be used for agriculture. The main focus at this point is the
tracking devices used in the system and in the vehicle that will travel the path of the ballooning system.
The Automatic Packet Reporting System (APRS) was designed to support the rapid, reliable exchange of
local, tactical, real-time information. This system differs from similar systems in the sense that it does not
use the usual message- and text-transfer activity, but relies on the graphic display of station and object
locations and movements. Balloon Track for Windows was used to determine the vicinity where the
apparatus will descend. Balloon Track uses data provided on the Internet to predict the path, range, and
bearing of a high-altitude balloon flight to the landing site.

Methodology Used
The balloon system used consisted of three primary components. An 8-foot neoprene weather balloon is
used to lift the entire system. The payload shell was donated by Taylor University and was constructed
out of foam insulation (a plastic shell held together by screw pins). The balloon is inflated with helium
from a regulated tank, and a PVC coupling is designed to allow easy balloon inflation.

A Hobo data logger device was used to record the temperature measured with an external temperature
probe. Each measurement is time and date stamped. An aerial photography system will be added to the
payload using two modified Aiptek cameras triggered by an electronic timer circuit. A third GoPro
camera will also be added to take pictures every 30 seconds.

The data logger was set up to record temperature every 5 seconds. The camera system was designed to
take pictures every 3 minutes. Both devices were placed into the payload structure and sealed.
Meanwhile the balloon was inflated with helium to a diameter capable of lifting the total balloon system
and payload weight. While the payloads were attached to the inflated balloon, the camera system and the
data logger were turned on.
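As a rough illustration of the inflation step (with assumed sea-level densities and a hypothetical payload
mass), the required lift can be checked with a simple buoyancy estimate.

import math

RHO_AIR = 1.225  # kg/m^3, air at sea level (assumed)
RHO_HE = 0.1786  # kg/m^3, helium at sea level (assumed)

def net_lift_kg(diameter_m):
    # Buoyant lift equals the mass of displaced air minus the mass of helium.
    volume = (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
    return volume * (RHO_AIR - RHO_HE)

payload_kg = 5.5  # hypothetical total payload and balloon mass
for d in (1.5, 2.0, 2.4):  # candidate inflation diameters (8 ft ~ 2.4 m)
    print(f"{d:.1f} m diameter -> {net_lift_kg(d):.1f} kg lift (need > {payload_kg})")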
Artificial Electrocardiogram Parameter Identification and Generation

Student Researcher: David J. Sadey

Advisor: Dr. Daniel Simon

Cleveland State University
Electrical Engineering Department

Abstract
In this project a new method for the generation of parameters for synthetic models of real-world
electrocardiograms (ECGs) is proposed and tested. These synthetic models can accurately duplicate many
of the important morphological features of ECGs, and can be used to model a number of arrhythmias. It is
shown that the application of both biogeography-based optimization (BBO) and gradient descent
optimization (GDO) algorithms can be used to accurately obtain the parameters for the artificial
generation of cardiac conditions such as atrial fibrillation and ventricular flutter. These parameters can be
adjusted so that researchers can generate ECGs with variations in heart rate, sampling frequency, noise,
and other features. This will allow for the testing of various diagnostic algorithms as a researcher sees fit.

Project Objectives
The morphology and timing information found in ECG waveforms has been used to convey information
for diagnosing various heart deficiencies over the past century [1]. Traditionally, tools such as calipers,
axis-wheel rulers, ECG rulers, and straight edges have been used to measure various parameters of an
ECG [2]. However, these customary forms of ECG analysis have long proved to be inefficient, often
leading to unreliable diagnoses. In the days of modern computing, biomedical signal processing has been
used to improve ECG diagnoses.

Biomedical signal processing has been used to accurately extract features that help characterize and
understand information contained in ECG signals [1]. Traditional forms of ECG measurements have been
mimicked through use of computer based methods, with the added benefit of reducing user subjectivity
through visual assessment [1]. Over the past 20 years, diagnoses of various heart deficiencies and diseases
have successfully been achieved by digital techniques involving fuzzy classification [3], time-domain
signal averaging [4], and BBO trained neuro-fuzzy classification [5]. Testing for the effectiveness of these
diagnostic algorithms may prove problematic however. Currently, new biomedical signal processing
techniques and algorithms are commonly tested by using ECG waveforms obtained from large databases
such as Physionet [6]. Obtaining signals with desired features from a database such as Physionet is time
consuming, as it involves looking at hundreds of ECGs, some lasting up to twelve hours. Another problem
is that the signals obtained by databases such as Physionet have similar noise levels and one specific
sampling frequency [7]. This makes it difficult to assess how well one's diagnostic algorithm would work
in other clinical settings where factors such as noise and sampling frequency vary.

To combat the issues of finding specific ECG signals under varying conditions, this project attempts to
identify the respective parameters of various ECG waveforms for a dynamic ECG model. After these
parameters are identified, they are used to generate realistic ECG waveforms with a variety of different
effects. The resulting ECG waveforms are proposed to assess the performance of diagnostic algorithms
under a number of different conditions.

Methodology
In order to generate a waveform that can model many of the important characteristics of a typical ECG, a
dynamic ECG model originally proposed by McSharry et al. (2003) was used [7]. This dynamic model is
governed by a set of three ordinary differential equations

\dot{x} = \alpha x - \omega y

\dot{y} = \alpha y + \omega x

\dot{z} = -\sum_{i \in \{P,Q,R,S,T\}} a_i \, \Delta\theta_i \, \exp\!\left( -\frac{\Delta\theta_i^2}{2 b_i^2} \right) - (z - z_0)

where \alpha = 1 - \sqrt{x^2 + y^2}, \Delta\theta_i = (\theta - \theta_i) \bmod 2\pi, \theta = \operatorname{atan2}(y, x) (the
four-quadrant arctangent of the real parts of the elements of x and y, with a range of [-\pi, \pi]), and
\omega is the angular frequency of the ECG signal. The parameters a_i, b_i, and \theta_i represent the
amplitude, width, and phase of the respective P, Q, R, S, and T waveforms.
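For illustration, the model above can be integrated numerically as in the following Python sketch; the wave
parameters shown are illustrative defaults from the literature, not the values identified in this project.

import numpy as np
from scipy.integrate import solve_ivp

# (theta_i, a_i, b_i) for the P, Q, R, S, and T waves -- illustrative values
PQRST = {
    "P": (-np.pi / 3, 1.2, 0.25),
    "Q": (-np.pi / 12, -5.0, 0.1),
    "R": (0.0, 30.0, 0.1),
    "S": (np.pi / 12, -7.5, 0.1),
    "T": (np.pi / 2, 0.75, 0.4),
}
omega = 2 * np.pi  # angular frequency for a 60-beat-per-minute heart rate
z0 = 0.0           # baseline

def ecg_model(t, state):
    x, y, z = state
    alpha = 1.0 - np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    dz = -(z - z0)
    for theta_i, a_i, b_i in PQRST.values():
        # wrap theta - theta_i into [-pi, pi)
        dtheta = np.mod(theta - theta_i + np.pi, 2 * np.pi) - np.pi
        dz -= a_i * dtheta * np.exp(-dtheta**2 / (2 * b_i**2))
    return [alpha * x - omega * y, alpha * y + omega * x, dz]

sol = solve_ivp(ecg_model, (0, 5), [1.0, 0.0, 0.0], max_step=0.004)
# sol.y[2] is the synthetic ECG trace z(t)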

Using these differential equations, the identification of the parameters of various ECGs is treated as an
optimization problem. The idea is that, by means of optimization, the parameters that minimize the
difference between our dynamic ECG model and an actual ECG can be identified.
Two optimization algorithms are used to achieve this goal. These algorithms are biogeography-based
optimization (BBO) and gradient descent optimization (GDO).

The optimal parameters for the dynamic ECG model are first identified through BBO. BBO is a
nature-inspired optimization algorithm that uses the concepts of biogeography to find optimal solutions to a problem.
This optimization algorithm was chosen because it has been shown to outperform other evolutionary
algorithms in a large number of benchmark functions [8]. To standardize the results of the BBO tests, the
cost function which was chosen to be minimized is the root mean square (RMS) value of the error
between the true ECG waveform (ECG_{ACTUAL}) and the dynamic model (ECG_{MODEL}):

\mathrm{Cost} = \sqrt{ \frac{1}{n} \sum_{m=1}^{n} \left( ECG_{ACTUAL}(m) - ECG_{MODEL}(m) \right)^2 }

where m is the mth data point, and n is the total number of data points in the ECG waveform.

The final results of the BBO run are then sent through the GDO algorithm to further minimize the overall
cost of the dynamic ECG model. GDO uses gradients to find the local minimum of a function. Gradients
are defined as vectors that point in the direction of the greatest rate of increase in a scalar field. Therefore,
the gradient of the cost function with respect to each individual parameter is approximated by a finite
difference and, scaled by a step size, subtracted from that parameter, such that

\frac{\partial C}{\partial x} \approx \begin{bmatrix} \frac{C(x_1 + \Delta x_1) - C(x_1)}{\Delta x_1} \\ \vdots \\ \frac{C(x_{16} + \Delta x_{16}) - C(x_{16})}{\Delta x_{16}} \end{bmatrix}

x_i(k+1) = x_i(k) - \eta \frac{\partial C}{\partial x_i}

where C(x) is the cost function, x is the vector of ECG parameters, \Delta x_i is a small change in x_i, i is
the ith component of the respective vector, k is the kth incremented value of a specific parameter, and
\eta is a constant step size determined experimentally. The resulting parameter values are then shown to
be at or near their local optimal values.
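For illustration, one finite-difference gradient descent step of the kind described above can be sketched as
follows in Python; the helper names and step sizes are assumptions, not the study's actual implementation.

import numpy as np

def rms_cost(params, ecg_actual, model):
    # RMS error between the recorded ECG and the dynamic model's output
    return np.sqrt(np.mean((ecg_actual - model(params)) ** 2))

def gdo_step(params, cost_fn, delta=1e-4, eta=1e-3):
    # Approximate dC/dx_i by forward differences, then take one descent step:
    # x_i(k+1) = x_i(k) - eta * dC/dx_i
    grad = np.zeros_like(params)
    base = cost_fn(params)
    for i in range(len(params)):
        perturbed = params.copy()
        perturbed[i] += delta
        grad[i] = (cost_fn(perturbed) - base) / delta
    return params - eta * grad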

With the parameters obtained from BBO and GDO, synthetic ECGs with realistic features can be
generated. These ECGs can be modified as well, having programmable features such as sampling time,
noise, RR-interval variation, peak variation, baseline wander, and so on.


Results Obtained
The results that were obtained through simulation showed much promise. As can be seen in Figure 1, the
dynamic ECG waveforms followed the peaks and troughs of numerous ECGs with great accuracy. The
results obtained from all waveforms tested can be seen in Table 1.

Realistic ECG Generation was found to be a success as well, as can be seen in Figure 2. Various
morphological features of real-world ECGs are shown to be easily implemented within the dynamic ECG
model.

Significance and Interpretation of Results
The results obtained over the entirety of simulations were encouraging. Extremely low cost values for the
final ECG waveforms demonstrate the accuracy of BBO and GDO to act as optimization algorithms. This
is further demonstrated by the fact that characteristic features of arrhythmias such as ventricular flutter
and atrial fibrillation, are reflected in the identified parameter values. These features include increased
heart rate and a missing P-wave in ventricular flutter and atrial fibrillation respectively. The success of the
parameter identification algorithms allows for the realistic representation of real world ECGs by the
dynamic model. The fact that there are 16 parameters available in this model to be tuned, in addition to
other effects, demonstrates just how flexible the dynamic ECG model is.

Figures and Tables

Figure 1. Comparison of the Synthetic ECG, Best Matched BBO ECG, and GDO ECG.

Table 1. Parameter Values and Cost of Several Types of ECG Waveforms.





Figure 2. Comparison of Two ECG Waveforms showing a Variation of the RR-interval.

Acknowledgments
The author would like to thank all of the members of the Embedded Control Systems Research Lab at
Cleveland State University, including Berney Montavon, Paul Lozovyy, and Carr Scheidegger, who
helped contribute a great deal to this project. The author would also like to thank Dr. Dan Simon
especially, for his constant help, guidance, and contributions.

References
[1] L. Sornmo and P. Laguna, Bioelectrical Signal Processing in Cardiac and Neurological Applications,
Boston, Elsevier, 2005.
[2] T. Garcia and N. Holtz. (2001). 12-lead ecg: The art of interpretation . (illustrated ed., pp. 24-36).
Sudbury, MA: Jones & Bartlett Learning, 2002.
[3] R. Degani. (1992). Computerized Electrocardiogram Diagnosis: Fuzzy Approach. Methods of
Information in America, 31: 225-233.
[4] N. El-Sherif, et al. (1995). Definition of the Best Prediction Criteria of the Time Domain Signal-
Averaged Electrocardiogram for Serious Arrhythmic Events in the Postinfarction Period. Journal of
the American College of Cardiology, 25(4), 908-1014.
[5] M. Ovreiu and D. Simon, Biogeography-Based Optimization of Neuro-Fuzzy System Parameters for
Diagnosis of Cardiac Disease, Genetic and Evolutionary Computation Conference, Portland, OR,
pp. 1235-1242, July 2010.
[6] A. L. Goldberger, et al., Physiobank, physiotoolkit, and physionet: components of a new research
resource for complex physiologic signals, Circulation, vol. 101, no. 23, pp. e215-e220, 2000.
[7] P. McSharry, et al. (2003). A dynamical model for generating synthetic electrocardiogram
signals. IEEE Transactions on Biomedical Engineering, 50(3), 289-293.
[8] D. Simon, Biogeography-Based Optimization, IEEE Transactions on Evolutionary Computation,
vol. 12, no. 6, pp. 702-713, December 2008.
Electrocardiogram Parameter Identification for Diagnosing Heart Arrhythmias
Student Researcher: Carre` D. Scheidegger
Advisor: Dr. Daniel Simon
Cleveland State University
Department of Electrical and Computer Engineering

Abstract
I present an application of an evolutionary algorithm called biogeography-based optimization (BBO) for
recognizing and diagnosing various heart abnormalities using characteristic parameter identification of
electrocardiograms. I explored the algorithm's ability to identify characteristic parameters of a bioelectrical
signal and in turn use these parameters to diagnose various heart abnormalities. I used a simulated ECG
signal and actual ECG signals in order to test the ability of BBO to identify the characteristic parameters.
These results reveal that BBO is capable of identifying the parameters of the ECG signal. I then present
research on diagnosing various heart abnormalities using the identified parameters.

Project Objectives
Heart arrhythmias are common and occasionally dangerous to patients in which they occur. Any
abnormality in the electrical impulse of the heart that initiates the heartbeat causes an irregular beat. The
irregularities can cause a heart to beat too fast, too slow, or simply irregularly. Heart arrhythmias are
diagnosed through an analysis of a person's electrocardiogram. By observing specific characteristic wave
shapes, duration, and amplitudes, physicians can appropriately diagnose heart abnormalities. A normal
ECG has a P, Q, R, S, and T wave that correspond to the various events that occur throughout the heart
during a single heartbeat.

The main objective of this research project was to develop a method to generate synthetic ECG signals
with dynamic modeling, identify parameters, and diagnose heart arrhythmias. Specific events and
parameters are used for modeling and each P, Q, R, S, and T wave can be modeled by a function with
specific angular position, amplitude, and width. ECG signals with different heart arrhythmias can be
generated using this modeling by varying the parameters used to build the models. Using this method
of generating synthetic ECG signals and the specific angular positions, amplitudes, and widths of the
characteristic waves, a method for parameter identification and diagnosing ECGs was developed.

Methodology
In order to diagnose ECG signals, a successful method for parameter identification had to be developed.
Using the basic method for generating synthetic ECG signals, BBO was applied to identify the best ECG
parameters with a given input ECG. Wave-based modeling was used in this research in order to generate
synthetic ECG signals. Sixteen parameters were established for synthetic ECG generation based on
Sayadi et al.'s and McSharry et al.'s research. These parameters include the θi, ai, and bi for each
characteristic wave (P, Q, R, S, T), and the heart rate in terms of ω. These parameters are the traits that
BBO was used to optimize over many generations.

In the process of optimization, a desired ECG signal is sent as an input to the BBO code. BBO identifies
the best sixteen parameters used to build the characteristic waveforms by optimizing the minimum cost of
the desired ECG compared to the BBO generated ECG. The cost that is minimized in this application
of BBO is calculated based on the difference between the ECG input and the BBO generated ECG. Over
many generations of BBO, the calculated cost is minimized and the BBO generated ECG often reflected
the initial input ECG with great accuracy. Using synthetic ECG signals and ECG signals taken from the
MIT-BIH Arrhythmia Database, the algorithm was tested for successful parameter identification.

After verifying that parameter identification for synthetic and MIT database ECG signals was successful,
I developed a method for diagnosing heart arrhythmias based on the parameters used to build synthetic
ECGs, or the parameters identified through BBO applications. Parameters shape the ECG, specifically
ECGs with arrhythmias. Each parameter value will cause a different change in the ECG when it is varied.
Identifying which range of parameters will produce an ECG with a specific arrhythmia is the basis for this
diagnosis method. The diagnosis program is included in BBO, where each parameter is compared to the
intervals expected for the three different synthetically generated arrhythmias. If an ECG parameter falls
within the interval for an arrhythmia, the parameter is identified with that diagnosis. The diagnosis
method was tested on the same two types of input ECGs. This type of diagnosis was conducted for each
of the sixteen parameters.
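For illustration, this interval-based diagnosis can be sketched as follows in Python; the parameter names
and interval values are hypothetical placeholders, not the intervals established in this study.

# Hypothetical parameter intervals characterizing each arrhythmia
ARRHYTHMIA_INTERVALS = {
    "ventricular flutter": {"heart_rate": (240.0, 350.0)},  # placeholder range
    "atrial fibrillation": {"a_P": (-0.05, 0.05)},          # missing P wave
}

def diagnose(params):
    # Return every arrhythmia whose parameter intervals all contain the
    # corresponding identified values; otherwise report normal sinus rhythm.
    hits = []
    for name, intervals in ARRHYTHMIA_INTERVALS.items():
        if all(lo <= params.get(key, float("nan")) <= hi
               for key, (lo, hi) in intervals.items()):
            hits.append(name)
    return hits or ["normal sinus rhythm"]

print(diagnose({"heart_rate": 280.0, "a_P": 1.2}))  # -> ['ventricular flutter']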

Results Obtained
Tables 1 and 2 show the results of successful parameter identification and diagnosis of a synthetic ECG
signal generated using dynamical modeling. The diagnosis code was able to correctly diagnose the
percentage of the ECG with ventricular flutter and atrial fibrillation. It only missed diagnosing one
parameter with ventricular tachycardia. Figure 1 shows the input ECG compared to the ECG generated
using the identified parameters. Overall 73.33% of the synthetic ECG was diagnosed accurately. The
parameters that were misdiagnosed showed inaccurate parameter identification. The identified parameter
was not close enough to the actual value and fell within an incorrect diagnosis interval.

Table 3 shows the diagnosis results from ECG signals taken from the MIT-BIH database. The table
shows that atrial fibrillation was successfully identified in 18.75% of the ECG signal. Figure 2 shows the
resulting ECG that was generated by the identified parameters. The output waveform very closely
follows the input MIT ECG. Overall, the arrhythmia diagnosis was successful. Table 3 also shows that
the diagnosis process was tested on ventricular flutter and ventricular tachycardia as well. It was able to
identify a significant percentage of the ECGs with the specific arrhythmia already diagnosed in each case.
Due to space limitations, not every arrhythmia plot is displayed below.

Diagnosis was difficult on normal ECGs taken from the MIT-BIH database. A large percentage of the
ECG was diagnosed normal, but a significant amount was also diagnosed with various heart
abnormalities. Although the diagnosis was wrong, over numerous runs of the same ECG signal, the
diagnosis remained consistent. Each time the ECG was run, the same diagnosis was returned. Since the
ECG was taken from the MIT-BIH database, it is impossible to know the exact parameters and whether
each one is for a normal sinus rhythm. This could be the cause of the inaccuracy of diagnosis. Further
research could help to fix this problem, including increasing the number of ECG beats diagnosed at a
single time.

Figures and Tables

Table 1. Synthetic Diagnosis
Synthetic ECG Diagnosis
Diagnosis                  Expected Diagnosis %    Actual Diagnosis %
Ventricular Flutter        12.50%                  12.50%
Atrial Fibrillation        18.75%                  18.75%
Ventricular Tachycardia    6.25%                   0.00%
Normal Sinus Rhythm        62.50%                  68.75%


Table 2. Synthetic ECG Parameter Identification and Diagnosis
Parameter    Synthetic Value    Identified Value
θP           -1.046666667       -1.07727
aP           1.2                1.39705
bP           0.25               0.23982
θQ           -0.261666667       0.223139
aQ           -5                 -14.7813
bQ           0.1                0.108632
θR           0                  0.040129
aR           30                 13.7917
bR           0.1                0.155399
θS           0.261666667        0.140022
aS           -7.5               -24.7876
bS           0.1                0.0637242
θT           1.57               1.54479
aT           0.75               0.829002
bT           0.4                0.389555


Figure 1. Plot of Synthetic Input ECG versus Best BBO Individual ECG




Figure 2. Plot of Atrial Fibrillation Input ECG versus Best BBO Individual ECG
Table 3. Arrhythmia Diagnosis Results

Acknowledgments
The author would also like to thank Mr. David Sadey for his cooperation and help on this research project
and parameter identification tuning. Finally, the author would like to thank Dr. Daniel Simon for his
project involvement and help in the process of completion.

References
1. Simon, D.: Biogeography-Based Optimization. IEEE Transactions on Evolutionary Computation,
vol. 12, no. 6, 702713 (2008)
2. McSharry, P., Clifford, G., Tarassenko, L., and Smith, L.: A Dynamical Model for Generating
Synthetic Electrocardiogram Signals. IEEE Transactions on Biomedical Engineering, vol. 50, no. 3,
289 294 (2003)
3. Sayadi, O., Shamsollahi, M., and Clifford, G.: Synthetic ECG Generation and Bayesian Filtering
Using a Gaussian Wave-Based Dynamical Model. Journal of Physiological Measurement, vol. 31,
no. 10, 1309 1329 (2010)
Thermally Responsive Elastin-like Polypeptides

Student Researcher: Ciara C. Seitz

Advisor: Nolan B. Holland

Cleveland State University
Department of Chemical and Biomedical Engineering

Abstract
Elastin-like polypeptides are protein-based, responsive polymers that consist of repeats of the
pentapeptide ELP(GVGVP), where G = glycine, V = valine, and P = proline, and the valine residues can
be substituted to change the properties of the polymers [2]. These ELPs exhibit a reversible phase
separation above a transition temperature. This transition temperature is dependent on molecular weight
and concentration, and can be very useful in protein purification. In previous studies, moderate lengths of
poly(GVGVP) have transition temperatures (Tt) > 30°C [1]. For many applications, it is useful to have the
ELP Tt closer to room temperature to eliminate or reduce the need for heating or cooling during use. The
transition temperature can be altered by the use of different amino acids in the ELP sequence, due to the
fact that each amino acid has different characteristics, such as its chemical properties. In this study, the
amino acid phenylalanine (F) has been substituted into the sequence. Theoretically, phenylalanine should
yield a lower Tt than the previous studies, which use valine in the sequence, because valine alone
naturally exhibits a Tt of 24°C, while phenylalanine alone exhibits a Tt of -30°C [2].

Phenylalanine has been substituted to replace the second valine in every other pentapeptide of the ELP
sequence. The resulting construct is ELP(GVGVPGVGFP)n, where n is the total length of the ELP gene
in number of pentapeptides. A library of potentially useful genes, ranging in ELP length, has been
synthesized, expressed, and purified, and will be characterized by determining the Tt, which will vary
depending on the length of the polypeptide.

Project Objectives
Elastin-like polypeptides are part of the family of responsive polymers that have been studied extensively
for the past two decades [2]. They have been used for many different applications, including drug
delivery, tissue engineering, surface engineering, nanosensors, and hydrogels. These materials are an
attractive area of study because their genetically encoded synthesis provides precise control over the
primary features of the polymers, such as their sequence and chain length, which are independently
related to the Tt. The ELPs can be designed specifically for an application, using methods previously
described, which makes them attractive for drug delivery as well as other biomedical purposes [2].
Another desirable aspect of these elastin-like polypeptides is that they exhibit lower critical solution
temperature (LCST) behavior. The ELPs phase separate above a specific temperature, Tt, and are soluble
in aqueous solutions below the LCST. When the temperature is raised above the Tt, they undergo a sharp
(2-3°C range) phase transition, which leads to desolvation and aggregation of the polypeptide. This
process is completely reversible and is also utilized in the purification process. The main goals of this
research are to synthesize the desired ELP constructs, which should theoretically exhibit a lower
transition temperature than previously studied polymers; to build a library of potentially useful genes
ranging in ELP length; and to characterize the constructs by determining their transition temperatures, to
verify that they are lower than those of the previous studies.

Methodology
The materials and methods used in the synthesis of the ELPs were based on methods previously
described [3]. In this study, phenylalanine was substituted into the amino acid sequence, replacing the
second valine in every other pentapeptide. The construct can be represented by
ELP(Gly-Val-Gly-Val-Pro-Gly-Val-Gly-Phe-Pro)n, where n = the total length of the ELP gene in number
of pentapeptides, and Gly = glycine, Val = valine, Pro = proline, and Phe = phenylalanine. For simplicity,
the constructs will be discussed using the nomenclature ELP(GVGVPGVGFP)n.

Gene Design
In order to obtain the desired constructs, ELP(GVGVPGVGFP)4 was initially produced by annealing a
pair of oligonucleotides designed to have specific overhangs and cut sites to insert them into a cloning
vector. Once ligated into the plasmid, the DNA was cut out and double digested, and ligated back into the
vector. This process, called Recursive Directional Ligation (RDL), is repeated to obtain multiple lengths
of the ELP [2]. RDL is a general strategy for the synthesis of repetitive polypeptides of a specified chain
length, which involves a controlled, stepwise oligomerization of a DNA monomer to yield a library of
oligomers. It is a rapid method to generate a large, repetitive gene, with each round able to be completed
in about 2 days, generating large polypeptides in just a few weeks. Final sequences were verified by
DNA sequencing at the CCF Genomics core.
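To illustrate why RDL is rapid, the following Python sketch computes the number of doubling rounds
needed to reach a target repeat count from the 4-repeat monomer gene; intermediate library sizes such as
40 and 48 come from ligating inserts of different lengths rather than pure doubling.

import math

def rdl_rounds(start_repeats, target_repeats):
    # Each round ligates the gene into a vector already carrying one copy,
    # so the repeat count doubles roughly every two days.
    return math.ceil(math.log2(target_repeats / start_repeats))

for n in (8, 16, 32, 64):
    print(f"ELP(GVGVPGVGFP){n}: {rdl_rounds(4, n)} round(s)")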


Figure 1. The molecular biology steps of RDL. (A) A synthetic monomer gene is inserted into a cloning
vector. (B) The gene is designed to contain recognition sites for two different restriction endonucleases,
RE1 and RE2, at each end of the coding sequence. (C) An insert is prepared by digestion of the vector
with both RE1 and RE2 and subsequently ligated into the vector that has been linearized by digestion
with only RE1. (D) The product contains two head-to-tail repeats of the original gene, with the RE1 and
RE2 sites maintained only at the ends of the gene. (E) Additional rounds of RDL proceed identically,
using products from previous rounds as starting materials. [2]


Expression
The constructed genes were then expressed in E. coli [2]. To begin, two 10 mL starter cultures were made
from frozen stocks of the ELP added to LBA (Luria broth medium with ampicillin). The cultures were
prepared and shaken in an incubator overnight at 37°C. The following morning the starter cultures were
added to 1 L of LBA medium in a 2 L Erlenmeyer flask. The flask and contents were placed into the
incubator and shaken at 300 rpm, at 37°C, until the OD600 (optical density absorbance at a wavelength of
600 nm) measurement reached 0.8-1.0. To do this, a blank of LBA was initially measured, before the
starter cultures were added, and then measured periodically after that as cell growth continued. The
expression was induced by adding 240 mg of IPTG (isopropyl β-D-1-thiogalactopyranoside), and the
culture was left to shake for 4-5 hours. After 4-5 hours, the mixture was centrifuged, and the remaining
pellets were kept for purification.

Purification
The ELPs were purified using a method called Inverse Transition Temperature Cycling, previously
described [3]. This method begins after the expression, when the culture is spun cold and the pellet is
kept. The pellet is resuspended in PBS (phosphate-buffered saline) and then sonicated, a method of cell
disruption that releases the protein. PBS is used because of its close match to ion concentrations in the
human body (it is isotonic), and because it is also non-toxic to the cells being used. A cold centrifugation
occurs and the liquid is kept, and then heated for a warm centrifugation. After the warm centrifugation,
the pellet is kept, resuspended in PBS, and placed on ice to resolubilize the protein. Once the pellet is
fully resuspended, the process is repeated. After the second warm centrifugation, there is a final cold
centrifugation, in which the final protein is obtained in the supernatant, filtered, and ready to be
characterized.


Figure 2. Flow chart of expression and purification steps: expression; spin culture, discard supernatant,
keep pellet; resuspend pellet in PBS and sonicate; cold centrifugation, keep the liquid (the protein is in
the soluble fraction); heat the liquid and perform a hot centrifugation (the protein is in the insoluble
fraction, the pellet); resuspend the pellet in PBS and place on ice; second cold centrifugation, keep and
heat the liquid; second hot centrifugation, keep the pellet, resuspend in PBS, and place on ice; final
centrifugation, collect the protein.

Results Obtained
Currently, a library of oligonucleotides, with the form ELP(GVGVPGVGFP)n, has been successfully
constructed. The constructs that have been obtained, all of the form
ELP(Gly-Val-Gly-Val-Pro-Gly-Val-Gly-Phe-Pro)n, are as follows:

ELP(GVGVPGVGFP)8
ELP(GVGVPGVGFP)16
ELP(GVGVPGVGFP)32
ELP(GVGVPGVGFP)40
ELP(GVGVPGVGFP)48
ELP(GVGVPGVGFP)64

These constructs are in the process of being expressed and purified, and will then be characterized using a
Cary 50 Bio UV-vis spectrophotometer to measure the Tt. The expected transition temperatures are about
15°C. The results will be compared to the theoretically calculated Tt, and then compared to previously
studied ELPs [1,2]. The goal is to obtain ELPs with lower transition temperatures, which will be verified
when each of the constructs is expressed, purified, and characterized.

Acknowledgments
This work was supported by the CSU Undergraduate Summer Research Program and the National
Science Foundation (DMR-0908795).

References
1. A. Ghoorchian, J. T. Cole and N. B. Holland. "Thermoreversible Micelle Formation Using a Three-
Armed Star Elastin-like Polypeptide." Macromolecules 43:4340-4345, 2010.
2. D. W. Urry. "Physical Chemistry of Biological Free Energy Transduction as Demonstrated by Elastic
Protein-Based Polymers." J. Phys. Chem. B 101:11007-11028, 1997.
3. D. E. Meyer and A. Chilkoti. "Purification of Recombinant Proteins by Fusion with Thermally-
responsive Polypeptides." Nature Biotechnology 17:1112-1115, 1999.
4. Meyer, Dan E., and Ashutosh Chilkoti. "Genetically Encoded Synthesis of Protein-Based Polymers
with Precisely Specified Molecular Weight and Sequence by Recursive Directional Ligation:
Examples from the Elastin-like Polypeptide System." Biomacromolecules 3.2 (2002): 357-67. Print.
Semantic Framework for Data Collection and Query for Knowledge Discovery

Student Researcher: Matthan B. Sink

Advisor: Dr. Amit P. Sheth

Wright State University
Department of Engineering and Computer Science

Abstract
As researchers continue to discover new information, they look for a way to share their research with
fellow researchers. The newest way to do this is to use the Semantic Web, allowing them to store their
data in a universal format, providing the opportunity for other researchers around the world to access their
data. Due to the complexities that make up the Semantic Web, it is necessary for these researchers to
collaborate with Computer Scientists to successfully store their data. This collaboration requires that any
changes a researcher needs to make to their data must be passed on to a Computer Scientist, who then
performs the changes. This application allows a researcher - armed with an ontology - to create their own
forms specifically for their data needs, and store that data in valid RDF (Resource Description Framework) format without a Computer Scientist being required to assist in the process.

Project Objectives
The objective of this project is to provide a web application that utilizes an ontology to allow a researcher
to dynamically create a usable web form. The information entered by the researcher should be stored as
triples. Instead of needing to consult a web developer in order to create a new form, a researcher should
be able to use this application and design their own web form to be used for data entry, editing, and
deletion. The web application should be able to stand alone on a web server. It should also use a given
ontology for entity retrieval and triple validation when the web form is being created. All information
should be stored in a triple store through Virtuoso, and accessed using SPARQL queries.

Methodology Used
For this application to work, it is necessary for there to exist an ontology pertaining to the field of work
needing web forms. Given this ontology, the web application automatically populates three drop-down
boxes that the researcher can use to generate triples that are subsequently validated against the ontology.
Once the researcher has finished creating triples that will make up the form, the triples are submitted
and stored in a triple store. When a specific form is requested, the triples associated with the requested
form are retrieved and the form is created and displayed for the researcher. Data entered by the
researcher is also stored in the triple store, where it can be queried, edited, or deleted at any time (see
Figure 1).

Form Design
For each new form a researcher wishes to create, the application enters a form design phase. The page is
populated with three drop-down boxes that contain (respectively) entities, predicates, and entities to allow
the researcher to select a triple sequence. Since the triples are instances of the ontology, it is necessary
for each new triple to be validated. In this scenario, validation means confirming that the subject is a
subclass of the domain of the predicate, and that the object is a subclass of the range of the predicate.
Through the use of a SPARQL query, the newly created triple is validated against the ontology in the
triple store. If the triple is valid, it is stored on the web page, until the researcher is done creating triples
and submits the information. The submitted information is then stored in a Virtuoso graph.

With access to a server running Virtuoso, all data pertaining to the ontology can be stored as triples, and retrieved and queried as needed. The ontology is stored in its own graph, with instance data being stored in a separate graph. This keeps the ontology from being corrupted by excess data. Since individual forms have their own set of triples, it is necessary to associate a form's triples with the form itself, thus allowing the triples to be grouped according to their parent form. To accomplish this task, a UUID (Universally Unique Identifier) is created with each form, and this UUID is added to each triple that makes up the form. The UUID is added to the subject's URI (Uniform Resource Identifier) in the triple. When the form needs to be used for any reason, the triple store can be queried for all triples that contain the form's UUID, and these triples become the form.
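The UUID-based grouping can be expressed as a single query. A minimal sketch follows, assuming a hypothetical graph name; in practice the UUID would be the one stored for the form.

```python
# Sketch of retrieving one form's triples by the UUID embedded in subject URIs.
import uuid

form_uuid = str(uuid.uuid4())  # placeholder; normally the stored form UUID

form_query = f"""
SELECT ?s ?p ?o
FROM <http://example.org/forms>
WHERE {{
    ?s ?p ?o .
    FILTER(CONTAINS(STR(?s), "{form_uuid}"))
}}"""
```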

Form Editing
When the researcher wants to enter data into a form, the application transitions to a data entry phase. The
information from the design phase is removed from the page, and replaced by drop-down boxes that allow
the researcher to select which form they would like to add data to. The list of existing forms, and the contents of the form itself, are retrieved from the triple store using the UUID of the desired form inside a SPARQL query. Once the desired triples are retrieved, the results are parsed, and each entity is given its own input box for the researcher to enter data.

In the case where a form's list of triples contains the same entity multiple times, it would be unnecessary
to require the user to enter the same data repeatedly (and would also introduce the possibility of data
inconsistency). To solve this issue, any time repeat entities are detected within a form, the repeated
entities are linked together. This means any changes made to any one of the repeated entities will be
reflected across all instances of the repeated entity. By connecting the entities, the user is saved the hassle
of typing the value repeatedly, and the issue of data inconsistency is removed from the equation.

The issue of unique identification also arises when there are multiple sets of data that are connected to a
single form. By adding a timestamp (down to the milliseconds) to all triples associated with a specific
data instance, each set of data for the form is uniquely identifiable, in the same way that a form's triples
are identifiable via the UUID associated with the form. Again, a query for all the triples associated with a
specific form's instance ID will return all the triples that contain data associated with that form instance.
These triples can then be used to populate the form fields, and the data can be edited and deleted as the
researcher wishes.
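A minimal sketch of how such identifiers might be minted follows, assuming a hypothetical URI scheme; the paper does not specify the exact layout.

```python
# The form UUID groups a form's triples; a millisecond-resolution timestamp
# distinguishes the data instances entered into that form.
import uuid
from datetime import datetime

form_uuid = str(uuid.uuid4())                                  # fixed at form design time
instance_id = datetime.now().strftime("%Y%m%dT%H%M%S%f")[:-3]  # trimmed to milliseconds
subject_uri = f"http://example.org/data/{form_uuid}/{instance_id}"  # hypothetical scheme
```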

The data entry phase can also be used to edit data (editing also includes deleting). When a selected form
has existing data associated with it, there is a second drop-down box created that lists all of the specific
data instances. If one of these instances is selected by the researcher, the triple store is queried to retrieve
all the triples associated with that data instance. The form is then automatically populated with the
retrieved data, and the researcher is free to edit the data, and submit the changes to update that data
instance in the triple store.

Significance and Interpretation of Results
As research continues, researchers discover information that is to be shared with the world. One way to share this information is to put it on the web in a universally known format that allows anyone to access
the information. By using RDF, it becomes possible to share information worldwide. This application
allows researchers to share information with the world without needing to know anything about RDF,
while using forms they specifically designed to fit their data.

An application that allows researchers to input their data and have it automatically stored in RDF format
removes the requirement that the researcher know about RDF and the Semantic Web to share
their research using these technologies. Having the new research stored in RDF format allows for a
broader sharing of research and results worldwide. With this application, researchers with an ontology covering their domain can design web forms for data entry that contain information specific to their research, without having to include extra data or manipulate results to input the data.

Figures/Charts



Figure 1. System Architecture.

Acknowledgments
The author would like to thank Dr. Amit Sheth for his support and the opportunity to work on this project,
and much appreciation and gratitude to Vinh Nguyen and Sarasi Lalithsena for brainstorming assistance
and ideas.
Sensing and Energy Harvesting Capabilities of Hard Magnetorheological Elastomers (H-MRE)

Student Researcher: Robert A. Sinko

Advisor: Dr. Jeong-Hoi Koo

Miami University
Department of Mechanical and Manufacturing Engineering

Abstract
Magnetorheological elastomers (MREs) are an emerging branch within the smart materials field that
consists of hard or soft magnetic particles embedded in a rubber compound. Current applications and
research have been focused on changing the stiffness of these materials [1] and they have shown promise
as components of vibration absorbers and base isolation systems [2]. These particular applications use
soft magnetic material; however, MREs that contain hard magnetic filler materials were the primary focus
of this project. When a magnetic field is applied perpendicularly to these materials, the filler particles
generate torque and can be used as a controlled actuator. Preliminary work has been conducted to
characterize these H-MREs (since their properties differ significantly from those of soft MREs) and has
shown their usefulness in engineering applications. The results of this research are an integral part of this
proposed project as the results helped develop the methodology and goals of the project. This project
focuses on assessing the capabilities of extending H-MRE materials into the areas of sensing and energy
harvesting through development and implementation of experiments.

Project Objectives
Aside from actuators, the two newest applications for which H-MREs are being considered are energy
harvesting and sensing. Sensors are utilized almost everywhere today as they are used to monitor the
performance of a system, whether it is fluid flow, vibration measurements, etc. Piezoelectric materials, which couple mechanical strain with electric charge, and Galfenol, an engineered magnetostrictive alloy, have been studied extensively for their application as self-sensing actuators [3]. It is hypothesized that H-MREs
could be used in a similar capacity by developing a way to monitor the displacement of the material using
some sort of magnetic circuit. Energy harvesting involves the conversion of one form of energy (kinetic,
solar, etc.) into a more storable form. Previous research has been conducted on using other smart
materials in this capacity [4] and it is also hypothesized that H-MREs could be used in a similar capacity
by capturing energy from mechanical vibrations and storing it in the form of electrical energy/power
using a specialized circuit and the same principles discussed above. The primary goal of this project will
be to determine the feasibility of using H-MREs in the capacity of energy harvesting and sensing
technology and investigate the best way to utilize these properties in future applications.

Methodology Used
In conducting an initial literature review for this project, it was discovered that extensive work has been
done in the fields of sensing and energy harvesting using a number of different smart materials. The most
common materials that have been researched in these capacities are piezoelectrics and Galfenol [5], and
both of these served as a good foundation for the experimentation necessary to assess the
sensing and energy harvesting capabilities of H-MRE materials. However, before starting to design any
experiments to determine these capabilities, the researcher wished to gain a better understanding of the
mechanisms at work in these materials that made them attractive for these particular applications. One of
the primary advantages of using hard magnetic particles within these materials is that the samples are permanently magnetic, with a magnetic dipole whose direction is constant over time. The basic principle of operation for either of these applications is based on
Faraday's law of induction, which states that the EMF generated within a closed circuit (ε) is proportional to the rate of change of the magnetic flux (dΦ_B/dt). For a tightly wound coil of wire with N turns, this equation can be expressed as:

\[ \varepsilon = -N\,\frac{d\Phi_B}{dt} \]

This equation helps to explain the physical laws which dictate the predicted behavior. However, the results
of previous research suggest there are additional characteristics of H-MRE materials that also help to
explain the potential of these materials. In their paper [6], Faidley et al. explain some of the theoretical
mechanisms involved in the sensing behavior of S-MRE materials that is easily extendable to the
behavior of these H-MRE materials. The paper states that the changes in this magnetic flux that cause an
induced voltage are primarily caused by a change in the net magnetization of the material or the amount
of the material that is contained within the pick-up coil. Both of these mechanisms lead to promising
applications in the area of sensing since moving the material through the pick-up coil should produce
some detectable voltage, as well as stretching the material within the pick-up coil. Additionally, this also
demonstrates that energy harvesting is a good potential application for this material since it is so easy to
induce this EMF and then store it in the usable form of electricity.
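To give a sense of the signal magnitudes this law implies, the sketch below evaluates the peak EMF for a sinusoidally varying flux; the turn count and flux amplitude are illustrative assumptions, not measured values from this work.

```python
import math

# Illustrative (assumed) parameters for a pick-up coil around an H-MRE sample.
N = 200       # number of turns (assumed)
f = 20.0      # excitation frequency in Hz, within the range used in these tests
phi_0 = 1e-6  # flux amplitude through the coil in Wb (assumed)

# For phi_B(t) = phi_0*sin(2*pi*f*t), Faraday's law gives a peak EMF of
# N * 2*pi*f * phi_0.
emf_peak = N * 2 * math.pi * f * phi_0  # ~0.025 V, i.e., millivolt-scale signals
```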

With this behavior of the material in mind, experiments were designed for both sensing and energy
harvesting in order to assess the capabilities of the material. For sensing, the primary goal would be to
determine if the induced EMF within the pick-up coil could be used to measure some physical
phenomenon within the material, such as the motion of a sample (position, velocity, acceleration) or some
change of the geometry (strain). Based on the availability of the equipment in the research lab, it was
determined that the best experiment for sensing would involve measuring the axial displacement/velocity
of an H-MRE sample using a laser displacement sensor. These signals would then be compared to the
induced EMF within a pick-up coil to determine if they were similar and if the H-MRE could be a self-
sensing material. In these experiments the H-MRE samples were held vertically in a non-magnetic clamp
and actuated with a voice-coil actuator to measure both the displacement and the induced EMF as
pictured in Figure 1.



The second set of experiments was focused on the abilities of H-MRE materials to be used as energy
harvesters in future applications. For these experiments, the output measurement of interest was again the
EMF generated within the pick-up coil. The primary goal for these tests was to determine the optimal
operating parameters of the samples for use as an energy harvester. There are a number of different
variables that can be changed during fabrication of an H-MRE such as volume fill percentage of magnetic
particles, dimensions of the sample, orientation of the magnetic particles, as well as the base elastomer.
The primary focus of these experiments was to look at the effect of different volume fill percentages and
base elastomers for a number of H-MRE samples. As shown above in Figure 2, the H-MRE sample was
clamped in a fixture that allowed for the sample to be vibrated as a cantilevered beam. A number of
different frequencies were used to excite the material in order to determine the highest voltage output and
power potential of the H-MRE material.

Results Obtained
After determining the experiments that would best capture the capability of these materials as sensors and
energy harvesters, the experiments were conducted with a number of different samples. These samples
had been fabricated in previous years and included neodymium particles as the hard magnetic element
with a base elastomer produced by Dow Corning.

Figure 1 (left): Photo showing the experimental set-up used for sensing experiments, with the H-MRE sample moving vertically. Figure 2 (right): Photo showing the experimental set-up used for energy harvesting experiments, with the H-MRE sample moving horizontally.

In the sensing experiments, seven different samples
were tested and the results of the measured displacement from a laser displacement sensor and induced
EMF within a pick-up coil were compared. An example of the results for the HS II 20% volume fill
sample is presented in Figures 3 and 4 below. From these images, it is clear that the frequencies
measured by the laser displacement sensor and induced EMF are identical, although out of phase because
measurements had to be taken independently.


Figure 3. Overlaid plot of laser displacement sensor output and induced EMF within the pick-up coil for 10 Hz.
Figure 4. Overlaid plot of laser displacement sensor output and induced EMF within the pick-up coil for 20 Hz.

The energy harvesting experiments focused on trying to quantify what the ideal operating characteristics
of certain H-MRE samples would be when trying to design energy harvesters. First, the sample was
excited with a number of different frequencies to determine which corresponded to the maximum voltage
output. It was hypothesized that this maximum output would occur near the resonance frequency of the
material, and this was verified by Figure 5 below, as we see a clear frequency with the highest energy
output. Further, it was important to determine how the volume fill percentage of magnetic particles
influenced the performance of these devices. At low frequencies, there is not much difference in the
performance as a function of volume fill percentage. However, closer to the resonance frequency, it
became clear that higher volume fill percentages led to higher induced EMFs, as illustrated in Figure 6.

Figure 5. Frequency sweep of cantilevered sample for energy harvesting, showing the resonance response.
Figure 6. Frequency sweep of cantilevered sample for energy harvesting, illustrating volume fill differences.

Additionally, although it was difficult to get a noise-free signal when actuating the material in the vertical
direction for the sensing experiments, it was possible to gain some more insight into the energy harvesting
capabilities of the materials. By simply measuring the peak to peak voltage for the different frequencies
that the samples were excited at, a detailed comparison could be made to determine the maximum power
output of each of the samples. These results are presented below in Table 1 and display a number of
interesting trends for the different materials:

Each sample clearly has a distinct peak in output power, as measured across a 100 Ω resistor; however, this peak varies for different base materials and is hard to pinpoint with discrete data. For HS II, the trend of increasing output power as a function of volume fill percentage is observed, but this trend is not necessarily evident for the other base elastomers.
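One plausible way to reduce such readings to power, assuming an approximately sinusoidal signal and reading the tabulated values as peak-to-peak millivolts, is through the RMS voltage across the load resistor; the exact reduction used in this work is not spelled out, so the sketch below is illustrative only.

```python
# Converting a peak-to-peak voltage into average power dissipated in the load,
# assuming an approximately sinusoidal signal across the 100-ohm resistor.
R_LOAD = 100.0  # ohms

def avg_power_mw(v_pp_mv: float, r_ohm: float = R_LOAD) -> float:
    v_rms = (v_pp_mv / 1000.0) / (2.0 * 2.0 ** 0.5)  # Vpp -> Vrms for a sine
    return (v_rms ** 2 / r_ohm) * 1000.0             # W -> mW

avg_power_mw(572.40)  # ~0.41 mW for the HS II 20% sample near 50 Hz (Table 1)
```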

Table 1. Power output for different samples being actuated vertically within the pick-up coil (all values in mV).

Sample        10 Hz    20 Hz    40 Hz    50 Hz    60 Hz    80 Hz   100 Hz
HS II 10%      4.40     4.60     7.20   158.22    24.81     5.24    15.73
HS II 20%      5.22     5.22     6.20   572.40     7.68     5.22    15.96
HS II 30%      5.93     4.92    12.51   577.53     4.99     5.46     6.69
HS III 20%     4.49     4.92     5.98   388.93     5.12     6.28    31.91
HS III 30%     4.75     5.02     9.39   258.28     6.63     5.42     8.77
HS IV 20%      4.99     4.34    26.89     8.08     4.34     5.89     5.93
HS IV 30%      4.77     5.16    15.49    13.69     2.51     5.18     6.83

Significance and Interpretation of Results
The results presented above provide much insight into the capabilities of H-MRE materials as sensors and
energy harvesters. For the sensing experiments, there has been some difficulty in rendering an adequate
signal from the pick-up coil being used because of the small magnitude of the induced EMF (as low as 5-
10 mV). However, even with these noise-filled results, it is clear that these materials could be capable
sensors as their magnetic behavior clearly reflects the physical behavior observed when measuring
displacement with a conventional laser sensor. By refining the pick-up system for these devices, it is
entirely feasible that these materials could be implemented as a new type of flexible magnetic sensor in
the future. The energy harvesting experiments have shown even more promise based on the result
presented in the previous sections of this report. It is clear that for each sample there is an ideal operating
frequency that varies due to a number of factors such as size, shape, volume-fill percentage, etc. This
indicates that energy harvesters can be optimally designed for each individual application so that they
produce the maximum power. Also, the nature of H-MRE materials (flexible and moldable into any
shape) makes them ideal candidates for energy harvesters. Finally, it is evident that these materials can
produce a sufficient amount of energy to power any number of devices, but particularly remote micro-
sized devices that only use small amounts of power. Ultimately, these experiments have validated the
idea that H-MRE materials could feasibly be implemented in both sensing and energy harvesting
applications and future work will focus on development of devices that implement these technologies.

References
1. Deng, H., and Gong, X. (2008). Application of magnetorheological elastomer to vibration absorber.
Communications in Nonlinear Science & Numerical Simulation, 13(9), 1938-1947.
doi:10.1016/j.cnsns.2007.03.024.
2. Davis, L. (1999). Model of magnetorheological elastomers. Journal of Applied Physics, 85(6), 3348.
Retrieved from Academic Search Complete database.
3. Atulasimha, J., and A. B. Flatau. "Experimental Actuation and Sensing Behavior of Single-crystal Iron-Gallium Alloys." Journal of Intelligent Material Systems and Structures 19.12 (2008): 1371-1381.
4. Yiming Liu, Geng Tian, Yong Wang, Junhong Lin, Qiming Zhang, and H. F. Hofmann. "Active Piezoelectric Energy Harvesting: General Principle and Experimental Demonstration." Journal of Intelligent Material Systems and Structures 20.5 (2008): 575-585.
5. Calkins, F. T., A. B. Flatau, and M. J. Dapino. "Overview of Magnetostrictive Sensor Technology." Journal of Intelligent Material Systems and Structures 18.10 (2007): 1057-1066.
6. Faidley, L., Tringides, C., and Hong, W., "Sensor Behavior of Magneto-Rheological Elastomers,"
Proc. Conference on Smart Materials, Adaptive Structures, and Intelligent Systems, American Society
of Mechanical Engineers.
Bleed Hole Location, Sizing, and Configuration for Use in Hypersonic Inlets

Student Researcher: Leslie A. Sollmann

Advisor: Dr. Aaron Altman

University of Dayton
Department of Mechanical and Aerospace Engineering

Abstract
Hypersonic airbreathing engines with speeds approximately five times the speed of sound have been
explored for use as efficient long-range cruise missiles, global reconnaissance, and for access to space for
well over 50 years. Although there have been recent successes with government funded hypersonic
programs, much research remains before further progress can be completed. One of the limiting factors in
the robustness of a hypersonic airbreathing engine involves unstarted inlets which occur when too much
air is forced into an engine at high speeds. There have been many techniques implemented to starting an
inlet such as retractable doors, variable inlet geometries, and mass extraction through perforations.
Although the aforementioned techniques are all viable solutions, permanent perforations for mass
extraction are arguably most beneficial due to ease in manufacturing and weight reduction. This paper
analyzes a technique for developing bleed perforations for mass extraction. Computational results were
obtained for a specific bleed hole configuration and are discussed in comparison with theory.

Project Objectives
The primary goal of this research was to develop a low cost hypersonic inlet starting method at a
freestream Mach of 3.5 for the axisymmetric Busemann inlet shown in Figure 1. This inlet featured a contraction ratio (the ratio of inlet internal capture area to inlet throat area) of 5.8. Figure 1 also shows the plenum included in the inlet design. The plenum would be included for future inlet applications and vehicle integration, but would need to be taken into consideration when developing a self-starting inlet design. Bleed holes were selected as the sole method for this project due to the ease and low
cost of manufacturing. An analytical method to determine the necessary size and location of bleed holes
in a hypersonic inlet was developed and tested with Cart3D, an inviscid automated Computational Fluid
Dynamics (CFD) code. This project included research of current hole sizing methods, design
development, and analysis via CFD.



Figure 1. Axisymmetric Busemann inlet with 5.8 contraction ratio and plenum feature. The overall size
of the inlet for this project was driven by wind tunnel limitations as the final inlet design is to be tested in
a hypersonic wind tunnel at the GoHypersonic Inc. facilities in Dayton, Ohio.

Methodology Used
The process for determining the required bleed hole sizing involves an analysis of mass capture and
perforation efficiency; this will be deemed the Mölder Theory for the remainder of the paper. The Mölder Theory simply uses the change in area of the inlet to represent the amount of mass that must be extracted, assuming some perforation coefficient. The following equation gives the amount of flow that needs to be extracted across a given inlet, where ΔA is the change in area of the inlet and W is the total perforation area:

\[ W = \frac{\Delta A}{k_p} \]

In this equation, k_p is a constant and indicates the efficiency of mass extraction through individual bleed holes. Bleed perforations are not highly efficient because they are located in the boundary layer and internal inlet flow must turn to enter the perforation; thus less flow is extracted through the hole than there would be if the hole were perpendicular to the flow. From this equation, the perforation mass flow can be determined. For this project, the hole diameters varied from 0.2 to 0.25 inches, but were kept constant throughout the length of the inlet to reduce manufacturing costs.

One of the most common methods for determining bleed perforation placement centers on the Kantrowitz limit, which determines spontaneous inlet starting. When the Kantrowitz limit is satisfied, the inlet will start barring unforeseen viscous effects such as shock-induced separation. The Kantrowitz limit is an area ratio between a known station on the inlet just before the shock wave and a station on the inlet where the flow reaches Mach 1. Kantrowitz assumes that a normal shock is located at the start of the inlet contraction section and that the flow is sonic (Mach 1) at the inlet throat. If the flow reaches Mach 1 upstream of the throat, the inlet will not swallow the shock. The following equations express the Kantrowitz limit.

\[
\frac{A_2}{A_1} = M_2 \left[ \frac{\gamma + 1}{2 + (\gamma - 1)M_2^2} \right]^{\frac{\gamma + 1}{2(\gamma - 1)}},
\qquad
M_2^2 = \frac{2 + (\gamma - 1)M_1^2}{2\gamma M_1^2 - (\gamma - 1)}
\]

where γ is the ratio of specific heats and M_2 is the Mach number immediately behind the normal shock. Using the inlet capture area as A_1 and the freestream Mach number as M_1, a value for A_2 can be determined. The location along the inlet where the area is equal to A_2 will be the location of the first set of bleed perforations. In the above case, A_2 is the location of the shock in the inlet that is preventing a spontaneously starting inlet. By placing bleed holes at this location, flow blockage is reduced and the shock is drawn further into the inlet. This method is repeated until A_2 is less than the inlet throat area.
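A minimal numerical sketch of this placement procedure is given below, under the same perfect-gas assumptions (γ = 1.4). The area profile is a hypothetical callable rather than the actual inlet geometry, and the sketch reuses the freestream Mach number at every station for simplicity.

```python
import math

GAMMA = 1.4

def post_shock_mach(M1: float, g: float = GAMMA) -> float:
    # Mach number immediately behind a normal shock at Mach M1.
    return math.sqrt((2 + (g - 1) * M1**2) / (2 * g * M1**2 - (g - 1)))

def kantrowitz_ratio(M1: float, g: float = GAMMA) -> float:
    # Sonic-station to capture area ratio A2/A1 for spontaneous starting.
    M2 = post_shock_mach(M1, g)
    return M2 * ((g + 1) / (2 + (g - 1) * M2**2)) ** ((g + 1) / (2 * (g - 1)))

# At the Mach 3.5 freestream used here, A2/A1 is about 0.69, i.e., a
# self-starting contraction ratio of only ~1.45 versus this inlet's 5.8,
# which is why successive rows of bleed holes are needed.

def bleed_row_stations(area_of_x, x_start, x_throat, M1=3.5, steps=2000):
    # March down a contracting area profile, placing a bleed row wherever the
    # local area falls to the Kantrowitz fraction of the area feeding the
    # current shock position, then repeating from that station.
    rows, a_ref = [], area_of_x(x_start)
    for i in range(steps + 1):
        x = x_start + (x_throat - x_start) * i / steps
        if area_of_x(x) <= kantrowitz_ratio(M1) * a_ref:
            rows.append(x)
            a_ref = area_of_x(x)
    return rows
```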

Results Obtained
Through Cart3D's automated CFD code, a bleed hole configuration combining the Mölder Theory for hole sizing and the Kantrowitz limit for bleed hole spacing was developed and proven to remain started at steady-state conditions. Several design iterations were completed, varying hole size, location, and angle. Designs combining the Mölder Theory and Kantrowitz limit, as seen in the final design (Figure 2), proved most efficient by the overall inlet mass capture ratio. Cart3D results demonstrated that configurations
with smaller diameter holes and offset bleed hole rows give more uniform flow which is ideal for
reducing pressure losses. Experimental analysis will be completed on the complete inlet design at a
freestream Mach of 3.5 as a comparison with the theoretical and computational results. Experimentation
will be completed at the GoHypersonic Inc. facilities in Dayton, OH.



Figure 2. Complete inlet design featuring inlet and plenum (upper middle). Also shown is the final internal inlet for the complete design (lower left), with 0.2-inch diameter offset holes spaced by the Kantrowitz limit. Cart3D Mach contours of the complete design with inlet and surrounding plenum (right) gave a relatively high overall mass capture ratio of 0.882.

References
1. Molder, Sannu, Evgeny V. Timofeev, Rabi B. Tahir. "Flow Starting in High Compression Hypersonic Inlets by Mass Spillage." Ryerson University and McGill University. AIAA 2004-4130. 40th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit. Fort Lauderdale, Florida. 11-14 July 2004.
2. Van Wie, D. M., F. T. Kwok, R. F. Walsh. "Starting Characteristics of Supersonic Inlets." Johns Hopkins University. AIAA 96-2914. AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit. Buena Vista, Florida. 1-3 July 1996.
3. Tam, Chung-Jen, Robert A. Vaurle, Gary D. Streby. "Numerical Analysis of Streamline-Traced Hypersonic Inlets." Taitech Inc. AFRL/PRA. AIAA 2003-13. 41st Aerospace Sciences Meeting and Exhibit. Reno, Nevada. 6-9 January 2003.
4. Jacobsen, Lance S., Chung-Jen Tam, Robert Behdadnia, Frederick S. Billig. "Starting and Operation of a Streamline-Traced Busemann Inlet at Mach 4." GoHypersonic Inc., Taitech Inc., AFRL, Pyrodyne Inc. AIAA 2006-4508-671. 42nd AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit. Sacramento, California. 9-12 July 2006.
Geology of the Moon

Student Researcher: Steven E. Solomon

Advisor: Ms. Libbey McKnight

The University of Toledo
Department of Curriculum and Instruction

Abstract
The lesson that I will conduct research on is intended to inspire students to pursue a career in astronomy and science. One major part of planetary astronomy is planetary geology dealing with the Moon. The
minerals and rocks that planets and moons are made of are some of the same materials that exist on Earth.
These materials are composed of elements that were originally created in stars at one time. Moon rocks
are currently the most accessible rocks in the solar system outside of Earth. As a result, they are the best
resource for studying planetary geology. Therefore, the lesson will be using the encased Moon rocks
available to teachers in the Lunar Sample Disk that can be obtained through NASA. The two-day lesson
will focus on the formation of the Moon and the Moon's geology. The first day is a presentation that will
include a simulation of how the moon formed, and an experiment using density to show where specific
rock types can be found on the Moon. The second day will involve students conducting two hands-on
scientific investigations where they create their own density experiment and identify the rock types
provided in the Lunar Sample Disk.

Objectives
From the Ohio Department of Education Academic Content Standards:

Grades 11-12 Earth and Space Sciences Benchmark A:
Explain how technology can be used to gather evidence and increase our understanding of the universe.

Grade 11 Earth and Space Science Indicator 1:
Describe how the early Earth was different from the planet we live on today, and explain the formation of
the sun, Earth and the rest of the solar system from a nebular cloud of dust and gas approximately 4.5
billion years ago.

Grade 11 Scientific Inquiry Indicator 3:
Design and carry out scientific inquiry, communicate and critique results through peer review.

Methodology
Day One: Materials include - 1 marble, Play-Doh, a clear cup of water, a twig, a pebble.

The presentation will begin by explaining the formation of the Moon based on the giant impact theory.
The first demonstration will involve a marble being moved by hand towards a ball of Play-Doh. Students
will see that the marble combines with the Play-Doh and displaces a comparably sized portion that moves around the ball. This represents the primordial Earth being hit by a Mars-size object, which then displaces a piece of hot molten magma that falls into Earth's orbit. Next, the presentation will explain the differentiation of the Moon's minerals, in which the denser olivine and pyroxene sank toward the Moon's center and the less dense plagioclase feldspar rose to the top of the magma ocean on the Moon's surface. This calcium-rich mineral would later crystallize into an igneous rock, anorthosite, and produce the Moon's crust, ending the magma ocean approximately 4.4 billion years ago.
demonstration will use a twig and pebble to show how this process occurred. The twig and pebble will be
dropped into a glass of water and the less dense twig will float while the denser pebble will sink. Lastly,
the presentation will outline information on the Moon rocks in the Lunar Sample Disk such as how the
rocks were collected during the Apollo program, how technology is used to gather information about our
solar system, and the basic geology of the Moon rocks. The Lunar Sample Disk includes: lunar regolith
found in the highlands that is rich in aluminum and most likely from the far side of the Moon; lunar regolith found in the maria (plural of mare) that is rich in basaltic minerals such as iron and magnesium and most likely is from the side of the Moon that we can see; breccia formed from rocks that were fused together by intense impacts on the Moon; basalt, which resulted from lava flows; orange soil containing tiny glass droplets that resulted from ancient pyroclastic eruptions; and anorthosite. During the presentation, it will be explained that basaltic lava flows resulted from cracks made by impacts on the Moon's surface that were later intruded by magma due to heat evolved from the radioactive decay of elements deep inside of the Moon (Bennett et al., 2009, p. 200). Students will be assigned homework due the second day of the unit. The homework will include reading about the lunar rocks on pages 1-15 of the Teacher's Guide from NASA (G. Jeffrey Taylor et al., 1997). Students will also have to bring in two
items from home, one that floats and one that does not float. They will then test them in front of the class
the next day and determine which type of mineral from the Moon each item represents.

Day Two: Materials include - a clear cup of water, the items that the students bring, 6 magnifying glasses, the Lunar Sample Disk (which teachers will need to obtain through a free lunar certification workshop), and a worksheet provided by the Teacher's Guide.

Students will begin by running their density tests in front of the class as described above. The second
laboratory experiment will include the Lunar Sample Disk with the sample names hidden so that students
cannot see the labels. Before this pop laboratory session, the students will be given the Lunar Disk
Sample chart supplied by the Teacher's Guide on page 42, which includes observations and interpretations
of the Moon rocks (G. Jeffrey Taylor et al., 1997). Students will also be provided magnifying glasses to
use from the Geology Toolbox from NASA. Students will then describe physical properties of the lunar
rocks inside of the disk based on the information that they have learned. They will then critique and discuss with one another what they think each rock should be labeled, and hand in their final answers.
The teacher will wrap up day two by revealing the correct labels and finally answering any questions
students may have.

Results Obtained
Although this two-day lesson plan has not been implemented in a classroom yet, I have tested the
experiments myself (excluding the use of the lunar disk) and they successfully demonstrate how density
can be used to identify lunar rocks, as well as how to physically characterize the types of rocks on the
Moon.

Significance and Interpretation of Results
The significance of this lesson is that students will gain knowledge addressing a benchmark and two indicators at grades 11 and 12 in Ohio. The lesson addresses scientific concepts such as density and differentiation, which are applicable to many scientific subjects. They will also learn about the formation
of the Moon and the geology of moon rocks. Students will most likely have little knowledge about the
information presented in this lesson beforehand because the information is specific to studies of the
Moon. Therefore, this is a great lesson to get students excited about astronomy and science by going into
more detail about something that is very well known.

Acknowledgments
I would like to thank the Ohio Department of Education for the Current (2002) Science Standards. I am
also forever grateful to everyone who worked on Exploring the Moon: A Teacher's Guide with Activities
for Earth and Space Sciences. All of the scientific information on the Moon and ideas were obtained
from this guide (except where otherwise cited). Also, because of NASA I was able to incorporate their
Lunar Sample Disk and Geology Toolbox that includes six magnifying glasses into my lesson plan.

References
1. Bennett, Jeffrey, Megan Donahue, Nicholas Schneider, and Mark Voit. The Essential Cosmic
Perspective. Boston: Addison Wesley, 2009. Print.
2. G. Jeffrey Taylor et al. Exploring the Moon: A Teacher's Guide with Activities for Earth and Space
Sciences. National Aeronautics and Space Administration, 1997. Print.
Thermal Stability and Performance of Foil Thrust Bearings

Student Researcher: Brian J. Stahl

Advisor: Dr. Joseph M. Prahl

Case Western Reserve University
Department of Mechanical and Aerospace Engineering

Abstract
The performance map of torque, power, and load capacity as a function of speed for three open source
and geometrically identical air lubricated compliant foil thrust bearings is extended from a maximum
speed of 40,000 rpm to 70,000 rpm. The bearings are tested against Inconel thrust runners coated with
PS400. A second open source foil design is used to make two additional bearings, one of which is also
mapped to 70,000 rpm. A non-dimensional torque based on hydrodynamic lubrication consolidates the
data within 15%. The four bearings tested on the high speed rig held their maximum load capacities at
speeds between 50,000 and 60,000 rpm, which corresponds to approximately 3.8 to 4.6 million DN. Load
capacity diminishes above these speeds. Separately heat treated top foils, in combination with previously
conditioned bump foils and backing plate, offer reduced variability when compared to a new matched set.

Introduction and Objectives
A gas foil bearing is a device which uses air or another gaseous lubricant to separate two surfaces in
relative motion. These devices are self-acting, requiring no separate control systems for either lubricant
flow as in conventional oil-lubricated bearings or electronic control as required by magnetic bearings.
Foil bearings offer the advantages of high speed and high temperature operation. They also offer
improved reliability, reduced maintenance requirements, and reduced complexity by eliminating the need
for supporting subsystems [6].

Thrust bearings constrain motion along the axis of rotation, while journal bearings constrain motion
orthogonal to this axis. Both types of foil bearings develop the air pressure necessary to support a load by
drawing air into a converging gap via viscous drag. Foil bearing systems typically use four different
components to achieve this load-supporting air pressure which can be seen in Figure 1. A rigid support
structure forms the foundation for the bearing, and a compliant layer of bump foils provides tolerance to
shock loads and misalignment. The hydrodynamic film develops on top of the smooth top foil, which is
supported by the compliant layer underneath. Finally, a hard and smooth surface rotates with the
turbomachine shaft and draws air into the gap between itself and the top foil.

There is a comparative dearth of knowledge on foil thrust bearings as contrasted with foil journal bearings
[3]. Today, thrust bearings are the Achilles' heel of oil-free turbomachinery, preventing more widespread
implementation. Journal bearings typically carry the deadweight load of the shaft and the radial
vibrational loads. In fluid power machinery, there is often an axial load which far exceeds the shaft
deadweight load. Symmetric turbines can avoid these large axial loads, but thrust or pressure producing
machinery often cannot. Normally, a designer would increase the size of the thrust bearing to
accommodate the larger load. However, the thrust disk is already on the order of the blade height for
microturbines. Larger diameters present rotordynamic problems and exacerbate the existing problems in
high speed thrust bearing operation.

There is very little published performance data for foil thrust bearings, as most industry designs are
proprietary. The work of Bruckner [1] characterizes the complex interplay between structure,
hydrodynamics, and temperature in foil thrust bearings and delivers a model to describe these
interactions. Experimental research has primarily focused on exploring the factors which limit thrust
bearing performance. Dykas [3] investigates these factors, but the use of proprietary bearings complicates
the validation of numerical models. Dykas et al. [4] develop an open source bearing geometry which is
designed for simplicity and modularity, and Dickman [2] publishes the first known set of performance
data for a non-proprietary bearing design. This performance data is primarily taken between 0 and 21
krpm, with some data at 40 krpm.

This present work [7] extends, by 75%, the maximum speed published by Dickman [2]. It also considers a
second style of bump foils to be included in the performance library.

Methodology
The Low Speed Thrust Bearing Rig (LSTBR) is a pneumatically loaded, electronically controlled, and
motor driven test rig, designed to run high cycle start-stop tests between room temperature and 1000F.
The motor bearings are grease packed ball bearings which provide a very stable running surface for the
test bearings, at the expense of shaft speed. The rig has a top design speed of 21,000 rpm. The test
bearings considered in this work have a three inch (76.2 mm) average diameter which corresponds to a
maximum speed of 1.6 million DN on the LSTBR. DN is a measure of the linear surface speed and is
calculated as the bearing diameter in mm multiplied by the rotational speed in rpm. A diagram of the low
speed test rig is shown in Figure 2.
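Since DN values recur throughout this discussion, the one-line helper below (the function name is ours) reproduces the conversions quoted in the text.

```python
def dn(diameter_mm: float, rpm: float) -> float:
    """Bearing DN number: diameter in mm multiplied by shaft speed in rpm."""
    return diameter_mm * rpm

dn(76.2, 21_000)  # ~1.6 million DN, the LSTBR ceiling for these bearings
dn(76.2, 70_000)  # ~5.3 million DN, the highest speed tested on the HSTBR
dn(76.2, 80_000)  # ~6.1 million DN, the HSTBR design maximum
```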

The High Speed Thrust Bearing Rig (HSTBR), designed and assembled by Mohawk Innovative
Technology Inc. (MITI), operates with a maximum rotational speed of 80 krpm. The rotating shaft is
supported by two foil journal bearings and an axial magnetic thrust bearing. The magnetic bearing is
designed to support 700 lbf of load, but even commercial bearings [3] have not been loaded much beyond
100 lbf. While grease packed bearings and an electronic motor are used in the LSTBR, non-contact
bearings and a pneumatically controlled impulse turbine in the HSTBR allow for a nearly fourfold
increase in shaft speed. However, this design sacrifices the axial stability that the ball bearings on the
LSTBR provide and the capability to test at elevated temperatures. The loader housing was redesigned
from the original MITI configuration to include oil lubricated bearings to support the shaft stack and to
damp axial vibrations. This redesign necessitated the loss of temperature testing capability. At the three
inch (76.2 mm) average diameter of the bearing, the maximum bearing speed at 80 krpm is 6.1 million
DN, although the bearings have not yet been tested beyond 70 krpm (5.3 million DN) on the HSTBR.
Figure 3 shows a cutaway diagram of the horizontal axis HSTBR.

The three operational regimes for foil bearings are described by the Stribeck curve. While normal
operating conditions should fall in the hydrodynamic regime, performance gains can be realized by
controlled conditioning of the bearing in the mixed regime. The operational principle is that the two
surfaces are brought close enough together to make asperity contacts. Ideally the runner and bearing
knock the high asperities down to achieve a smoother surface, a process called conditioning. This should
allow the bearing to run at a smaller gap thickness and thus a higher gas pressure and load capacity.

One technique for conditioning a bearing is to hold the speed constant and increase the load, which results
in a dramatic increase in torque. When held at this constant load, the torque trends downward with time,
which indicates beneficial rubbing contact between the top foils and runner. This rubbing serves to make
the two surface topographies match, allowing for a smaller average air gap thickness. Eventually, small
increases in load lead to a rapid and unchecked increase in torque. If this load is not removed, the bearing
foils will weld to the runner and destroy both. It is hypothesized that the mechanism for this unchecked rise in torque is thermal [5]: asperity contacts generate heat, the heat cannot be conducted away fast enough, the top foils thermally deform, and the deformation leads to increased surface contact.

Torque should increase approximately linearly with load when the bearing and runner are separated by a
full hydrodynamic film [2]. As the two surfaces begin to touch, the torque increases much faster. While
the bearing may be able to operate steadily in this manner, the inflection point in the torque vs. load graph
can show where the fully hydrodynamic region ends and the mixed lubrication region begins. Data for
torque as a function of load at a fixed speed is collected after the load has been steadily increased to the
maximum which can be stably supported. The load is slowly removed and the quasi steady torque
response is measured. The advantage of collecting the data while removing the load is that thermal events
are less likely to corrupt the data. By starting the runner and bearing surfaces as close as possible and
then increasing the gap by removing load, the torque responds to the removal of load and not an incipient
thermal runaway. This test is repeatable at low loads while further conditioning serves to improve
(lower) the torque at high loads.
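A simple numerical way to locate that inflection, sketched here with invented torque/load pairs (the values and threshold are illustrative, not test data), is to fit the low-load points, where torque is expected to be linear in load, and flag the first clear departure from that line.

```python
import numpy as np

# Hypothetical torque (N*mm) vs. applied load (lbf), recorded while unloading.
load   = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0])
torque = np.array([1.1, 2.0, 3.1, 4.0,  5.6,  8.2, 12.9])

# Fit the low-load region, where a full hydrodynamic film keeps torque linear.
slope, intercept = np.polyfit(load[:4], torque[:4], 1)
residual = torque - (slope * load + intercept)

# The first load whose torque rises well above the linear trend marks the
# approximate end of the fully hydrodynamic region (threshold is arbitrary).
onset_load = load[residual > 0.5][0]
```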

Results and Discussion
A study is conducted to extend the experimental performance map for an open source foil thrust bearing
design. A complete discussion can be found in Reference 7. Two bearings with identical geometry
(referred to as BNB style), previously tested to 40 krpm, are tested between 30 and 70 krpm and the raw
test data are reported (a representative result is shown in Figure 4). A third bearing of the same geometry
is fabricated and tested to ensure the repeatability of previous results and to add to the number of samples
considered. This data extends the range of published performance data to 5.3 million DN and is useful for
the validation of numerical models exploring high speed phenomena. The development of a crude model
for the non-dimensional torque, based on hydrodynamics, does appear to capture some of the relevant
physics. The non-dimensional torque is given by:
\[ \tau^{*} = \frac{\tau^{2}}{\mu N R_m^{2} A L} \]

where τ is the measured torque, μ is the viscosity of air as determined by the measured temperature, N is the rotational speed, R_m is the mean radius of the bearing, A is the surface area, and L is the load supported by the bearing. Values for the non-dimensional torque lie primarily between 15 and 20, using the same proportionality and size constants across all of the bearings.
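Applied to raw measurements, the grouping reads as in the sketch below; every number is a placeholder chosen only to show the arithmetic, and the absolute value also depends on the proportionality and size constants mentioned above.

```python
# Evaluating the non-dimensional torque grouping for one hypothetical
# operating point; all values are placeholders, not test data.
tau = 0.01       # measured torque, N*m (assumed)
mu = 1.85e-5     # air viscosity at the measured temperature, Pa*s
N = 50_000 / 60  # shaft speed, rev/s (50 krpm)
R_m = 0.0381     # mean bearing radius, m (three inch average diameter)
A = 2.0e-3       # bearing surface area, m^2 (assumed)
L = 100.0        # supported load, N (assumed)

tau_star = tau**2 / (mu * N * R_m**2 * A * L)  # dimensionless
```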

An additional bump foil design is considered to add breadth to the available performance data and is also
reported. The TEE style of bump foils is not as optimal as the BNB series, as evidenced by its lower
hydrodynamic load capacity. The wear area, or area thought to be primarily responsible for supporting the
load, on the TEE bearings is about half of that seen on the BNB bearings. However, both bearings support
approximately 20 psi as a maximum pressure load capacity when using the wear area, suggesting an
upper limit for a given top foil and runner wear couple without cooling. Additional data on different bump
styles is needed to confirm this result.

Two factors which limit the interpretation of this work are the poorly characterized axial vibrations on the
HSTBR, and the lack of crossover data between the two test rigs. Presently, the only monitoring of axial
position is an optical proximity sensor looking at the turbine end of the shaft. A second sensor is needed
to measure the amplitude and phase of the bearing motion. Measuring the motion of the backing plate
and/or top foils in addition to the runner would give a better overall picture of the impact that these
vibrations have on the gas film. Additionally, coast downs on the HSTBR do show good agreement in
torque with the LSTBR data at 8.6 lbf, but there is no data for extended operation on the HSTBR at
speeds of 21 krpm or below. Load capacity drops between 21 krpm on the LSTBR and 30 krpm on the
HSTBR. This is not likely due to high speed thermal issues, as the load capacity rises from 30 krpm
through 50 krpm. In the interest of preserving the health of the HSTBR, no crossover data between the
two rigs was attempted. Such data would be helpful in making meaningful connections between data
from the two rigs.

The maximum load capacity for the three BNB bearings tested occurs at 50 krpm, while the TEE bearing
tested held its maximum load at 60 krpm as shown in Figure 5. This result ignores performance on the
LSTBR, due to the aforementioned crossover issues. Better resolution testing (smaller speed increments)
can be conducted to more precisely define this maximum. The data shows that load capacity does not
indefinitely increase with speed, and speeds above approximately 4 million DN degrade the load capacity.
Significant performance gains can be realized through the use of cooling air, although passive thermal
management would retain the benefits of maintenance and control-free operation. Modelers should note
the speeds at which the load capacity diminishes for the geometries considered, while designers
considering foil bearings should be aware that high speed performance cannot be extrapolated from low
speed results.

Should future test work involve design iterations on the bearing top foil, it is shown that replacing the top
foil alone offers less variability than matching a new set of top and bump foils. A brand new bearing with
the BNB geometry was unable to support the load typical for its design, but when these top foils are
combined with previously well-performing bumps and backing plate, the bearing is easily conditioned.
This suggests that two sets of bump foils, manufactured and heat treated in the same way, can exhibit
different performance characteristics independent of the specific top foils matched to them. The influence
of the friction coefficient between the bump foils and backing plate on bearing performance is an area that
has not been well characterized for the non-proprietary design. The effect of solid lubricant spray
coatings on the bump foil would help refine models where the friction damping is not well known.

Figures

Figure 1. Thrust Bearing Design and Cross Section Schematic.


Figure 2. Low Speed Thrust Bearing Rig Cutaway.

Figure 3. High Speed Thrust Bearing Rig Cutaway.


Figure 4. Torque as a Function of Load for BNB005.

Figure 5. Hydrodynamic Load Capacity vs. Speed.

Acknowledgments
The author wishes to express his gratitude to his thesis advisor Dr. Joseph Prahl for his academic guidance,
assistance in running tests, and rebuilding work on one of the test rigs. The Tribology and Mechanical
Components Branch (RXN) at the NASA Glenn Research Center is thanked for the use of its test hardware.
Specifically, Dr. Robert Bruckner is thanked for his close technical guidance and mentoring. The author
also acknowledges Dr. Christopher DellaCorte for his oversight of the foil bearing research program.
Thanks to Richard Manco for his support in the lab, particularly for keeping the rigs operational and for his
assistance in replacing the high speed shaft. Kevin Radil and Dr. Brian Dykas are acknowledged for their
guidance on rig repair and rig operation, respectively. Former Master's student Joseph Dickman is sincerely thanked for leaving the author with a pair of fully operational test rigs and for spending the final semester of his program getting the author up to speed on the foil bearing research program at NASA-GRC.
Finally, the Ohio Aerospace Institute, the Case Alumni Association, and the Mechanical and Aerospace
Engineering Department of the Case School of Engineering, namely Laura Stacko, Dr. J. R. Kadambi, and
Dr. Iwan Alexander, are thanked for their generous financial support, without which this work would not
have been possible.

References
1. Bruckner, R., Simulation and Modeling of the Hydrodynamic, Thermal, and Structural Behavior of Foil
Thrust Bearings. Ph.D. Dissertation, Case Western Reserve University, Cleveland, OH, (2004).
2. Dickman, J., An Investigation of Gas Foil Thrust Bearing Performance and its Influencing Factors.
M.S. Thesis, Case Western Reserve University, Cleveland, OH, (2010).
3. Dykas, B., Factors Influencing the Performance of Foil Gas Thrust Bearings for Oil-Free
Turbomachinery Applications. Ph.D. Dissertation, Case Western Reserve University, Cleveland, OH,
(2006).
4. Dykas, B., Bruckner, R., DellaCorte, C., Edmonds, B., Prahl, J., Design, Fabrication, and Performance
of Foil Gas Thrust Bearings for Microturbomachinery Applications. Journal of Engineering for Gas
Turbines and Power. Vol. 131 / 012301-7 (January 2009).
5. Dykas, B., DellaCorte, C., Prahl, J., and Bruckner, R., Thermal Management Phenomena in Foil Gas
Thrust Bearings. In Proceedings of Turbo Expo 2006: Power for Land, Sea, and Air, no. 2006-91268,
American Society of Mechanical Engineers, (2006).
6. Dykas, B., and Howard, S. A., Journal Design Considerations for Turbomachine Shafts Supported on Foil Air Bearings. Tribology Transactions, 47(4), pp. 508-516 (2004).
7. Stahl, B., Thermal Stability and Performance of Foil Thrust Bearings. M.S. Thesis, Case Western
Reserve University, Cleveland, OH, (2012).
Optimization of Scenedesmus dimorphus in an Open Pond System

Student Researcher: Brittany M. M. Studmire

Advisor: Dr. Joanne Belovich, Cleveland State University; Dr. Bilal Mark McDowell Bomani, NASA
Glenn Research Center

Cleveland State University
Department of Chemical Engineering

Abstract
The sustainability of aviation directly depends on the availability of fuel. With the growing gap between
production and demand, increasing prices, and concentration of known reserves in politically unstable
regions, biofuels are considered a viable alternative to securing the future of aviation. Biofuels are a
renewable energy source, which could be customized to different fuel needs, including jet fuel. NASA
Glenn Research Center (GRC) has initiated a pilot program to develop in-house capabilities to study two
principal sources of biofuels: seawater algae and arid land halophytes. The present program is focused on putting together the initial infrastructure for the study, developing a long-term program to study and optimize properties and growth parameters, and developing collaborations with aviation companies, commercial ventures, and government agencies to forward the application of biofuels to aviation needs.

Project Objectives
Current projects involving the scale-up of microalgae have proven to be expensive. In order to reduce scale-up and capital costs, the growth of the microalgae must be optimized. The purpose of these experiments was to optimize the growth of Scenedesmus dimorphus to better determine the kinetics of a large-scale system.

Methodology Used
A 2 ft deep open pond system was used with six 6500K 400 watt Japanese Iwasaki bulbs suspended six
inches above the pond, with a light/dark cycle of 16/8 hours (See Fig. 1). A Tunze wave
generator was added to the pond, with the amplitude and frequency of the waves adjusted, along with a
booster and laminar flow pump to encourage random mixing in the system. Previous testing has shown
that the mixing system achieves 95% mixing. A carbon source (C0
2
), needed for the algae to grow, is
pumped into the system, and is controlled by the pH of the system (6.5 setpoint). The temperature of the
system is controlled and kept at 28 oC +/- 1 oC. Innoculum was grown in four 2 liter jars with a working
volume totaling 6 liters. This was grown to an absorbance of about 2 OD, then added to the large pilot
tank, which has a working volume of about 700 liters. Absorbance samples were taken daily and
recorded. 50 mL samples were taken daily for future NO3- readings; algae suspension was centrifuged
down at 2000 RPM and supernatant was removed and frozen until the end of the experiment. 1 L samples
were taken every other day for lipid extractions. Suspension was centrifuged at 2000 RPM, supernatant
removed, and the resulting biomass pellet was allowed to dry in an oven at 55-60 oC until weight was
constant. The Bligh-Dyer lipid extraction and Hexane/Isopropanol methods were followed. A periodic
check on the health of the cells was conducted visually under the microscope about once a week.
Experiment was repeated twice to determine reproducibility.

Results and Discussion
Previous research in the small 250 mL flasks has shown a maximum absorbance of 2.5, which translates
to about 1.55 g/L of biomass dry weight (See Fig. 2). The correlation between absorbance and biomass
comes from previous research done in our algae lab, which determined that absorbance is related to
biomass by a factor of 0.62. Essentially, ensuring that the biomass reaches its highest concentration as
quickly as possible is vital to obtaining the highest quantity of lipids.

Our previous research on the small flasks has also shown that the largest growth of cells occurs within
days two through five, which translates into the biomass doubling in concentration in roughly 1.5 days
(See Fig. 3).
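
As a rough illustration of these two conversions, the sketch below (in Python; the 0.62 factor is the lab
correlation reported above, while the growth rate of 0.46/day is a hypothetical value chosen only to
reproduce the observed 1.5-day doubling) shows how absorbance readings and specific growth rates
translate into dry biomass and doubling time:

    import math

    def biomass_from_absorbance(od, factor=0.62):
        # Dry-weight biomass (g/L) from an optical-density reading,
        # using the linear correlation determined in our algae lab.
        return factor * od

    def doubling_time(mu):
        # Doubling time (days) implied by a specific growth rate mu (1/day).
        return math.log(2) / mu

    print(biomass_from_absorbance(2.5))  # ~1.55 g/L at the peak flask absorbance
    print(doubling_time(0.46))           # ~1.5 days, as observed in the flasks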

Comparing the growth of algae in our large tanks with our previous small flask experiments, it is apparent
that the small flasks experienced quicker growth. The larger tanks reached a maximum absorbance of just
under 2 over the course of roughly 20 days, whereas the flasks peaked at 2.5 within 10 days (See Fig. 4).
In contrast, the larger tanks experienced higher specific growth rates, which translates into lower doubling
times (See Fig. 5).

While lipid extractions were taken, the results presented are questionable due to the varying extraction
methods used and the inconsistency with which samples were taken (See Fig. 6).

Conclusion
Overall, our scale-up attempt was successful in that we obtained reproducible results in terms of cell
growth. There was approximately a 30% increase in the mean growth rate, which can possibly be
attributed to the light system, pH, temperature control, or a host of other factors. Conversely, we saw a
23% decrease in the maximum cell concentration. Possible explanations include the mixing of the
system or the ratio of light-exposed surface area to liquid volume. Future experiments will focus on
improving the collection of data for nitrate and lipid kinetics.

Figures and Charts


Figure 1. Image of the open-pond system with lights suspended.


Figure 2. Absorbance readings for small 250 mL flask experiments showing maximum absorbance
occurs at day 10.


Figure 3. Mean growth rate of previous small flask experiment gives a doubling time of roughly 1.5 days.


Figure 4. Comparing the absorbance of the two large tank experiments with the small flask experiment.
The small flask experiment reaches maximum absorbance more quickly than the larger tanks.


Figure 5. Comparing specific growth rates and doubling times between experiments. The larger tank
experiments gave shorter doubling times.


Figure 6. Lipid content comparison between experiments. Data is questionable due to the varying
extraction methods used and inconsistency in data collection.

Acknowledgments
Special thanks to: Dr. Joanne Belovich, Cleveland State University; Dr. Bilal Bomani, NASA Glenn
Research Center; Ohio Space Grant Consortium; Ohio Aerospace Institute.
Neural Controller for Legged Robots:
System Behavior Characterization via the Permutation Engine

Student Researcher: Nicholas S. Szczecinski

Advisor: Roger D. Quinn

Case Western Reserve University
Department of Mechanical and Aerospace Engineering

Abstract
A characterization tool was developed to aid in the creation of neural controllers for legged robots. As
biology becomes a more common model for engineering, it has become desirable to design controllers
based in neurobiology. The structure and connections of such systems are well understood in some model
organisms, but rarely are the physical properties of each neuron and synapse examined. The presented
tool, the Permutation Engine, is the first step toward an optimization tool for such systems. In its current
form, it characterizes the behavior of a user-specified network by sweeping through the user-specified
parameter space and numerically deriving relationships between system properties and behavior. It is also
capable of analyzing the stability of a limit cycle through a numerical adaptation of Floquet analysis.

Project Background and Objectives
Despite recent developments in robotics and controls, robots remain relatively poor locomotors. Some
very impressive legged robots do exist, but they are limited in speed and in their ability to change gait.
Animals, however, effortlessly coordinate multiple limbs in a variety of gaits, automatically selecting the
one most suited to the current environment. Such adaptability would prove invaluable to any walking
robot attempting to navigate uneven or dynamic terrain. No robot currently possesses this ability.

It is hoped, however, that a robot whose controls are generated by a simulated neural circuit based on an
actual animal system could produce the same adaptive behavior seen in animals. Modeling connectomes
from literature, however, requires that the engineer calculate and tune the properties of each neuron and
synapse individually to produce the desired behavior, a method that is common in the field of
computational neuroscience and was attempted first in this project. This process, however, is slow and
tedious. Therefore a testing environment, the Permutation Engine simulation and analysis tool, was
developed in which the user can specify a connectome, static system parameters (if desired), and a
parameter search space for the system. The Permutation Engine will then run short experiments and
automatically analyze the output for the desired metric (bursting frequency, stability of perturbed
response, etc.). Once it has calculated the desired output for each property to be swept, the data is
regressed to an explicit function. Such an experimental approach to computational neuroscience is one
that has been gaining momentum recently [1][2][3], and so this nontraditional process is justified.
Moreover, similar systems have led to successful implementations of neural controllers in legged robots,
which is the ultimate goal of this project [4].

Methods
This software strives to find empirical relationships between potentially any number of parameters in a
neural system of any size by running short experiments that simulate the dynamics of a system,
automatically analyzing the system behavior, and then incrementing one or many of the parameters of the
system. Repeating the experiment multiple times, each with different system properties, will then reveal
relationships between network parameters and performance. In this way one can completely characterize
a system of virtually any size and then save only the regressed data, not only improving the designer's
ability to assemble neural systems but also saving the disk space that would have been necessary to store
all of the raw output.

The entire project was written in Matlab (The Mathworks, Inc.), not only because it was the language the
programmer was most comfortable with, but also because it compiles the code as it runs, which
drastically reduces debugging time. The presented system has a very modular and hierarchical structure,
taking full advantage of Matlab's object-oriented programming capabilities. Figure 1 shows two different
representations of the system, one containing qualitative names of each of the objects, and the other
labeled with the file/class names. The bottom layer represents physical models of the systems in question.
The next layer up contains objects that are networks of the neurons, assembled from the pieces in the
layer below. The second layer from the top contains the permutation engine, which creates and contains
objects that represent different permutations of the networks provided. Finally, the top layer contains the
software with which the user interfaces, including input, menus corresponding to the type of analysis one
wants to perform, and methods that analyze the output. By structuring the system this way, the user needs
to do little bookkeeping besides instantiating the objects and assembling them into the desired network.

The primary system behavior analyzed in this report is a central pattern generator's (CPG) bursting
frequency. Both in vivo and in our simulations, each joint is driven by an antagonistic pair of muscle
groups, each of which is stimulated by one neuron from a particular CPG. Each joint's CPG is connected
to the adjacent CPGs only by sensory afferents. Coordinating these oscillators into a stable, coherent
pattern is crucial to generating a unified stepping motion, so this was the focus of much of this project.

The CPG frequency is determined by a method in PostProcessing.m. Its first operation turns the raw
voltage recording into a square wave that represents the CPG's bursting. This signal is much easier to
analyze, and from it the method determines the mean and standard deviation of the bursting frequency.
This process is then repeated across system parameter permutations, and the CPG frequency is regressed
to fit a surface that explicitly relates the inputs and outputs.
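
A minimal sketch of this kind of post-processing is shown below in Python (the project itself implements
it in Matlab; the threshold value and array names are illustrative only). The voltage trace is thresholded
into a square wave, burst onsets are located at the rising edges, and the mean and standard deviation of
the bursting frequency follow from the onset-to-onset intervals:

    import numpy as np

    def burst_frequency(t, v, threshold):
        # Square wave of bursting: 1 while the voltage is above threshold.
        square = (v > threshold).astype(int)
        # Burst onsets are the rising edges of the square wave.
        onsets = t[np.flatnonzero(np.diff(square) == 1) + 1]
        # Instantaneous bursting frequency from onset-to-onset intervals.
        freqs = 1.0 / np.diff(onsets)
        return freqs.mean(), freqs.std()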

This software package also uses numerical methods to perform a Floquet-type stability analysis. The
underlying principle of this analysis is that if one perturbs a system from a limit cycle, one can examine
how the system returns to or escapes from the stable trajectory. If the perturbation is small enough, one
can perform a Taylor expansion of the solution trajectory to produce a linear system that represents the
difference between the limit cycle and the perturbed, actual path taken. If this linear system has only
eigenvalues with negative real parts, then the perturbed response decays to zero and the solution
converges back to the limit cycle. If it has an eigenvalue with a positive real part, then the solution
escapes from the limit cycle and never returns. This sort of analysis has already proven useful for creating
oscillators that can be turned on and off at will, a very useful structure in a neural circuit.
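
The sketch below illustrates the idea numerically on a toy normal-form oscillator with a known stable
circular limit cycle (it is not the neural model used in this project). The trajectory is started slightly off
the cycle, and the decay rate of the radial error estimates the nontrivial Floquet exponent; a negative
value indicates a stable cycle:

    import numpy as np
    from scipy.integrate import solve_ivp

    def hopf(t, y):
        # Normal-form oscillator with a stable limit cycle on the unit circle.
        x, z = y
        r2 = x * x + z * z
        return [x * (1 - r2) - z, z * (1 - r2) + x]

    t_eval = np.linspace(0, 6, 600)
    sol = solve_ivp(hopf, (0, 6), [1.05, 0.0], t_eval=t_eval,
                    rtol=1e-10, atol=1e-12)
    radial_error = np.abs(np.hypot(sol.y[0], sol.y[1]) - 1.0)

    # log(error) is nearly linear in time; its slope estimates the
    # nontrivial Floquet exponent (about -2 here, so the cycle is stable).
    slope = np.polyfit(t_eval, np.log(radial_error), 1)[0]
    print("estimated Floquet exponent:", slope)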

Results
The purpose of the regression method is to generate a surface that passes exactly through every data
point collected in one trial and interpolates between these points. This allows the program to discard the
data that generated the surface and store only a function that fits the data as the record of the experiment.
Such relationships would be greatly beneficial to a designer assembling a circuit that behaves in a
predictable manner, or to a program optimizing a circuit's output. Therefore, whatever fitting method is
used must be able to fit the data exactly to prevent false conclusions in the future.
Figure 1. A. An organizational map of the system with descriptions of what each object does on each
block. A copy of the code is available upon request.
The current polynomial regression surface fitting method is not robust enough to fit completely random
data sets to a surface, but it does seem to be able to fit smoothly varying data to a surface with sufficient
accuracy when manipulated properly. Some preprocessing techniques have been developed to reduce
numerical error and improve goodness of fit. Figure 2 shows two different fits of the same data, both
unprocessed and processed. Such preprocessing reduced the χ² value of the fit by eight orders of
magnitude. The algorithm that produced the final fit is used in the current version of this software
package. It produced an explicit polynomial expression for f, the CPG's frequency (Hz), in terms of x, the
tonic drive (in nA) to the first half-center, and y, the tonic drive to the second half-center.
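
A minimal sketch of such a surface fit, in Python with an ordinary least-squares polynomial basis (the
project's Matlab implementation, and its exact basis and degree, may differ), is shown below. With a
basis rich enough relative to the number of data points, the fitted surface passes through the data to
within numerical precision:

    import numpy as np

    def fit_poly_surface(x, y, f, deg=3):
        # Full polynomial basis of total degree <= deg in (x, y).
        exps = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
        A = np.column_stack([x ** i * y ** j for i, j in exps])
        coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
        return coeffs, exps

    def eval_poly_surface(coeffs, exps, x, y):
        # Evaluate the fitted surface at new (x, y) points.
        return sum(c * x ** i * y ** j for c, (i, j) in zip(coeffs, exps))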

The Floquet analysis tool provided deep insight into the dynamics and stability of the CPG system.
Eigenvalues of the perturbed system were calculated as a function of the phase in which it was perturbed.
The data can be seen in Figure 3 (top). From the plot of the eigenvalues, it is clear that the system's
stability depends heavily on the phase at which it is perturbed; that is, it is much more stable when firing.

Viewing a state-space representation of this system is also enlightening. Figure 3 (bottom) shows the
firing frequency of one neuron plotted against the other. In this experiment, the perturbation caused the
oscillator to leave the limit cycle and seek the stable fixed point near (50,50). Such a bifurcation suggests
that this process can be reversed; that is, a stable system can be made to seek a limit cycle if stimulated
properly. Experiments in the open-source neural system sandbox AnimatLab verify all of these
results [5].

Figure 2. Two different polynomial fits of the same data but with different preprocessing. The data points
are red +'s and the surface is the calculated fit function. The plot on the left shows a fit of the entire
dataset, which is mostly zeros (χ² = 3.51e+03). The plot on the right shows a fit of the data set once range
values of zero were removed, a point was added at the origin, and the data was mirrored into all four
quadrants (χ² = 1.01e-05).

Figure 3. A plot showing the perturbed response eigenvalue (top) as a function of phase of perturbation.
The black bar corresponds to time when the perturbed neuron is firing. Note that values equal to zero are
circled in green. These points represent trials in which the trajectory left the limit cycle and approached
the stable fixed point. A plot of the state-space of the system (bottom) shows the perturbed trajectory
(lighter trace) leave the limit cycle and settle at a stable fixed point.

Conclusions
All of this information, the product of the Permutation Engine's development, should substantially
improve the design of neural controllers in the future.

Acknowledgments
I would like to thank Alex Lonsberry for his collaboration on this project.

References
1. A. A Prinz, C. P. Billimoria, and E. Marder, Alternative to hand-tuning conductance-based models:
construction and analysis of databases of model neurons., Journal of neurophysiology, vol. 90, no. 6,
pp. 3998-4015, Dec. 2003.
2. A. A Prinz, Computational approaches to neuronal network analysis., Philosophical transactions of
the Royal Society of London. Series B, Biological sciences, vol. 365, no. 1551, pp. 2397-405, Aug.
2010.
3. R. J. Calin-Jageman, M. J. Tunstall, B. D. Mensh, P. S. Katz, and W. N. Frost, Parameter space
analysis suggests multi-site plasticity contributes to motor pattern initiation in Tritonia., Journal of
neurophysiology, vol. 98, no. 4, pp. 2382-98, Oct. 2007.
4. A. von Twickel, M. Hild, T. Siedel, V. Patel, and F. Pasemann, Neural control of a modular multi-
legged walking machine: Simulation and hardware, Robotics and Autonomous Systems, vol. 60, no.
2, pp. 227-241, Nov. 2011.
5. D. Cofer, G. Cymbalyuk, J. Reid, Y. Zhu, W. J. Heitler, and D. H. Edwards, AnimatLab: a 3D
graphics environment for neuromechanical simulations., Journal of neuroscience methods, vol. 187,
no. 2, pp. 280-8, Mar. 2010.
Developing Traffic Data Collection Software with Multi-Touch Technology

Student Researcher: Usaaman Taugir

Advisor: Dr. Yi

The University of Akron
Department of Civil Engineering

Abstract
Over time, civil engineers have used many different devices to collect transportation data, from pencil
and paper to dedicated data collection devices to computers. These tasks are error prone because of the
multi-tasking involved, so there is a clear need for a piece of equipment that lets civil engineers gather
on-site traffic data more efficiently.

One solution to this problem is a touch screen phone that collects data by interpreting the motions of the
operator's fingers. The engineer simply traces the traffic movements he or she observes, and the data is
recorded.

The process of this research project was as follows. Task one was choosing a programming language and
app platform; Java and the Android platform were chosen for conducting the research. The next task was
the development of the program for the device. For the third task, we tested the newly made program.
After testing, the development of the data collection system for a traffic network started. In the final
phase of the research, the program was tested once again.

After the completion of the experiment, results were obtained successfully and our hypothesis turned out
to be true: when we used the phone and app as a collection method, the results validated that our method
reduced errors and the time taken to complete tasks. The project was a success, but much more can be
done to improve the current system, and this will be the focus of continuing research.

Project Objectives
Ever since modern roadways have been built, there has always been some type of technology or tool for
collecting the data needed to build them, ranging from something as simple as a piece of paper and pencil
to laptop computers. Though these tools are very simple to use, they are not always efficient. Within the
past several years, multi-touch technology has become very common among the general public. Is it
possible to use the technology we have today to make it easier to collect data in the transportation civil
engineering industry? If we were to make a device that could record touches matching what we see when
we go to observe traffic in an area, this could in turn make it easier to record data than using pencil and
paper or a laptop. Multi-touch technology can be put on something as small as a phone, so it seems
reasonable to make something just as small and efficient to collect data.

Developing multi-touch technology would improve the way transportation engineers collect data in the
field. The process of developing this type of technology is not easy, because one is taking what has been
done with pencil and paper and laptops for many years and moving it onto a device that has only recently
been created. It can then be incorporated into what is necessary for this research project.

Methodology Used
TASK 1 DEVELOPING BASE AND SELECTION
In this task, multi-touch devices from different companies will be compared and their development
environments will be evaluated to select the proper one. The candidates include the iPhone/iPod Touch
from Apple, the Zune HD from Microsoft, and the Nexus One from Google. This will take about one week.

TASK 2 DEVELOPMENT OF PROGRAM FOR SINGLE INTERSECTION
Once the development platform has been determined, the first program, for collecting vehicle turning
movements at a single intersection, will be developed. By using multi-touch technology, finger
movements on the screen will be interpreted as vehicle turning movements and saved in the database.
This task will last five weeks for programming and debugging with multi-touch technology.
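
As a sketch of the core interpretation step (shown here in Python for brevity, although the program itself
was written in Java for Android; the class and method names are illustrative), each swipe is reduced to an
approach direction and a departure direction and tallied under the corresponding turning-movement code,
using the SBEB-style notation defined in the Results section of this report:

    class TurningMovementCounter:
        # Tally vehicle turning movements interpreted from swipe gestures.
        def __init__(self):
            self.counts = {}

        def record_swipe(self, approach, departure):
            # ("S", "E") -> "SBEB", i.e., southbound turning eastbound.
            key = approach + "B" + departure + "B"
            self.counts[key] = self.counts.get(key, 0) + 1

    counter = TurningMovementCounter()
    counter.record_swipe("S", "S")  # vehicle continued southbound
    counter.record_swipe("S", "E")  # vehicle turned onto the eastbound leg
    print(counter.counts)           # {'SBSB': 1, 'SBEB': 1}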

TASK 3 TEST OF VEHICLE TURNING MOVEMENT PROGRAM
After the program has been developed in task 2, it will be tested both in the laboratory and in the field.
Microscopic simulation software with a 3D view will be employed to simulate different intersections
under different traffic conditions. The tester will be asked to record vehicle turning movement
information by watching the 3D simulation played at real-time scale using the program developed in
task 2. The collected data will be compared with the ground truth from the simulation software, and
possible improvements can be made based on the result. After that, field tests will be conducted at several
locations to compare the traffic data collected by three different methods: the traditional manual method,
a hand-held device, and the multi-touch device. All the data will be compared with ground truth obtained
from video shot in the field. This task will last another two weeks.

TASK 4 DEVELOPMENT OF DATA COLLECTION SYSTEM FOR TRAFFIC NETWORK
When the program for a single intersection has been developed and tested successfully, it will be modified
and integrated into a data collection system as one of its key functions. Based on the experience in task 2
and 3, a travel time study program will be built into the system. More multi-touch units will be included
in this task since traffic data will be collected in the whole network instead of single intersection. With the
help of GRADE, the system will be able to collect volume, turning movement, travel time, delay, and
other information for a traffic network. The complexity of this system will make this task last for ten weeks.

TASK 5 TEST OF DATA COLLECTION SYSTEM
After the data collection system has been developed and debugged, it will be tested with simulation
software first in the lab. Similar to task 3, operators will sit before the computer screen and watch the
simulation of traffic in 3D. Collected data will be compared with the ground truth from simulation
software. Further modification of the system is expected in this task. A field test may be performed once
encouraging results are obtained from the simulation. This task will last four weeks.

Results Obtained
Observation of Traffic Patterns Using a Traffic Data Collection System
South Arlington and 5th Avenue, 4 Way Intersections, Akron, Ohio, United States


THE SATELLITE IMAGE OF THE INTERSECTION (IMAGE TAKEN FROM GOOGLE MAPS)

S = South, N = North, E = East, W = West, B = Bound

*SBSB means Southbound to Southbound, NBEB means Northbound to Eastbound, etc.

On South Arlington Street
SBSB 172
SBEB 6
SBWB 3

NBNB 169
NBEB 8
NBWB 4
On 5th Avenue
EBEB 3
EBNB 12
EBSB 5

WBWB 20
WBNB 14
WBSB 10

The above results are a sample of the kind of data that can be recorded using the four different methods:
pencil and paper, computer, traffic counter, and the traffic counter app. All the data collected by each
device can be simplified to what you see above. This specific data set was taken by watching a video
recording of traffic moving through the intersection. The video recording was 26 minutes long. As an
example of a recording, the observer sees one car approach the intersection going southbound. Depending
on whether it kept going on its current route or turned, the observer would put a tally in one of these
columns: SBSB, SBEB, or SBWB. This is a basic example of how all data was collected, and it is very
tedious. The videos sometimes have to be paused or rewound to make sure nothing was missed.

Data Collection from Using Different Devices


The above are results from watching a video simulation of traffic. The blue bars indicate the total time
taken for each process, and the green bars indicate the number of mistakes. A mistake means that a tally
was placed in the wrong location or that a car was missed when collecting data. For example, the process
of collecting data using pencil and paper took a total of 6 hours and had 41 mistakes. The process of
using a computer took 5 hours and had 35 mistakes. The process of using a transportation counter took
7 hours and had 37 mistakes. The last process, using the developed app, took 2 hours and had 20
mistakes. The conclusion drawn from these results is that our app is a better tool for collecting traffic
data than the other methods.

Conclusion
The main goal of this research project was to develop a device that could make it easier for transportation
civil engineers to collect on-site data more efficiently. The way the research has gone, this goal has been
accomplished. Our main requirement was to lessen the burden of multi-tasking while transportation civil
engineers collect traffic data. The device made for collecting data was a multi-touch screen phone.
Instead of worrying about writing or pressing buttons, all that has to be done is to trace with the fingers
what is being seen. The results from the testing trials we ran validate this. The same type of data was
gathered using different methods: pencil and paper, a transportation data collection device, and a
computer. All the data collected by these devices was tested against our own multi-touch device. All four
methods were ranked on efficiency, user friendliness, and accuracy. After testing, it was determined that
our device was indeed more efficient, user friendly, and accurate than the other methods.

With this being said, there were many problems that we encountered throughout the research project.
One problem that came up was time constraints; many things had to be rushed to meet certain deadlines
because of the complexity of this project. Secondly, the testers who helped with the project could have
introduced their own bias during the testing phase: not everyone collects data the same way, so it is
possible someone collected more slowly or more quickly than another person. The last major problem we
ran into during testing was trying to find the same types of intersections. By this we mean that we had to
either use the same intersection each day or make sure traffic patterns were the same if we used another
one. Getting the same type of intersection was very difficult for each test. Though this research project
went fairly well, there are many improvements that could be made to it. One improvement is that we
should have used the same type of intersection, on the same day and time; this would have kept
differences in traffic patterns from biasing our results. A second improvement would have been to make
it easier for the testers who were helping us to understand the multi-touch device. A manual could have
been developed for the device so this could have been achieved easily; this would have saved the project
a lot of time and made it run more smoothly. Lastly, I feel we should also have given a survey to the
testers and asked them which method they liked best, the old ones or our new one. Instead of just
comparing data that was collected using different methods, we could also have made a decision based on
how the users felt about the different methods for collecting data. Though there were mistakes made
during the process of this research project, the project is deemed successful: we were able to successfully
make a device to collect traffic data and successfully test it.

References
1. Buxton, Bill. "Multi-Touch Systems That I Have Known and Loved." Bill Buxton Home Page. Web.
28 July 2011. http://www.billbuxton.com/multitouchOverview.html
2. Gold, Lauren. "CU and Local Transportation Officials Adopt Biodiesel Fuel." The Chronicle
Online (2007): 0-1. Cornell Chronicle Online. Web. 06 June 2011.
<http://www.news.cornell.edu/stories/Jan07/sustainability.biofuel.html>.
3. Koc, W. "Design of Rail-Track Geometric Systems by Satellite Measurement." ASCE Journals
(2011): 0-1. Web. 6 June 2011. <http://ascelibrary.org/teo/resource/3/jtpexx/224?isAuthorized=no>.
4. Kooistra, Durk. "DIY Multi-touch Screen." Humanworkshop || E-Zine. 04 July 2009. Web. 28 July
2011. <http://www.humanworkshop.com/index.php?modus=e_zine>
5. Malyzs, Rodrigo. "Investigation of Thin Pavements Rutting Based on Accelerated Pavement
Testing and Repeated Loading Triaxial Tests | Browse Manuscripts - Journal of Transportation
Engineering." ASCE Journals (2009): 0-1. ASCE Library. Web. 07 June 2011.
<http://ascelibrary.org/teo/resource/3/jtpexx/220?isAuthorized=no>.
6. Moridpour, Sara. "Enhanced Evaluation of Heavy Vehicle Lane Restriction Strategies in
Microscopic Traffic Simulation." ASCE Journals: 0-1. Web. 6 June 2011.
<http://ascelibrary.org/teo/resource/3/jtpexx/222?isAuthorized=no>.
7. Mucka, Peter. "Comparison of Longitudinal Unevenness of Old and Repaired Highway Lanes."
ASCE Journal (2010): 0-1. ASCE Library. Web. 07 Mar. 2010.
<http://ascelibrary.org/teo/resource/3/jtpexx/219?isAuthorized=no>.
8. Punith, V.S. "Laboratory Investigation of Open-Graded Friction Course Mixtures Containing
Polymers and Cellulose Fibers." ASCE Journals (2011): 0-1. ASCE. Web. 6 June 2011.
<http://ascelibrary.org/teo/resource/3/jtpexx/225?isAuthorized=no>.



9. Sulieman, Muhannad. "Structural Response of Pervious Concrete Pavement Systems Using Falling
Weight Deflectometer Testing and Analysis | Browse Manuscripts - Journal of Transportation
Engineering." ASCE Library. ASCE, May 2009. Web. 07 June 2011.
<http://ascelibrary.org/teo/resource/3/jtpexx/216?isAuthorized=no>.
10. Qu, Yi Grace. "Estimation of Design Lengths of Left-Turn Lanes | Browse Manuscripts -
Journal of Transportation Engineering." ASCE Journals (2009): 0-1. ASCE Library. ASCE, 01 Feb.
2009. Web. 07 June 2011. <http://ascelibrary.org/teo/resource/3/jtpexx/221?isAuthorized=no>.
11. Ye, Jianhong. "Optimal Measurement Interval for Pedestrian Traffic Flow Modeling | Browse
Manuscripts - Journal of Transportation Engineering." ASCE Library. ASCE, Feb. 2010. Web. 06
June 2011. <http://ascelibrary.org/teo/resource/3/jtpexx/211?isAuthorized=no>.
12. Zhang, Yuqing. "Anisotropic Viscoelastic Properties of Undamaged Asphalt Mixtures." ASCE
Journals (2009): 0-1. ASCE. ASCE, 14 Aug. 2007. Web. 6 June 2011.
<http://ascelibrary.org/teo/resource/3/jtpexx/223?isAuthorized=no>.
Characterization of Thin Film Deposition Processes

Student Researcher: Charles F. Tillie

Advisor: Dr. Jorge E. Gatica

Cleveland State University
Department of Chemical and Biomedical Engineering

Abstract
With the rise of environmental awareness and the renewed importance of environmentally friendly
processes, surface pre-treatment processes based on chromates have been targeted for elimination by the
United States Environmental Protection Agency (EPA). Indeed, chromate-based processes are subject to
regulation under the Clean Water Act and other environmental initiatives, and there is today a marked
movement to phase these processes out in the near future. Therefore, there is a clear need to develop new
approaches in coating technology aimed at providing practical alternatives to chromate-based coatings in
order to meet EPA mandates. This research focuses on calorimetric analysis to develop such an
alternative process.

Project Objectives
The overall goal of characterizing the chemical vapor deposition reaction has many components to it.
Thermal characterization of the solutions being used to grow the films must be completed, including the
specific heat and heat of vaporization. Distinguishing between the thermal effects of the surface reaction
and vaporization must be completed to ensure an accurate model. It is also necessary to develop a data
analysis methodology to retrieve the kinetic parameters for the chemical vapor deposition reaction. The
laboratory environment in which the films will be grown must also be modeled. This includes
determining the optimum location of the stage inside the furnace and modeling the flow of air through the
furnace. This research focuses specifically on determining the reaction parameters through two methods
of kinetic analysis, one which assumes a reaction model and one which does not.

Methodology Used
These analyses are completed using a state-of-the-art Differential Scanning Calorimeter (DSC), a research
grade MDSC: Q200 Modulated DSC with Mass Flow Control from TA Instruments. This device
measures the amount of heat flow required to raise the temperature of a sample of solution at a user-
specified rate against an empty reference pan. These numbers can be translated into conversion data by
an integral method, treating the conversion as the incremental area under the curve divided by the total
area, demonstrated in Figure 1.
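
A minimal sketch of this integral method in Python (array names are illustrative; a baseline-corrected
heat-flow signal is assumed) computes the conversion as the running area under the DSC peak divided by
the total peak area:

    import numpy as np
    from scipy.integrate import cumulative_trapezoid

    def conversion_from_dsc(time, heat_flow):
        # Running area under the (baseline-corrected) heat-flow peak,
        # normalized by the total peak area, gives conversion vs. time.
        area = cumulative_trapezoid(heat_flow, time, initial=0.0)
        return area / area[-1]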

From the general mole balance on a batch process, and neglecting any outward flow of material, the
following equation can be derived:

    \frac{dC_a}{dt} = r_a ,

where C_a is the concentration of component a in the pan, t is time, and r_a is the reaction rate with
respect to component a. Previous research shows that a simple power law model of the following form
can adequately express the kinetics:

    -r_a = k(T) C_a^n ,

where k(T) is the temperature-dependent rate constant and n is the order of reaction. For a non-isothermal
batch process with this rate expression, the following design equation can be derived:

    \frac{dx_a}{dt} = k_0 \exp\left[\left(\frac{E}{R}\right)\left(\frac{1}{T_0} - \frac{1}{T}\right)\right] C_{a0}^{n-1} \left(1 - x_a\right)^n ,

where x_a is the conversion of component a, k_0 is the pre-exponential factor (at reference temperature
T_0), E is the activation energy, and R is the gas constant. This can be linearized to the form y = a + bx
by taking the natural logarithm of both sides, leaving:

    \ln\left[\frac{dx_a/dt}{\left(1 - x_a\right)^n}\right] = \ln\left(k_0 C_{a0}^{n-1}\right) + \left(\frac{E}{R}\right)\left(\frac{1}{T_0} - \frac{1}{T}\right)

Plotting the left side as a function of (1/T_0 - 1/T) theoretically yields a straight line, from which the
kinetic parameters can be easily evaluated. This was done using a polynomial regression technique in
MATLAB.
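
A minimal sketch of this linearized fit (Python shown here in place of the MATLAB routine; variable
names are illustrative) regresses the left-hand side against (1/T_0 - 1/T), so that the slope gives E/R and
the intercept gives ln(k_0 C_{a0}^{n-1}):

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def fit_assumed_model(t, x, T, n, T0):
        # Left side of the linearized design equation.
        dxdt = np.gradient(x, t)
        lhs = np.log(dxdt / (1.0 - x) ** n)
        # Right-side abscissa; the slope of the fitted line is E/R.
        abscissa = 1.0 / T0 - 1.0 / T
        slope, intercept = np.polyfit(abscissa, lhs, 1)
        return slope * R, intercept  # E (J/mol), ln(k0 * Ca0^(n-1))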

Alternatively, a method can be used which doesn't require a model. This has many advantages. The
previous method requires fitting the data to a specified model, which doesn't leave much room for error
in the model. Additionally, it has been shown that multiple models can be used to satisfactorily describe
the same kinetic curve. Also, the assumed-model method requires a lot of data filtering, which can
compromise the results. This technique utilizes an isoconversional method, which involves running the
reaction at different heating rates and isolating data at the same conversion. The governing equation for
this method, evaluated at a fixed conversion x_a, is:

    \ln\left[\left(\frac{dx_a}{dt}\right)_{x_a}\right] = \ln\left[k_0 f(x_a)\right] - \frac{E}{R}\left(\frac{1}{T}\right)
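
A minimal sketch of this model-free procedure (Python; a Friedman-type differential evaluation, with
illustrative data structures) isolates each heating-rate run at the same conversion level and regresses
ln(dx/dt) against 1/T, so that the slope is -E/R:

    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def isoconversional_E(runs, x_level):
        # `runs` is a list of (t, x, T) arrays, one per heating rate.
        ln_rate, inv_T = [], []
        for t, x, T in runs:
            i = np.argmin(np.abs(x - x_level))  # sample nearest the chosen conversion
            ln_rate.append(np.log(np.gradient(x, t)[i]))
            inv_T.append(1.0 / T[i])
        slope = np.polyfit(inv_T, ln_rate, 1)[0]  # slope = -E/R
        return -slope * R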



Results Obtained
For the assumed-model method, Figure 2 shows the results of the linearization of the data for a reaction
order of n = 0.9. The activation energy had a value of 105 ± 24 kJ/mol. The pre-exponential factor had a
value of 0.08 ± 0.02 s^-1.

The model-free method was used to generate preliminary results for comparison's sake. This method
returned a value for the activation energy of 41 ± 24 kJ/mol.

Significance and Interpretation of Results
The results of the assumed-model method were fitted inside confidence intervals of ±23%, and the
uncertainty of the numbers themselves shows it. This experimental uncertainty must be narrowed before
any significance can be given to these results. The results from the model-free approach were not much
more reliable. As can be seen in Figure 3, the line for the 7.5 °C/min heating rate demonstrates a different
shape than the other two at higher conversions. It is believed that this is what caused the 7.5 °C/min data
in Figure 4 to be so far from the other sets.
Further, the difference between the results for the activation energy of the two methods was over 100%.
This could be a function of experimental error, but it could also indicate that the model assumed for the
first method was not ideal. Further experimentation needs to be completed in order to either confirm or
re-evaluate these numbers.

Acknowledgements
This research was completed with funding from the Ohio Space Grant Consortium, the Cleveland State
University Honors Program and the Fenn College of Engineering. Thanks are also extended to Andrew
Snell and Mike Clark.
Rocket Power

Student Researcher: Zachary M. Tocchi

Advisor: Dr. Lynn Pachnowski

The University of Akron
College of Education

Abstract
While working with several different groups of students throughout my short time as a college student, I
have noticed that many students do not see how one type of math relates to another. The purpose of this
project is to have students explore how one situation requires different levels and types of mathematics.
Students will construct paper rockets to be launched into the air. There will be a video camera on site
which will watch the rocket from launch to landing. When a rocket is launched, its flight path traces a
parabola. Students will write an equation for the parabola their rocket made. Then we will use statistics to
compare some points of interest along the rockets' paths. Students will also use some geometry to talk
about air pressure, the volume of the rocket, the type of nose cone to use, etc. By seeing how we can use
many different levels of math in one project, students will see how math relates to the real world.

Theoretical Framework
In the year 2012, we are teaching our students as if it were 1990. Someone mentioned this to me recently,
and it really got me thinking: teaching in a math classroom really has not changed much over the past
few decades. We used to teach using chalkboards, and then someone brought in an overhead projector.
Now we have just migrated over to interactive whiteboard technology. Pretty soon, students will be
issued tablet devices, and all the same learning that was done prior to that will be done on the tablets.
When new technology is introduced, it is often just handed to teachers, and they are never given help on
effectively integrating it into their classrooms. The point I am trying to make is that, sure, we have this
technology, but we are not utilizing it correctly. So, the point of this project is to have students working
hands-on with rockets, applying some math that they have been working on in class, and using some
technology to enhance the learning.

Lesson Overview
Students will be in groups of two and will be given the activity "High-Power Paper Rockets," which is a
NASA-created activity. I will have students building paper rockets to launch in about a week. Their task
is to build a rocket that they think will fly the highest or the furthest. The first day will be saved for
research. Students are allowed to research information about real rockets and what works well for
building these rockets. Students will also be given a limited NASA budget of $100 million to spend on
materials and fuel for their ship. Dealing with money is an aspect of math that I think students forget
about after about the third or fourth grade. Maintaining a budget is a crucial part of everyone's life, and it
is just as crucial here. Next, students will design their rocket and do some basic geometry with it.
Students will be asked questions such as "What is the surface area of your rocket?" and "What shapes
make up your rocket?" After the rocket is built, students will hypothesize how far and how high their
rockets will go. When we go to launch, students will use the last of their budget to pay for the fuel, which
is just air pressure in this case. There will be a video camera on scene to record the rocket's flight from
start to finish. At this point, students will take and analyze the video and find the parabola the rocket
made as it went up in the air and back down. Students will also use the vertical motion model, a formula
that uses initial vertical velocity, time in seconds, and initial height to calculate how high the rocket flew
in the air. Students will know the height their rocket reached, so the only unknown is the initial vertical
velocity. Students can solve for that using the vertical motion model. With the parabola the rocket makes,
students will use TI-Nspire calculators to help determine the equation of the parabola. Finally, students
will analyze the data of the entire class. We are going to find a correlation and regression model for the
height of the rockets versus the initial air pressure. We can also do the same thing with the distance from
the starting point versus the air pressure. To sum everything up, students will write a report spelling out
their findings, making any conclusions they can from their data, and describing what they would do for a
future launch. Throughout the entire unit, there will be videos posted on my website for students to watch
at home if they forget how to do one aspect or another of the math they need to use for this project.
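
As one worked example of the math behind the launch, assume the common classroom form of the
vertical motion model in feet (the -16 coefficient is half the gravitational acceleration of 32 ft/s^2):

    h(t) = -16t^2 + v_0 t + h_0

Setting the derivative to zero gives the time of the apex, t = v_0/32, so the peak height is
h_max = h_0 + v_0^2/64, and the initial vertical velocity can be recovered as v_0 = 8\sqrt{h_max - h_0}.
For instance, a rocket launched from the ground (h_0 = 0) whose video shows a peak of 100 feet had an
initial vertical velocity of v_0 = 8\sqrt{100} = 80 ft/s.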

Objectives
Students will design a rocket using resources provided to them as well as a budget.
Students will analyze the rocket using mathematics learned either in the class they are currently in
or in a previous math class.
Students will interpret their calculations and write a final report presenting their findings.

Alignment
The activity "High-Power Paper Rockets" lists numerous math and science standards covered in the
activity. The following are Common Core standards for English which correlate to creating an essay
about the students' results: "Conduct short research projects to answer a question, drawing on several
sources and refocusing the inquiry when appropriate," and "gather relevant information from multiple
print and digital sources; assess the credibility of each source; and quote or paraphrase the data and
conclusions of others while avoiding plagiarism and providing basic bibliographic information for
sources." From the Common Core math standards, the following standard applies during the process of
finding the quadratic equation of the rocket launch: "Construct linear and exponential functions,
including arithmetic and geometric sequences, given a graph, a description of a relationship, or two
input-output pairs (include reading these from a table)." When comparing results of launch information,
the following statistics standards will apply: "Represent data on two quantitative variables on a scatter
plot, and describe how the variables are related," and "Compute (using technology) and interpret the
correlation coefficient of a linear fit."

Assessment
Verbal assessments will be used throughout the activity. I will be checking in with each student and
group to see their progress on the project. The final assessment of the project will be a written paper that
gives the data for the group's launch and interprets their mathematical results. Students will need to
include their work for all math done for the project.

Conclusion
The purpose of this project is to show students an instance where many different types of math apply to a
real situation. Students are put in math classes where they learn in a linear fashion. The fact is, math is
not linear, which means we need to readjust the way math is presented to learners. Throughout this
project, the flipped-classroom approach might be very useful: students could re-learn concepts that were
forgotten by watching video lessons for homework. Utilizing technology is important for this project as
well. While it is important that students understand the underlying concepts behind the math that goes
into real-life situations such as a rocket launch, they should also know there are tools out there that help
them get to the answers they are trying to find.

References
1. National Aeronautics and Space Administration (n.d.). High-Power Paper Rockets. National
Aeronautics and Space Administration. Retrieved from
http://www.nasa.gov/pdf/295789main_Rockets_High_Power_Paper_Rocket.pdf
2. Common Core State Standards Initiative. English Language Arts Standards. Retrieved from
http://corestandards.org/the-standards/english-language-arts-standards
3. Common Core State Standards Initiative. Mathematics Standards. Retrieved from
http://www.corestandards.org/the-standards/mathematics
Flapping Flight Micro Air Vehicles

Student Researcher: Tyler J. Vick

Advisor: Dr. Kelly Cohen

University of Cincinnati
School of Aerospace Systems

Abstract
Micro-Air-Vehicles (MAVs) are capable of being used for myriad applications due to their small size and
maneuverability; they are therefore of particular interest to the scientific and military community. MAVs
are in the same size and weight range as typical birds, bats, and may even be as small as insects. One
major challenge in creating miniature vehicles designed for prolonged intelligence-surveillance-
reconnaissance missions is having enough power to remain operational for the desired mission length. It
is therefore vital that flapping vehicles fly in the most efficient manner possible. MAV designers have
drawn much inspiration from nature, particularly for body mechanics and structures. It is only logical that
scientists and designers should strive for the efficiency of nature, and that flapping MAVs should fly at
speeds and flap at frequencies similar to those of actual birds in power-efficient flight. Through this
project, two investigations regarding flapping flight micro air vehicles are performed. In the first, it is
shown that the theoretical method of predicting optimal flight speed of a flapping flier does not align with
empirical findings based on observation of actual animals. It is noted that this is most likely due to the
inability to accurately calculate the minimum drag coefficient of birds. In the second investigation, it is
found that it is not realistic to expect to develop flapping flight MAVs capable of performing a useful
mission of at least 30 minutes at this point in time. Before this becomes a reality, more time and resources
will need to be invested in the areas of battery technology and the studies of unsteady aerodynamics and
closed-loop flow control.

Project Objectives
Two separate investigations, both dealing with flapping flight and its application to micro air vehicles, are
presented for this project. In the first, an attempt is made to validate the use of theoretical equations to
predict the optimal flight speed of a flapping flyer. In order to do this, the theoretical predictions are
compared to empirical measurements of the flight of cruising birds in nature. The second aim of this
project is to investigate the feasibility of flapping micro air vehicles to perform a useful mission. The
minimum endurance time required for a mission is typically defined to be about 30 to 60 minutes,
depending on the mission type. Based on the current state of the art in flapping flight and battery
technology, endurance estimates are carried out to approximate the maximum expected flight time of
flapping flight MAVs. In this study, the challenges of developing practical flapping vehicles are
discussed, improvements to the current technology level are considered, and alternatives to flapping flight
are examined.

Validity Study

Technical Background
It has been shown that the dimensionless Strouhal number can be used to describe the wing kinematics of
flying animals. The Strouhal number here is defined by

    St = \frac{f A}{U}

where f is flapping frequency, A is stroke amplitude, and U is flight speed. Empirical studies have shown
that the Strouhal numbers for cruising birds converge to a narrow range (0.2–0.3), indicating high
propulsive efficiency over this range. This research has led to the finding that the wingbeat frequency of
birds can be accurately predicted using Pennycuick's empirical relation [7],

    f = m^{3/8} g^{1/2} b^{-23/24} S^{-1/3} \rho^{-3/8}

where m is body mass, g is gravitational acceleration, b is wing span, S is wing area, and \rho is air
density, with different correction factors applied for direct fliers and for intermittent fliers.

Based on approximations for the power required for horizontal flapping flight, the optimal flight velocity
can be shown to be

    U_{opt} = \sqrt{\frac{2W}{\rho S}} \left(\frac{1}{\pi\, AR\, C_{D,min}}\right)^{1/4}

where W is weight, \rho is the density of air, S is wing planform area, AR is wing aspect ratio, and
C_{D,min} is the minimum drag coefficient. Given the required variables, this optimal flight speed can be
used in conjunction with the empirical relation for frequency given above to calculate Strouhal numbers
for birds. Note that this Strouhal number is found using a theoretical optimal flight velocity and an
empirical relation for frequency. It will be assumed that the empirical relation is correct, as the aim is to
validate the theoretical equation.

Results Obtained
It is clear that data for the following variables must be found for each bird: , , ,

, , . Table 1
below shows the values for each of these variables for five different species of birds, as found in
literature, and gives the calculated optimal flight speed, and the corresponding Strouhal number.

Table 1. Data, optimal flight speed, and Strouhal number of selected birds.

                         W (N)     S (m^2)   AR       C_D,min   f (Hz)   b (m)   U_opt (m/s)   St
Sparrow                  0.2403    0.01014   3.93     0.039     14       0.226   7.4578        0.314
Black Vulture            20.4048   0.327     5.824    0.0205    4.53     1.38    12.8945       0.250
Fulmar                   7.9952    0.124     10.298   0.246     4.58     1.13    6.1086        0.455
Rock Pigeon              3.4335    0.062     7.016    0.44      6.67     0.66    5.3864        0.492
Wandering Albatross      83.8755   0.583     15.540   0.009     2.49     3.01    18.8242       0.173

Note that in only one case (Black Vulture) is the Strouhal number within the range expected for birds (St
= 0.21 for direct flight, St = 0.25 for intermittent flight). The results from the four other bird species vary
significantly.
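
The sketch below (Python) reproduces the optimal-speed calculation behind Table 1, assuming sea-level
air density ρ = 1.225 kg/m³; the Strouhal number then follows from St = fA/U once the stroke amplitude
A is taken from the literature for each species:

    import numpy as np

    RHO = 1.225  # air density (kg/m^3), assumed sea-level value

    def optimal_speed(W, S, AR, cd_min):
        # U = sqrt(2W/(rho*S)) * (1/(pi*AR*Cd_min))^(1/4)
        return np.sqrt(2.0 * W / (RHO * S)) * (np.pi * AR * cd_min) ** -0.25

    def strouhal(f, A, U):
        # St = f*A/U for flapping frequency f, stroke amplitude A, speed U.
        return f * A / U

    # Black Vulture row of Table 1:
    print(optimal_speed(W=20.4048, S=0.327, AR=5.824, cd_min=0.0205))  # ~12.89 m/s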

Conclusions
The first and most likely explanation for why the Strouhal numbers do not reflect those found in the
literature is that the values used for minimum drag coefficient are inaccurate. Measuring minimum drag
coefficients is a challenging task, and methods of measurement can vary. Drag coefficients for the bodies
of birds are often calculated using frozen bird bodies in a wind tunnel, but it has been speculated that such
methods are inaccurate. It is worth noting that minimum drag coefficient values are difficult to find in
literature, which accents the difficulty in making valid measurements and calculations. The inaccuracy of
drag coefficient measurements has led to the desire for an empirical relation for optimal flight speed as a
function of weight and bird dimensions only. Such a relation would not require the use of a minimum
drag coefficient term, and would naturally encompass the unsteady aspects of flapping flight, unlike many
theoretical models. This new empirical relation could be used in conjunction with the Strouhal relations to
calculate the minimum-power speed-frequency combination that should be expected for a bird-like MAV
of any size and weight, thus enabling MAVs to perform extended useful missions.

The second possible source of discrepancy in Strouhal numbers is the method of calculating theoretical
optimal flight speed. The equation seen above is based on steady aerodynamics, while it is considered fact
that bird flight employs unsteady mechanisms such as rotational and translational delayed stall. Many
mathematical models for flapping flight assume rigid, inviscid wings, of constant size and shape. If one
does not make such assumptions, however, models can become increasingly complex. There is therefore a
trade-off between comprehensiveness and simplicity of theoretical models. If an accurate and simple
empirical relation between optimal flight speed and the weight and dimensions of birds can be found, this
could become a very useful tool in choosing a most power-efficient frequency-speed combination for
biomimetic MAVs.

Feasibility Study

Literature Review/Background
This study begins by examining the current state of the art in flapping-flight micro air vehicles. The
company AeroVironment has developed a biomimetic vehicle resembling a hummingbird. The vehicle
weighs 19 grams and has a wingspan of 16 cm. Its top speed is 11 mph, and it can provide a live video
stream for heads-down flight. However, the endurance time is only 11 minutes. This must be increased
three-fold to satisfy typical requirements for a mission.

In order to better understand the reasons for such a limited flight time, the major challenges of MAVs are
considered. The two main challenges considered are stored energy and aerodynamic efficiency. Power
sources most often come in the form of batteries or, less often, internal combustion. Batteries are low-
cost, quiet, replaceable, rechargeable, and have potential for improvement, but their main disadvantage is
their low energy density. Combustible fuel sources have higher energy densities, but miniature internal
combustion engines pose challenges when it comes to mixing fuel and air, and to starting, adjustment,
and control. They also tend to experience rapid heat loss, decreasing their efficiency. Other energy
sources, such as reciprocating chemical muscles that can convert chemical energy into motion through a
noncombustive reaction, exist but have not been proven capable of providing the forces necessary for
flapping flight.

The aerodynamic efficiency of flapping flight as we understand it is much less than that of rigid-winged
aircraft. The lift-to-drag ratio for flapping flight can be a quarter to a sixth of the ratio for rigid wings.
Such low efficiencies require much greater power for flight. Birds and insects are known to employ
unsteady aerodynamic mechanisms in flight, but the current understanding of these mechanisms is not at
the level where they can be employed in flapping MAVs. Nature's fliers exhibit closed-loop flow control
of vortices, which vehicle designers have yet to comprehend fully. Most current models for flapping
flight are open-loop; once a greater understanding of unsteady flow control is realized, closed-loop
models will have to be developed in order for flapping MAVs to take full advantage of the increased
efficiencies they provide.

In this study, an estimation of the maximum endurance achievable in a 19 gram flapping MAV is
presented based on current commercial off-the-shelf battery capabilities and typical mass distributions of
MAVs. Additionally, significant improvements in the current technology are assumed, and endurance is
estimated again. Alternatives to flapping will also be discussed.

Results Obtained
The first chart below gives the mass distribution for a 19 gram flapping vehicle. The first three subsystem
percentages are based on typical distributions for MAVs, while the remaining subsystem mass values are
the smallest that were found by the author in the literature on MAVs. The second chart below gives an
estimated endurance for this vehicle for two cases. The first case assumes the structure and actuator
subsystem masses are each 10% of the total vehicle mass; the second, more conservative case assumes
15%. This estimate is based on a currently available lithium-polymer battery weighing 8.77 grams that is
capable of delivering 80.1 watt-minutes of energy. The power required for propulsion was approximated
to be 3.66 watts, assuming a lift-to-drag ratio of one, 11 mph flight, and mechanical and actuator
efficiencies of 50%. Additional power requirements of 1 watt are consistent with the literature.
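
A rough version of this endurance estimate is sketched below (Python). The battery energy density
follows from the 8.77 gram, 80.1 watt-minute cell cited above; the 4 gram allowance for the remaining
fixed subsystems is a hypothetical value for illustration, not a figure from the study:

    # Endurance estimate for a 19 g flapping MAV under the stated assumptions.
    VEHICLE_MASS = 19.0            # g
    ENERGY_DENSITY = 80.1 / 8.77   # W*min per gram of LiPo battery
    POWER = 3.66 + 1.0             # propulsion + other requirements (W)
    FIXED_MASS = 4.0               # g, assumed payload/electronics allowance

    for frac in (0.10, 0.15):      # structure and actuators each take `frac`
        battery_mass = VEHICLE_MASS * (1.0 - 2.0 * frac) - FIXED_MASS
        endurance = battery_mass * ENERGY_DENSITY / POWER
        print(f"{frac:.0%}: {endurance:.1f} minutes")  # ~22 and ~18 minutes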





It can be seen that the expected maximum endurance for a 19 gram vehicle with today's available
technology is roughly 20 minutes. This endurance is below the range required for a useful mission,
suggesting that before flapping-flight MAVs become a practical tool, improvements will have to be made
in the areas of battery technology and miniaturization.

In order to discover how far away we are from a vehicle capable of performing a mission, three
assumptions of significant improvement are made: lift-to-drag ratio is increased by 50%, battery is
capable of delivering 50% more energy, non-propulsive power required is 0.88 W. With these
improvements, the resulting endurance ranges from 35 to 42.8 minutes, which may be enough to perform
a mission of moderate length. However, it is admittedly unrealistic to expect such improvements in
battery power and aerodynamic efficiency in the near future.

To complete the study, rigid-winged aircraft were considered as an alternative to flapping aircraft. These
vehicles have greater lift-to-drag ratios and can therefore carry more weight and a larger power supply.
The disadvantages of these vehicles include the fact that they are typically not as biomimetic or
maneuverable, and therefore are not likely suited for indoor missions. The Black Widow MAV,
developed by AeroVironment in 1998, has a wingspan of 15.24 cm and is therefore of similar size to the
flapping hummingbird vehicle. However, this vehicle has four times as much mass, a top speed of
25 mph, and an endurance of 30 minutes. This endurance is achievable because the vehicle's propulsion
system, which includes the battery, makes up 62% of the vehicle's mass, a percentage much higher than
was achievable in the 19 gram flapper analyzed above. A second rigid-wing aircraft is the SAMARAI
monowing aircraft developed by Lockheed Martin, the design of which is inspired by a maple seed.
While only larger prototypes have been built and flown, the company predicts that it will be able to build
a 10 gram vehicle with a length of 7.5 cm and an endurance of 20 minutes. Once again, this endurance is
greater than that of the AeroVironment hummingbird, as its wings generate greater lift. With further
development, this seemingly simple monowing design may have the potential to provide the
maneuverability and endurance desired by those currently pursuing flapping-flight MAVs.


317
Conclusion/Recommendation
While recent progress in the area of flapping flight has been fruitful, significant advances must be made in
the areas of unsteady aerodynamics, battery technology, and miniaturization before the use of flapping-
wing micro air vehicles becomes a reality. Investments in these areas may someday enable the
development of flapping wing MAVs capable of performing the missions desired by military and civilian
organizations. Today, rigid-wing micro air vehicles are much more efficient, and can already be used for
outdoor intelligence, surveillance, and reconnaissance missions of moderate length.

References
1. Pennycuick, C. J., Predicting Wingbeat Frequency and Wavelength of Birds, J Exp Biol, Vol. 150,
1990, pp. 171-185.
2. Taylor, G. K., Nudds, R. L., and Thomas, A. L. R., Flying and swimming animals cruise at a
Strouhal number tuned for high power efficiency, Nature, Vol. 425, 16 Oct. 2003, pp. 707-711.
3. Nudds, R. L., Taylor, G. K., and Thomas, A. L. R., Tuning of Strouhal number for high propulsive
efficiency accurately predicts how wingbeat frequency and stroke amplitude relate and scale with size
and flight speed in birds, Proc. R. Soc. Lond. B, Vol. 271, 2004, pp. 2071-2076.
4. Azuma, A., The Biokinetics of Flying and Swimming, 2nd ed., AIAA Education Series, AIAA,
Reston, VA, Chaps. 3,4.
5. Spedding, G. R., Rayner, J. M., and Pennycuick, C. J., Momentum and Energy in the Wake of a
Pigeon (Columba Livia) in Slow Flight, J Exp Biol, Vol. 111, 1984, pp. 81-102.
6. Pennycuick, C. J., Power Requirements for Horizontal Flight in the Pigeon Columba Livia, J Exp
Biol, Vol 49, 1968, pp. 527-555.
7. Pennycuick, C. J., Wingbeat Frequency of Birds in Steady Cruising Flight: New Data and Improved
Predictions, J Exp Biol, Vol. 199, 1996, pp. 1613-1618.
8. Pennycuick, C. J., Obrecht III, H.H., and Fuller, M. R., Empirical Estimates of Body Drag of Large
Waterfowl and Raptors, J Exp Biol, Vol 135, 1988, pp. 253-264.
9. Pennycuick, C. J., Klaasen, M., Kvist, A., and Lindstrom, A., Wingbeat Frequency and the Body
Drag Anomaly: Wind Tunnel Observations of a Thrush Nightingale (Luscinia Luscinia) and a Teal
(Anas Crecca), J Exp Biol, Vol. 199, 1996, pp. 2757-2765.
10. Tucker, V. A., Gliding Birds: The Effect of Variable Wing Span, J Exp Biol, Vol. 133, 1987, pp.
33-58.
11. Alexander, D. E., Nature's Flyers: Birds, Insects, and the Biomechanics of Flight, Johns Hopkins
University Press, Baltimore, MD, 2002.
12. Grasmeyer, J. M., Keennon, M. T., Development of the Black Widow Micro Air Vehicle, AIAA-
2001-0127.
13. Youngren, H., Jameson, S., Satterfield, B., Design of the SAMARAI Monowing Rotorcraft Nano
Air Vehicle, American Helicopter Society, 65th Forum, 27 May 2009.
14. Anderson, M. L., Design and Control of Flapping Wing Micro Air Vehicles, Air Force Institute of
Technology, 2011.
15. Davis, W. R., Kosicky, B. B., Boroson, D. M., Kostishack, D. F., Micro Air Vehicles for Optical
Surveillance, The Lincoln Laboratory Journal, Vol. 9, No. 2, 1996, pp. 197-214.
16. Mueller, T. J., Kellogg, J. C., Ifju, P. G., Shkarayev, S. V., Introduction to the Design of Fixed-Wing
Micro Air Vehicles, AIAA Education Series, AIAA, Reston, VA, Chap. 4.
318
Fuzzy Control of Two Two-Degree-of-Freedom Systems

Student Researcher: Alex R. Walker

Advisor: Kelly Cohen, Ph.D.

University of Cincinnati
School of Aerospace Systems

Abstract
Fuzzy Logic is a mathematical tool that has proven useful in controls applications, including controllers
for aircraft, trains, and even commercial appliances. It has been mathematically proven that fuzzy
controllers are capable of controlling systems of arbitrary complexity to any desired degree of accuracy.
The goal of this research is to explore this claim by developing fuzzy controllers for two two-degree-of-
freedom systems such that the fuzzy controllers are more robust than baseline, linear controllers. The
intuitive nature of the first system, a Self-Erecting, Single Inverted Pendulum, allowed a fuzzy controller
more robust than a linear controller to be developed, while the complex coupling of the second system, a
two-degree-of-freedom helicopter, made successful development of a fuzzy controller more difficult.

Methodology
Both systems investigated are educational products from Quanser and the linear controllers used as
baseline linear solutions were provided by Quanser. The first system investigated was a Self-Erecting,
Single Inverted Pendulum, and the second system investigated was a 2-DOF Helicopter, constrained to
pitch and yaw motion only. All of Quanser's hardware interfaces with the Simulink environment, so
models of the controllers were built and dynamically updated in Simulink.

Quanser's Self-Erecting, Single Inverted Pendulum package consists of a powered cart constrained to
move, via a rack and pinion, in one dimension along a track and an unpowered pendulum constrained to
rotate about one axis. The control objective for this system is to start the pendulum from the stable
equilibrium position, swing the pendulum to the normally unstable inverted position, and maintain the
pendulum in this inverted position while simultaneously maintaining the cart in its starting position. Cart
position and pendulum angle are measured, and cart velocity and pendulum angular velocity are derived
through a filtered differentiation.

Figure 1 illustrates the fuzzy control strategy implemented for the pendulum. In short, the controller is
split into two strategies, a swing up strategy and a capture strategy. The mechanism that switches between
the two strategies is the decision block, which applies the swing up strategy, the capture strategy, or no
strategy. Each of the swing up, capture, and decision blocks employs fuzzy inference systems.

The decision block employs two fuzzy inference systems. The first uses pendulum angle and
angular velocity to determine if the pendulum has enough energy to successfully invert. Traditional
logical operators separate its output into three classes: choose to swing up, choose to invert, or do
nothing. It was found that this dead-zone strategy, in which no strategy was applied, was required to
prevent the swing up strategy from adding too much energy to the system, thus making a successful
capture of the pendulum difficult.

The swing up block uses three fuzzy inference systems: a pendulum excitation, an initial excitation, and a
cart position control. The pendulum excitation inference system utilizes pendulum angle and pendulum
angular velocity as inputs, and it outputs a voltage to apply to the cart motor. A very specialized set of
rules was developed based on observation and experience with what motions are required to successfully
add energy to a pendulum in order to potentially invert it. The initial excitation inference system is used
to make the cart less susceptible to moving out of its bounds. The cart position control inference system
uses cart position and cart velocity as inputs, and it outputs a voltage to apply to the cart motor which is
added to the voltage output by the pendulum excitation inference system.
319
The capture block utilizes two fuzzy inference systems: a pendulum control and a cart position control.
The cart position control inference system is identical to that in the swing up block. The pendulum
inference system utilizes a very specialized set of rules based on observation and experience with what
motions are required to successfully keep a pendulum in the inverted position. This inference system uses
pendulum angle and angular velocity as inputs, and it outputs a voltage to apply to the cart motor.
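As an illustration of the kind of inference these blocks perform, the following Python sketch implements
a minimal Mamdani-style capture rule set with two inputs and one output; the membership functions, rule
table, and output values are invented placeholders, not the tuned design described here.

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def capture_voltage(angle_err, ang_vel):
    # Fuzzify each input into negative/zero/positive sets.
    mf = lambda x: {"N": tri(x, -2, -1, 0), "Z": tri(x, -1, 0, 1), "P": tri(x, 0, 1, 2)}
    e, w = mf(angle_err), mf(ang_vel)
    out = {"NB": -6.0, "Z": 0.0, "PB": 6.0}        # output voltage singletons
    rules = [(min(e["P"], w["P"]), "PB"),           # falling right: push right
             (min(e["N"], w["N"]), "NB"),           # falling left: push left
             (min(e["Z"], w["Z"]), "Z"),
             (min(e["P"], w["N"]), "Z"),
             (min(e["N"], w["P"]), "Z")]
    num = sum(wt * out[lbl] for wt, lbl in rules)   # weighted-average
    den = sum(wt for wt, _ in rules) + 1e-9         # defuzzification
    return num / den

print(capture_voltage(0.3, 0.1))   # small corrective voltage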

Quanser's 2-DOF helicopter consists of two single-rotor fans oriented similarly to those of a traditional
helicopter. However, the helicopter is constrained to only pitch and yaw, which couple, yielding more
complex system dynamics. The control objective for this system is to, at the very least, pitch up from an
initial -40° pitch angle to a 0° pitch angle, and then rotate 180° without noticeable coupling.

Figure 2 illustrates the fuzzy control strategy implemented for the helicopter. The helicopter strategy is
fairly straightforward. The MIMO Fuzzy Inference System utilizes a single fuzzy inference system whose
inputs are pitch error, pitch rate, yaw error, and yaw rate, and whose outputs are change in pitch motor
voltage and change in yaw motor voltage. These rates are integrated to produce control motor voltages.
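A brief sketch of this velocity-form arrangement follows; fis() is a hypothetical stand-in for the real
four-input inference system, and the control period is illustrative.

def run(fis, signals, dt=0.002, v_pitch=0.0, v_yaw=0.0):
    # signals yields (pitch_err, pitch_rate, yaw_err, yaw_rate) each period.
    for pe, pr, ye, yr in signals:
        dvp, dvy = fis(pe, pr, ye, yr)   # inference outputs voltage *changes*
        v_pitch += dvp * dt              # integrate to pitch motor voltage
        v_yaw += dvy * dt                # integrate to yaw motor voltage
        yield v_pitch, v_yaw

Because the inference system outputs changes rather than absolute voltages, the integrator gives the
controller built-in integral action against steady-state error.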

Results
In the development of the pendulum controller, a number of gains on both inputs and outputs were
adjusted experimentally in order to produce a system response that fit within qualitative expectations of a
successful fuzzy controller more robust than the baseline linear controller. These experimental
adjustments yielded a fuzzy controller that was capable of successfully inverting the pendulum,
maintaining the pendulum upright, and maintaining the cart about its starting position. This fuzzy
controller was also found to be qualitatively more robust than the linear controller, maintaining the
pendulum in its inverted position subject to external perturbations more successfully than the linear
controller. However, the fuzzy controller was found to be slower to swing the pendulum to its inverted
position. Additionally, while the fuzzy controller allows the cart to oscillate slowly about the initial cart
position at an amplitude similar to that of the linear controller, the fuzzy controller also exhibited a slight,
undesirable high frequency oscillation less present in the linear controller response.

The coupling in the 2-DOF Helicopter system along with the absence of clearly distinct phases of control
(i.e. swing up versus capture) warranted a different approach to controller development and improvement
than that used for the pendulum. The main fuzzy inference system of the helicopter controller was
developed in patches, first starting with the rules governing the uncoupled motion. For this initial
development, it was assumed that one degree of freedom was not affected by the control action of the
other degree of freedom. It was found that a simulation of the helicopter responded very nicely to this
controller, but the actual system exhibited unstable coupling with this architecture.

In subsequent iterations of the controller, more rules were added to account for the reaction of one degree
of freedom to the control action of the other degree of freedom for the entire possibility of inputs. Rules
were also changed in order to produce more desirable results. Additionally, membership functions of
various inputs and outputs were modified in order to yield better responses.

This procedure resulted in a fuzzy controller which was able to converge to the desired positions.
However, this controller did not exhibit greater robustness than the linear controller. In fact, although the
linear controller exhibits a great amount of coupling, especially on the 180° yaw maneuver, the linear
controller response is much smoother than the fuzzy controller response. A great amount of coupling was
still evident in the fuzzy controller response, especially in the pitch response.

The difficulty in developing a fuzzy controller for the 2-DOF helicopter is the somewhat non-intuitive
coupling that occurs between the degrees of freedom. The non-intuitive nature of the problem increases
the time to design and perfect a fuzzy controller, because the fuzzy controller designer must become
extremely knowledgeable of the system dynamics before such an experience-based approach can be
taken. A broader exploration of the solution space in terms of formulation of rules and membership
functions, aided by computerized search algorithms, would likely yield a more robust fuzzy controller.

320












References
1. Kosko, B., Fuzzy Thinking: The New Science of Fuzzy Logic, Hyperion, New York, NY, 1993.
2. Tsoukalas, L. H., Uhrig, R. E., and Zadeh, L. A., Fuzzy and Neural Approaches in Engineering, John
Wiley & Sons, Inc. New York, NY, 1997.
Figure 1. Fuzzy Pendulum Controller Block Diagram (inputs: cart position, cart velocity, pendulum
angle, pendulum velocity; blocks: decision, swing up, capture).

Figure 2. Fuzzy Helicopter Controller Block Diagram (inputs: pitch error, pitch rate, yaw error, yaw
rate; the MIMO fuzzy inference system outputs voltage changes, which an integrator converts to output
voltages).
321
Model Stirling Engine Manufacturing Project

Student Researcher: Erkai L. Watson

Advisor: Jay H. Kinsinger

Cedarville University
Elmer W. Engstrom Department of Engineering and Computer Science

Abstract
Stirling engines have been around for almost 200 years. They were primarily used in large industrial
settings, and later became popular for domestic use. With the development and increasing popularity of
the internal combustion engine, Stirling engine usage declined. Today there are no practical-sized Stirling
engines in production, but there is considerable research on the development of this engine (Senft, 2010).
This interest stems from two attributes of the Stirling engine: 1) the ideal Stirling thermodynamic cycle
has an efficiency equal to that of the Carnot cycle and 2) the Stirling engine is quiet and can run off any
heat source, therefore making it an excellent engine model. For the purpose of this research, the second
reason was of interest. There have been a surprising number of mechanisms invented to implement the
Stirling cycle. The focus of this research was to evaluate these different mechanical configurations in
light of the manufacturing complexity of each and choose the best design for the intended purpose.

Project Objective
The objective of this project was to research and design an appropriate Stirling engine for manufacture by
Junior mechanical engineering students in a lab. The primary purpose of this lab is for students to gain a
working knowledge of basic machining techniques. Currently, students machine a clamp that allows
them to learn how to use the mill, lathe, and surface grinder, as well as familiarize themselves with
precise measuring and tolerancing. It is hoped to eventually replace the clamp project with a model
Stirling engine project. This would be of greater interest to the students and would support concepts
learned in the thermodynamics class that is usually taken the same semester.

Methodology Used
There were many considerations for the design of this model engine. The most important aspect was the
manufacturability of the engine by students. In order to evaluate the difficulty and the time necessary to
make a working Stirling engine, prototypes were built. Other important features considered in choosing a
design were part expenses and the heat source for the engine.

The first engine design considered was a simple alpha type Stirling engine. The alpha configuration uses
two piston-cylinders. One piston-cylinder serves as the high temperature reservoir and the other as the
low temperature reservoir. The pistons are offset 90° (see Figure 1). Plans for this type of engine were
obtained from Koichi Hirata's website¹. Hirata's LSE-01 model was chosen as a first prototype because
of its simplicity. It was designed to operate off large temperature differences and at high RPM. Building
this prototype offered valuable insight into the feasibility of manufacturing certain parts. For example, it
was decided to attempt machining an aluminum piston and cylinder to determine the difficulty and time
necessary to properly fit a piston to a cylinder without rings. Although building this model provided
valuable experience and insight, this was not a suitable final design because of the large temperature
difference that the engine required in order to run (over 300°C temperature differential).

¹ http://www.bekkoame.ne.jp/~khirata/
322

Figure 1. Alpha Configuration Stirling Engine²


After successfully completing this prototype, a second design was selected. This second design was built
to determine the degree of precision achievable with the given tools in the machine shop. This second
design was a Low Temperature Differential Stirling engine. This type of Stirling engine, capable of
running off temperature differentials of less than 0.5°C, was first pioneered by Prof. Ivo Kolin of the
University of Zagreb in Croatia in the early 1980s (Senft, 2008).

The low temperature differential Stirling engine is a split cylinder or gamma type Stirling engine (see
Figure 2). It consists of a compression piston and a displacer offset 90° from each other. The displacer
moves air back and forth from the hot to the cold reservoir. This change in temperature causes a change
in pressure that drives the compression piston.

Figure 2. Split Cylinder Stirling Engine³


The low temperature differential Stirling engines differ from conventional flame heated engines simply in
their shape and proportion. According to Stirling engine researcher Dr. James Senft, engines operating
at low temperature differentials must have low compression ratios (Senft, 2008). In order to achieve a
low compression ratio, the volume swept by the compression piston must be small, yet the displacer
needs to sweep out a large volume in order to significantly change the temperature of the enclosed air.
These design considerations lead to the small stroke and large piston to displacer ratio that is common to
most low temperature differential Stirling engines.
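A back-of-the-envelope sketch of this design constraint is given below; the dimensions are invented for
illustration and are not the N-92's actual geometry.

import math

# In a gamma engine the displacer only shuttles air between the hot and cold
# plates, so the working volume is changed almost entirely by the small
# power piston; the compression ratio stays barely above one.
def compression_ratio(chamber_vol_cc, piston_bore_cm, piston_stroke_cm):
    swept = math.pi * (piston_bore_cm / 2) ** 2 * piston_stroke_cm
    return (chamber_vol_cc + swept) / chamber_vol_cc

print(compression_ratio(400.0, 2.0, 1.0))   # ~1.008 for these dimensions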

² Image used by permission of www.stirlingengine.com
³ Image from Senft, 2008
323

Figure 3. Large displacer to piston ratio typical of LTD Stirling engines⁴


A low temperature differential engine was built following the plans for the N-92 in James R. Senft's
book An Introduction to Low Temperature Differential Stirling Engines. This engine required much
precision and care in building. Careful attention was paid to all details because even the slightest
interference or imbalance would affect the temperature differential necessary to run the engine. In order
for the engine to run on low temperature differences, extensive tuning was necessary. This consisted of
adjusting the parts of the engine to make them fit precisely. For example, the length of the displacer
connecting rod had to be adjusted to achieve near zero clearance with the top and bottom plates, but not
make any contact. After many adjustments were made, the engine was capable of running off a 10°C
temperature difference. Under the right conditions it could run off the heat of a human hand!

Results Obtained
A low temperature differential type engine, capable of running off a cup of hot water, was chosen for the
final design because of its aesthetics and safe, readily available heat source. Starting with Senft's N-92
engine plans, modifications were made to facilitate manufacturing on a larger scale in the lab. For
example, the dimension of the displacer chamber was modified slightly to match standard pipe sizes so
that stock materials could be used unaltered. Expensive parts, such as the small roller-element bearings
used in the crank shaft, were replaced by point bearings that would both reduce the cost and demonstrate
an alternative to roller bearings. Some of the parts, such as the large round plates that form the displacer
chamber, were very cumbersome to machine on the lathe. This and certain other parts will be made on
CNC machines, reducing the total manufacturing time necessary to complete an engine.

Conclusions
Having built two prototype Stirling engines in Cedarville University's machine shop, it seems possible for
this project, with some refinement, to be implemented into the Junior year lab. Because of the complexity
and intricate nature of the engine, every part must be machined to the specified tolerance or the engine
will not function. This combined with the larger scope of the project may mean that students will have to
work in groups to complete an engine in the available lab time. Nonetheless, a successfully completed
engine will provide a satisfaction well worth the effort.

References
1. Hirata, Koichi. Stirling and Hot Air Engine Ring. http://www.bekkoame.ne.jp/~khirata/ (accessed
April 2012).
2. Senft, James R. 2008. An Introduction to Low Temperature Differential Stirling Engines. Moriya
Press.
3. Senft, James R. 2010. An Introduction to Stirling Engines. Moriya Press.

⁴ Image from Senft, 2008
324
An Analog Robotic Controller Using Biologically Inspired Neuron and Motor Voxels

Student Researcher: Victoria A. Webster

Advisor: Dr. Roger Quinn

Case Western Reserve University
Department of Mechanical and Aerospace Engineering

Abstract
The possibility of creating analog neuron and muscle controllers for use in legged robotic control is
explored. While there is some behavior which lends itself to implementation in analog circuits (i.e.,
tension development in the Hill muscle model), there is other behavior which cannot be modeled by
analog circuits alone (i.e., the need for negative exponential values). Therefore a combination of analog
and digital systems is proposed. Calculations which cannot be performed via analog circuits are instead
ported to a microprocessor which interacts directly with the remainder of the circuit. A proof of concept
controller has been constructed.

Project Objectives
Biological systems often provide useful sample systems for robotics. By mimicking systems seen in the
animal world, engineers can design robots that are potentially more versatile and robust than traditional
wheeled or tracked robots. Additionally, an area of interest in robotics which is gaining momentum is the
development of voxels, or repeat units. It is hoped that in the future, rather than building a robot out of
raw materials, they could be built out of preassembled cell-like units. This idea can be extended to
controllers by developing neuron voxels which can be connected in the same manner seen in biological
organisms. Thus rather than programming a robot, one could design a neural circuit via various
simulation tools, then simply build and implement it with an assortment of neuron voxels.

The Biologically Inspired Robotics Laboratory (BIRL) at Case Western Reserve University is currently
working on a project to develop controllers in simulation based on the neural systems of cockroaches and
rats. These simulations often utilize either simplistic models of neurons which are computationally
inexpensive but are less accurate, or complex mathematical models of neurons which make simulations of
entire neural systems difficult to run in real time. Therefore, a controller that truly captures the
functionality of a neuron is useless for a robot, since it will not be able to perform the necessary
calculations quickly enough to solve problems in real time. However, an analog system with the same
properties as a neuron may be able to process information without the delay of a digital simulation,
allowing the robot to make real-time locomotion decisions with a network of neural-style circuits.

Additionally, in order to build robots which move like their biological inspirations, actuators with
properties similar to muscles are needed. There are a variety of relationships which dictate muscle
dynamics. While each of these relationships could be directly translated to an analog circuit, the final
circuit would be bulky and therefore less useful as an interchangeable unit. Instead, mechanical and
optical measures have been developed to implement the necessary muscle dynamic relationships. These
motor and neuron voxels can be assembled into a variety of controller schemes based on the neuro-
mechanical systems of biological organisms, in order to build locomotion controllers for stable, legged
robotic systems.

Methodology Used
There are many analogs between physical dynamical systems and analog circuits which can easily be
drawn once the differential equations describing a system have been identified. The two parts of this
study (neural and muscular) shall be described separately before discussing the combination of the entire
system.


325
Neural Model
Two neuron models were initially considered for this project. The first model is the second order system
of ordinary differential equations proposed by Izhikevich [1]:

v' = 0.04v^2 + 5v + 140 - u + I
u' = a(bv - u)
where v is the membrane potential, and u is the membrane recovery variable. The dynamics of the
potassium and sodium ionic currents are accounted for by u. Additionally, this model uses an auxiliary
equation to reset the membrane potential after a simulated spike:

if v >= 30 mV, then v <- c and u <- u + d
There are several key concerns when modeling differential equations with circuits. First, differentiation is
inherently unstable and as a result, rather than simply plugging in differential elements for derivative
terms, the integral version of the equation should be modeled. Therefore, the Izhikevich equations
become

v = ∫ (0.04v^2 + 5v + 140 - u + I) dt
u = ∫ a(bv - u) dt
These equations can then be modeled using a combination of operational amplifiers and capacitors.
However, the second consideration before turning these equations into analog circuits is the need for the
nonlinear term. Multiplication is better suited for digital calculation than analog calculation. With analog
circuits two options are available. The first involves using a commercially available multiplier chip.
However the stable operating range of these devices is limited and errors can be introduced due to drift.
The other option is to use a log-antilog method for multiplication based on the addition/multiplication
rules of logarithms. The operational amplifier circuits involved in log-antilog calculations use diodes.
Since diodes are directional, negative voltages cannot be multiplied without pre-processing. Due to these
difficulties a second linear neural model was investigated.
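Before committing either model to hardware, the target spiking behavior can be checked numerically. The
following forward-Euler sketch integrates the Izhikevich equations with the standard regular-spiking
parameters from Ref. 1; the time step and input current are illustrative.

a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameters
v, u = -65.0, 0.2 * -65.0               # initial membrane state
dt, I = 0.5, 10.0                       # ms time step, constant input current

spikes = 0
for _ in range(2000):                   # 1 s of simulated time
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                       # auxiliary reset after a spike
        v, u = c, u + d
        spikes += 1
print(spikes, "spikes")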

The second neural model investigated is the leaky integrate-and-fire (LIF) neuron model [2]. This model
is based on the idea that the cell membrane behaves like a capacitor but that some current can leak out of
the system via a second resistance path. This model is therefore very simple to convert to an analog
circuit form:

C dV/dt = I(t) - (V - V_rest)/R
In order to mimic the spiking behavior, the voltage across the capacitor is monitored by a flip-flop which
triggers a leak potential when the voltage surpasses the spiking threshold; this remains on until the voltage
has been reset.
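A numerical sketch of the behavior this circuit implements follows; the parameter values are illustrative,
and the threshold test plays the role of the flip-flop.

C, R = 1.0, 10.0                        # membrane capacitance and leak resistance
v_rest, v_thresh, v_reset = 0.0, 15.0, 0.0
v, dt, I = v_rest, 0.1, 2.0             # state, time step, input current

spikes = 0
for _ in range(1000):
    v += (dt / C) * (I - (v - v_rest) / R)   # C dV/dt = I(t) - (V - V_rest)/R
    if v >= v_thresh:                        # threshold crossing: "flip-flop" fires
        v = v_reset                          # reset the capacitor voltage
        spikes += 1
print(spikes, "spikes")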

Muscle Model
The muscle model used in this project is the Hill muscle model[3][4]. This model consists of an actuator,
a spring element in parallel with the actuator with stiffness K_PE, and a spring element in series with
these two components (stiffness K_SE). The system also includes a damping term (b) to model the actual
damping seen in muscles. The maximum tension due to muscle stimulation is a saturating function of the
stimulation that involves a negative exponential (see Ref. 3).

Unfortunately, this equation cannot be modeled solely with analog circuits. This is due to the fact that the
exponential is a negative value. As mentioned previously the circuits required to model exponential terms
use diodes and can therefore not have negative exponentials. Instead an Arduino microprocessor was
programmed to take in an analog input and make the necessary calculations.

The actual active tension is less due to the effects of muscle length and velocity on the development of
active muscle tension.


326

Therefore the actual active tension can be calculated by scaling the maximum stimulated tension by these
length and velocity factors.
The length-tension relationship can be implemented using a custom graded encoder and infrared
photodiodes such that the blackness level on the paper at any given muscle length corresponds directly to
the percentage of the total active force which can be utilized. This encoder reduces the number of circuits
involved in the calculations.

However the muscle has physical properties and is not just a mechanical actuator which immediately
provides the designated tension. Instead the actual tension produced is determined by the differential
equation

T' = (K_SE / b) · (K_PE·x + b·x' - (1 + K_PE/K_SE)·T + A)

where x is the muscle length, x' its rate of change, and A the active tension.
As with the neuron model this can be converted to integral form and modeled with operational amplifiers
and capacitors.
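A forward-Euler sketch of these tension dynamics is shown below; the stiffness and damping values are
placeholders, not measured muscle properties.

K_se, K_pe, b = 136.0, 75.0, 50.0       # series/parallel stiffness, damping

def dT_dt(T, x, xdot, A):
    # Series-element tension dynamics in the form given above.
    return (K_se / b) * (K_pe * x + b * xdot - (1 + K_pe / K_se) * T + A)

T, dt = 0.0, 0.001
for _ in range(500):                    # isometric step input: fixed length
    T += dt * dT_dt(T, x=0.01, xdot=0.0, A=5.0)
print(round(T, 3))                      # tension rises toward its equilibrium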

Results Obtained
Simulated neural circuit models were developed which were capable of producing oscillating spiking
patterns in response to an external current input. It should be noted however that the time constant of the
circuit is set by the physical properties of the capacitor and resistor. Therefore without changing these
values it is not possible to increase the firing rate. In actual neurons the firing rate is important as it
encodes much of the information to be transmitted. Without the ability to vary the time constant it is
therefore impossible to mimic the exact dynamics of the neurons. Additionally, even with the use of an
integrating level between the muscle and neuron circuits, oscillations are transmitted using these circuit
models. These oscillations would be reflected in limb movement and are undesirable. Therefore it is
unlikely that neuron models appropriate for modeling individual neurons are practical for producing the
activation dynamics for muscles. Since smooth limb motions are desired, a firing rate model should be
used. Such models would be difficult to produce with analog circuits due to the fixed time constants.

The need for negative exponentials limits the practicality of having an analog controller for all facets of
the muscle model. However the differential equation governing muscle tension development can easily be
implemented using operational amplifiers. Additionally a graded encoder (linear or rotational) can be
used to provide length information to a microprocessor which performs the basic calculations before
returning control of the muscle model to the analog circuit. A proof of concept controller was constructed
and is capable of oscillating between joint limits in response to only an alternating input and feedback
from a custom graded encoder.

Figures and Tables

Figure 1. Leaky Integrate and Fire neuron model utilizing a flip-flop circuit and NPN transistor to
produce resetting current.
327

Figure 2. Flip-Flop for use in LIF neuron model.

Figure 3. Input current (step) and subsequent "neuron" response; spiking behavior can be observed.

Acknowledgments
The author would like to thank the Ohio Space Grant Consortium for their financial support for this
project. In addition, the author would like to thank Nicholas Szczecinski, Alexander Lonsberry, Brian Tietz,
and Alexander Hunt for their assistance with neural and muscular modeling and continual support.

References
1. E. M. Izhikevich, Which model to use for cortical spiking neurons?, IEEE transactions on neural
networks / a publication of the IEEE Neural Networks Council, vol. 15, no. 5, pp. 1063-70, Sep.
2004.
2. Trappenberg, T., Fundamentals of Computational Neuroscience, 2nd ed., Oxford University Press, Oxford.
3. R. Shadmehr, A Mathematical Muscle Model, ReCALL, 1970.
4. D. Cofer, G. Cymbalyuk, J. Reid, Y. Zhu, W. J. Heitler, and D. H. Edwards, AnimatLab: a 3D
graphics environment for neuromechanical simulations., Journal of neuroscience methods, vol. 187,
no. 2, pp. 280-8, Mar. 2010.
328
In The Middle: Earths Weather Correlation with Venus and Mars

Student Researcher: Marcia J. White

Advisor: Dr. Rebecca Teed

Wright State University
College of Education and Human Services

Abstract
Many students know about the weather here on Earth, but most do not know what kind of weather is
happening on other planets. Here students will explore Mars and Venus to find out whether the weather on
Earth is similar to or different from that on these planets. Students will complete a chart comparing Mars
and Venus to Earth. Then students will decide whether these planets could actually support life. Students
will then try to change a certain aspect about a planet to make it possible for life to be supported.

Lesson
In this lesson students will investigate the similarities of Venus and Mars compared to Earth. Before
doing this lesson as a class students will learn about Earth in depth. The students will be split into two
groups or smaller groups. One group will be Mars and the other group will be Venus. In these groups
students will find information about their planet to make an argument of whose planet is more like Earth.
Students will not only notice similarities between these planets but their differences as well. Students
must use facts about their planet to be able to make their argument. Each student in the group will receive
a chart that has been started and the students will finish the chart by obtaining more information about
their planet. Students can also find other information that is not on the chart to help prove their argument
that their planet if more like Earth. To make their argument they will be allowed to use books, articles and
the NASA website.

Students will then determine if it will be possible to support life on their planet. How does this relate to
life being supported here on Earth? Students will have to investigate what is needed for life to be
supported on Earth and what each planet has that could also support life. Here students will have to use
their imagination to change a certain aspect about their planet to make it possible for life to be supported
or possibly survive. The students will create a poster board with drawings and the information they found
that could actually happen here on Earth as well.

Objectives
Students will develop a better understanding of the planets Earth, Venus and Mars.
Students will be investigating similarities and differences of Earth, Venus and Mars.

Methodology Used
The methodology used in this lesson was to lead students in the right direction in finding information
about their planet: each student is given a handout that is partially completed, and the teacher guides the
students toward the information they should look for.

Results
These results were significant because students were trying to prove that their planet was more similar to
Earth than the other inspected planet. Students arrived at the conclusion that Mars and Venus are both
similar to Earth in some aspects but are also very different from one another. By investigating this
information they came to find and understand more about the planet they live on and the surrounding
planets.


329
Charts

                               Venus                     Earth                  Mars
Atmosphere                     96.5% carbon dioxide,     77% nitrogen,          95.3% carbon dioxide,
                               3.5% nitrogen             21% oxygen             2.7% nitrogen
Temperature (average)          737 K (464°C)             288 K (15°C)           210 K (-63°C)
Diameter                       12,104 km                 12,756.3 km            6,794 km
Mass                           4.8673 x 10^24 kg         5.972 x 10^24 kg       6.4219 x 10^23 kg
Length of year (Earth days)    225                       365                    687
Rotation direction             East to West              West to East           West to East
Any water? Yes or No           No                        Yes                    Yes
Any seasons? Yes or No         Yes                       Yes                    Yes

Alignment
Grade 3-5 Earth and Space Sciences Benchmark D. Analyze weather and changes that occur over a period
of time.

References
1. "Solar System Exploration.". N.p., 16 April 2012. Web. 03 Apr 2012.
<http://solarsystem.nasa.gov/planets/profile.cfm?Object=Mars>.
2. Harvey, Samantha. Planetary Seasons. N.p., 09 Jun 2011. Web. 03 Apr 2012.
<http://www.nasa.gov/audience/foreducators/postsecondary/features/F_Planet_Seasons.html>.
330
Prediction of the Potential Classification of Solar Flares on Earth Using Fuzzy Logic Application

Student Researcher: Mahogany M. Williams

Advisor: Dr. Edward Asikele

Wilberforce University
Department of Engineering

Abstract
Soft computing can be frequently used to exploit the tolerance for imprecision and uncertainty to achieve
tractability and robustness; fuzzy logic, for example, provides techniques for handling cognitive issues in the
real world. This paper will show the potential classification of solar flares due to the number of times a
certain type of solar flare occurred in a 24-hour period using fuzzy logic application. A flare is defined as
a sudden, rapid, and intense variation in brightness. A solar flare occurs when magnetic energy that has
built up in the solar atmosphere is suddenly released. Radiation is emitted across virtually the entire
electromagnetic spectrum, from radio waves at the long wavelength end, through optical emission to
x-rays and gamma rays at the short wavelength end. The amount of energy released is the equivalent of
millions of 100-megaton hydrogen bombs exploding at the same time. The first solar flare recorded in
astronomical literature was on September 1, 1859. Two scientists, Richard C. Carrington and Richard
Hodgson, were independently observing sunspots at the time, when they viewed a large flare in white
light. Solar flares that enter into the Earth's atmosphere will affect the EEF by disrupting the electrical and
magnetic components of the EEF, as well as the Earth's physical components.

Project Objectives
My project objective was to show that solar flare classifications can be determined based on the ranges
and rules that were developed from the downloaded data. The more intense solar flares release
very-high-energy particles. Thus the released high-energy radiation could trigger an interruption within
the Earth's magnetosphere.

Methodology Used
Using fuzzy logic, a rule base and membership functions were developed that could be used to determine
the solar flare classification levels.
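As a concrete illustration of this kind of rule-based mapping, the sketch below grades 24-hour flare counts
into activity levels using triangular membership functions; the severity weights, breakpoints, and labels
are invented for illustration and are not the rule base developed in this project.

def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def activity_level(c_count, m_count, x_count):
    # Weight counts by class severity so X-class flares dominate the result.
    severity = 1 * c_count + 10 * m_count + 100 * x_count
    grades = {"LOW": tri(severity, -1, 0, 20),
              "MODERATE": tri(severity, 10, 50, 120),
              "HIGH": tri(severity, 80, 200, 10**6)}
    return max(grades, key=grades.get)

print(activity_level(c_count=20, m_count=3, x_count=0))   # MODERATE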

Results Obtained
From the fuzzy logic system that was developed, one can see that the most active and recurring flares
are C-class, which is the smallest of the three. The number of X-class flares should be kept to a minimum
because it is the largest classification of solar flares, meaning that it emits very large quantities of X-rays
in a short period of time. If X-class flares occurred continually, the effects on Earth could be catastrophic
to life. Using this data it is possible to classify the flares.

Significance and Interpretation of Results
Recent advances in technology have given scientists a look at what goes on inside the sun. The findings
from this research predict we may be in for a chaotic 2012. This is due to the massive solar storms
predicted to occur in 2012. The sun goes through an 11 year cycle where sunspots come and go on the
sun's surface. During the peak of the cycle, solar storms occur. Solar storms start with powerful magnetic
fields in the sun which can snap at any time releasing immense amounts of energy into space. Some of
the solar particles from the explosion could hit earth at about a million miles an hour. Usually the
magnetosphere, a protective shield created by the earth's magnetic field, blocks most of the solar particles
keeping damage to a minimum. Unfortunately the magnetosphere has been weakening recently and its
ability to protect earth has been reduced. Another concern is that researchers predict the next solar peak
will be 30% to 50% stronger than the last one which occurred in 2001. Solar flares have done
considerable damage to the earth in the past, even when our magnetosphere was strong. In 1859 a strong
solar storm caused telegraph wires to short out in the United States and in Europe. This caused
331
widespread fires around the affected areas. Other Solar storms in 1989 and 2003 caused major blackouts
in Canada and United States. If a major solar storm were to occur, power grids could be knocked out
leaving millions of people without electricity. Also cell phones, GPS, weather satellites and many other
electronic systems that depend on satellites could stop working.

Figures and Charts

M-Class Flare



Figure 1. Activity, Evolution. Figure 2. Activity Size, Area. Figure 3. Area of largest spot.

C-Class Flare


Figure 4. Area of the largest spot. Figure 5. Activity Size. Figure 6. Evolution.

X-Class Flare


Figure 7. Activity, Evolution. Figure 8. Activity Size, Area. Figure 9. Area of largest Spot.

References
1. http://hesperia.gsfc.nasa.gov/sftheory/flare.htm
2. http://www.cbsnews.com/8301-505266_162-57365541/solar-flare-impacts-life-on-earth/
3. http://www.geek.com/articles/geek-cetera/whats-a-solar-flare-nasa-explains-20110816/
4. http://www.esa.int/esaSC/SEMHKP7O0MD_index_0.html
5. http://www.ehow.com/how-does_4567146_solar-flares-affect-earth.html
6. http://www.nasca.org.uk/Strange_Maps/solar/Solar_Flare/solar_flare.html
7. http://www.telegraph.co.uk/science/space/7819201/Nasa-warns-solar-flares-from-huge-space-storm-
will-cause-devastation.html
8. Gonzalez, W. D., J. A. Joselyn, Y. Kamide, H. W. Kroehl, G. Rostoker, B. T. Tsurutani, and V. M.
Vasyliunas (1994), What is a Geomagnetic Storm?, J. Geophys. Res., 99(A4), 5771-5792.
332
9. Sugiura, M., and T. Kamei, Equatorial Dst index 1957-1986, IAGA Bulletin, 40, edited by A.
Berthelier and M. Menvielle, ISGI Publ. Off., Saint-Maur-des-Fossés, France, 1991.
10. World Data Center for Geomagnetism, Kyoto.
11. http://searchnetworking.techtarget.com/definition/neural-network
12. Summers, G.P.; Xapsos, M.A.; Burke, E.A.; , "Extreme value statistics to the prediction of solar flare
proton effects on solar cells," Photovoltaic Specialists Conference, 1996., Conference Record of the
Twenty Fifth IEEE , vol., no., pp.289-292, 13-17 May 1996
doi: URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=564002&isnumber=12206
13. Zhang Yinghui; Wang Yizhuo; Peng Yong; , "Effects of Solar Flare to Shortwave Communication,"
Electromagnetic Field Computation, 2006 12th Biennial IEEE Conference on , vol., no., pp.209, 0-0 0
doi: URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1632999&isnumber=34239
14. Adams, James H.; Gelman, Andrew , "The Effects of Solar Flares on Single Event Upset Rates,"
Nuclear Science, IEEE Transactions on , vol.31, no.6, pp.1212-1216, Dec. 1984
doi: URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4333485&isnumber=4333440
333
Configuration of Field-Programmable Gate Array Integrated Circuits

Student Researcher: Michael D. Williams

Advisor: Dr. Edward Asikele

Wilberforce University
Department of Computer Engineering

Abstract
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by the
customer or designer after manufacturing. It is called "field-programmable" because it can be configured
after the chip is placed into operation in the field. In my project I plan to analyze a full-custom logic circuit and
then implement it on the field-programmable gate array integrated circuit. Using visual hardware
descriptive language software, I will research the methods required to configure a FPGA. The FPGA
configuration is generally programmed using a hardware description language, similar to one used for an
application-specific integrated circuit. FPGAs can be used to implement any logical function that an
application-specific integrated circuit can perform. The ability to update functionality after shipping, the
partial re-configuration of part of the chip, and the low non-recurring engineering costs provide
opportunities for many applications. Applications of FPGAs include digital signal processing, aerospace
and defense systems, computer vision, speech recognition, cryptography, and metal detection to name a
few. Looking at a FPGA from the circuit level shows that it is designed like a railroad depot, where
different circuits can be configured, or connected, by modifying the logic block components. These
blocks are interconnected by a series of configurable routing channels, or railroads, which allow the
circuits to be so versatile.

Project Objectives
Fully fabricated FPGA chips, containing thousands of logic gates or more with programmable
interconnects, are available to users for their custom hardware programming to realize desired
functionality. This design style provides a means for fast prototyping and also for cost-effective chip
design, especially for low-volume applications.

A typical field programmable gate array (FPGA) chip consists of I/O buffers, an array of configurable
logic blocks (CLBs), and programmable interconnect structures. The programming of the interconnects is
accomplished by programming of RAM cells whose output terminals are connected to the gates of MOS
pass transistors. Thus, the signal routing between the CLBs and the I/O blocks is accomplished by setting
the configurable switch matrices accordingly. The complexity of a FPGA chip is typically determined by
the number of CLBs it contains. In the Xilinx XC4000 family of FPGAs, the size of the CLB array can
range from 8 x 8 (64 CLBs) to 32 x 32 (1024 CLBs), where the latter example has an approximate gate
count of 25,000.

Typical FPGA chips can support system clock frequencies between 50 and 100 MHz. With the use of
dedicated computer-aided design tools, the gate utilization rate (percentage of gates on the FPGA which
are actually used in a particular design) can exceed 90%. The typical design flow of an FPGA chip starts
with the behavioral description of its functionality, using a hardware description language such as VHDL.
The synthesized architecture is then technology-mapped (or partitioned) into circuits or logic cells. At this
stage, the chip design is completely described in terms of available logic cells. Next, the placement and
routing step assigns individual logic cells to FPGA sites (CLBs) and determines the routing patterns
among the cells in accordance with the netlist. After routing is completed, the on-chip performance of the
design can be simulated and verified before downloading the design for programming of the FPGA chip.
The programming of the chip remains valid as long as the chip is powered-on, or until it is re-
programmed.


334
The largest advantage of FPGA-based design is the very short turn-around time, that is, the time
required from the start of the design process until a functional chip is available. Since no physical
manufacturing step is necessary for customizing the FPGA chip, a functional sample can be obtained
almost as soon as the design is mapped into a specific technology. The typical price of FPGA chips is
usually higher than other alternatives (such as gate array or standard cells) of the same design, but for
small-volume production of ASIC chips and for fast prototyping, FPGA offers a very valuable option.

Significance and Interpretation of Results
FPGA chips are a great innovation in technology today, with many applications. The versatility of the
chip creates a high demand for its functionality. The ability to be re-programmed after manufacturing
makes this integrated circuit very desirable in systems where area, modularity, and regularity are major
factors. Through the implementation of FPGA chips in systems today, many new possibilities are formed.
In the near future I predict that it will be possible to program FPGAs with the functionality to incorporate
an entire computer system on one chip that is cost-effective, provides adequate performance, and is easily
re-configurable. Soon application specific integrated circuits (ASIC) will no longer be needed and FPGA
will set the standard for fulfilling application and functional specifications.

Figures/Charts



Acknowledgments
Dr. Edward Asikele
Dr. Arkan Kadum

References
1. "Field-Programmable Gate Arrays (FPGA) Information." On GlobalSpec. Web. 4 April 2012.
<http://www.globalspec.com/learnmore/analog_digital_ics/programmable_logic/fieldprogrammable_
gate_arrays_fpga>.
2. "MRCI | Research." MRCI | Research. Web. 4 Apr. 2012.
<http://www.mrc.uidaho.edu/mrc/research/index.php>.
3. "FPGAs." FPGAs. Web. 2 Apr. 2012. <http://www.altera.com/products/fpga.html>.
4. "Fpga4fun.com - Welcome." Fpga4fun.com - Welcome. Web. 6 Apr. 2012.
<http://www.fpga4fun.com/>.
5. "Field-programmable Gate Array." Wikipedia. Wikimedia Foundation, 27 Mar. 2012. Web. 5 Apr.
2012. <http://en.wikipedia.org/wiki/Field-programmable_gate_array>.
335
Wind Measurement and Data Analysis Applied to Urban Wind Farming

Student Researcher: Chung Y. Wo

Advisor: Dr. J. Iwan Alexander

Case Western Reserve University
Mechanical and Aerospace Engineering

Abstract
The goal of this project is to determine the reliability of wind data collected over a period of time, and the
implications of the data with regard to placing Wind Turbines near or in an urban setting. Typically Wind
Turbines are placed in open fields or off shore locations, with minimal wind disturbances due to the
presence of buildings. Consequently, the published performance data (such as the power curve) is based
entirely on "optimal" results of Wind Turbines found in open fields or off shore locations. By gathering
wind data using anemometers and LiDAR systems, as well as an on-campus Wind Turbine, empirical
performance data of a Wind Turbine in an urban setting may be found.

Objectives
The objectives of this study were to empirically determine the performance of a wind turbine placed in an
urban environment, particularly the potential effects of nearby buildings on the wind upstream of the
turbine. Performance is measured as the power output as compared to the wind speed, as shown by
a power curve. Since power curves provided by the manufacturer do not take into account location of the
turbine, analysis of recorded wind data from the turbine would allow an empirical power curve to be
found. An empirical power curve would be reflective of the turbine's performance given the wind
velocity. When compared to the published power curve, the actual performance relative to the theoretical
performance would be found. Additionally, the turbulence intensity may be easily calculated from the
empirical data. The comparison between turbulence intensity and wind speed would also reveal the
effects of turbulence on power output.

Methodology
To analyze the performance of the wind turbine, data related to power output, wind velocity, and wind
direction was collected from a cup anemometer placed directly downstream of the wind turbine blades.
This data was sampled and recorded at 1Hz over a span of 12 months. After data collection, an empirical
power curve was created using IEC 61400-12-1, the international standard procedure for wind turbine
power curve production. This standard requires that 1 Hz data be collected for a minimum of 180 hours,
with a minimum of 30 minutes at each wind speed, from 85% of cut-in speed up to 150% of cut-out speed.
The 1 Hz data was then averaged over 10-minute blocks, the averaged data was binned into 0.5 m/s wind
speed intervals, and the power output for each bin was taken as the averaged value. Turbulence intensity
was calculated as the standard deviation of wind speed divided by the mean over a one-minute span.
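A compact sketch of this reduction, assuming the 1 Hz samples are held in NumPy arrays (function names
and shapes are illustrative):

import numpy as np

def power_curve(wind_1hz, power_1hz, block=600, bin_width=0.5):
    # Average 1 Hz samples over 10-minute blocks, then apply the method of
    # bins: group by 0.5 m/s wind speed intervals and average power per bin.
    n = len(wind_1hz) // block
    w10 = wind_1hz[:n * block].reshape(n, block).mean(axis=1)
    p10 = power_1hz[:n * block].reshape(n, block).mean(axis=1)
    bins = np.round(w10 / bin_width) * bin_width
    return {b: p10[bins == b].mean() for b in np.unique(bins)}

def turbulence_intensity(wind_1hz, span=60):
    # TI = standard deviation / mean of wind speed over one-minute spans.
    n = len(wind_1hz) // span
    chunks = wind_1hz[:n * span].reshape(n, span)
    return chunks.std(axis=1) / chunks.mean(axis=1)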


336
Results

Raw Data with published power curve superimposed

Figure: Power produced vs. wind speed for the SWT, March 2011 - February 2012, raw data with the
published power curve superimposed (legend: Actual, Published Curve).

Empirical and Published Power Curves

Figure: Empirical power curve plotted against the published curve (power in kW vs. wind speed in m/s).
337
Figure: Empirical and published power curves, zoomed into the low frequency region (8-13 m/s) to show
the number of data points averaged per bin (from 2 to 595 points).

Normalized Wind Distribution

Figure: Normalized data occurrence plot (normalized occurrences vs. wind speed).
338
Normalized Power Distribution

Figure: One-second and ten-minute normalized power distributions (percentage of occurrence vs. power
in kW).

Turbulence Intensity Plots

Figure: Turbulence intensity (TI) vs. wind speed.
339

Conclusion
From the scatter of raw data around the published power curve, it might appear that the actual
performance is as expected from the theoretical. However, when the IEC 61400-12-1 standard is applied to
the raw data, the resulting performance curve suggests a different conclusion. The empirical performance
curve superimposed over the published power curve suggests that for lower wind speeds (<10m/s) the
performance data matches the published. However as the wind speeds increase and power output
approaches rated capacity, the performance curve wildly deviates from the published. Upon closer
inspection of the data, it is revealed that there are relatively few occurrences of higher wind speeds. While
data at most wind speeds (except for wind speeds between 12 and 12.5 m/s) satisfy the IEC 61400-12-1
requirement that there be 30 minutes spent at each wind speed, or 3 data points of power data at each bin,
the relatively low number of data points causes the performance curve to be highly sensitive to outliers.
This is further supported by the normalized wind distribution. A very small proportion of the overall
wind reach speeds greater than 10m/s. Consequently, as shown by the normalized power output
distribution, the wind turbine rarely operates beyond 20% of its rated capacity, with no data showing
operation at the rated capacity. Therefore, while the urban environment itself may not have a particularly
strong effect on the performance at low wind speeds, the rarity of sufficiently high speed winds may lead
to performance loss.

The Turbulence Intensity vs. Wind Speed curve reveals the existence of the wind turbine cut-in speed at
3.5 m/s, or the minimum wind speed for which power will be generated. For wind speeds below 3.5 m/s,
the corresponding data on the Turbulence Intensity vs. Power curve shows no power being produced. As
wind speeds increase beyond the cut-in speed, the Turbulence Intensity increases asymptotically towards
1 for both curves, suggesting that higher wind speeds (and correspondingly, higher power output) are
more turbulent, but that this does not strongly affect performance. This is evident in the high frequency
data region beyond the cut-in speed but below 10 m/s, where the performance curve closely matches the
published curve.

Future Work
A LiDAR system should be deployed to measure the wind shear and velocity profile at various locations
around the urban environment. Additionally, several cup anemometers surrounding the wind turbine
should be used to measure the upstream and downstream wind velocities and directions. CFD modeling
of the urban environment would also give further insight into the changes that occur to the wind as it
passes through urban setting before reaching the turbine.

Figure: Turbulence intensity (TI) vs. power output (kW).
340
Acknowledgments
Thank you to Iwan Alexander, the Great Lakes Energy Institute, and Case Western Reserve University
for their continuing support and guidance in this research endeavor; to the Ohio Space Grant Consortium
for the opportunity to give this presentation and the financial support; to Laura Stacko for patience and
understanding.
341
Comparison of Simulations and Models for Aspiration in a Supersonic Flow Using Overflow

Student Researcher: Nathan A. Wukie

Advisor: Dr. Paul Orkwis

University of Cincinnati
School of Aerospace Systems

Abstract
Researchers at the University of Cincinnati and the Air Force Research Laboratory (AFRL) at Wright-
Patterson Air Force Base (WPAFB) have begun initial simulations of the Shock Wave Boundary Layer
Interaction (SWBLI) rig in the AFRL Trisonic Gas Dynamics Facility. Computational studies of a single
90° bleed hole in a supersonic freestream were conducted using the OVERFLOW solver to investigate the
influence of different simulation regions and bleed models on the freestream flow and boundary layer.
Simulations were run with different combinations of the main flow, hole, and plenum domains included
in the simulation or modeled via mass flow, pressure, or Slater's empirical approach at the interface to the
modeled outlet. Simulations using just the main flow region were also run with the boundary condition
applied either as a discrete hole or distributed across the bleed plate. It was found that models that include
the discrete nature of the hole will produce downstream effects more closely matching that of a full
simulation in terms of streamwise vorticity, downstream total pressure and boundary layer region
turbulence quantities. The best modeling approach in terms of main flow solution quality, grid
requirements and convergence behavior utilized an overset grid for the hole and a pressure boundary
condition applied at the bottom of the hole where the interface with the plenum region would normally
occur.

Project Objectives
The ultimate purpose of this research is to investigate flow control devices in mixed-compression inlets.
In this work, mass aspirated from a supersonic flow through bleed holes is both simulated and modeled to
better understand the flow physics and demonstrate effective and efficient capabilities for modeling and
simulating bleed hole systems when using the OVERFLOW overset grid methodology. These include full
bleed hole/plenum simulations, boundary conditions that model the bleed effect (distributed mass flow,
prescribed outlet pressure and Slaters model), and hybrids between these approaches that utilize overset
grids.

Methodology
The OVERFLOW solver was used for this investigation and utilizes the Chimera gridding approach,
which provides significant flexibility in the development of the simulation grid. Chimera gridding allows
for different grid blocks to overlap in the simulation, making it much easier to develop grids for complex
geometries. The OVERFLOW solver solves the 3-D turbulent, Reynolds-averaged Navier-Stokes
(RANS) equations. Third-order accurate convective fluxes were used with the HLLC upwind scheme
(Ref. 2) and the Koren limiter (Ref. 3) unless noted otherwise. Additionally, all simulations were run
using local time-step scaling. Grid sequencing and 3-level multigrid were also employed. The grids were
generated primarily in POINTWISE together with several in-house software programs that facilitated the
mesh generation
process for the SWBLI project. One of the grids used can be seen in Figure 1.

Three boundary conditions were investigated for simulating bleed. They include a simple equally
distributed mass flow (EDMF) boundary condition, Slater's bleed boundary condition as applied in the
WIND-US code (Refs. 4, 5), and a pressure outlet (PO) boundary condition. The equally distributed mass
flow and Slater bleed boundary conditions are currently being used in full simulations (Ref. 6) of the
SWBLI inlet model
by applying them over large rectangular areas corresponding to the aspiration plates in the model as a
preliminary model of the bleed.

The EDMF boundary condition works by specifying a constant mass flow per unit area over the boundary
to which it is applied. The input parameter for this boundary condition can be seen in Eqn. 1. If it is
342
applied over a larger rectangular area, then the discrete nature of the bleed holes and their interaction with
the freestream flow are lost. However, the application of the boundary condition over many small discrete
regions instead of a large distributed region offers the potential to recapture the discrete effect introduced
on the freestream flow. One potential problem with the EDMF boundary condition is that it requires the
mass flow rate to be imposed uniformly across the entire boundary condition patch. In a simulation
involving a larger application of the boundary condition, over an array of bleed holes, for example, it is
unlikely that the mass flow rate over the boundary would be constant, especially in cases that include
shock waves impinging on the boundary.
EDMF Parameter = (ρV)_jet / (ρV)_∞    (1)
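
To illustrate how the Eqn. 1 input might be computed, the sketch below converts a target bleed mass flow
through a single discrete hole into the uniform mass-flux ratio applied over that patch. The freestream
values, hole diameter, and target mass flow are placeholder assumptions, not the SWBLI rig conditions:

    import math

    # Placeholder freestream conditions and hole size (assumed, not the rig values).
    rho_inf = 0.65        # kg/m^3, freestream density
    V_inf = 600.0         # m/s, freestream velocity
    mdot_bleed = 0.012    # kg/s, target aspiration through one hole (assumed)
    D = 0.00635           # m, hole diameter (assumed)

    A_hole = math.pi * D**2 / 4.0   # area of the discrete boundary-condition patch

    # Eqn. 1: uniform mass flux on the patch, normalized by the freestream mass flux.
    edmf_parameter = (mdot_bleed / A_hole) / (rho_inf * V_inf)
    print(f"EDMF parameter (rho V)_jet / (rho V)_inf = {edmf_parameter:.4f}")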
The second boundary condition investigated was Slater's porous bleed model as described in Ref. 5,
which has in turn been implemented by the authors into the OVERFLOW code for use on the SWBLI
project. The bleed boundary condition for Slater's bleed model uses a specified plenum pressure as well
as the pressure and temperature on the boundary to calculate the mass flow rate for each face on the
boundary. This allows the correct local mass flow rate to be calculated based on the local flow
conditions (Ref. 5). This bleed model can also accommodate a limited amount of blowing into the
freestream.
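
The structure of such a porous-bleed model can be sketched as follows: for each boundary face, a sonic
flow coefficient, for which Ref. 5 provides data-based curve fits, scales the ideal choked mass flux
through the porous region. The coefficient below is a placeholder stand-in, not Slater's published fit:

    import numpy as np

    def ideal_sonic_mass_flux(p_t, T_t, gamma=1.4, R=287.0):
        # Choked (sonic) mass flux per unit area for the given total conditions.
        return (p_t / np.sqrt(R * T_t)) * np.sqrt(gamma) * \
               (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

    def bleed_mass_flux(mach, p_t, T_t, p_plenum, porosity, q_sonic):
        # Mass flux removed per unit plate area: sonic flow coefficient times the
        # ideal choked flux, scaled by the bleed-region porosity.
        return q_sonic(mach, p_plenum / p_t) * porosity * ideal_sonic_mass_flux(p_t, T_t)

    # Illustrative stand-in for the tabulated coefficient (NOT the Ref. 5 curve fit).
    q_stub = lambda M, pr: max(0.0, 0.6 * (1.0 - pr)) * (1.0 - 0.2 * M)

    print(bleed_mass_flux(2.0, 80e3, 300.0, 20e3, 0.2, q_stub))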

The third boundary condition that was investigated for modeling bleed was the pressure outflow boundary
condition, where the user has the ability to specify a constant pressure over the boundary. This is done
through the input parameter that can be seen in Eqn. 2 as defined in Ref. 7.


Pressure Outlet Parameter = p / p_∞    (2)
The results obtained using surface boundary conditions at the top or bottom of the hole will be compared
against each other and against the results obtained from the full plenum bleed hole simulation. Those
comparisons will be made to determine how each model affects the boundary layer and freestream flows,
and also which method is most appropriate for the SWBLI project.

Results and Discussion
Computational studies of a normal bleed hole in a supersonic freestream were conducted using the
OVERFLOW code to investigate the influence of different bleed simulations and bleed models on the
freestream flow and boundary layer. Three different simulation configurations were used as well as three
different boundary conditions in order to simulate and model bleed. Simulations with a freestream, bleed
hole, and connected plenum region were run first and served as comparison solutions for cases using
boundary conditions to model the bleed. The three boundary conditions investigated (EDMF, Slater
model, and Pressure Outlet) were then compared against the full plenum cases as well as each other to
determine the positive and negative aspects of each boundary condition in terms of modeling bleed
effectively. Effectiveness was determined by the ability of the model to generate results that corresponded
to the full plenum simulations.

The two different methods of applying boundary conditions included a distributed application of the
boundary condition, as well as a discrete application. The results show that in all cases, the discretely
applied boundary conditions were able to produce a freestream and boundary layer flow effect that was
superior to the distributed application method, using the same computational mesh. The discretely applied
boundary conditions were able to generate streamwise vortices and influence the downstream profile of
the boundary layer, whereas the distributed application method failed in both of those aspects. It was seen,
however, that the discretely applied Slater bleed model as well as the pressure outlet boundary condition
at the top of the bleed hole calculated mass flow rates that were approximately 25 percent different from
the full plenum simulation. These cases can be seen in Figures 2 and 3.

The last iteration of the study applied the boundary conditions at the bottom of the bleed hole. In this
case, the EDMF and Slater model boundary conditions resulted in unsteadiness, and their mass flow rates
were 14 and 16 percent different from the full plenum simulation. It is thought that the handling of
turbulence quantities at the boundary is a significant cause of error in the EDMF and Slater models. The
pressure outlet boundary condition applied at the bottom of the hole produced the best results in terms
of its resulting effect on the freestream flow and the mass flow rate that it was able to achieve in
comparison to the full plenum case. The Mach number, turbulence kinetic energy, total pressure, and
streamwise vorticity contours in this case compare extremely closely to those for the full plenum
case. Also, the mass flow rate was within 1 percent of that calculated in the full plenum case. The
pressure outlet case can be seen in Figures 4 and 5.

Future work for this project includes simulations of bleed hole arrays to investigate the interaction of the
bleed holes with each other, as well as an extension of the current work to include an impinging shock
wave. Also, a grid resolution study of the bleed holes would indicate the amount of computational
resources that can be spared in comparison to the accuracy of the simulation that would be lost in the grid
coarsening process. Lastly, modifications to the EDMF and Slater model boundary conditions in
OVERFLOW are planned in order to better handle turbulence quantities at the boundary for those cases.

Figures



Figure 1. Grid version v16. (left) Full 3D simulation domain. (middle) Bleed hole grid
with corresponding grid blocks extending into the freestream and plenum regions. (right)
Top view of the bleed hole grid.


Figure 2. Centerplane Mach number contours. (left) v16 simulation, (center) v13 simulation, Slater
model, (right) v15 simulation, Slater model. o/D = 0.2, p/p_∞ = 0.254.


Figure 3. Streamwise cross-plane streamwise vorticity contours. (left) v16 simulation, (center) v13
simulation, Slater model, (right) v15 simulation, Slater model. o/D = 0.2, p/p_∞ = 0.254.




Figure 4. Centerplane Mach number contours. (left) v16 simulation, (right) v17 simulation, PO.
o/D = 0.2, p/p_∞ = 0.254.
Figure 5. Streamwise cross-plane streamwise vorticity contours. (left) v16 simulation, (right) v17
simulation, PO. o/D = 0.2, p/p_∞ = 0.254.

Acknowledgments
I would like to acknowledge the U. S. Air Force Research Laboratory for partial support of this effort
through the Collaborative Center for Aeronautical Sciences as well as the OSGC.

References
1. Seddon, J., Goldsmith, E.L., Intake Aerodynamics, Blackwell 2nd Ed. 1999.
2. Toro, E. F., Spruce, M., and Speares, W., Restoration of the Contact Surface in the HLL Riemann
Solver, Shock Waves, Vol. 4, 1994 pp. 25-34.
3. Koren, B., Upwind Schemes, Multigrid and Defect Correction for the Steady Navier-Stokes
Equations, Proceedings of the 11th International Conference on Numerical Methods in Fluid
Dynamics, edited by D. L. Dwoyer, M. Y. Hussani, and R. H. Voigt, Springer-Verlag, Berlin, 1989.
4. Slater, J. W., Saunders, J. D., Modeling of Fixed-Exit Porous Bleed Systems, AIAA 2008-0094,
May 2008.
5. Slater, J. W., Improvements in Modeling 90° Bleed Holes for Supersonic Inlets, AIAA 2009-0710,
June 2009.
6. Apyan, A. C., Orkwis, P. D., Turner, M. G., and Duncan, S., Mixed Compression Inlet Simulations
with Aspiration, AIAA 2012-0777.
7. Nichols, R. H., Buning, P. G., User's Manual for OVERFLOW 2.1, Version 2.1t, August 2008.
8. Manavasi, S., Morell, A., and Hamed, A., Investigation of segmented bleed modeling in supersonic
turbulent boundary layer, AIAA 2011-307.
9. Hamed, A., Manavasi, S., Shin, D., Morell, A., and Nelson, C., Effect of Reynolds number on
supersonic flow bleed, AIAA 2010-591.
10. Hamed, A., Li A., Manavasi, S., and Nelson, C., Flow characteristics through porous bleed in
supersonic turbulent boundary layers, AIAA 2009-1260.
11. Hamed, A., and Li, Z., Simulation of Bleed-Hole Rows for Supersonic Turbulent Boundary Layer
Control, AIAA-2008-67.
12. Hamed, A., Lehnig, T., Investigation of Oblique Shock/Boundary-Layer Bleed Interaction, Journal
of Propulsion and Power, Vol. 8, No. 2, 1992.
13. Shih, T., Rimlinger, M. J., and Chyu, W. J., Three-Dimensional Shock-Wave/Boundary-Layer
Interactions with Bleed, AIAA Journal, Vol. 33, No. 10, 1993, pp. 1819-1826.
14. Galbraith, M. C., Orkwis, P. D., and Benek, J. A., Multi-Row Micro-Ramp Actuators for Shock
Wave Boundary-Layer Interaction Control, AIAA 2009-0321.
15. Lapsa, A. P., Experimental Study of Passive Ramps for Control of Shock-Boundary Layer
Interactions, Ph.D. Dissertation, The University of Michigan, 2009.

Stability of the Uncemented Hip Stem in THA

Student Researcher: Benjamin D. Yeh

Advisor: Dr. Timothy Norman

Cedarville University
Elmer W. Engstrom Department of Engineering and Computer Science

Abstract
Total hip arthroplasty (THA) is the surgical procedure of replacing the hip joint with a prosthetic implant.
An important component of this implant is the hip stem; specifically, how it interfaces with the femur and
retains stability over time. One undesirable behavior is relaxation of transverse stresses due to
viscoelastic behavior (bone creep) [1] of bone cement and in the cortical bone itself [2,3]. The effect of
this viscoelastic behavior in uncemented stems under press-fit conditions [4,5] is not well understood.
Studies using two dimensional and axi-symmetric models have shown that the viscoelastic behavior of
the cortical bone diminishes the contact pressure between the stem and bone but does not jeopardize the
stem stability [4]; however, reducing the stem-bone contact area (less than 100% contact area) does
reduce stem stability [6]. Both un-cemented conditions have been modeled with axi-symmetric FEA
models [5,6], and the cemented configuration with a full 3-D model of the femur [1,2]. I will be
continuing work with Dr. Timothy Norman, developing a full 3-D model of an uncemented stem and
performing analysis for multiple conditions. These conditions will include using both fixed and rough
surface interface boundary conditions, non-press fit and press fit interface conditions, as well as 100%
contact and <100% contact area conditions. This will allow us to investigate the effects of both bone
viscoelastic behavior and stem-bone contact area on stem stability in uncemented stems.

Project Objectives
In THA, an important component is the hip stem. This component interfaces with the femur and provides
support and stability. The two primary methods of fixing the stem into the femur are using a narrow
stem with a bone cement mantle or using a wider stem with no bone cement. The objective of either
method is to minimize the amount of post-operative movement, primarily distal subsidence, in the stem,
which can lead to clinical failure (i.e., loosening, pain, or instability of the implant). Distal subsidence, or
downward movement, of the hip stem in the femoral canal is a result of elastic deformation in the bone as
a response to loading. Additionally, because bone is a viscoelastic material, it tends to creep when
subjected to continuous or repetitive loading. This transverse creep can cause expansion of the femoral
canal, resulting in additional subsidence. This subsidence can lead to loosening and instability, resulting
in clinical failure.

This project is focused on examining the stresses and displacements in the environment of an uncemented
hip stem in THA. The primary objective is to investigate the stability of the stem over time, and the
effects that press-fit and viscoelastic behavior play in developing the stress state. Additionally, a parallel
objective is to apply press-fit conditions to the stem by modeling the bone canal with a smaller diameter
than the distal portion of the hip stem. The additional radial stresses introduced at this bone/stem interface
may contribute to stem stability. This objective was selected because it more closely models the in vivo
condition, as the surgical technique of inserting uncemented stems involves reaming the bone canal
smaller than the implant and pressing the implant into place. Finally, a third objective of this project is to
implement viscoelastic material behavior for the cortical bone in the model. This will introduce the
additional settling of the stem over time, as the bone will relax around the press fit and the stresses will
likely decrease.
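
As a rough illustration of the settling behavior described above, bone viscoelasticity is commonly
approximated with a Prony series; the single-term sketch below shows how a press-fit radial stress would
relax over time. The modulus ratio and time constant are illustrative assumptions, not fitted cortical bone
properties:

    import numpy as np

    def relaxed_stress(sigma_0, t, g_inf=0.8, tau=3.0e5):
        # Single-term Prony series: sigma(t) = sigma_0 * (g_inf + (1 - g_inf) * exp(-t/tau)).
        # g_inf and tau (seconds) are placeholder values, not measured bone data.
        return sigma_0 * (g_inf + (1.0 - g_inf) * np.exp(-t / tau))

    t = np.linspace(0.0, 30 * 24 * 3600.0, 5)   # stress checked over one month
    print(relaxed_stress(10.0, t))              # initial radial stress of 10 MPa (assumed)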

Methodology Used
Models of the stem and femur were created in Solidworks. The stem (Figure 1) was modeled using the
Depuy AML stem as a reference for dimensions. Two separate implant assembly models were
developed. The first used a homogeneous femur, with no differentiation between the cortical and
cancellous regions. The femur model for the second assembly was obtained from the BEL repository.
This femur was partitioned into unique regions for the cortical and cancellous bone. Both assembled
models were developed following the Depuy surgical technique manual. The second model is shown in
Figure 2 with a closer view of the bone regions shown in Figure 3.

Both the meshing and the finite element analysis were performed using the software package ABAQUS.
Both models utilized 3D solid tetrahedral elements (Figure 4). The cancellous bone was assumed to be
completely isotropic and fully elastic. This assumption was made because it simplified the analysis and
because the cancellous bone provides little support to the hip stem. The cortical bone was assumed to be
transversely isotropic and viscoelastic. The standard loading on a hip stem is applied through the centroid
of the ball joint at the proximal end of the stem. This location is very nearly the center of the proximal
face of the stem, so the loading was applied as a point load at this location. The bone-stem interface was
modeled as completely fixed for the homogeneous femur model and as a rough interface in the split
region model. In both cases, the femur was fixed at the distal end (constrained from both translation and
rotation).
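
For reference, a transversely isotropic elastic material is defined by five independent constants. The
sketch below assembles the compliance matrix from engineering constants and inverts it to the stiffness a
finite element solver would use; the numerical values are typical literature figures for cortical bone,
included only as placeholders for the properties actually assigned in the model:

    import numpy as np

    # Placeholder engineering constants (GPa); axis 3 is the long axis of the bone.
    E_p, E_a = 11.5, 17.0     # transverse (plane of isotropy) and axial moduli
    nu_p, nu_ap = 0.51, 0.31  # in-plane and axial-transverse Poisson ratios
    G_a = 3.3                 # axial shear modulus

    S = np.zeros((6, 6))                    # compliance matrix in Voigt notation
    S[0, 0] = S[1, 1] = 1.0 / E_p
    S[2, 2] = 1.0 / E_a
    S[0, 1] = S[1, 0] = -nu_p / E_p
    S[0, 2] = S[2, 0] = S[1, 2] = S[2, 1] = -nu_ap / E_a
    S[3, 3] = S[4, 4] = 1.0 / G_a
    S[5, 5] = 2.0 * (1.0 + nu_p) / E_p      # in-plane shear follows from E_p, nu_p

    C = np.linalg.inv(S)                    # stiffness matrix used by the FE solver
    print(np.round(C, 2))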

Results Obtained
At the time of this writing, only the model with the homogeneous femur has been solved completely. In
this model, subsidence was minimal at the distal end of the stem. The primary displacement occurred at
the proximal end of the stem in the medial and distal directions. The maximum von Mises stresses in the
stem were along the medial surface. The contribution of creep to transverse displacement was minimal.
The largest transverse stress in the bone occurred at the distal end of the stem. The split region model
solves with the load applied, but has not yet run successfully with either a press fit or viscoelastic
material behavior.

Significance and Interpretation of Results
The distal subsidence was insignificant, contributing < 0.1% of the overall subsidence along the medial
edge of the femoral canal. This was expected due to the fixed stem-bone interface as well as the collar on
the stem. Additionally, because the stem and bone canal were an exact fit, there was no press fit along the
stem. Without the press fit, the high radial stresses that cause the bone creep were not present. The small
effect of the viscoelastic behavior in this analysis is caused by bending in the stem, which created the
transverse stresses, the maximum being at the distal end of the stem.

Figures


Figure 1. Solidworks model of
the Depuy AML hip stem.
Figure 2. Solidworks model
of the hip stem inserted in
the femur.
Figure 3. Solidworks model showing
cortical and cancellous bone regions.


Figure 4. Meshed ABAQUS models of the femur, hip stem, and assembled
model. The femur shown here was obtained from BEL repository.
Figure 5. Von Mises stress results
for the homogeneous femur
model.

Works Cited
1. Norman, T. L., Thyagarajan, G., Saligrama, V. C., Gruen, T.A., Blaha, J. D. Stem surface
roughness alters creep induced subsidence and taper-lock in a cemented femoral hip prosthesis,
Journal of Biomechanics, Vol. 34, pp. 1325-1333, 2001.
2. Norman, T. L., Noble, G., Shultz, T., Gruen, T. A., and Blaha, J. D. Creep of cortical bone and
short term subsidence after cemented stem THA
3. Brown, C. U., Norman, T. L., Kish, V. L., Gruen, T. A., Blaha, J. D. Time-dependent
circumferential deformation of cortical bone upon internal radial loading, J. Biomechanical
Engineering, Vol. 124, pp. 456-461, 2002
4. Norman, T. L., Ackerman, E., Kish, V. L., Smith, T., Gruen, T. A., Yates, A.J., Blaha, J.D.,
Cortical bone viscoelasticity has diminishing effects on contact pressure in press-fit stems but
does not jeopardize implant stability 50th Annual Meeting, Orthopedic Research Society, San
Francisco, CA, March 7-10, 2004, pg. 505.
5. Shultz, T. R., Blaha, J. D., Gruen, T. A., Norman, T. L., Cortical bone viscoelasticity and fixation
strength of press-fit femoral stems: A Finite Element Model, J. Biomechanical Engineering, Vol.
128, pp. 7-12, 2006
6. Norman, T. L., Todd, M. B., SanGregory, S. L., Dewhurst, T. B., Partial stem-bone contact area
significantly reduces stem stability. Accepted to the 52nd Annual Meeting, Orthopaedic Research
Society, Mar. 19-22, 2006, pg. 680.
Applications of Ellipses

Student Researcher: Caitlin M. Zook

Advisor: Dr. Sandra Schroeder

Ohio Northern University
Mathematics Department

Abstract
The objective of my lesson is for students to see the applications of ellipses in planetary motion. Starting
with an introduction of conic sections, this lesson looks at ellipses in particular. Students will be
presented with the basic concepts of an ellipse, including the definition, equations, and the associated
vocabulary. Students will also look at how to construct an ellipse. The lesson will be loosely based
around an activity I found on NASA's website called the Elliptical Orbit Activity.

Once students are familiar with ellipses, the applications can be extended to Kepler's three laws. The
background and history concerning Kepler's discoveries can be included here to reflect the impact that
Kepler's and others' discoveries had on the accepted view of science at that time. An emphasis will be
placed on the first law, which says that the planets orbit in an elliptical path with the Sun at one of the foci.
Using Kepler's second law, students will look at Earth's year and analyze how fast the Earth is traveling
at any point on its orbit. Students will also be introduced to Kepler's third law, which deals with the
relationship between the distance of the planets from the Sun and their orbital periods.

Learning Objectives
The student will know the definition and vocabulary associated with an ellipse.
The student will be able to construct an ellipse.
The student will understand Kepler's three laws.

Alignment with Common Core Standards
High School Geometry:
G-GPE.3 (+) Derive the equations of ellipses given the foci, using the fact that the sum or difference of
distances from the foci is constant.
G-MG.1 Use geometric shapes, their measures, and their properties to describe objects.
G-MG.3 Apply geometric methods to solve design problems.

Methodology
Background: Students will have just been introduced to conic sections, and this lesson will look at
ellipses more in depth.

Day 1: Students will be presented with the basic concepts of an ellipse.
Definition: An ellipse is defined by any two points F₁ and F₂ (foci) in the plane and any real number r as
the set of points P with the property that the sum of the distances from P to F₁ and F₂ is r.

Equation: (x − x₀)²/a² + (y − y₀)²/b² = 1, where a is the semi-major axis, b is the semi-minor axis
(assuming b < a), and (x₀, y₀) is the center of the ellipse.

Vocabulary: major/minor axis, semi-major/minor axis, foci, eccentricity

Students will construct an ellipse: An ellipse can be constructed using two push pins, a length of string,
and a pencil. Push the pins into the paper at two points; these will be the foci of the ellipse. Then tie each
end of the string to one of the pins. Use the pencil to pull the string taut, making a triangle. As you
move the pencil around, keeping the string taut, the pencil's tip will trace out an ellipse.
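
The construction translates directly into coordinates: with a string of length r tied between foci a distance
2c apart, the semi-major axis is a = r/2 and the semi-minor axis is b = √(a² − c²). A short sketch a teacher
could use to check student constructions (the numbers are examples, and Python with matplotlib is just
one possible classroom tool):

    import numpy as np
    import matplotlib.pyplot as plt

    r, c = 10.0, 3.0           # string length and half the distance between the pins
    a = r / 2.0                # semi-major axis: the sum of focal distances is r = 2a
    b = np.sqrt(a**2 - c**2)   # semi-minor axis from the Pythagorean relation

    t = np.linspace(0.0, 2.0 * np.pi, 200)
    plt.plot(a * np.cos(t), b * np.sin(t))   # the traced ellipse
    plt.plot([-c, c], [0.0, 0.0], "ro")      # the two push pins (foci)
    plt.gca().set_aspect("equal")
    plt.title("Ellipse from the pin-and-string construction")
    plt.show()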

Students will be given their own problem to solve relating to the gardener's ellipse. (Traditionally,
gardeners use this method to outline an elliptical flower bed using two posts and rope, so it is called a
gardener's ellipse.) Given different dimensions, students will construct and design their own elliptical
flower bed to meet the set requirements.

Day 2: Once students are familiar with ellipses, the applications can be extended to Kepler's three laws.
The background and history concerning Kepler's discoveries can be included here to reflect the impact
that Kepler's and others' discoveries had on the accepted view of science at that time.

Kepler's First Law: The planets orbit in an elliptical path with the Sun at one of the foci.
o Emphasis of the lesson can be placed on the first law.
o Students will diagram orbits with polar coordinates and look at the perihelion and the aphelion (the
polar form given below can guide this).
o Students will also compare the eccentricities of the orbits of each planet in our solar system, and
also look at the eccentricities of other bodies in the universe, such as comets. (This information
can be found on a solar system data chart.)
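
For the diagramming activity, the polar form of the first law with the Sun at the origin (a standard result,
stated here for reference) is r(θ) = a(1 − e²)/(1 + e cos θ); it gives the perihelion distance a(1 − e) at
θ = 0 and the aphelion distance a(1 + e) at θ = 180°. Earth's eccentricity is only about 0.017, so its
distance from the Sun varies by roughly 3 percent over a year, while comets can have eccentricities
near 1.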

Kepler's Second Law: A line joining a planet and the Sun sweeps out equal areas during equal intervals
of time.
o Students will look at Earth's year and analyze how fast the Earth is traveling at any point on its
orbit. Then they will explain why the Earth spends less time in the winter half of its orbit than in
the summer half (a short calculation sketch follows).
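
One way to quantify this is the vis-viva equation, v² = GM(2/r − 1/a), a standard orbital-mechanics
result. The sketch below evaluates Earth's speed at perihelion and aphelion (Python here is simply one
possible classroom tool):

    import math

    GM = 1.32712440018e20   # m^3/s^2, gravitational parameter of the Sun
    a = 1.495978707e11      # m, Earth's semi-major axis (1 AU)
    e = 0.0167              # Earth's orbital eccentricity

    def speed(r):
        # Vis-viva equation: v^2 = GM * (2/r - 1/a)
        return math.sqrt(GM * (2.0 / r - 1.0 / a))

    print(f"perihelion: {speed(a * (1 - e)) / 1000.0:.2f} km/s")  # about 30.3 km/s
    print(f"aphelion:   {speed(a * (1 + e)) / 1000.0:.2f} km/s")  # about 29.3 km/s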

Kepler's Third Law: The square of the orbital period of a planet is directly proportional to the cube of
the semi-major axis of its orbit.
o The formula for Kepler's third law can be expressed as (T₁/T₂)² = (A₁/A₂)³, with two planets
having orbital periods of T₁ and T₂ and semi-major axes of A₁ and A₂, respectively (a worked
example follows).
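
Measuring periods in years and semi-major axes in astronomical units makes the constant of
proportionality equal to 1 for planets orbiting the Sun, which gives a quick worked example students can
verify:

    # Kepler's third law with T in years and A in AU: (T1/T2)^2 = (A1/A2)^3.
    T_earth, A_earth = 1.0, 1.0
    A_mars = 1.524                                     # AU, semi-major axis of Mars
    T_mars = T_earth * (A_mars / A_earth) ** 1.5
    print(f"Mars orbital period: {T_mars:.2f} years")  # about 1.88 years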

Closure: Students will write a brief summary including Kepler's three laws and give an explanation of
each.

Learning Theory/Student Engagement
This lesson is very student centered. The teacher only lectures at the beginning of the first day as the
definitions and vocabulary are introduced. The teacher's role then moves to that of facilitator, as students
work in small groups on the activity. Students will be engaged in a hands-on activity during Day 1 of the
lesson when they are constructing their own ellipses. During Day 2, students will be asked to analyze data
as they take a closer look at Kepler's three laws.

Resources/Materials
Graph paper
Push pins (2 per student)
String (for each student)

Assessment
Students may turn in the ellipse they constructed for the gardener's ellipse problem. Students need to
construct an ellipse to meet the required constraints for the flower bed.

The summary collected from students can give evidence as to how much each student understands of
Kepler's laws.

Acknowledgments/References
I would like to thank my advisor, Dr. Schroeder, for her help on this project, for bringing this
opportunity to my attention, and for guiding me in the direction I wanted to take the project.
