
Historic Developments in the Evolution of Earthquake Engineering

Adapted from the 2000 CUREE Calendar illustrated essays by Robert Reitherman. © 1999 CUREE. All rights reserved.

Consortium of Universities for Research in Earthquake Engineering

CUREE

1301 S. 46th Street, Richmond, CA 94804-4698 tel: 510-231-9557 fax: 510-231-5664 http://www.curee.org



This year's CUREe Calendar features graphic layouts and brief descriptions of twelve historic developments in the evolution of earthquake engineering. The reader should note that no attempt has been made to identify the twelve historic, or most historic, developments in this field. Our thesis is rather that the events selected here are indeed historic, influential, and instructive, while there are other equally worthy items that could be added to the list. The 1998 CUREe Calendar described other important developments: 1893 Shaking Table Experiments by Fusakichi Omori and John Milne and the Development of Modern Simulators; 1908 The Elastic Rebound Theory; 1911 The Bulletin of the Seismological Society of America; 1926 The Suyehiro Vibration Analyzer; 1932 John R. Freeman and the Strong Motion Accelerograph; 1933 The Field and Riley Acts; 1934 John A. Blume's Thesis and the Use of Computations, Field Measurements, and Model Testing to Predict Response; 1935 The Magnitude Scale; 1941 Calculation of the Earthquake Response Spectrum; 1956 the First World Conference on Earthquake Engineering; 1970 Edward L. Wilson's Development of SAP and the Growth of Modern Computer Programs for Structural Analysis; 1975 Seismic Isolation Design and Technology in New Zealand.

The events selected here have at least four things in common. First, they represent fundamental advances, basic steps forward rather than refinements or derivations of underlying ideas. Now drawing to a close is the century in which earthquake engineering largely originated and, in the latter half, flowered and matured, and it is important to recognize that our contemporary techniques and concepts, which take a bewildering variety of sophisticated forms, are the offspring of previous fundamental events all too easy to forget.


Another characteristic shared by the events chosen here is that their progeny have multiplied and thrived. Rather than being interesting footnotes in the history of earthquake engineering, miscellaneous trivia, or evolutionary dead-ends, the events presented here were not only breakthroughs in their own time, but have had an influence that continues to the present day.

Thirdly, an attempt has been made, although limited in scope by setting the arbitrary limit of a dozen to match the months of the year, to include developments from different countries and disciplines. Ideas for extending this recognition readily come to mind, but space prevents their inclusion.

Finally, attempting to include events of very recent history, say within the last twenty years, would be as prone to error as it is unneeded: a wide historical field of view requires a station point a considerable distance away from the subject, and in many different places the works of the present are already given awards as the now comfortably established field of earthquake engineering honors the current year's best papers and just-finished building projects at a steadily increasing pace. These pages are assigned a different purpose: to recognize the contributions made in earlier years, giving credit where credit is due and perhaps providing some insights to the current generation of earthquake engineers as to where their knowledge came from.

Special thanks to: the National Information Service for Earthquake Engineering for library and photographic services.

The Campanile or tower of Pisa was indeed leaning dramatically at the time Galileo (1564-1642) was a student and later a professor at the university in that city, where he had been born, but apparently it is a false story that he ever dropped objects off the Campanile to test theories of gravity. Instead, Galileo realized that, given crude measuring techniques, more precise experiments could be devised by using inclined planes to slow down the action of falling objects. Galileo found that gravity caused the velocity to vary proportionally with the time of the fall, not the distance (De Motu Gravium, 1590). His reliance on the experiment (which he called the cimento, or ordeal, by which statements about fact must be tested) and his actual conduct of numerous experiments extends his fame as a pioneer in all branches of science. His astronomical observations that supported the Copernican theory led him to state, "What marvelous consequences arise from my observations!" (Rene Taton, ed., The Beginning of Modern Science, New York, Basic Books, 1964, p. 276). Publishing his views in Dialogue on the Two Main Systems of the World (1632), which contradicted the position of the Catholic Church at that time, also brought severe consequences, including the banning of the book and the house arrest of Galileo for the last eight years of his life. His Discourse on the Two New Sciences (1638) deals with structural mechanics rather than astronomy, but had to be secretly published in Holland. It is another building in Pisa, the Duomo or cathedral, that figures prominently in the story of an important discovery of Galileo's, one that is central to earthquake engineering to this day. The principle involved is the period of vibration. When the candelabra in the cathedral was set in motion as the candles were lit, he used his pulse, literally a wrist watch, to time the motions.
Noting that the period of time for one complete to-and-fro oscillation stayed the same as the motion died out, he began to analyze this important dynamic phenomenon and could predict it mathematically.

Cathedral (left) and Campanile (right) in Pisa

Called fundamental or natural because the period of vibration is characteristic of a given structure and does not vary unless there is a change in either mass or stiffness, this trait more than any other determines how much a structure will respond to earthquake shaking. In simple terms, a structure is like an inverted pendulum, cantilevering up from the ground and loaded horizontally. Higher mode response, inelastic behavior, and the fact that real buildings or other structures are not perfectly symmetrical and have different vibration characteristics on different axes are engineering realities that complicate the application of the basic principles of physics involved, not to mention the fact that the motion of the ground in earthquakes is also complex rather than a single impulse or harmonic series of vibrations. Galileo conducted experiments in dynamics by obtaining scratches on brass recording plates of the vibrations of strings, finding that higher notes corresponded to closer-spaced lines, or higher frequency motion. The concept of frequency would prove to be basic to all problems of dynamics involving vibrations. (Robert Bruce Lindsay, "Historical Introduction," in J.W.S. Rayleigh, The Theory of Sound, New York, Dover, vol. I, 1945, p. xiii). Daniel Bernoulli discovered the principle of superposition in 1755 with regard to simultaneously occurring sound amplitudes, also of relevance to earthquake engineering when applied to amplitudes of earthquake vibrations and to the vibrations they cause in structures. Perhaps the culmination of the role played by the development of the dynamics of sound in laying the groundwork for analysis of other vibration phenomena such as earthquakes is Lord Rayleigh's Theory of Sound, in which he mathematically carves the subject of "the lateral vibrations of thin elastic rods, which in their natural condition are straight" into neat mathematical equations.
Earthquake engineers of the twentieth century still use the Rayleigh method to calculate the period of vibration of a structure.
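The relationships described above reduce to simple formulas: a pendulum's period depends only on its length, and a structure idealized as a single-degree-of-freedom inverted pendulum has a period set only by its mass and stiffness. A minimal sketch in Python (the numeric values are hypothetical, chosen only for illustration):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-amplitude period of a swinging pendulum, such as the
    cathedral candelabra Galileo timed: T = 2*pi*sqrt(L/g).
    Notably, it is independent of the amplitude of the swing."""
    return 2.0 * math.pi * math.sqrt(length_m / g)

def natural_period(mass_kg, stiffness_n_per_m):
    """Fundamental period of a structure idealized as a single-degree-
    of-freedom oscillator: T = 2*pi*sqrt(m/k). It changes only if the
    mass or the stiffness changes."""
    return 2.0 * math.pi * math.sqrt(mass_kg / stiffness_n_per_m)

# Hypothetical building: 200 tonnes of mass, 20 MN/m of lateral stiffness
print(round(natural_period(200_000, 20e6), 2))   # 0.63 (seconds)
```

Rayleigh's method, mentioned below, gives the same kind of period estimate for real structures by equating maximum kinetic and strain energy over an assumed deflected shape.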

1583: Galileo Galilei and the Period of Vibration


Along with Newton, Galileo is usually cited as a founder of physics and modern science as a whole. Among the many giants on whose shoulders Newton stood, Galileo was the most important, having the independence of mind to depart from the powerful speculations that had ruled the preceding centuries concerning forces on structural members, gravity, inertia, and other essential engineering concepts, and he did so mathematically and experimentally, rather than arguing from philosophical principles. The unit of acceleration used in seismology, the gal (one centimeter per second per second), is named after him.

CUREe

2000

CALIFORNIA UNIVERSITIES for RESEARCH in EARTHQUAKE ENGINEERING



If the earthquake engineering field wishes to claim Isaac Newton (1642-1727) as one of its pioneers, it will have to wait in line. The history of science notes him for many things: calculus; development of optics and the invention of the reflecting telescope; the laws of motion, conservation of momentum, and conservation of angular momentum; and especially his theory of gravity, which allowed him to mathematically predict the orbits of planets. Actually, he also needed another theory for that purpose, namely the theory of inertia, which he also developed (though with regard to both gravity and inertia, Galileo was an important forerunner and Albert Einstein an important successor). Aristotle postulated that a single force imparted to an object, if not resisted by something, would make that object have an innate tendency to continually accelerate: "by so much as the medium is more incorporeal and less resistant and more easily divided, the faster will be the movement...if a thing moves through the thinnest medium such and such a distance in such and such a time, it moves through the void with a speed beyond any ratio," or at an absurdly infinite rate. (Aristotle, Physics, in Jonathan Barnes, ed., The Complete Works of Aristotle: The Revised Oxford Translation, Princeton University Press, Princeton, NJ, 1984, vol. I, p. 366, Bekker # 215b 11ff) Aristotle abhorred a vacuum, though it turned out that nature did not, as we now know that most of the universe is empty space. Today, thanks to Newton, we think it obvious that in the absence of any other force, an object in motion will keep going in that direction and at that speed because of inertia, and we also know that because of inertia an object at rest will remain there quite happily until a force acts upon it. If Newton's theories seem obvious today, it is only because he had the independence and brilliance of mind to establish them so convincingly.

Definition IV: "An impressed force is an action exerted upon a body, in order to change its state, either of rest, or of uniform motion in a right line." Axioms, or Laws of Motion, Law I: "Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it." Law II: "The change of motion is proportional to the motive force impressed; and is made in the direction of the right line in which that force is impressed." (Sir Isaac Newton's Mathematical Principles of Natural Philosophy and His System of the World, Andrew Motte and Florian Cajori, translators, University of California Press, Berkeley, California, 1934)

Sir Isaac Newton (1642-1727)

When engineers first began to calculate earthquake loading by applying an acceleration to the building's mass, the product being the inertial force, the central required concept had been waiting for two hundred years: F = ma, the inertial force equals the mass times the acceleration. However, this only applies to completely rigid bodies, which real structures aren't. Other complexities encountered by earthquake engineers are the interaction of the period of vibration and a series of accelerations in an earthquake; damping; and inelastic behavior. In essence, however, dynamic computations are all aimed at the final step of multiplying the acceleration that effectively acts on the structure or component by the mass thereof to give the inertial force that must be resisted. Less familiar to engineers than his three laws of motion but just as important to the history of science are his four Rules of Reasoning in the Principia, which, along with the great practical success of Newtonian physics in numerous applications in his own day and from then on, have done more than anything to establish modern science. Rule I: "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances...more is in vain when less will serve." Rule II: "...to the same natural effects we must...assign the same causes." Rule III: "The qualities of bodies...which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever." Rule IV: "In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypotheses that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions."
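The final step described above, multiplying an effective acceleration by the mass to obtain the inertial force, is a one-line calculation. A sketch with hypothetical numbers (a rigid-body idealization that, as noted, ignores period, damping, and inelastic effects):

```python
def inertial_force(mass_kg, acceleration_m_s2):
    """Newton's second law, F = m * a, applied as early earthquake
    engineers did: the acceleration effectively acting on the mass
    gives the lateral inertial force the structure must resist."""
    return mass_kg * acceleration_m_s2

# Hypothetical: a 100-tonne floor subjected to 0.3 g effective acceleration
g = 9.81  # m/s^2
force_n = inertial_force(100_000, 0.3 * g)
print(round(force_n / 1000, 1))   # lateral force in kN, roughly 294
```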

1687: Isaac Newton's Principia and the Theory of Inertia


When engineers calculate the forces that are developed within a structure as its base is shaken by an earthquake, their analyses are directly derived from the theory of inertia as explained by Isaac Newton. Indeed, the underlying concepts explaining the action of all forces acting on objects or structures were first clearly formulated in Newton's mind. In the Systeme International, the newton is the basic unit of force, which is defined in terms of inertia: it is the force necessary to accelerate a mass of one kilogram one meter per second per second.




Robert Mallet (1810-1881) was not the first to conduct fieldwork to study an earthquake, nor the first to conduct such an investigation for the purpose of trying to deduce the source of the earthquake and to draw an intensity map. Jared Brooks, for example, invented an intensity scale and applied it to accounts in different locales of one of the large New Madrid earthquakes in 1811. However, in the opinion of experts such as John Freeman (Earthquake Damage, McGraw-Hill, New York, 1932, p. 35), Bruce Bolt (Earthquakes: A Primer, W. H. Freeman and Co., San Francisco, 1978, p. 99), and Karl Steinbrugge (Earthquakes, Volcanoes, and Tsunamis: An Anatomy of Hazards, Skandia America Group, New York, 1982, p. 2), Mallet was the first to scientifically conduct such a study and greatly advance the field. In that light, the subtitle of his work on the Great Neapolitan Earthquake of 1857, The First Principles of Observational Seismology, takes on a double meaning. The book was in the form of a report to the Royal Society of London, which provided him with a stipend for his travel, and it was published in two beautifully illustrated volumes (Chapman and Hall, London, 1862). What makes his accomplishment more impressive for anyone who has participated in such an effort in recent times is that today's reconnaissance or post-earthquake study is assisted by readily available accelerograms and seismological data, cameras both still and video, e-mail or fax communications, computers for compiling and editing the resulting publication, and the existence of a few meters of previous reports on the bookshelf to serve as models.

Diagram from Great Neapolitan Earthquake

The 1857 Neapolitan Earthquake is named after the kingdom where it occurred (Italy not yet being united as a single country). The city of Naples was not significantly affected. Mallet accumulated the most accurate damage statistics up to that time (and perhaps equal even to those compiled after some earthquakes of the past few years), noting the number of buildings damaged or collapsed, and the population and the number of casualties either dead or wounded, for 49 cities and towns. He calculated a casualty ratio for each locale, finding for the most strongly shaken region (the meizoseismal area, a still useful term, and much better than the usually misused "epicentral area") that almost 5% of the inhabitants perished in the collapse of stone dwellings. (vol. II, p. 164) Being a civil engineer, Mallet was able to observe the character of construction of each building he visited, accurately describe the types of failures that had occurred, and propose explanations based on calculations. To Mallet, the buildings were seismometers that could reveal more about the distribution and origin of the vibrations than had ever been deduced for an earthquake. "The method of seismic investigation in this Report developed, indicates, that it is now in our power, more or less, to trace back, wherever we can find buildings or ruins, fissured or overthrown by ancient earthquakes, the seismic centres from which these have emanated, in epochs even lost perhaps to history." (vol. II, p. 384) He was particularly interested in trying to find out the ground velocity, as indicated by effects on buildings and objects, and he meticulously diagrammed and performed calculations of dynamics and statics on a number of cases.
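Mallet's casualty ratio is simple arithmetic; the innovation was tabulating it systematically for 49 locales. A sketch of the computation (the town figures below are invented for illustration and are not Mallet's data):

```python
def casualty_ratio(dead, wounded, population):
    """Mallet-style casualty ratio: the fraction of a locale's
    inhabitants either killed or wounded in the earthquake."""
    return (dead + wounded) / population

# Hypothetical town in the meizoseismal area: 5,000 inhabitants
ratio = casualty_ratio(230, 120, 5000)
print(f"{ratio:.1%}")   # 7.0%
```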
Mallet was also the author of an extremely lengthy and thorough catalogue of earthquakes from ancient times to the present, The Earthquake Catalog of the British Association (1858), which was the first great advance in this important line of work over previous annals such as Chinese accounts extending back to the eighth century BC. Mallet also conducted research on artillery, and by coincidence the next person to conduct a post-earthquake study as thorough as Mallet's, Clarence Dutton (The Charleston Earthquake of August 31, 1886, U.S. Geological Survey, reprinted 1979), was also an engineer with a background in that field. Dutton also continued the multi-disciplinary trend: like Mallet, Dutton placed as much weight on the seismological aspects of the earthquake as its engineering ramifications and also recorded the impact on society.

Mallet's Isoseismals of the Neapolitan Earthquake of 1857

1857: Robert Mallet's Study of the Neapolitan Earthquake


The fieldwork conducted by Robert Mallet following the December 16, 1857 earthquake, which killed approximately 10,000 people, was the first comprehensive post-earthquake investigation along scientific lines. Today, we take it for granted that researchers will survey the region affected by an earthquake using the means at their disposal (which now include records from seismographs and strong motion seismographs) to determine the source of the earthquake and the distribution of the shaking, and also that it is necessary to produce a chronicle of the earthquake's damage and other effects on society.




The 1908 Italian earthquake is named Reggio-Messina after the large cities on the mainland and Sicilian sides of the Strait of Messina, respectively, both of which were severely damaged, along with numerous smaller towns. Messina had previously experienced earthquakes in 1783 (also a very destructive one) and in 1894, 1896, and 1905. Not surprisingly, the northeast corner of Sicily and adjacent areas of the mainland appear as the highest zone of seismicity in current Italian seismic code provisions. It is one thing to qualitatively or theoretically understand how the motion of the earth imparts forces into a building. It is much more difficult to devise a quantitative method for calculating those forces that takes into account the relevant engineering variables at work in the response of real buildings, and in that effort, the work of Italian engineers and mathematicians in the early twentieth century deserves special mention. The work done after the 1908 Reggio-Messina Earthquake may be conveniently cited as the origin of the equivalent static lateral force method, in which a seismic coefficient is applied to the mass of the structure, or various coefficients at different levels, to produce the lateral force that is approximately equivalent in effect to the dynamic loading of the expected earthquake. Subsequent development also occurred in Japan in the teens and twenties by Tachu Naito, Riki Sano, Kyoji Suyehiro, and others. Substantial refinements of the method beyond these Italian and Japanese precedents came only much later in the USA. While not the most advanced analytical technique (that category today is the computer-aided dynamic analysis method that includes inelastic effects), the equivalent static lateral force analysis method has been and will remain for some considerable time the most often used lateral force analysis method.
John Freeman, in Earthquake Damage (McGraw-Hill, New York, 1932), emphasizes the importance of the engineering response to this earthquake in the development of analytical methods. A committee of nine practicing engineers and five engineering professors was empanelled by the Minister of Public Works and charged "to find a type of earthquake-resisting structure which can be easily erected and will not be beyond the financial reach of the injured population." (p. 566) The committee reported on the damage patterns from the earthquake in terms of specific construction features, reviewed earlier Italian earthquakes, and searched for all possible treatises relevant to the theory and practice of earthquake resistant design, both Italian and foreign. The committee studied buildings that performed well, and by estimating the ground shaking and forces to which they were subjected, inferred seismic coefficients that would have been adequate for their design. It is interesting that they found the ground story of a two-story building should be designed to resist 1/12 its tributary weight, while the coefficient to be applied to the second story should be 1/8. George Housner has noted that this variation in force coefficients up the height of the building in the Italians' formulas may have embodied early insights into dynamics. (Opening Remarks, Northridge Earthquake Research Conference, Los Angeles, California, August 20-22, 1997). These two fractions are also interesting since subsequently in Japan the figure selected, 1/10, was in between. (California was to import the Japanese 1/10 factor, include a 1/3 increase, and arrive at the long familiar 13.3% seismic coefficient in the Uniform Building Code.)

Damage in Messina, Italy. Source: John R. Freeman, Earthquake Damage
When Freeman wrote his book in 1932, he noted that "several engineering textbooks in the Italian language are found which pay much more attention to analyzing and computing earthquake stress upon structural members than is found at the present time in college textbooks upon structures in the English language." (p. 564)
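The equivalent static lateral force method described above reduces each story's earthquake load to a coefficient times a weight. A sketch using the Italian committee's two-story coefficients quoted above (the story weights are hypothetical, chosen only for illustration):

```python
def story_lateral_force(tributary_weight_kn, seismic_coefficient):
    """Equivalent static lateral force: F = C * W, a seismic
    coefficient applied to the story's tributary weight."""
    return seismic_coefficient * tributary_weight_kn

# Hypothetical two-story building, tributary weights in kN
ground_story = story_lateral_force(1200.0, 1 / 12)   # Italian committee: 1/12
second_story = story_lateral_force(800.0, 1 / 8)     # Italian committee: 1/8
ubc_style    = story_lateral_force(2000.0, 0.133)    # later UBC 13.3% coefficient
print(round(ground_story), round(second_story), round(ubc_style))
```

The larger coefficient at the upper story reflects the early insight into dynamics Housner noted: accelerations tend to increase up the height of a building.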

1908: Reggio-Messina, Italy Earthquake and the Development of the Equivalent Static Lateral Force Analysis Method
Following the earthquake of December 28, 1908 in Sicily, which caused 120,000 fatalities, a committee of nine practicing engineers and five engineering professors was appointed by the Italian government to devise a seismic building code, which was to be a historic first in the application of acceleration-related factors, or seismic coefficients, in lateral force analysis.




The story of the development of plate tectonics is one of the classic tales in the history of science. Because the essence of the theory is so easily explained (the earth's crust has not remained one unbroken shell but instead is fragmented, and the fragments slowly move), few people other than geologists realize how many pieces of evidence and associated steps in logic were painstakingly assembled to make a convincing theory. In one sense, the contemporaries of Alfred Wegener (1880-1930) were justified in not adopting his "continental displacement" theory, since at that time he had relatively little evidence to support it and an insufficiently developed hypothesis that could not explain the global-scale energy that drove the process. However, it is obvious in retrospect that those who argued against his concept had an excessively strong belief in their own inadequately based theories, the most popular of which was that mountain building and continents, earthquakes, and the existence and location of seas were all caused by isostasy, the simple rising or subsiding of geologic materials of different density. Wegener, by contrast, hypothesized that horizontal rather than vertical movements of pieces of the upper layer of the earth's lithosphere accounted for how the globe looks today and how it was configured in previous times. In 1912, Wegener developed, and in 1915 published, The Origin of Continents and Oceans, making him the individual most appropriately cited as the founder of modern plate tectonics theory. By primary profession a meteorologist knowledgeable about paleoclimatology, Wegener was the first to begin to accumulate a variety of suggestive evidence about plate tectonics before his death at 50 on a Greenland expedition to study the jet stream. Wegener built up an impressive number of findings, but it was left to others to accumulate the necessary evidence.
Especially convincing was new data obtained on the nature of the seafloors topography, age, and other characteristics, that in the 1960s and 1970s caused the theory of plate tectonics to become suddenly accepted. Seldom is such a bold theory so rapidly and decisively established.

Alfred Wegener
Source: Historical Pictures Services, Inc., Chicago

Forerunners such as Francis Bacon and Alexander von Humboldt had noticed signs of similarity between the facing outlines of South America and Africa and had speculated that the two continents had once been joined, which was the observation that was the prime focus of Wegener's work. A more important predecessor was William Gilbert, whose book of 1600 (De Magnete: On the Magnet, Magnetic Bodies, and the Great Magnet of the Earth) was the first to note that there were invisible forces acting at a distance and that the entire earth functioned as a magnet. Perhaps the most convincing of the modern types of evidence for plate tectonics is paleomagnetic data. The story of the key elements of that body of evidence is well told by Seiya Uyeda (The New View of the Earth: Moving Continents and Moving Oceans, W. H. Freeman and Co., San Francisco, 1971). In brief, the earth generates a magnetic field, as Gilbert discovered. The north magnetic pole's location on the globe changes, and the field has reversed (north magnetic becoming south) many times over geologic history. Upwelling magma that is to become new sea-floor material is magnetically blank, but when it cools below its Curie point it is imprinted with the orientation of the surrounding magnetic field. The sea-floor is marked with magnetic lineations, stripes of different magnetic orientation imprinted on sea-floor rock, and the older the stripe, the further it appears to have moved from a line of sea-floor spreading, such as the Mid-Atlantic Ridge. Thus Wegener's continental displacement is an effect, while the spreading of the sea-floor, driven by convection of molten rock beneath it, is the cause.

Diagram of the earth as a magnet. Source: William Gilbert, De Magnete

1912: Plate Tectonics Theory


Though inaccurate in some respects and lacking an explanation for the tremendous engine that drives tectonic deformation of the earth's surface, meteorologist Alfred Wegener's published theory of continental drift can nonetheless be cited as the beginning of the development of the theory of plate tectonics. This breakthrough in earth sciences, substantially developed and validated in the 1960s and 1970s, is central to the understanding of what causes earthquakes and where, and with approximately what frequency, they can be expected to occur.




While the approach in its most basic form (break a problem down into little pieces, solve each little piece, then combine the solutions into the overall answer) has been used in many fields, the specific breakthrough of the Finite Element Method (FEM) awaited 1953. As told by Ray Clough (1920- ) ("The Finite Element Method After Twenty-five Years: A Personal View," Computers and Structures, vol. 12, 1980, pp. 361-370), he spent the summer of 1952 at Boeing trying to adapt work from the 1930s by D. McHenry and A. Hrennikoff so that he could model a plane stress system as an assemblage of bar elements. This technique proved unable to represent plates of arbitrary configuration. The following summer, Professor Clough returned to Boeing's summer faculty program and was part of a structural dynamics team analyzing the stiffness of a delta wing structure. "Mr. Turner suggested that we should merely try dividing the wing skin into appropriate triangular segments. The stiffness properties of these segments were to be evaluated by assuming constant normal and shear stress states within the triangles and applying Castigliano's theorem; then the stiffness of the complete wing system (consisting of skin segments, spars, stringers, etc.) could be obtained by appropriate addition of the component stiffnesses (the direct stiffness method)." (p. 362)

Another of earthquake engineering's great analytical accomplishments, the development of the response spectrum, benefited from cross-pollination with the field of aeronautical engineering. Professor George Housner's breakthroughs in that effort were facilitated by discussions at Caltech in the 1930s and 1940s with Theodore von Karman and M. Biot, who were interested in structural dynamics as applied to airframes rather than buildings (1941: Calculation of the Earthquake Response Spectrum, 1998 CUREe Calendar). In addition to cross-fertilization with aeronautical engineering, earthquake engineering benefited from Clough's connection with naval architecture. Clough relates that "I had no opportunity for further study of the FEM until 1956-57, when I spent my first sabbatical leave in Norway" (with the Skipsteknisk Forskningsinstitutt in Trondheim), where he carried out "some very simple analyses of rectangular and triangular element assemblages" using a desk calculator. (p. 362) Further developments occurred upon his return to UC Berkeley, at a time when computers were beginning their rapid development as practical tools for complex engineering calculations. In 1958, Clough reported that, with a few important exceptions, "the application of electronic computers in the design and analysis of civil engineering structures has been very limited to date." ("Use of Modern Computers in Structural Analysis," Journal of the Structural Division, American Society of Civil Engineers, May, 1958, p. 1636-1). In 1960, at the 2nd Conference on Electronic Computation (Conference Papers, ASCE, p. 345), Clough christened the method in the title of his paper: "The Finite Element Method in Plane Stress Analysis." (Richard Gallagher, Finite Element Analysis: Fundamentals, Englewood Cliffs, New Jersey, 1975, pp. 3-5, briefly reviews the history of the FEM, extending his discussion of antecedents back to the 1800s.) Though his career was founded in sophisticated analysis techniques and progressed as he took advantage of the ever more powerful computers that were available through the 1950s and 1960s, Clough began a new and equally fruitful line of work in the 1970s: "I became concerned that the advancement of structural analysis capabilities was progressing much more rapidly than was knowledge of the basic material and structural component behavior mechanisms, at least for the nonlinear response range....it is important to express my concern over the tendency for users of the finite element method to become increasingly impressed by the sheer power of the computer procedure, and decreasingly concerned with relating the computer output to the expected behavior of the real structure under investigation." Clough was involved in the plans for the first of the modern shake tables (Structural Dynamic Testing Facilities at the University of California, Berkeley, R.W. Stephen, J. G. Bouwkamp, R.W. Clough and J. Penzien, UC Berkeley EERC Report No. 69-8, 1969), and when the machine was installed in 1972, Clough was one of the first to begin pushing the limits of the testing technology of the time, just as he had helped to develop the FEM in tandem with the advancement of computers.

An early FEM analysis. Source: Analysis of Tubular Joints by the Finite Element Method, Ray W. Clough and Ojars Greste, UC Berkeley, April, 1968
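The "appropriate addition of the component stiffnesses" that Turner and Clough used, the direct stiffness method, can be illustrated with the simplest possible elements. The sketch below assembles two axial spring elements rather than the triangular plane-stress elements of the Boeing work; all numbers are hypothetical:

```python
def assemble_springs(n_nodes, elements):
    """Direct stiffness method: each element contributes a 2x2
    stiffness matrix, added into the global matrix at the rows and
    columns of the nodes it connects."""
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, k in elements:          # spring of stiffness k joining nodes i, j
        K[i][i] += k
        K[j][j] += k
        K[i][j] -= k
        K[j][i] -= k
    return K

# Two springs in series: node 0 fixed, unit load applied at node 2
K = assemble_springs(3, [(0, 1, 1000.0), (1, 2, 2000.0)])

# Delete the fixed node's row and column, then solve the remaining 2x2 system
a, b, c, d = K[1][1], K[1][2], K[2][1], K[2][2]
f1, f2 = 0.0, 1.0                     # applied loads at nodes 1 and 2
det = a * d - b * c
u1 = (d * f1 - b * f2) / det          # Cramer's rule for the 2x2 system
u2 = (a * f2 - c * f1) / det
print(u1, u2)                         # matches the series-spring hand solution
```

A real finite element program does exactly this bookkeeping, only with larger element matrices (Turner's triangle contributes a 6x6) and a general equation solver in place of Cramer's rule.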

1953: The Finite Element Method


Professor Ray Clough, later a major figure in the earthquake engineering field, worked at Boeing Airplane Company in the summer of 1953 analyzing the stiffness of a delta wing design. Following a suggestion of M. J. Turner, he divided the wing structure analytically into triangular segments. By 1960, Clough had given the Finite Element Method its official name, and computers were powerful enough to meet its computational needs. It remains a fundamental tool in earthquake engineering, as well as in its original field of aeronautical engineering and in many other fields.
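The triangular elements Clough used can be illustrated with the constant-strain triangle of plane stress analysis, the simplest of the element types that grew out of this work. The sketch below (with hypothetical material values; it is an illustration of the general technique, not Boeing's actual formulation) builds the 6x6 element stiffness matrix K = tA B^T D B that an FEM program assembles over many such triangles.

```python
import numpy as np

# Minimal sketch of a plane-stress constant-strain triangle (CST).
# Each triangle contributes a 6x6 stiffness matrix K = t * A * B^T D B;
# a structure is analyzed by assembling many such triangles.
def cst_stiffness(xy, E, nu, t):
    """xy: 3x2 array of node coordinates; E, nu: material; t: thickness."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    # Shape-function derivatives (constant over the element):
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))
    B[0, 0::2] = b          # strain eps_x from u displacements
    B[1, 1::2] = c          # strain eps_y from v displacements
    B[2, 0::2] = c          # shear strain gamma_xy
    B[2, 1::2] = b
    B /= 2.0 * A
    # Plane-stress constitutive matrix:
    D = (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                        [nu, 1.0, 0.0],
                                        [0.0, 0.0, (1.0 - nu) / 2.0]])
    return t * A * B.T @ D @ B

# Hypothetical right triangle, steel-like material (ksi, inches):
K = cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]),
                  E=29000.0, nu=0.3, t=0.1)
print(K.shape)              # (6, 6)
print(np.allclose(K, K.T))  # True: element stiffness is symmetric
```

A useful sanity check on any such element is that a rigid-body translation of all three nodes produces zero nodal forces, which follows here because the rows of B sum to zero.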

CUREe

2000

CALIFORNIA UNIVERSITIES for RESEARCH in EARTHQUAKE ENGINEERING


7

It is frequently misstated that there are four basic structural materials: steel, masonry, concrete, and wood. Left off that list is the important fifth structural material: the soil that holds the entire structure up, and which fundamentally obeys the same laws. Analytical tools for soil mechanics problems at the beginning of the twentieth century had not progressed much beyond Charles-Augustin de Coulomb's angle of repose calculation from the 1700s, and while structural engineers were able to rationalize the quantitative behavior of each individual strut, indeed each bolt, in a complicated truss, there was no comparable set of techniques to analyze what went on in the volumes of soil and rock underneath or around foundations. Henry Petroski ("Soil Mechanics," in Remaking the World: Adventures in Engineering, Vintage Books, New York, 1997) notes that the term "soil mechanics" was not even coined until 1920. During World War I, Karl Terzaghi (1883-1963) realized that the primitive state of what would later become known as geotechnical engineering required the application of new methods. At that time, soils were classified in strictly geological terms (coarse sand, fine sand, soft clay, stiff clay, and the like), with each such designation including soils of widely different engineering properties. Terzaghi concluded that "what was needed was a means of measuring quantitatively a variety of material properties that would distinguish soils in a unique way and that would, not incidentally, enable engineers to predict by calculation such phenomena as bearing strength and settlement rate" (Petroski, pp. 101-102).

Overturned building in a liquefaction area, 1964 Niigata Earthquake
Photo courtesy of: Steinbrugge Collection, Earthquake Engineering Research Center, University of California, Berkeley
In 1958, Caltech professor George Housner explained the essence of sand boils caused by liquefaction in his BSSA paper: "The formation of sandblows by earthquakes is explained in terms of well-known properties of water-bearing soils. The important parameters of the problem are the porosity, the permeability, the elasticity, and the degree of consolidation of the soil. The important conditions are that the water table be sufficiently near the ground surface and that the readjustment of soil particles by the passage of seismic waves be of sufficient magnitude" (p. 155). Housner noted that "sandy soils are not in a state of closest packing," and that "it is well known that vibratory stresses will cause a readjustment of the particles and thus consolidate the soil" (p. 156). If the pore spaces are filled with water, and if the particles tend to pack closer as they are shaken and consolidate, the fluid pressure will be pumped up, in some cases sufficiently to keep the sand grains apart and to free them from their pre-earthquake solid state, in which they were frozen by friction. The soil behaves as a fluid until the excess pressure bleeds off, and the pressurized water can squirt its way to the surface, forming sand craters. Harry B. Seed, who as much as anyone developed the basic concept of liquefaction into a set of analytical techniques of practical use in predicting which particular geologic conditions would result in liquefaction during future earthquakes, used to have his students at the University of California at Berkeley draw a full-scale cross section of the soil sample that was the subject of their homework assignment. His argument was that geotechnical engineers had to remember that their calculations were just ways of looking at the collection of particles around the structure's foundation, and that the gross scale of behavior the engineer usually focuses on (bearing pressures, settlement, stability, and so on) is merely the result of how all the individual particles behave.
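The analytical techniques Seed developed were later distilled into the Seed-Idriss "simplified procedure" (1971), whose central demand quantity, the cyclic stress ratio, can be sketched in a few lines. The soil and shaking values below are hypothetical, and the sketch omits the companion resistance side of the procedure.

```python
# Sketch of the cyclic stress ratio (CSR) from the Seed-Idriss
# simplified procedure, which condenses shaking intensity and
# overburden stresses into a single demand number compared against
# the soil's cyclic resistance:
#   CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v_eff) * r_d
def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d):
    """a_max_g: peak surface acceleration as a fraction of g;
    sigma_v, sigma_v_eff: total and effective vertical stress (same units);
    r_d: depth-dependent stress reduction coefficient (<= 1)."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# Hypothetical saturated sand layer at shallow depth (stresses in kPa):
csr = cyclic_stress_ratio(a_max_g=0.30, sigma_v=100.0,
                          sigma_v_eff=60.0, r_d=0.95)
print(csr)  # about 0.31
```

Note how a high water table raises the ratio sigma_v / sigma_v_eff and thus the demand, consistent with Housner's observation that the water table must be near the surface.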
Among Seed's influential early works on liquefaction are his paper co-authored with K. L. Lee, "Liquefaction of Saturated Sands During Cyclic Loading" (Journal of the Soil Mechanics and Foundations Division, ASCE 92, SM6, pp. 105-134, 1966), and, with I. M. Idriss after the 1964 Niigata Earthquake, "Analysis of Soil Liquefaction: Niigata Earthquake" (Journal of the Soil Mechanics and Foundations Division, ASCE 93, SM3, pp. 83-108, 1967).

Sand vent, Assam, India Earthquake, June 12, 1897
Photo courtesy of: Steinbrugge Collection, Earthquake Engineering Research Center, University of California, Berkeley

1958: A Modern Understanding of Liquefaction


Liquefaction had been observed in many earthquakes prior to 1958, but its causes were not well understood, and techniques for mapping liquefaction susceptibility or relating the hazard to geotechnical and geological investigations did not yet exist. The publication of Professor George Housner's paper, "The Mechanism of Sandblows" (Bulletin of the Seismological Society of America, vol. 48, pp. 155-161, April, 1958), is one historic benchmark indicating the modern understanding of liquefaction.


Those who work within the field of earthquake engineering sometimes think the rest of the world focuses on our little discipline as much as we do. A reminder that this view is myopic is provided by the fact that in the USA, if asked to identify the "blue book," most people would name either Emily Post's extensively published manual of etiquette or the Kelley guide to used car prices. In the field of earthquake engineering, however, the Blue Book is the SEAOC Recommended Lateral Force Requirements and Commentary. The Blue Book was not the first work of its kind in the US, but it was perhaps the most important. The 1927 UBC contained an optional seismic appendix; the California State Chamber of Commerce published a report by leading engineers recommending a statewide code in 1928; the Field and Riley Acts were passed soon after the 1933 Long Beach Earthquake and legislated code requirements for school and other buildings respectively; and in 1951 "Lateral Forces of Earthquake and Wind" was published (ASCE Transactions, Vol. 77, April, 1951; also called Separate 66 for its ASCE separately printed publication number), which, like the Blue Book, was the effort of a large committee of engineers.

Probably the most quoted section of the Blue Book is its listing of the goals the provisions are intended to fulfill. These words have varied in subtle ways from edition to edition. In the original version, the Blue Book stated that the building code's goal was to provide "minimum standards to assure public safety" (p. 20) and that for some projects additional damage control, especially to protect nonstructural components, may be a desirable additional objective. Today we would use the fashionable term "performance-based design" for such an approach that relates expected performance to expected ground motion severity. Another familiar and oft-quoted section of the Blue Book is its base shear formula. The first version in the 1960 edition was V = Z K C W, where Z was the zone factor (with values of 1, 1/2, and 1/4 for Zones III, II, and I respectively); K was the structure-type factor (values ranging from 0.67 for moment-resisting frames to 1.33 for shear wall systems); C was the dynamic factor calculated from the period of vibration; and W was the weight of the building. Since the maximum value of C for one- and two-story buildings was set at 0.1, and because Z was 1 for most of California, this resulted in the familiar 13.3% overall base shear coefficient for most "box" buildings. In the family tree of US seismic codes, a major branching occurred in 1978 with the publication of ATC-3-06, Tentative Provisions for the Development of Seismic Regulations for Buildings (Applied Technology Council, 1978, funded by the National Science Foundation and National Bureau of Standards). Its title may have made it sound meek (after all, it was merely a tentative set of provisions that might develop into something), but it turned out to be extremely influential.
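The arithmetic behind that familiar 13.3% coefficient can be sketched in a few lines. The building weight below is hypothetical, and the period-dependent formula for C (which varied by edition) is bypassed by supplying C directly.

```python
# Sketch of the 1960 Blue Book base shear formula V = Z*K*C*W,
# using only the factor values stated in the text.
def base_shear(Z, K, C, W):
    """Return design base shear V (same force units as weight W)."""
    return Z * K * C * W

# A one- or two-story shear wall ("box") building in Zone III:
Z = 1.0     # zone factor for Zone III (1/2 and 1/4 for Zones II and I)
K = 1.33    # structure-type factor for shear wall systems
C = 0.10    # maximum dynamic factor for one- and two-story buildings
W = 1000.0  # building weight in kips (hypothetical)

V = base_shear(Z, K, C, W)
print(f"base shear coefficient = {Z * K * C:.3f}")  # 0.133, i.e. 13.3%
print(f"V = {V:.0f} kips")                          # V = 133 kips
```

Swapping in K = 0.67 for a moment-resisting frame shows how the formula rewarded the more ductile system with roughly half the design force.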
The new International Building Code (IBC) of the year 2000 is formed by the merger of the three organizations promulgating model building codes in the USA: the publishers of the Uniform Building Code (UBC), the Standard or Southern Building Code (SBCCI), and the Basic Building Code (BOCA). The IBC traces its seismic provisions lineage to FEMA's funding of the Building Seismic Safety Council to obtain consensus backing of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings, the first edition of which was published in 1985. The NEHRP Provisions in turn are derivations of ATC-3-06. The Blue Book was an impressive accomplishment by a small group of SEAOC volunteers who set out to produce their professional society's view of how the building code should deal with earthquakes. Considering that this took place in the era before there was significant federal or state government funding for earthquake research or code development, we must acknowledge the SEAOC accomplishment as even more impressive.

1960: The SEAOC Blue Book


The first edition of the SEAOC Blue Book, nicknamed for its pale blue cover, was published in San Francisco in 1960. Since then, editions of the Structural Engineers Association of California Recommended Lateral Force Requirements and Commentary have formed the almost verbatim earthquake engineering contents of editions of the Uniform Building Code. They have also documented the evolving consensus of California's engineers concerning seismic design, a body of knowledge and judgment that continues to have worldwide influence.


The first sentence of "Effect of Inelastic Behavior on the Response of Simple Systems to Earthquake Motions," authored by Anestis S. Veletsos and Nathan M. Newmark in 1960 (Proceedings of the Second World Conference on Earthquake Engineering, Tokyo and Kyoto, Japan, vol. II), succinctly states a fundamental problem of earthquake engineering: "The theoretical data available concerning the response of structures to earthquake motions are, with few exceptions, applicable only to elastic structures, although it is generally recognized that structures subjected to actual earthquakes can undergo deformations of relatively large magnitudes in the inelastic range before failure occurs." Hooke's Law conveniently predicts that strain is proportional to stress up to the elastic limit. Predictable response to vibrations and seismic design based on more reliable calculations are thus two large incentives for keeping the building's response within the elastic range (not to mention the fact that, to the owners and occupants of buildings, inelasticity means damage). However, it is difficult to economically provide enough strength to meet that goal. An approximate measure of the additional elastic strength buildings would need, if they were to avoid relying on ductility to meet the seismic demand, is provided by the R factors in recent versions of the NEHRP Recommended Provisions for Seismic Regulations for New Buildings, which reduce the seismic design loads by up to eight times. As a practical matter, there has historically been a significant but not insurmountable difficulty in designing a building to resist even 1.5 times as much lateral design force (as occurred frequently after the 1976 UBC came into use with an importance factor for some occupancies).
Setting the seismic load higher by a factor of several times is extremely ambitious (though it is also true that rarely has the architectural configuration been designed from the start with the goal of maximizing earthquake resistance). Thus the need for inelastic behavior.
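The trade-off described above between elastic strength and ductility was later codified in simple rules of thumb relating the ductility demand mu to the force-reduction factor R for an idealized elastoplastic oscillator. The sketch below uses the widely taught equal-displacement and equal-energy approximations associated with Newmark's work; it is an illustration of those textbook rules, not the 1960 paper's exact formulation.

```python
import math

# Classic Newmark-style relations between ductility demand mu and the
# allowable force-reduction factor R for an elastoplastic oscillator:
#   long-period systems:  equal displacement -> R = mu
#   mid-period systems:   equal energy       -> R = sqrt(2*mu - 1)
def reduction_equal_displacement(mu):
    return mu

def reduction_equal_energy(mu):
    return math.sqrt(2.0 * mu - 1.0)

mu = 4.0  # ductility capacity (hypothetical)
print(reduction_equal_displacement(mu))         # 4.0
print(round(reduction_equal_energy(mu), 3))     # 2.646
```

A system able to sustain a ductility of 4 can thus be designed for roughly a quarter (long periods) to 40 percent (intermediate periods) of the elastic force demand, which is the logic underlying code R factors of up to eight.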

Nathan Newmark (1910-1981)


Photo courtesy of the University of Illinois at Urbana-Champaign

As the twentieth century closes, the search for reliable ways to analyze the inelastic response of a wide variety of structures continues. Testing procedures, building codes, and analysis software are all judged in the earthquake engineering field according to how accurately they consider inelasticity. Nathan Newmark as much as anyone advanced the concepts of the quantification of inelastic demand in his work as a professor at the University of Illinois at Urbana-Champaign. Newmark pioneered many developments in computational techniques for analyzing structures and soils while also consulting on a number of major engineering projects, and one of the major American Society of Civil Engineers medals is named after him. He is also known for many accomplishments outside the earthquake engineering field. In his career, he advanced much of his theoretical work by being in the vanguard of users of the developing state of the art of computer science, while he also established experimental laboratories and emphasized the importance of testing. This combination of investigation methods continues to be fruitful.

An early example of inelastic spectra


Source: Nathan Newmark, "Current Trends in the Seismic Analysis and Design of High-Rise Structures," in Earthquake Engineering, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1970

1960: Inelastic Earthquake Spectra


Engineers long understood that the earthquake demand on a structure in a major earthquake extends into the inelastic range, and yet reliable and practical means of analyzing that response and the associated need for ductility did not exist. To more accurately represent the effect of the earthquake on a structure and to move the earthquake engineering field ahead required developments in inelastic analysis, which were pioneered by Nathan Newmark.


The rupture of a fault releases strain energy that vibrates the ground, resulting in what is reported as an earthquake. A lesser aspect of earthquake hazard, though devastating in localized instances, materializes when the rupture of the fault extends to the surface and deforms the ground, either horizontally (as in the common California strike-slip case), vertically, or with combinations of the two components. The 1906 San Francisco Earthquake made evident the hazard of surface fault rupture in very graphic fashion, all the way along about 435 km (270 miles) of the San Andreas Fault, with offsets up to 6 m (20 ft). In addition to the sightseers who visited the wrenched-apart landscape at numerous locales, the State Earthquake Investigation Commission report (A. C. Lawson, ed., The California Earthquake of April 18, 1906, Carnegie Institution, Washington, DC, 1908) mapped the surface fault rupture in detail. However, no legislation or voluntary professional standard at the national, state, or local level was passed to regulate construction astride that fault or other faults with well-delineated surface rupture potential. By comparison with the 1906 earthquake, the surface fault rupture in the 1971 San Fernando Earthquake was minor as measured by amount of displacement (2.4 m, 8 ft maximum, but in most locations about a third of this amount) or total length (11 km, 7 miles). Nevertheless, the 1971 earthquake came at a time when government regulations were increasingly employed to protect public health and safety, whether from automobiles or earthquakes. It was the 1933 Long Beach Earthquake, not the much more devastating 1906 earthquake, that motivated the passage of the California legislature's Field and Riley Acts and the beginning of statewide building code regulations for earthquake ground motions.
Similarly, it was the minor surface rupture of the 1971 San Fernando Earthquake, and not the impressive fault displacement of 1906, that spurred the passage of a law regulating construction in surface fault rupture zones in California. In 1972, the Alquist-Priolo Special Studies Zones Act was passed. (It was later renamed the "Alquist-Priolo Earthquake Fault Zoning Act" to be more blunt about the fact that the law dealt with the hazard of earthquake faults.)

1983 Borah Peak, Idaho Earthquake

The Act balanced two countervailing interests: the safety provided by prohibiting construction where the ground might rupture, and the cost to the landowner, society, and local enforcement agencies of implementing restrictions. To deal with the uncertainty concerning the exact areas of ground that would rupture, the Act required that general zones be demarcated by the State Geologist and that within these zones geologists had to be retained by owners to conduct fault rupture studies before projects could proceed. Which faults were sufficiently active to warrant concern? And if there was a significant potential risk, what should the project be forced to do about it? To make the Act's program workable, clear criteria were set: displacement within the Holocene Epoch (approximately the last 11,000 years) would define active faults. Zones along known or suspected traces of active faults would define the parcels of property where special studies would have to be conducted. Implementing criteria of the California Division of Mines and Geology established a baseline of fieldwork and other investigation on the part of the person conducting a study in a special studies zone, who had to be a registered (licensed) geologist, so that what was required was data provided in a report rather than expert opinion alone. If a fault was found on a property, the typical hazard-avoidance measure required was a 50 foot (15 m) setback from the trace.

1971 San Fernando Earthquake
Photo courtesy of: Steinbrugge Collection, Earthquake Engineering Research Center, University of California, Berkeley

Until his retirement, Earl Hart was the California Division of Mines and Geology geologist in charge of the state's program. His reports and articles include "Zoning for Surface Fault Hazards in California: The New Special Studies Zones Maps," California Geology, October 1974, and several editions of the agency's Special Publication 42, Fault-Rupture Hazard Zones in California.
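The Act's typical 50 foot setback can be illustrated with a small geometric check. The coordinates, the straight-line idealization of the fault trace, and the function names below are all hypothetical; in practice the setback distance and trace location are established in the site-specific geologic report.

```python
import math

# Hypothetical sketch of a setback check: distance from a proposed
# building corner to a mapped fault trace (modeled as a line segment),
# compared against a 50 ft (15 m) setback.
def distance_to_trace(p, a, b):
    """Distance from point p to segment a-b (all (x, y) tuples, feet)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:  # degenerate trace: a single mapped point
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment:
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def violates_setback(p, trace_a, trace_b, setback_ft=50.0):
    return distance_to_trace(p, trace_a, trace_b) < setback_ft

corner = (30.0, 40.0)  # proposed building corner (hypothetical)
trace = ((0.0, 0.0), (100.0, 0.0))
print(distance_to_trace(corner, *trace))  # 40.0
print(violates_setback(corner, *trace))   # True: inside the 50 ft setback
```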

1972: The Alquist-Priolo Earthquake Fault Zoning Act


A state law to regulate the siting of buildings with regard to surface fault rupture was authored by Alfred Alquist and Paul Priolo of the California legislature and enacted following the 1971 San Fernando Earthquake. The essential features of the Act, such as mapping of zones where a hazard is potentially present and requiring site-specific geologic studies within those zones, have since proven applicable to many other seismic and non-seismic geologic hazards.


It could be argued that various wind-design knee-bracing layouts for high-rise buildings in the late 1800s and early 1900s were early examples of the eccentric braced frame (EBF). The EBF, however, is not merely any diagonally braced frame in which both ends of the brace do not connect at beam-column joints in a concentricity of the center lines of all the structural members involved. Rather, it is a framing system in which the eccentricity is intentionally planned, rather than introduced as an accommodation to fenestration or other architectural layout considerations, and in which the location of inelastic behavior is strategically and very explicitly designed. To appreciate the degree of innovation represented by this invention, consider the other basic types of lateral force resisting elements; there is only a limited menu from which the structural designer can choose. Walls have been around for several thousand years for the purpose of enclosing space; they were not invented to resist earthquakes, they were merely adapted for that purpose by earthquake engineers who modified the materials and the detailing. Column-beam frames with joints that are relatively rigid in resisting moments as the frame deforms under sidesway were constructed in metal (iron, then steel) with various riveting and bolting techniques in Europe and the Eastern United States without any seismic design in the 1800s, and the first building with extensively welded column-beam joints, the Electric Welding Company of America factory, was built in 1920 in Brooklyn without thought for earthquakes. Reinforced concrete frames were designed and constructed as efficient vertical load resisting elements and only slowly modified in the latter half of the twentieth century to become reliable seismic elements.

Charles Roeder and an early physical EBF model
Photo courtesy of Egor Popov
Earthquake engineers gradually learned from the 1967 Caracas, 1971 San Fernando, and other earthquakes how to modify non-seismic concrete frames to perform well as seismic load-resisting elements. Concentrically braced frames of K, V or inverted-V (chevron), X, or single-diagonal configuration have been used for centuries in timber, later in iron, and finally in steel, in non-seismic regions. Again, these braced frame layouts were appropriated for seismic use and gradually modified. None of these three basic types of vertical structural elements that resist horizontal seismic forces (walls, braces, and frames) was invented because of earthquakes, and all three have had to be modified by earthquake engineers to take into account inelastic behavior and other seismic factors. Starting with a blank piece of paper and devising a new structural element specifically with earthquakes in mind, as was done with the eccentric braced frame, is thus unusual.

Early prototype analytical EBF model


Source: "Eccentric Seismic Bracing of Steel Frames," Egor Popov, Proceedings of the Seventh World Conference on Earthquake Engineering, 1980

In the continuing work of Professor Popov and others, especially as applied to steel frames in the SAC Steel Project or by individuals and firms inventing new steel moment-resisting connections, innovations have been tested that provide controlled frictional slip, reduce beam sections to control more positively where inelasticity occurs, or employ other details which, while quite different from the bracing configuration of the EBF, continue the same concept of attempting to dissipate energy rather than meet it head-on with strength.

1972: The Eccentric Braced Frame


Frames with moment-resisting joints advantageously respond to earthquakes in a flexible, load-reducing manner and have the potential for high ductility, while braced frames have greater stiffness and reduced drift-induced nonstructural damage. The eccentric braced frame (EBF) offers some of the virtues of both. The development of this new structural system with earthquakes specifically in mind is an example of the continuing trend toward energy dissipation devices and strategies. H. Fujimoto tested eccentrically braced K braces in 1972, and the EBF in its current form can be dated to 1978 with the experimental work published by Egor Popov (1913 - ) and Charles Roeder (1942 - ).


The Haicheng Earthquake was preceded by hundreds of perceptible foreshocks, which is an unusual case. When any or all of the methods used by the Chinese (such as observations of clusters of small earthquakes, unusual animal behavior, and well water changes) have been applied to other cases, there has been only a spotty success rate, and studies in the USA to test the accuracy of earthquake predictions have not proven any precursors to be valid, with the exception of the special case of aftershocks. (Without having an accurate model of how the earth produces earthquakes, it is still possible to rely on the past statistics of aftershock patterns, essentially relating the time after an earthquake to the probability of occurrence of aftershocks of various magnitudes.) Charles F. Richter once observed that most earthquake predictions were merely non-scientific speculations: "Journalists and the general public rush to any suggestion of earthquake prediction like hogs toward a full trough. Prediction provides a happy hunting ground for amateurs, cranks, and outright publicity-seeking fakers. The vaporings of such people are from time to time seized upon by the news media who then encroach on the time of men who are occupied in serious research." The prime example of scientific rather than non-scientific investigation into earthquake prediction in the United States has been the Parkfield, California, Earthquake Prediction Experiment run by the US Geological Survey (W. H. Bakun and A. G. Lindh, "The Parkfield, California, Earthquake Prediction Experiment," Science, August 16, 1985, Vol. 229, No. 4714, pp. 619-624). Based on several lines of evidence, there was reason to conclude that an earthquake of magnitude 5.5 to 6 on a segment of the San Andreas Fault approximately halfway between San Francisco and Los Angeles was most likely to occur around January 1988, with a very high probability that it would occur by 1993, but the earthquake has not occurred to date.
Extensive geophysical monitoring apparatus was put in place to record all the vital signs of a section of the earth's crust as it was about to have an earthquake, and then to observe the earth at one particular moment as it experienced seismic slip. The basic experimental strategy of the Parkfield Experiment, identifying a place likely to have an earthquake and then using instruments to measure some possible precursors, remains the best hope of eventually developing reliable earthquake predictions sometime in the future. In that effort, producing models that account for two basic rupture phenomena is important: how earthquakes start (how the rupture process begins, and why it happens at one region of a fault rather than another, and at one time rather than another) and how earthquakes continue (how the rupture extends, causing a larger magnitude earthquake, rather than stopping at the point where only a small amount of rupture, and therefore a small earthquake, is produced). Earthquake prediction has been compared to weather prediction. The latter is not completely accurate, even though observations can be easily obtained throughout the medium of interest (the atmosphere) via radar, weather balloons, satellite imagery, and other means. Applying this process to the lithosphere, to rock extending down tens of kilometers, is obviously much more difficult. In the process of seeking the goal of earthquake prediction, earth scientists must develop more accurate models of how earthquakes are produced. Thus, whether earthquake prediction is realized as a practical technique in the 21st century or not, it may still prove a useful conceptual stimulus in the development of seismology.

Precise distance-measuring device for observing geodetic distortion, Parkfield, California
Strain-measuring instrumentation, Parkfield Earthquake Prediction Experiment
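The aftershock statistics mentioned above, the one case where past patterns do support probabilistic forecasting, are commonly modeled with the modified Omori law, in which the aftershock rate decays with time after the mainshock. The parameter values in the sketch below are hypothetical.

```python
# Sketch of the modified Omori law for aftershock rate decay:
#   n(t) = K / (c + t)**p
# where t is time after the mainshock (days) and K, c, p are fitted
# to the observed sequence (the values here are hypothetical).
def omori_rate(t_days, K=100.0, c=0.1, p=1.1):
    """Expected aftershocks per day at time t_days after the mainshock."""
    return K / (c + t_days) ** p

print(omori_rate(1.0) > omori_rate(10.0))   # True: the rate decays
print(omori_rate(10.0) > omori_rate(100.0)) # True: and keeps decaying
```

Integrating this rate over a time window, together with a magnitude-frequency distribution, gives the kind of statement the text describes: the probability of aftershocks of various magnitudes in the days after a mainshock.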

1975: The Successful Haicheng, China Earthquake Prediction


This prediction of a magnitude 7.3 earthquake dramatized the goal of earthquake prediction and spurred research efforts, but 25 years later, as the twentieth century closes, earthquake prediction is not yet a reliable technique. Instead, the less ambitious approach of estimating the probabilities of occurrence of earthquakes is central to the field. The maturity of a science can be judged by how well it can both explain and predict its phenomena; the elusiveness of the latter with regard to the generation of earthquakes indicates how much is yet to be learned.
