Journal of Earthquake Engineering, Vol. 6, Special Issue 1 (2002) 43–73
© Imperial College Press
DETERMINISTIC VS. PROBABILISTIC SEISMIC HAZARD
ASSESSMENT: AN EXAGGERATED AND
OBSTRUCTIVE DICHOTOMY
JULIAN J. BOMMER
Department of Civil and Environmental Engineering,
Imperial College, London SW7 2BU, UK
Deterministic and probabilistic seismic hazard assessment are frequently represented
as irreconcilably different approaches to the problem of calculating earthquake ground
motions for design, each method fervently defended by its proponents. This situation
often gives the impression that the selection of either a deterministic or a probabilistic
approach is the most fundamental choice in performing a seismic hazard assessment.
The dichotomy between the two approaches is not as pronounced as often implied and
there are many examples of hazard assessments combining elements of both methods.
Insistence on the fundamental division between the deterministic and probabilistic approaches is an obstacle to the development of the most appropriate method of assessment
in a particular case. It is neither possible nor useful to establish an approach to seismic
hazard assessment that will be the ideal tool for all situations. The approach in each
study should be chosen according to the nature of the project and also be calibrated
to the seismicity of the region under study, including the quantity and quality of the
data available to characterise the seismicity. Seismic hazard assessment should continue
to evolve, unfettered by almost ideological allegiance to particular approaches, with the
understanding of earthquake processes.
Keywords: Seismic hazard; probabilistic seismic hazard assessment; deterministic seismic
hazard assessment; seismic risk.
1. Introduction
Seismic hazard could be defined, in the most general sense, as the possibility of
potentially destructive earthquake effects occurring at a particular location. With
the exception of surface fault rupture and tsunami, all the destructive effects of
earthquakes are directly related to the ground shaking induced by the passage
of seismic waves. Textbooks that present guidance on how to assess the hazard of
strong ground-motions invariably present the fundamental choice facing the analyst
as that between adopting a deterministic or probabilistic approach [e.g. Reiter,
1990; Krinitzsky et al., 1993; Kramer, 1996]. Statements made by proponents of
the two approaches often imply very serious differences between deterministic and
probabilistic seismic hazard assessment and reinforce the idea that the choice between them is one of the most important steps in the process of hazard assessment.
This paper aims to show that this apparently diametric split between the two approaches is misleading and, more importantly, that it is not helpful to those faced
with the problem of assessing the hazard presented by earthquake ground motions
at a site.
2. DSHA vs. PSHA: An Exaggerated Dichotomy
Probabilistic seismic hazard assessment (PSHA) was introduced a little over
30 years ago in the landmark paper by Cornell [1968] and has become the most
widely used approach to the problem of determining the characteristics of strong
ground-motion for engineering design. Some, however, have challenged the approach
and put up vociferous defence of deterministic seismic hazard assessment (DSHA),
in turn soliciting firm responses from those favouring PSHA. The division between
the two camps has been expressed in the scientific and technical literature, sometimes in terms reminiscent of political debate.^a In some situations the division
has become public as for example when in October 2000 US newspapers reported
the disagreements between Caltrans and the US Army Corps of Engineers regarding the choice of PSHA or DSHA in assessing design loads for the new eastern
span of the Bay Bridge in San Francisco Bay. Hanks and Cornell [2001] have predicted a similar showdown around the seismic hazard assessment for the Yucca
Mountain nuclear waste repository [Stepp et al., 2001]. All of this points to a
seemingly irreconcilable split between DSHA and PSHA, which warrants closer
examination.
2.1. Determinism and probability in seismic hazard assessment
Reiter [1990] and Kramer [1996], currently the most widely consulted textbooks on
seismic hazard analysis, describe DSHA in the same way. The basis of DSHA is to
develop earthquake scenarios, defined by location and magnitude, which could affect
the site under consideration. The resulting ground motions at the site, from which
the controlling event is determined, are then calculated using attenuation relations;
in some cases, there may be more than one controlling event to be considered in
design.
The mechanics of PSHA are far less obvious than those of DSHA, with the result
that there is often misunderstanding of many of the basic features. The excellent
recent papers by Abrahamson [2000] and by Hanks and Cornell [2001] provide very
useful clarification of many issues that have created confusion regarding PSHA.
The essence of PSHA is to identify all possible earthquakes that could affect a site,
including all feasible combinations of magnitude and distance, and to characterise
the frequency of occurrence of different size earthquakes through a recurrence relationship. Attenuation equations are then employed to calculate the ground-motion
parameters that would result at the site due to each of these earthquakes and hence
^a The reader is referred, for example, to the acknowledgements in the paper by Krinitzsky [1998] and the "response" by Hanks and Cornell [2001].
the rate at which different levels of ground motion occur at the site is calculated.
The design values of motion are then those having a particular annual frequency of
occurrence.
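The integration just described can be sketched in a few lines of code. The toy calculation below is written for this discussion only: the magnitude range, Gutenberg–Richter parameters, attenuation coefficients and scatter value are all invented for illustration, not taken from the paper. It sums, over magnitude bins and a handful of equally likely source distances, the annual rate of each scenario multiplied by the probability that its ground motion exceeds a target PGA:

```python
import math

def gr_rate(m_lo, m_hi, a=4.0, b=1.0):
    """Annual rate of events with magnitude in [m_lo, m_hi),
    from Gutenberg-Richter: log10 N(>= m) = a - b*m (illustrative a, b)."""
    return 10 ** (a - b * m_lo) - 10 ** (a - b * m_hi)

def median_ln_pga(m, r_km):
    """Toy ground-motion prediction equation: median ln(PGA in g).
    Coefficients are invented for illustration only."""
    return -4.0 + 1.2 * m - 1.3 * math.log(r_km + 10.0)

def prob_exceed(ln_target, ln_median, sigma=0.6):
    """Probability that the lognormally scattered motion exceeds the target."""
    z = (ln_target - ln_median) / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

def annual_rate_exceed(pga_g, dists_km=(10.0, 30.0, 60.0), dm=0.1):
    """Sum rate x P(exceedance) over magnitude bins and equally likely distances."""
    rate = 0.0
    m = 5.0
    while m < 7.5 - 1e-9:
        lam = gr_rate(m, m + dm) / len(dists_km)
        for r in dists_km:
            rate += lam * prob_exceed(math.log(pga_g), median_ln_pga(m + dm / 2.0, r))
        m += dm
    return rate

rate = annual_rate_exceed(0.3)
print(rate, 1.0 / rate)  # annual rate of exceeding 0.3 g and its return period
```

Plotting `annual_rate_exceed` against a range of PGA values would produce exactly the hazard curve that forms the primary output of a PSHA.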
Common to both approaches is the very fundamental, and highly problematic,
issue of identifying potential sources of earthquakes. Another common feature is
the modelling of the ground motion through the use of attenuation relationships,
more correctly called ground-motion prediction equations [D. M. Boore, written
communication]. The principal difference between the two procedures, as described above,
resides in those steps of PSHA that are related to characterising the rate at which
earthquakes and particular levels of ground motion occur. Hanks and Cornell [2001]
point out that the two approaches “have far more in common than they do in
differences” and that in fact the only difference is that a PSHA has units of time
and DSHA does not. This is indeed a very fundamental distinction between the
two approaches as currently practised: in a DSHA the hazard will be defined as
the ground motion at the site resulting from the controlling earthquake, whereas
in PSHA the hazard is defined as the “mean rate of exceedance of some chosen
ground-motion amplitude” [Hanks and Cornell, 2001]. At this point it is useful to
briefly define the different terms used to characterise probabilistic seismic hazard:
the return period of a particular ground motion, Tr(Y), is simply the reciprocal
of the annual rate of exceedance. For a specified design life, L, of a project, the
probability of exceedance of the level of ground motion (assuming that a Poisson
model is adopted) is given by:

q = 1 − exp(−L/Tr).    (1)
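As a quick numerical check of Eq. (1) in plain Python (the only assumption is the Poisson model stated above): a 475-year return period corresponds to roughly a 10% probability of exceedance in a 50-year design life, and 2% in 50 years corresponds to a return period of about 2475 years:

```python
import math

def prob_exceedance(return_period_yr, design_life_yr):
    """Poisson model: probability of at least one exceedance in the
    design life, q = 1 - exp(-L/Tr), as in Eq. (1)."""
    return 1.0 - math.exp(-design_life_yr / return_period_yr)

def return_period(q, design_life_yr):
    """Inverse relation: Tr = -L / ln(1 - q)."""
    return -design_life_yr / math.log(1.0 - q)

q = prob_exceedance(475.0, 50.0)
print(round(q, 3))                        # ~0.1, the familiar "10% in 50 years"
print(round(return_period(0.02, 50.0)))   # 2475-year return period for 2% in 50 years
```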
Once a mean rate of exceedance or probability of exceedance or return period is
selected as the basis for design, the output of PSHA is then expressed in terms of
a specified ground motion, in the same way as DSHA.
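In code, this final step is no more than reading the hazard curve at the chosen annual frequency. The hazard-curve values below are hypothetical, invented purely to illustrate the log-log interpolation:

```python
import math

# Hypothetical hazard curve: (PGA in g, annual frequency of exceedance).
# The numbers are invented for illustration only.
hazard_curve = [(0.05, 1e-1), (0.1, 3e-2), (0.2, 8e-3), (0.4, 1.5e-3), (0.8, 2e-4)]

def design_pga(annual_rate):
    """Log-log interpolation of the hazard curve at the chosen annual rate."""
    for (pga1, r1), (pga2, r2) in zip(hazard_curve, hazard_curve[1:]):
        if r2 <= annual_rate <= r1:
            t = (math.log(annual_rate) - math.log(r1)) / (math.log(r2) - math.log(r1))
            return math.exp(math.log(pga1) + t * (math.log(pga2) - math.log(pga1)))
    raise ValueError("annual rate outside tabulated hazard curve")

print(design_pga(1.0 / 475.0))  # design PGA with a 475-year return period
```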
Another important difference between the two approaches, which is discussed
further in Sec. 3, is related to the treatment of hazard due to different sources of
earthquakes. In PSHA, the hazard contributions of different seismogenic sources are
combined into a single frequency of exceedance of the ground-motion parameter;
in DSHA, each seismogenic source is considered separately, the design motions
corresponding to a single scenario in a single source.
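The contrast can be made concrete with a small sketch; all source parameters and the ground-motion equation below are invented for illustration. DSHA takes the controlling scenario source by source, while a PSHA-style combination sums exceedance rates over all sources (scatter is omitted here to keep the contrast visible):

```python
import math

# Hypothetical seismogenic sources: (name, controlling magnitude, distance km, annual rate).
# All values are invented for illustration.
sources = [("fault A", 7.0, 25.0, 1/300), ("fault B", 6.3, 12.0, 1/150),
           ("zone C", 5.8, 8.0, 1/60)]

def median_pga(m, r_km):
    """Toy ground-motion prediction equation (invented coefficients), PGA in g."""
    return math.exp(-4.0 + 1.1 * m - 1.2 * math.log(r_km + 10.0))

# DSHA: each source considered separately; design motion from the controlling scenario.
dsha = max((median_pga(m, r), name) for name, m, r, _ in sources)
print("DSHA controlling source:", dsha[1], "PGA:", round(dsha[0], 3))

# PSHA-style combination: rates of exceeding a target motion summed over all sources.
target = 0.1
combined_rate = sum(rate for name, m, r, rate in sources if median_pga(m, r) > target)
print("combined annual rate of exceeding", target, "g:", combined_rate)
```

The single combined rate is the quantity a hazard curve reports; the per-source breakdown that DSHA retains is exactly what deaggregation later tries to recover.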
Regarding differences and similarities between the two methods, it is often
pointed out that probabilities are at least implicitly present in DSHA in so far
as the probability of a particular earthquake scenario occurring during the design life of the engineering project is effectively assigned as unity. An alternative interpretation is that within a framework of spatially distributed seismicity,
the probability of occurrence of a deterministic scenario is, mathematically, zero
[Hanks and Cornell, 2001] but in general the implied probability of one is a valid
interpretation of the scenarios defined in DSHA. As regards the resulting ground
motions, however, the probability depends upon the treatment of the scatter in the
strong-motion prediction equation: if the median plus one standard deviation is
used, for the particular earthquake specified in the scenario, this will correspond to a motion with a 16-percent probability of exceedance. The probabilistic nature of ground-motion levels obtained from scaling relationships is reflected in the fact that in Japan "probabilistic" usually refers to ground motions obtained in this way from a particular scenario, as opposed to "deterministic" ground motions obtained by modelling of the fault rupture and wave propagation [Abrahamson, 2000].

It can equally be pointed out that any PSHA includes many deterministic elements, in so much that the definition of nearly all of the input requires the application of judgements to select from a range of possibilities [Reiter, 1990]. In addition to the various parameters that define the physical model that is the basis for any PSHA, it could also be argued that another parameter, which has a pronounced influence on the input to engineering design, is also defined deterministically: the design probability of exceedance. This issue, which is of fundamental importance to the raison d'être of probabilistic seismic hazard assessment, is explored in Sec. 4.

2.2. From Cornell to kernel — which PSHA?

The nature of the conflict between proponents of PSHA and DSHA was previously likened to that between opposing political or religious ideologies, in which each side claims exclusive ownership of the truth. However, scratching the surface of opposing sides in ideological conflicts nearly always reveals the division to be far less clear, with divisions within each camp being often almost as pronounced as the fundamental ideological split itself. One only needs to look at the history of the Left in international politics, in which internal conflicts have often taken on a ferocity at least as great as that shown in the confrontation with the Right, for a clear example of an apparent dichotomy concealing a multitude of opinions and philosophies.^b In this sense, the analogy remains useful for the split between the proponents of DSHA and PSHA: the defendants of PSHA often ignore the fact that there are many different methods of analysis that fall under the heading of "probabilistic", and the proponents of each of these methods argue their case by pointing out shortcomings in the others.

The formal beginning of PSHA, as mentioned before, can be traced back to the classic paper by Cornell [1968]. Important developments included the development of the software EQRISK by McGuire [1976], with the result that the method is frequently referred to by the name Cornell–McGuire. A significant difference between EQRISK and the original formulation of Cornell [1968] was the inclusion of the influence of the uncertainty or scatter in the strong-motion prediction equation.

Two fundamental features of the Cornell–McGuire method are the definition of seismogenic source zones, as areas or lines, to form seismic sources with spatially uniform activity, and the assumption of a Poisson process to represent the seismicity, both of which have been challenged by different researchers who have proposed alternatives.

Many alternatives to uniformly distributed seismicity within sources defined by polygons have been put forward, such as those of Bender and Perkins [1982, 1987], who proposed sources with smoothed boundaries, obtained by defining a standard error on earthquake locations, an approach subsequently explored by Bender [1984]. In an earlier study, Peek et al. [1980] used fuzzy set theory to smooth the transitions between source boundaries. There have also been proposals to do away with source zones altogether and use the seismic catalogue itself to represent the possible locations of earthquakes. Makropoulos and Burton [1986] developed an approach using the earthquake catalogue to represent sources and Gumbel distributions. Such "historic" approaches, which may have been used before 1968, can be non-parametric [Veneziano et al., 1984] or parametric [Shepherd et al., 1993; Kijko and Graham, 1998]. Recent adaptations of these "zone-free" methods include the approach based on spatially-smoothed historical seismicity of Frankel [1995] and the "kernel" method of Woo [1996], who alludes to "misgivings over zonation and, consequently, with the edifice of hazard computation built on zonation". In these most recent methods, earthquake epicentres in the catalogue are smoothed, according to criteria related to their magnitude and recurrence interval, to form seismic sources.

The Poisson process, the cornerstone of the Cornell–McGuire method, has also been challenged: the use of Markov renewal chains has been proposed as an alternative probabilistic model for subduction zones with identified seismic gaps, to develop either slip-dependent [Kiremidjian and Anagnos, 1984] or time-dependent [Kiremidjian and Suzuki, 1987] estimates of hazard. There have also been numerous studies published giving short-term hazard estimates based on non-Poissonian seismicity, such as the forecast by Parsons et al. [2000] for hazard in the Sea of Marmara area following the 1999 Kocaeli and Duzce earthquakes.

The differences amongst these approaches to PSHA are not simply academic: Bommer et al. [1998] produced hazard maps for upper-crustal seismicity in El Salvador determined using the Cornell–McGuire approach, two zone-free methods and the kernel method. The four hazard maps, prepared using exactly the same input, show very significant differences in the resulting spatial distribution of the hazard, and the maximum values of PGA vary amongst the four maps by a factor of more than two. Similarly divergent results obtained using the Cornell–McGuire and the Frankel [1995] approaches have been presented by Wahlström and Grünthal [2000] for Fennoscandia.

^b Hence the joke that if there are three Trotskyists gathered in a room there will be four political parties.
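The flavour of the spatially-smoothed, zone-free methods can be illustrated with a minimal sketch in the spirit of Frankel [1995]. The catalogue coordinates and correlation distance below are invented, and real implementations smooth magnitude-dependent activity rates rather than raw epicentre counts:

```python
import math

# Hypothetical catalogue of epicentres (x, y in km); invented for illustration.
epicentres = [(10, 12), (11, 14), (40, 45), (42, 44), (41, 46), (70, 20)]

def smoothed_rate(x, y, c=10.0):
    """Spatially smoothed activity at (x, y): each epicentre contributes a
    Gaussian kernel with correlation distance c, normalised so that each
    kernel integrates to one over the plane."""
    total = 0.0
    for ex, ey in epicentres:
        d2 = (x - ex) ** 2 + (y - ey) ** 2
        total += math.exp(-d2 / (c * c))
    return total / (math.pi * c * c)

print(round(smoothed_rate(41, 45), 4))  # near the cluster: high smoothed rate
print(round(smoothed_rate(90, 90), 4))  # far from all events: near zero
```

Evaluating `smoothed_rate` on a grid replaces the hand-drawn source-zone polygons of the Cornell–McGuire method, which is precisely why the resulting hazard maps can differ so markedly from zoned ones.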
2.3. . . . and which DSHA?

Just as there are many different approaches to PSHA, to add to the confusion, there is not a single, established and universally accepted approach to DSHA, in part precisely because of the absence of a classic point of reference. Arguably, the recent paper by Krinitzsky [2001] could become the standard reference for DSHA that is currently missing from the technical literature. One important difference between DSHA as proposed by Krinitzsky [2001] and DSHA as described by Reiter [1990] and Kramer [1996] is that whereas the latter imply that the ground motions for each scenario should be calculated using median (50-percentile) values from strong-motion scaling relationships, Krinitzsky [2001] proposes the use of the median-plus-one-standard-deviation (84-percentile) values, a subject discussed in Sec. 3.3.

It is sometimes thought that DSHA is mainly applicable to site-specific hazard assessments, but there are examples of deterministic seismic hazard maps, such as those produced by the California Division of Mines and Geology for Caltrans [Mualchin, 1996]. The map is prepared by assigning the maximum credible earthquake (MCE) to each known active or potentially active fault, calculating the resulting ground motions, and mapping contours of the highest values of the chosen ground-motion parameter. Anderson [1997] argues for supplementing probabilistic maps with such deterministic "scenario maps" to provide insight into what might happen if particular faults actually rupture during the design life of a project.

The hazard mapping method developed by Costa et al. [1993] for Italy, and subsequently applied to many other countries [e.g. Panza et al., 1996; Alvarez et al., 1999; Aoudia et al., 2000; Radulian et al., 2000], has also been called deterministic, although it is significantly different from DSHA as described in Sec. 2.1. The method is based on the generation of synthetic accelerograms at the nodes of a grid, assigning the maximum magnitude earthquake to an active fault, from which contours of the peak motions are drawn. One notable feature of the approach is that the choice of grid size results in an arbitrary minimum source-to-site distance of about 15 km, which is hardly a worst-case scenario.

Confusion regarding the meaning of DSHA is also created by the use of the word "deterministic" to describe scenarios obtained by deaggregation of PSHA. Romeo and Prestininzi [2000] obtain design earthquake scenarios by manipulation of magnitude-distance pairs found from deaggregation and refer to these as "deterministic reference events", although the paper also uses the term in the sense defined in Sec. 2.1.

2.4. Combined DSHA and PSHA

The apparently irreconcilable contradiction between PSHA and DSHA has not been sufficient to prevent their combined use in some recent applications. In common with most seismic design codes, the 1997 edition of the Uniform Building Code (UBC97) defines the design earthquake actions on the basis of a zonation map corresponding to a 475-year return period. Within Zone 4, for any site within 15 km of an identified active fault capable of producing an earthquake of M 6.5 or greater, the near-source factors Na and Nv are applied to increase the spectral ordinates for the effects of rupture directivity [e.g. Somerville et al., 1997]. Arguably there is a probabilistic element in this procedure, since the slip rate is used in addition to the maximum magnitude to classify the fault and thus determine the values of the factors. However, the application of Na and Nv is essentially deterministic: the implicit assumption is that if there is an active fault within 15 km of the site, it will rupture during the design life of the structure and, furthermore, it will rupture in such a way as to produce forward directivity effects at the site. The validity of this assumption is questionable, in part due to the possibility that a significant active fault has not been recognised.

A more explicit combination of DSHA and PSHA has been used in the preparation of the maximum considered earthquake (for which the acronym MCE is, confusingly, also used) maps for the 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings [Leyendecker et al., 2000]. It is important to note that in this context MCE has a different meaning from that within DSHA mentioned above. Two maps are produced, one using PSHA for a probability of exceedance of 2% in 50 years, the other deterministic, using active fault locations and activities combined with median values from strong-motion scaling relations multiplied by a factor of 1.5, giving contours of spectral ordinates at 0.2 and 1.0 s. Areas are defined on the probabilistic map where the ordinates exceed 1.5g and 0.6g for periods of 0.2 and 1.0 s respectively; within these plateaus, wherever the deterministic values are higher than those from the probabilistic map, the latter are retained, and the values from the deterministic map are used instead wherever these are lower than those from the probabilistic map. This process effectively uses DSHA to cap the results of PSHA by not allowing motions higher than those corresponding to a deterministic scenario, which are effectively considered therefore as an upper bound; the conclusion is that the probabilistic estimates should not exceed the deterministic estimates, a subject explored further in the next section.

3. Earthquakes, Seismic Actions and Seismic Hazard

One of the most fundamental differences between DSHA and PSHA is the transparency, or otherwise, of the underlying features of the earthquake processes and the treatment of the uncertainties associated with current models of these processes. In this section, the two methods are considered with respect to their treatment of uncertainty and their relationship with real earthquake processes. Before continuing, it is important to note (and this is a source of much potential confusion) that in the DSHA context the "earthquake" refers to an event defined by magnitude and location, whereas in PSHA "earthquake" refers to the ground motion at the site.

3.1. Earthquake scenarios and seismic actions

The first and most fundamental output from a PSHA is a hazard curve: a plot of the values of the annual rate of exceedance, return period, or the probability of exceedance within a particular design life, against a selected ground-motion or response parameter (Fig. 1). The values on a hazard curve convey whether the area is of low, moderate or high seismic hazard, and its slope may indicate if the larger earthquakes have relatively short or long recurrence intervals. Beyond this, however, a hazard curve for a single ground-motion parameter tells one almost nothing about the nature of earthquakes likely to affect the site. For a given exceedance probability, the curves imply that the corresponding level of ground motion increases smoothly and continuously with the period of exposure. This is the inevitable result of calculations based on a random spatial distribution of earthquakes and a continuous magnitude-frequency relationship. In passing it is worth noting that the validity of the Gutenberg–Richter recurrence relationship, which forms the backbone of PSHA, has been questioned by several researchers [Krinitzsky, 1993; Speidel and Mattson, 1995; Hofmann, 1996].

Several centuries from now, when some accelerograph stations have been operating for thousands of years, it will be possible to plot the actual variation of average ground-motion parameters with time, to observe how well our hazard curves are modelling what actually may happen at any given site. The results will of course be a series of step functions rather than a smooth curve, but if the hazard curve provided a good fit to the recorded values this would vindicate PSHA.

Fig. 1. Seismic hazard curves obtained for San Salvador (El Salvador) using the median values from the attenuation relationship and including the integration across the scatter [Bommer et al., 1996].

Fig. 2. Contours of MMI > VII for upper-crustal earthquakes in and around San Salvador during the last three centuries [Harlow et al., 1993].

Fig. 3. Relationship between magnitude and recurrence intervals for earthquakes of M 7.1 and greater in the Mexican subduction zone [Hong and Rosenblueth, 1988].
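The Gutenberg–Richter relationship referred to above is easily evaluated; the a and b values below are illustrative only, not fitted to any real catalogue:

```python
import math

def annual_rate(m, a=4.5, b=1.0):
    """Gutenberg-Richter recurrence: log10 N(>= m) = a - b*m, with N in
    events per year. The a and b values are illustrative, not fitted."""
    return 10 ** (a - b * m)

for m in (5.0, 6.0, 7.0):
    n = annual_rate(m)
    print(f"M >= {m}: {n:.4f} per year, recurrence interval {1 / n:.0f} years")
```

With b = 1, each unit increase in magnitude reduces the rate by a factor of ten, which is exactly the smooth, continuous behaviour that the characteristic earthquake model discussed below contradicts.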
In the meantime, however, it is not actually possible to validate the results of a probabilistic seismic hazard assessment.

The characteristic earthquake model, in which major faults produce large earthquakes of similar characteristics at more or less constant intervals, is clearly not consistent with the idea that hazard varies gradually with time. The model was originally proposed by Singh et al. [1983] for major events in the Mexican subduction zone, where earthquakes of magnitude above 8 occur quasi-periodically and there is an absence of activity in the magnitude range 7.4 to 8.0 (Fig. 3). The concept of characteristic earthquakes has subsequently been applied to major crustal faults and reinforced by paleoseismology [Schwartz and Coppersmith, 1984]. The concept may not be limited, however, to large events on plate boundaries and major faults: Main [1987] identified characteristic earthquakes of magnitude about 4.6 in the sequence preceding the 1980 eruption of Mount St. Helens. Another example may be the upper-crustal seismicity along the Central American volcanic chain [White and Harlow, 1993]. These destructive events occur at irregular intervals, often in clusters, sometimes with almost identical locations and characteristics, around the main volcanic centres (Fig. 2) [Ambraseys et al., 2001]. Recurrence relationships derived for the entire volcanic chain clearly indicate a bilinear behaviour, reminiscent of the characteristic model (Fig. 4).

Fig. 4. Magnitude-frequency relationships for the Central American volcanic chain using the seismic catalogue for different periods: clockwise from top left, 1898–1994, 1931–1994 and 1964–1994 [Bommer et al., 1998].

For a given site affected by a single source with a characteristic earthquake, the ground motion expected at the site will clearly "jump" by a certain increment when the characteristic earthquake occurs. The recurrence of these destructive events obviously suggests itself for a deterministic treatment, as has been done for example in the microzonation study of San Salvador [Faccioli et al., 1988].

What then is the result of selecting ground motions corresponding to a 475-year return period? Let us return to the case of San Salvador (Fig. 4): the historical record over three centuries reveals recurrence intervals of between 2 and 65 years for local destructive earthquakes, with an average of about 22 years [Harlow et al., 1993]. Unlike the case of characteristic events on major faults or subduction zones, a wide range of M-R scenarios are possible. If, however, characteristic earthquakes are identified, what is the result of this approach? The answer will depend on the relation between the characteristic recurrence interval and the design return period: for large magnitude earthquakes on plate boundaries, the result could lie somewhere in the no-man's land above the ground-motion levels from the "background" seismicity and below the ground motions due to the occurrence of the characteristic event.

3.2. Seismic hazard from multiple sources of seismicity

In the simple descriptions of DSHA and PSHA given in Sec. 2.1, apart from the fundamental difference related to the units of time, the most important distinction between the two approaches is in the way the hazard from different sources of seismicity is treated. DSHA treats each seismogenic source separately and their influence on the final outcome is entirely transparent; PSHA combines the contributions from all relevant sources into a single rate for each level of a particular ground-motion parameter. If many earthquake sources affect a site, selecting the design seismic actions at a particular annual frequency may be a rational approach. For now let us assume that there is a sound and rigorous basis for the selection of the design probabilities, an assumption that will be revisited later.

The total probability theorem requires that all earthquakes contributing to the rate of exceedance of a given ground-motion level be considered simultaneously, and therein PSHA creates for itself considerable physical problems for the sake of mathematical rigour. The consequence is that if the hazard is calculated in terms of a range of parameters, such as spectral ordinates at several periods, the final results will generally not be compatible with any physically feasible earthquake scenario. In recent years, many deaggregation techniques have been developed to identify the earthquake scenarios that contribute most significantly to the design motions obtained from PSHA [McGuire, 1995; Chapman, 1995; Bazzurro and Cornell, 1999; Musson, 1999]. The output of these techniques is often that more than one scenario must be defined in order to match different portions of the uniform hazard spectrum (UHS). This is the reason that Romeo and Prestininzi [2000] needed to use an "adjusted" design earthquake, altering the scenario found by deaggregation, in order for the resulting spectrum not to fall below the UHS. If such manipulation of the scenarios found from deaggregation is required, or if different scenarios from different sources are to be used, this begs the question of why the influence of different seismogenic sources should be combined in the first place. The answer would appear to lie in the primordial importance that PSHA attaches to the rate of exceedance of ground-motion levels.
the epistemic uncertainty is due to incomplete data and knowledge regarding the earthquake process and the aleatory uncertainty is related to the unpredictable nature of future earthquakes [e. hence a probabilistic approach may help to select an appropriate sourceto-site distance (although it cannot guarantee that the distance for any site in the next earthquake will not actually be shorter). a more complete definition of epistemic and aleatory uncertainty is provided by Toro et al. An important development in the understanding of the nature of uncertainty in strong-motion scaling relationships is the distinction between epistemic and aleatory uncertainty. the scatter is either ignored. but the main effect of going to lower and lower probabilities of exceedance is just to add more and more increments of standard deviation to the expected motions from a typical earthquake scenario [Bommer et al. 1999]. J. Fundamental amongst these uncertainties are the location and magnitude of future earthquakes: PSHA integrates all possible combinations of these parameters while DSHA assumes the most unfavourable combinations.3. or accounted for by the addition of one standard deviation to the median values of ground motion. Like is not being compared with like. based on our current knowledge of earthquake processes and strong-motion generation. The issue of uncertainty All seismic hazard assessment.
whereas if only few records were found they could be scaled to account for the uncertainty. the scenario becomes an Ms 6. together with 2. The reduction of the uncertainty in the strong-motion scaling relationship has simply transferred it to the uncertainty in the other parameters of the hazard model. if sufficient number of accelerograms were found. 1997] and velocity of rupture. 1999]. largely because of the representation of the basic design seismic actions in terms of probabilistic maps and uniform hazard spectra [Bommer and Ruggeri. 5.7 standard deviations above the median. and the direction and speed of its propagation. irregular and high ductility structures and the need to define appropriate input has been one of the main motivations for the development of deaggregation techniques such as Kameda and Nojima [1988]. the scenario without scatter is an earthquake of Ms 6. Performing the deaggregations of hazard determined with and without integration across the scatter reveals interesting features of the way PSHA treats the uncertainty: for a 100 000-year return period.6 at 2 km from the site.March 15. and degree to which the rupture is unilateral or bilateral [Bommer and Martinez-Pereira. PSHA would then handle this by considering more and more scenarios to cover all of the possible variations of each feature. are currently almost impossible to estimate for future events. the last of which has been widely adopted into practice. the uncertainty would be represented by their own scatter. 2002]. Hwang and Huo [1994]. Probabilistic Seismic Hazard Assessment 55 [Somerville et al. This would reduce the scatter in the regressions to determine the equations. Bommer et al.4. R and ε. whereas DSHA would simply assume their least favourable combination. R and ε values. 
That the scatter is inherent in selected suites of accelerograms is reflected by specifications such as those in the 1999 AASHTO Guidelines for Base-Isolated .4 but here the interest is to consider the implications for the selection of hazard-consistent real accelerograms for engineering design. the number of standard deviations above the median. 3.3 earthquake at 6 km. 2002 8:34 WSPC/124-JEE 00064 Deterministic vs. the deaggregation must define the earthquake scenario in terms of M . records could be searched that approximately match the M -R pair of 6. [2000] have shown that if a single seismogenic source is considered. but it would not necessarily reduce the uncertainty associated with estimates of future ground motions because the point of rupture initiation. The implications of this for hazard assessment in general are discussed in Sec. The specifications for selecting or generating accelerograms in current seismic design codes are generally unworkable. with scatter. Acceleration time-history representation of seismic hazard The most complete representation of the ground-shaking hazard is an acceleration time-history.. Since the PSHA calculations include integration across all possible combinations of magnitude (M ) and distance (R) and also across the scatter in the strong-motion relationships. In the first case. Chapman [1995] and McGuire [1995]. The requirement of dynamic analysis for critical.6 and 2 km. it is possible to define the hazard-consistent scenario almost exactly rather than as “bins” of M .
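To illustrate what the ε term implies for these two scenarios, consider a toy lognormal scaling relationship. The coefficients below, and the value σ = 0.6 in natural-log units, are hypothetical and chosen purely for illustration; they are not taken from any published attenuation equation:

```python
import math

# Hypothetical attenuation coefficients -- for illustration only,
# not from any published strong-motion scaling relationship.
C0, C1, C2, SIGMA = -4.0, 1.0, 1.3, 0.6  # ln-units of PGA (g)

def pga(mag, r_km, epsilon=0.0):
    """PGA (g) at `epsilon` standard deviations above the median."""
    ln_median = C0 + C1 * mag - C2 * math.log(r_km + 10.0)
    return math.exp(ln_median + epsilon * SIGMA)

# Scenario deaggregated without scatter: Ms 6.6 at 2 km, median motion.
motion_a = pga(6.6, 2.0)
print(round(motion_a, 2))  # 0.53 (g, under these toy coefficients)

# Scenario deaggregated with scatter: Ms 6.3 at 6 km, but carrying
# epsilon = 2.7 standard deviations above its own median.
motion_b = pga(6.3, 6.0, epsilon=2.7)

# The epsilon term alone inflates the median by exp(2.7 * sigma):
print(round(motion_b / pga(6.3, 6.0), 2))  # 5.05
```

The point of the sketch is that the second, apparently milder, M-R pair only reproduces the hazard level because its motions are assumed to sit some five times above the median, which is exactly why matching real records to it is so difficult.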
In the case of the second earthquake scenario, one is faced with the almost impossible task of finding records that approximately match the M-R pair of 6.3 and 6 km for which all ground-motion parameters are simultaneously almost three standard deviations above the median. That the scatter is inherent in selected suites of accelerograms is reflected by specifications such as those in the 1999 AASHTO Guidelines for Base-Isolated Bridges: if three records are used in dynamic analysis, the maximum response values are used for design, whereas if more than seven are employed, it is permitted to use the average response. This problem can be overcome by carrying out the hazard assessment considering the joint probabilities of different strong-motion parameters [Bazzurro, 1998], but such techniques are some way from being adopted into general practice.

4. Seismic Hazard and Seismic Risk

Seismic hazard outside of the context of seismic risk is little more than an academic amusement. The important issue is always what is at stake, since what matters is the possibility of loss of function due to an earthquake, whether that be to safely house people, to provide rapid transportation, emergency medical care or a constant energy supply, or to contain radioactive material. The development of better practice is not always well served by papers that present local, regional or even global hazard studies whose intended purpose is never stated, or those that propound the virtues of one particular approach or method as the "best" for all applications.

4.1. Hazard assessment as an element of risk mitigation

The seismic risk in the existing built environment may be calculated in order to design financial (insurance and reinsurance) or physical (retrofit and upgrading) mitigation measures, for which probability may be needed. In the case of financial loss estimation for the purposes of insurance, the probability associated with different levels of risk is a vital part of the information required to fix premiums and to guide the purchase of reinsurance. For planned construction, hazard estimates are required so that appropriate measures can be taken to control the consequent levels of risk through relocation (exposure) or earthquake-resistant design (vulnerability). In the case of seismic design, of new or existing structures, the target is an "acceptable" level of risk. In the PSHA approach, once the design probability of exceedance is chosen, it is assumed that design for the corresponding level of ground motion will provide an acceptable level of risk. There are strong arguments for using a deterministic approach for critical facilities [e.g. Krinitzsky, 1995b], although current DSHA procedures may warrant improvement (Sec. 5.4). Krinitzsky [2001] proposes that any project for which the consequences of failure are intolerable should be considered critical. The definition of intolerable is, of course, subjective, although it is unlikely that anyone would dispute the adjective being applied to the failure of a nuclear power plant. To illustrate these issues, let us consider the seismicity of Great Britain, which by anyone's standards is "low".
Fig. 5. British earthquakes over a period of more than seven centuries. Magnitudes: a: M = 5.5; b: 5.5 > M > 5.0; c: 5.0 > M > 4.5; d: 4.5 > M > 4.0 [Ambraseys and Jackson, 1985].
Accounts of losses of life or property due to British earthquakes are hard to come across. Ambraseys and Jackson [1985] review several centuries of seismicity in Britain (Fig. 5) and find no earthquake larger than Ms 5.5. The real hazard is posed by moderate magnitude [M ∼ 5] earthquakes, whose association with tectonic structures is tenuous, that could produce damaging ground motions over a radius of about 5–10 km, with the additional blessings that these events are very infrequent — recurrence intervals nationally of at least 300 years — and that focal depths tend to increase with magnitude.

The probabilistic study by Musson and Winter [1997] found 10 000-year PGA values below 0.10g in most of the country but nudging past 0.25g at isolated locations, no doubt in part because of their use of rather high values of maximum magnitude, from 6.2 to 7.0. An earlier study by Ove Arup and Partners [1993] found similar results at selected locations and also carried out a probabilistic risk assessment. It was found that annual earthquake losses were only 10% of those due to meteorological hazards, although this was largely the result of the "unlikely but still credible scenario of a magnitude 6 earthquake directly under a city" [Booth and Pappin, 1995]. A useful, but complex and lengthy, study would compare these probabilistic losses with the cost of incorporating earthquake-resistant design into UK construction, taking into account competing demands on the same resources, and actually perform an iterative analysis to determine the reduction in risk from different levels of investment in earthquake-resistant design and construction. This would allow an informed decision regarding the benefit of the investment compared to the loss that may be prevented.

The issue of seismic safety definitely cannot be dismissed for nuclear power plants built in the UK, for which elaborate probabilistic hazard assessments have been carried out to define the 10 000-year design ground motions. This lack of significant activity, and of any "consensus on the principles for zoning this seismotectonically inhomogeneous territory" [Woo, 1996], has not prevented PSHA being performed. In the author's opinion, the only rational design basis for seismic safety in such a setting is a DSHA based on an earthquake of about Ms 5.5 occurring close to or even below the power station — which would require rupture on a fault that could very easily escape detection — unless there are good scientific reasons to exclude it as unfeasible. This approach could be classified as one of conservatism.

4.2. The use and abuse of probability

In their treatise on the philosophy of seismic hazard assessment for nuclear power plants in the UK, Mallard and Woo [1993] argue against the use of conservatism and in favour of "a systematic methodology for quantifying uncertainty". The tool proposed for this procedure is the logic-tree, a device that has become part of the stock-in-trade of PSHA enthusiasts. As applied to seismic hazard assessment, the etymology of the second part of its name is obvious from its dendritic structure, but the logic is sometimes harder to detect. At this point, in the author's opinion, three principles for seismic hazard assessment can be stated:
(1) Seismic hazard can only be rationally interpreted in relation to the mitigation of the attendant risk.

(2) If determinism is taken to mean the assignment of values by judgement, it is impossible to perform a seismic hazard assessment, by any method, without many deterministic elements. These should be based, as far as possible, exclusively on the best scientific data available.

(3) Probability, at least in a relative sense, is essential to the evaluation of risk^c (but not necessarily to its calculation).

The application of logic-trees to seismic hazard assessment is a mechanism for handling those epistemic uncertainties that cannot, unlike the aleatory scatter in strong-motion scaling relationships, be statistically measured. The different options at each step are considered and each is assigned a subjective weight that reflects the confidence in each particular value or choice. The results allow the determination of confidence bands on the mean hazard curve. Krinitzsky [1995a] presents powerful arguments against the use of the logic-tree for hazard assessment, which as Reiter [1990] rightly points out places "an uneasy burden on those who have to use the results", while illustrating how — without the contentious application of weights — it can be a useful tool for comparing risk scenarios.

c For critical structures, any finite probability of failure may be considered intolerable.

Fig. 6. Contours of 475-year PGA levels for El Salvador produced by independent seismic hazard studies [Bommer et al., 1997].
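The weighted-branch bookkeeping just described can be sketched in a few lines. Everything below is hypothetical — the branch options, the subjective weights and the per-branch 475-year PGA values are invented solely to show the mechanics:

```python
from itertools import product

# Toy logic tree with two epistemic nodes; each option carries a
# subjective weight (all values hypothetical, for illustration only).
mmax_branches = [(6.5, 0.3), (7.0, 0.5), (7.5, 0.2)]   # (Mmax, weight)
gmpe_branches = [("eqn A", 0.6), ("eqn B", 0.4)]       # (model, weight)

# Hypothetical 475-year PGA (g) computed for each branch combination.
pga_475 = {
    (6.5, "eqn A"): 0.18, (6.5, "eqn B"): 0.22,
    (7.0, "eqn A"): 0.21, (7.0, "eqn B"): 0.26,
    (7.5, "eqn A"): 0.24, (7.5, "eqn B"): 0.30,
}

# The branch weights multiply along each path and must sum to one;
# the weighted mean over all end branches is the reported hazard value.
weighted_mean = sum(
    w1 * w2 * pga_475[(mmax, gmpe)]
    for (mmax, w1), (gmpe, w2) in product(mmax_branches, gmpe_branches)
)
print(round(weighted_mean, 4))  # 0.2266
```

The spread of the six end-branch values around that mean is what supplies the confidence bands on the hazard curve; the contentious step, as the text notes, is the assignment of the weights themselves.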
So the big question is, how are the probabilities fixed? Hanks and Cornell [2001] explain that the starting point is a performance target expressed as a probability of failure, Pf, which is "set by life safety concerns or perhaps political fiat". This probability is defined by the integral:

Pf = ∫0→∞ H(a) · F′(a) · da     (2)

where H(a) describes the hazard curve (annual rate of exceedance of different levels of acceleration, a), and F′(a) is the derivative of the fragility function (the probability of failure given a particular level of acceleration). Once Pf has been determined, Hanks and Cornell [2001] show how, if the fragility curve is sufficiently steep, Eq. (2) can be approximated to determine the level of H(a) to be used as the basis of design to give the desired level of Pf. This is fine if an iterative process is carried out in order to determine the appropriate fragility curve, which would then define the design criteria, although this begins to get complicated because to obtain F(a) the design must already have been carried out.

Given how PSHA is actually applied, and in terms of the three principles outlined above, the logic-tree for hazard assessment appears to be upside down: deterministic judgements^d are turned into numbers and treated together with observational data to calculate probabilities that then fix the design ground motions. This again points to PSHA having a degree of confidence in its probabilities which, to the author, does not seem justified. The available earthquake and ground-motion data for most parts of the world is such that any estimate of the ground motion for a prescribed rate of exceedance will inevitably carry an appreciable degree of uncertainty. Reiter [1990] notes that "different studies may conflict over whether the likelihood of exceeding 0.3g at site X is 10^-3 or 10^-4 and whether the likelihood of exceeding the same acceleration at location Y is 10^-4 or 10^-5". In practice this means that for a specified annual rate of exceedance, estimates of PGA can vary by factors of two or more (Fig. 6). It is important to note that a degree of this divergence in results can be removed through application of the procedures proposed by the Senior Seismic Hazard Analysis Committee [SSHAC, 1997].

In fact, iteration on Eq. (2) is not how the design levels used in current practice have been fixed — or else it is quite remarkable that nearly every country in the world, from New Zealand to Ethiopia, regardless of seismicity, building types or construction standards, all came up with 0.002 as the design annual rate of exceedance! The almost universal use of the 475-year return period in codes can be traced back to the hazard study for the USA produced by Algermissen and Perkins [1976], which was based on an exposure period of 50 years (a typical design life) and a probability of 10% of exceedance, chosen "because the committee believed it reflected a probability or risk comparable to other risks that the public accepts in regard to life safety". Every code and regulation since has either followed suit or else adopted its own return periods: Whitman [1989] reports proposals by the Structural Engineers Association of Northern California to use a 2000-year return period; the AASHTO Guidelines specify 500 years for "essential" bridges and 2500 years for "critical" bridges; and in the Vision 2000 proposal for a framework for performance-based seismic design, return periods of 72, 244, 475 and 974 years have been specified as the basis for fixing demand for different performance levels [SEAOC, 1995], the origin of which is rarely if ever explained. It is hard not to feel that some of these values have been almost pulled from a hat,^e all the more so for the 10 000-year return periods specified for safety-critical structures such as nuclear power plants, whose selection has not been explained. Furthermore, if one considers that in most structural codes the importance factor increases the design ground motions by a constant factor, the actual return period of the design accelerations will be different from 475 years and will probably vary throughout a country. In the development of the NEHRP Guidelines, this has actually been done intentionally in order to provide a uniform margin against collapse, resulting in design for motions corresponding to 10% in 50 years in San Francisco and 5% in 50 years in Central and Eastern US [Leyendecker et al., 2000].

d Krinitzsky [1995a] describes these as "degrees of belief that are no more quantifiable than love or taste".

e A wonderful example occurred in a recent project — the design return period was finally fixed by the engineer at 2000 years because the bridge was judged to be "almost critical"!
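Since design levels are quoted interchangeably as exceedance probabilities or return periods, it is worth recalling the arithmetic that links them: under the usual Poisson assumption, a probability p of exceedance in t years corresponds to a return period T = -t / ln(1 - p). A quick sketch (not from the paper) reproduces the familiar code values:

```python
import math

def return_period(p_exceed, t_years):
    """Return period (years) for probability `p_exceed` of exceedance
    in `t_years`, assuming Poisson occurrence of exceedances."""
    return -t_years / math.log(1.0 - p_exceed)

print(round(return_period(0.10, 50)))  # 475  -> the ubiquitous code value
print(round(return_period(0.05, 50)))  # 975
print(round(return_period(0.02, 50)))  # 2475 -> the NEHRP MCE level
```

Note that 1/475 ≈ 0.0021, which is the "0.002 design annual rate of exceedance" referred to above.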
5. The Best of Both Worlds

The three principles stated in Sec. 4.2 imply that it is not possible to perform useful seismic hazard assessment that is free from determinism or probability, since both are essential ingredients. Hence it is logical to look for ways that make best use of both these components.

5.1. Hybrid approaches

Hanks and Cornell [2001] assert that the main difference between deterministic and probabilistic approaches is that PSHA has units of time and DSHA does not. This is true in the brief descriptions of the two methods presented at the beginning of the paper, but it does not mean that time and probabilities cannot be attached to deterministic scenarios. Orozova and Suhadolc [1999] assign recurrence rates to earthquakes of particular magnitude in order to adapt the method of Costa et al. [1993] — discussed in Sec. 2.3 — to create a "deterministic-probabilistic" approach. Similarly, it is common in loss estimation to model the hazard as a series of earthquake scenarios, each of which is assigned an average rate from the recurrence relationships [e.g. Kiremidjian; Frankel and Safak].
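The bookkeeping behind such scenario rates can be sketched with a Gutenberg–Richter recurrence relationship; the a- and b-values below are invented purely for illustration, not taken from any real source zone:

```python
# Hypothetical Gutenberg-Richter parameters for a source zone:
# log10 N(>=M) = a - b*M, with N in events per year.
A_VALUE, B_VALUE = 4.0, 1.0

def annual_rate_ge(mag):
    """Annual rate of events with magnitude >= mag."""
    return 10.0 ** (A_VALUE - B_VALUE * mag)

def scenario_rate(m_lo, m_hi):
    """Average annual rate assigned to a scenario bin [m_lo, m_hi)."""
    return annual_rate_ge(m_lo) - annual_rate_ge(m_hi)

# Discretise the source into magnitude-bin scenarios, as in loss estimation.
for lo, hi in [(5.0, 5.5), (5.5, 6.0), (6.0, 6.5)]:
    print(f"M {lo}-{hi}: {scenario_rate(lo, hi):.4f} events/yr")
# -> 0.0684, 0.0216, 0.0068 events/yr for the three bins
```

Each deterministic scenario thus carries its own rate, which is exactly the sense in which time can be attached to a nominally deterministic representation of the hazard.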
Frankel and Safak.D. narrow philosophy in stifling religious — and scientific — thought in later centuries are known only too well. Kiremidjian. Such arguments carry the danger of establishing orthodoxies. for example. with all their attendant perils and limitations. Nonetheless. Irenaeus of Lyons penned a multi-tome treatise entitled Adversus Haereses — Against Heresies. “Against orthodoxies”f How should the method for seismic hazard assessment be chosen? Reiter [1990] rightly argues that “the analysis must fit the needs”. It has in fact already become well recognised that there is not a mutually exclusive dichotomy between DSHA and PSHA. Bommer recurrence relationships [e.g. however. opening up the possibility of exploring combined methods. or which may be appropriate now and yet not be in a few years time as understanding of the earthquake process advances. probabilism is not a bivariate choice but a continuum in which both analyses are conducted. at a time when the early Christian church had gone beyond only having enemies outside its camp.March 15. and continues to do so [e. Reiter [1990] says that in many situations the choice “has been rephrased so that the issue is not “whether ” but rather “to what extent ” a particular approach should be used”. [1997]. St. McGuire [2001] asserts that “determinism vs. 2002].g. a mere two years after their publication. Jackson. What are the bases for the orthodoxies of PSHA and DSHA? For the PSHA church. have had to be revised not only numerically but also conceptually following the 1999 earthquakes in Turkey and Taiwan.. the cornerstone of its creed seems to be the overriding importance of the probability of exceedance of particular levels of ground motion. Hazard assessment should be chosen and adapted not only to the required output and objectives of the risk study of which it forms part. 
it should also be adapted to the characteristics of the area where it is being applied and the level and quality of the data available. interesting to note that both Reiter [1990] and McGuire [2001].2. but more emphasis is given to one over the other”. 2001]. The effects of such rigid insistence on a particular. It is. . 1999. The ability to locate and identify active geological faults in continental areas has also advanced by leaps and bounds over recent years. There is no need to establish standard approaches that may be well suited to some settings and not to others. J. Bommer et al. point towards deaggregation of PSHA as an important tool in hybrid approaches. leading to rapid evolution. 5.. Every major earthquake that is well recorded by accelerographs throws up new answers and poses new questions. The near-field directivity factors proposed by Somerville et al. despite the fact that teams of experts working with the same data can easily come up with answers f In the period 182 to 188 A. 2002 8:34 WSPC/124-JEE 62 00064 J. who make positive contributions to moving away from the dichotomy. a panacea for all situations. 1998. there is a tendency amongst many in the field of earthquake engineering who would argue for the superiority of one approach or the other as the “ideal”.
The word "orthodoxy", from the Greek, has exactly that paradoxical and unscientific meaning: correct (ortho) opinion (doxa). The Oxford Dictionary goes on to define the word also to mean "not independent-minded" and "unoriginal". One cannot get away from the fact that, with current levels of knowledge of earthquake processes, a major component of seismic hazard assessment is judgement, and risk decisions are governed not only by scientific and technical data but also by opinion. For the DSHA temple, the sacred cow is the worst-case scenario, even though, quite apart from the debatable issue of whether or not units of time are included, this is not what current deterministic procedures actually produce, as discussed in Sec. 5.4. The difference between deterministic and probabilistic hazard maps highlights one extremely important difference between current practice of the two methods.

5.3. Alternative approaches to seismic hazard mapping

One area in which PSHA currently enjoys almost total domination is in seismic hazard mapping, particularly for seismic design codes. Donovan [1993] states that "the ground motion portion of codes represents an attempt to produce a best estimate of what ground motions might occur at a site during a future earthquake". Current code descriptions of earthquake actions clearly do not fulfil this objective, for many reasons: because of the use of discrete zones to represent continuously varying hazard over geographical areas, and particularly because of the practice of combining the hazard from various sources of seismicity. Probabilistic maps combine the influence and effects of different sources of seismicity, often into a single map showing contours of PGA that are then used to anchor spectral shapes that relate only to the site conditions and which somehow try to cover all possible earthquake scenarios. The uniform hazard spectra of seismic codes do not represent motions that might occur during a plausible future earthquake, whence the difficulties of codes to specify acceleration time-histories. It is important to note here that several codes, and in particular the NEHRP provisions [FEMA, 1997], use two parameters to define the spectrum, hence the spectral shape does vary with hazard level, but the above comments still apply to some extent.

In certain regions, particularly those affected by both small, local earthquakes and larger, distant earthquakes, the application of spectral modal analysis using the code spectrum can amount to designing a long-period structure for two different types of earthquake occurring simultaneously. Here it is hard to justify this by insisting on the importance of the total probability when codes are generally based on the arbitrary level of the 475-year return period — a return period which holds only for the zero period spectral ordinate — and, even more importantly, the actual return period will often not correspond to the 475-year level even at the spectral anchor point. If it is accepted that the total probability theorem is not an inviolable principle in defining earthquake actions, more imaginative procedures can be used.
If the objective is to obtain improved control over earthquake response to expected seismic actions, then it is surely a step in the right direction to begin by separating different types of earthquake action. This is, in fact, already done: the seismic design codes of both China and Portugal define one spectrum for local events and another for more distant large magnitude earthquakes. Within the framework of performance-based seismic design it is already envisaged that several levels of earthquake action will be considered in structural design. Once different types of seismicity are separated, the hazard resulting from different sources of seismicity can be treated individually and their resulting spectra defined separately, and it will usually be found that at most sites one source dominates. One possibility is then to deaggregate the hazard associated with the dominant source for each site and hence characterise the hazard in terms of actual M-R scenarios. The hazard could then be represented by maps showing contours of M and R for different types of seismicity. This would differ from the maps of magnitude and distance presented by Harmsen et al. [1999], because a single pair of maps would cover all strong-motion parameters rather than just the spectral ordinate at a single response period. The important point is that once the hazard is characterised by pairs of magnitude and distance, every required parameter of the ground motion, including ordinates of spectral acceleration and displacement in both the vertical and horizontal directions, and duration, can be computed directly, and the required criteria for selecting or generating acceleration time-histories are provided [Bommer, 2000]. The problem still remains as to how to select an appropriate return period at which to perform the deaggregation, but in many cases it will be possible to define the M-R pairs deterministically [Bommer and White, 2001]. Another issue is how to incorporate the uncertainty rationally without going to very complex representations of hazard in terms of M-R-ε contours [Bazzurro et al., 1998].

5.4. Upper bounds: the missing piece

In 1971, the San Fernando earthquake more than doubled the number of strong-motion accelerograms available, marking the dawn of the age of curve fitting to clouds of data to find scaling relationships. It is curious that the curves fitted severely underestimated the most significant ground-motion recorded in the earthquake (Fig. 7). The scatter in these relationships is generally assumed to be lognormal and is invariably large: for spectral ordinates, the 84-percentile values are typically 80–100% higher than the median. This scatter creates difficulties for both DSHA and PSHA. In PSHA the untruncated lognormal scatter results in "the probable maximum ground motion at a particular site increasing indefinitely as the time window of the PSHA increases, due to the increasing influence of the tail of the Gaussian distribution on the probabilistic values" [Anderson and Brune, 1999]. This has given rise to different mechanisms for truncating the scatter at a certain number of standard deviations above the median.
There is also debate regarding the actual level at which the truncation should be applied: proposals range from 2 to 4.5 standard deviations above the median [Reiter, 1990; Romeo and Prestininzi, 2000; Abrahamson, 2000], and some widely used software codes employ cut-offs at 6 standard deviations above the median. This truncation requires a physical rather than statistical basis: truncating at a fixed number of standard deviations assumes that, for all magnitude and distance combinations, the physical upper bound on the ground motion is always at a fixed ratio of the median amplitudes. Upper bounds are required.

Untruncated lognormal distributions create even more difficulties for DSHA, since, at least for critical structures, its basis should be to identify the worst-case scenario. Firstly, it establishes a maximum credible earthquake (MCE) as the basis for the design. Abrahamson [2000] points out that if the MCE is determined as the mean value from an empirical relationship between magnitude and rupture dimensions, it is not the worst case; the worst case would be the magnitude at which the probability distribution is truncated, especially because the "regressions are supported by very few data for large magnitudes" [Jackson, 1996].

Fig. 7. Attenuation relationship derived for the 1971 San Fernando earthquake data [Dowrick, 1987].
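What these truncation choices imply for a single scenario can be sketched with a truncated (and renormalised) normal distribution for ε. The calculation below is purely illustrative:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_exceed(z, k_trunc):
    """P(epsilon > z) when the scatter is truncated and renormalised
    at k_trunc standard deviations above the median."""
    if z >= k_trunc:
        return 0.0
    return (phi(k_trunc) - phi(z)) / phi(k_trunc)

# Chance of motions more than 3 sigma above the median under different
# truncation levels (the untruncated value is about 1.35e-3):
for k in (2.0, 3.0, 4.5, 6.0):
    print(f"cut-off {k} sigma: P(eps > 3) = {prob_exceed(3.0, k):.2e}")
```

The sketch makes the point of the debate concrete: a cut-off at 2 or 3 standard deviations makes ε = 3 motions impossible by fiat, while cut-offs at 4.5 or 6 standard deviations leave their probability essentially unchanged from the untruncated case, so the choice of level materially alters long-return-period hazard estimates.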
It is also interesting to note that before sufficient strong-motion accelerograms were available for regression analyses. Bazzurro. which clearly has very significant implications for design. Newmark. with associated confidence intervals for each of these. 1983] to ground-motion prediction models [Restrepo-V´elez. 2001].9 percentiles respectively.g. Time-dependent estimates of seismic hazard In order to make full use of the output from a PSHA. is taking PSHA to new limits [Smit et al.5.7 and 99. The Pegasos project. 5. although in terms of spectral amplitudes the effect of the third standard deviation thrown on can increase the amplitudes by factors of two. For example. Newmark and Hall. in order to avoid physically unrealisable ground-motion amplitudes. Ambraseys. The ground-motion estimates will be defined by median values. Here again. a number of studies considered possible upper bounds on groundmotion parameters [e. Ambraseys and Hendron. standard deviations and truncations. Can upper bounds be fixed for ground-motion parameters? Reiter [1990] argues that there must be a physical limit on the strength of ground motion that a given earthquake can generate. The problem is where between these two limits to fix the worst case? Probabilistically they are close. J. does it then make sense to add on two or three standard deviations when a significant component of the scatter has already been accounted for? There are many ways that the scatter in attenuation equations can be truncated. 1998]. becomes more important. Such approaches are now being developed.g. corresponding to the 97. including the adaptation of models developed for log-normal distributions with finite maxima [Bezdak and Solomon. it is necessary that the engineering design also follow a probabilistic approach. Again Abrahamson [2000] points out that this is not the worst case but then goes on to state “the worst case ground motion would be 2 to 3 standard deviations above the mean”. 
there are many obstacles to the adoption of these approaches in routine engineering practice simply because there are few areas of activity where the saying “time is money” is as true as it is in the construction industry. 1974. if a deterministic scenario has included forward directivity effects. DSHA calculates design ground motions as the median or median-plus-one-standard deviation values from strong-motion scaling relationships. physical upper bounds are needed. 2002 8:34 WSPC/124-JEE 66 00064 J. for a wide range of magnitude-range distance pairs. Upper bounds need to be established and procedures for their application developed that take into account current understanding of epistemic uncertainty. Ida.March 15. including estimates of uncertainty and the expected levels of ground motion for a range of return periods. 1969]. However. 1965. 2002]. particularly through the outstanding work of Allin Cornell and his students at Stanford University [e. 1973. 1967. currently underway to assess seismic hazard at nuclear power plant sites in Switzerland down to very low rates of exceedance. One could actually conclude . As PSHA pushes estimates to longer return periods the issue of truncating the influence of the scatter. Bommer After fixing the MCE..
The limited time made available for even major engineering projects is reflected in the very tight deadlines often set for the execution of seismic hazard assessments. In engineering practice it is not unusual for a hazard assessment for a site to be carried out in a few weeks, regardless of whether the probabilistic or deterministic approach is adopted, generally making it inevitable that the study will be based largely on available data published in papers and reports. Under such circumstances, in situations where the design engineer — for whom the provision of earthquake resistance is just one of a series of considerations that affect siting, dimensions and detailing — is seeking a single number to characterise the earthquake action, and especially where the design considers the seismic performance at only one limit state, DSHA can represent an attractive option to define a level of ground motion that is unlikely to be severely exceeded.

One advantage of DSHA is that it immediately releases the hazard analyst from the responsibility of defining the design return period, which the client or engineer often passes, quite incorrectly, to the Earth scientist. More importantly, DSHA is generally more dependent on the data that is likely to be determined most reliably: the characteristics of the largest historical earthquakes and the location of the largest geological faults. Since the actual calculations involved in DSHA are simple, the analyst can carry out sensitivity analyses even when working to a very strict timetable. Furthermore, because of the transparency of the deterministic approach, peer review can be carried out swiftly and easily, which may not be the case for a probabilistic assessment where so many input parameters and assumptions, whose individual influences are often difficult to distinguish, need to be defined. When working to a very short time scale, a peer-reviewed DSHA may be far more useful to the project engineer and ultimately to the provision of an adequate level of earthquake resistance.

One cannot escape from the fact that expert opinion is of overriding importance, and the value of a logic-tree formulation with weights assigned by one analyst is questionable. This is not to say that careful use of multiple expert opinions is not to be encouraged; indeed, a Level 4 hazard study as recommended by SSHAC [1997] provides a very well crafted framework for exactly such a process. However, the time scale, and hence cost, required to perform such a study is beyond the means and the schedule of most engineering projects: to date, the SSHAC Level 4 method has only been applied in the Yucca Mountain project [Stepp et al., 2001] and now the Pegasos project in Switzerland [Smit et al., 2002]. One could actually conclude that probabilistic assessment of design earthquake actions is far more advanced than probabilistic approaches to seismic design, and for this reason the results of PSHA are generally not used to their full advantage.

6. Conclusions

There is not a simple dichotomy between probabilistic and deterministic approaches to seismic hazard assessment: many different probabilistic approaches exist and there is not a standard methodology for deterministic approaches. There is not, nor does there need to be, a method that can be applied as a panacea in all situations. McGuire [2001] presents an interesting scheme for selecting the relative degrees of determinism and probabilism according to the application. Woo [1996] states that the method of (probabilistic) analysis should be decided on the merits of the regional data rather than the availability of particular software or the analyst’s own philosophical inclination. The reality is that both of these views are partially correct and hence both criteria need to be used simultaneously: the selection of the appropriate method must fit the requirements of the application and also consider the nature of the seismicity in the region and its correlation — or indeed lack thereof — with the tectonics.

There is no reason why a single approach should be ideal for drafting zonation maps for seismic design codes, for emergency planning, for loss estimation studies in urban areas, and for site-specific assessments for nuclear power plants. The analyst, after establishing the needs and conditions set by the engineer, should adapt the assessment both to these criteria and to the region under study, since the available time and resources, and the required output, will vary for different applications. It is not possible to define a set of criteria that can then be blindly applied to all types of hazard assessment in all regions of the world, and for this reason no such attempt has been made in this paper.

One application in which the arguments for strict adherence to the total probability theorem cannot be defended is the derivation of earthquake actions for code-based seismic design. The currently very crude definition of earthquake actions in seismic codes could be greatly improved if it was accepted that it is not necessary to use representations that attempt to simultaneously envelope all of the possible ground motions that may occur at a site. The confidence with which seismic probabilities can be calculated does not generally warrant their rigid use to define design ground motions, and wherever possible the results should at least be checked against deterministic scenarios, where there is sufficient data for these to be defined [McGuire, 2001].

Casting aside simplistic choices between DSHA and PSHA will help the best approach to be found and will also allow the best use to be made of the considerable and growing body of expertise in engineering seismology around the world.

Finally, one area in which research is required, and that will be of benefit to seismic hazard assessment in general and perhaps in particular to deterministic approaches, is to identify upper bounds on ground-motion parameters for different combinations of magnitude, distance and rupture mechanism. The existing database of strong-motion accelerograms can provide some insight into this issue [e.g. Bommer and Martínez-Pereira, 1999; Restrepo-Vélez, 2001]. Advances in finite fault models for numerical simulation of ground motions could also be employed to perform large numbers of runs, with a wide range of combinations of physically realisable values of the independent parameters, in order to obtain estimates of the likely range of the upper bounds on some parameters.
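The Monte Carlo exercise suggested above can be sketched in a few lines. The “simulator” here is a hypothetical point-source scaling relation with illustrative coefficients, standing in for a real finite fault code, and stress drop stands in for the full set of independent source parameters; only the structure of the exercise — maximisation over physically admissible inputs — is the point:

```python
import math
import random

# Monte Carlo sketch: evaluate a ground-motion simulator over many
# physically realisable parameter combinations in one magnitude-distance
# bin; the largest simulated amplitude estimates the upper bound on PGA.
# The scaling relation below is hypothetical, for illustration only.
random.seed(1)  # reproducible runs

def simulated_pga(magnitude, distance_km, stress_drop_bar):
    """Hypothetical ground-motion simulator returning PGA in g."""
    ln_pga = (-4.0 + 1.1 * magnitude - 1.3 * math.log(distance_km)
              + 0.5 * math.log(stress_drop_bar / 100.0))
    return math.exp(ln_pga)

def upper_bound_estimate(magnitude, distance_km, runs=10_000):
    """Largest PGA over a plausible stress-drop range (10-500 bar)."""
    return max(
        simulated_pga(magnitude, distance_km, random.uniform(10.0, 500.0))
        for _ in range(runs)
    )

bound = upper_bound_estimate(7.0, 10.0)
```

In a real study the sampled parameters would include slip distribution, rupture velocity, hypocentre location and site terms, and the bound would be reported as a range with confidence intervals rather than a single maximum.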
Acknowledgements

The author wishes to thank David M. Boore, Robin McGuire, John Douglas, Rui Pinho, Amr Elnashai, Nigel Priestley, Belén Benito, Luis Fernando Restrepo-Vélez, Nick Ambraseys and Sarada Sarma, who all provided very useful feedback on the first draft of this manuscript. The author is particularly indebted to Dave Boore for an extremely thorough, detailed and challenging review of the first draft; in the few cases where my determined stubbornness has led me to ignore his counsel, I have probably done so to the detriment of the paper. The second draft of the paper has further benefited from thorough reviews by John Berrill and two anonymous referees. The author has enjoyed and benefited from discussing many of the issues in this paper with others, particularly Norman A. Abrahamson, Sarada K. Sarma and the students of the ROSE School in Pavia, to whom I also extend my gratitude. Thanks are also due to John Douglas and John Berrill for “heads up” regarding some key references, and particularly to Ellis Krinitzsky and Tom Hanks for providing pre-prints of papers.

References

Abrahamson, N. A. [2000] “State of the practice of seismic hazard evaluation,” GeoEng 2000, Melbourne, Australia, 19–24 November.
Algermissen, S. T. and Perkins, D. M. [1976] “A probabilistic estimate of maximum accelerations in rock in the contiguous United States,” US Geological Survey Open-File Report 76–416.
Algermissen, S. T., Perkins, D. M., Thenhaus, P. C., Hanson, S. L. and Bender, B. L. [1982] “Probabilistic estimates of maximum acceleration and velocity in rock in the contiguous United States,” US Geological Survey Open-File Report 82–1033.
Alvarez, L., Vaccari, F. and Panza, G. F. [1999] “Deterministic seismic zoning of eastern Cuba,” Pure Appl. Geophys. 156, 469–486.
Ambraseys, N. N. [1974] “Dynamics and response of foundation materials in epicentral regions of strong earthquakes,” Proceedings Fifth World Conference on Earthquake Engineering, Rome, pp. cxxvi–cxlviii.
Ambraseys, N. N. and Hendron, A. J. [1967] “Dynamic behaviour of rock masses,” Rock Mechanics in Engineering Practice, eds. Stagg, K. G. and Zienkiewicz, O. C., John Wiley, London.
Ambraseys, N. N. and Jackson, J. A. [1985] “Long-term seismicity of Britain,” Earthquake Engineering in Britain, Thomas Telford, London, pp. 49–65.
Ambraseys, N. N., Bommer, J. J., Buforn, E. and Udías, A. [2001] “The earthquake sequence of May 1951 at Jucuapa, El Salvador,” J. Seism. 5(1), 23–39.
Anderson, J. G. [1997] “Benefits of scenario ground motion maps,” Engrg. Geol. 48, 43–57.
Anderson, J. G. and Brune, J. N. [1999] “Probabilistic seismic hazard analysis without the ergodic assumption,” Seism. Res. Lett. 70(1), 19–28.
Aoudia, A., Vaccari, F., Suhadolc, P. and Meghraoui, M. [2000] “Seismogenic potential and earthquake hazard assessment in Tell Atlas of Algeria,” J. Seism. 4, 79–98.
Barbano, M. S., Egozcue, J. J., García Fernández, M., Kijko, A., Lapajne, J., Mayer-Rosa, D., Schenk, V., Schenková, Z., Slejko, D. and Zonno, G. [1989] “Assessment of seismic hazard for the Sannio-Matese area of southern Italy — a summary,” Natural Hazards 2, 203–236.
Bazzurro, P. [1998] “Probabilistic seismic demand analysis,” PhD Dissertation, Stanford University.
Bazzurro, P. and Cornell, C. A. [1999] “Disaggregation of seismic hazard,” Bull. Seism. Soc. Am. 89, 501–520.
Bender, B. [1984] “Incorporating acceleration variability into seismic hazard analysis,” Bull. Seism. Soc. Am. 74, 1451–1462.
Bender, B. and Perkins, D. M. [1982] “SEISRISK II: A computer program for seismic hazard estimation,” US Geological Survey Open-File Report 82–293.
Bender, B. and Perkins, D. M. [1987] “SEISRISK III: A computer program for seismic hazard estimation,” US Geological Survey Bulletin 1772.
Bezdek, J. C. and Solomon, K. [1983] “Upper limit lognormal distribution for drop size data,” ASCE J. Irrigation and Drainage Engrg. 109(1), 72–88.
Bommer, J. J. [2000] “Seismic zonation for comprehensive definition of earthquake actions,” Proceedings Sixth International Conference on Seismic Zonation, Palm Springs, CA.
Bommer, J. J. and Martínez-Pereira, A. [1999] “The effective duration of earthquake strong motion,” J. Earthq. Engrg. 3(2), 127–172.
Bommer, J. J. and Papastamatiou, D. [1998] “Seismic contours: a new characterization of seismic hazard,” Proceedings Eleventh European Conference on Earthquake Engineering, Paris.
Bommer, J. J. and Ruggeri, C. [2002] “The specification of acceleration time-histories in seismic design codes,” Eur. Earthq. Engrg. 16(1), accepted for publication.
Bommer, J. J. and Salazar, W. [1998] “A case study of the spatial distribution of seismic hazard (El Salvador),” Natural Hazards 18, 145–166.
Bommer, J. J., Cepeda, J. M., Hasbun, J. and Méndez, D. [2001] “Una propuesta para un método alternativo de zonificación sísmica en los países de Iberoamérica,” Segundo Congreso Iberoamericano de Ingeniería Sísmica, Madrid, 16–19 October.
Bommer, J. J., McQueen, C., Salazar, W., Scott, S. and Woo, G. [1997] “A new digital accelerograph network for El Salvador,” Seism. Res. Lett. 68, 426–437.
Bommer, J. J., Scott, S. G. and Sarma, S. K. [2000] “Hazard-consistent earthquake scenarios,” Soil Dyn. Earthq. Engrg. 19, 219–231.
Bommer, J., Spence, R., Erdik, M., Tabuchi, S., Aydinoglu, N., Booth, E., del Re, D. and Peterken, O. [2002] “Development of an earthquake loss model for Turkish Catastrophe Insurance,” J. Seism., in press.
Booth, E. and Pappin, J. [1995] “Seismic design requirements for structures in the United Kingdom,” European Seismic Design Practice, ed. Elnashai, A. S., Balkema.
Chapman, M. C. [1995] “A probabilistic approach to ground-motion selection for engineering design,” Bull. Seism. Soc. Am. 85, 937–942.
Chapman, M. C. [1999] “On the use of elastic input energy for seismic hazard analysis,” Earthq. Spectra 15, 607–635.
Cornell, C. A. [1968] “Engineering seismic risk analysis,” Bull. Seism. Soc. Am. 58, 1583–1606.
Costa, G., Panza, G. F., Suhadolc, P. and Vaccari, F. [1993] “Zoning of the Italian territory in terms of expected peak ground acceleration derived from complete synthetic seismograms,” J. Appl. Geophys. 30, 149–160.
Donovan, N. C. [1993] “Relationship of seismic hazard studies to seismic codes in the United States.”
Dowrick, D. J. [1987] Earthquake Resistant Design for Engineers and Architects, Second edition, John Wiley & Sons.
Faccioli, E., Battistella, C., Alemani, P. and Tibaldi, A. [1988] “Seismic microzoning investigations in the metropolitan area of San Salvador, El Salvador, following the destructive earthquake of 10 October 1986,” Proceedings International Seminar on Earthquake Engineering, Innsbruck, pp. 28–65.
FEMA [1997] “NEHRP recommended provisions for seismic regulations for new buildings and other structures,” FEMA 302, Federal Emergency Management Agency, Washington DC.
Frankel, A. [1995] “Mapping seismic hazard in the Central and Eastern United States,” Seism. Res. Lett. 66(4), 8–21.
Hanks, T. C. and Cornell, C. A. [2001] “Probabilistic seismic hazard analysis: a beginner’s guide,” submitted.
Harlow, D. H., White, R. A., Rymer, M. J. and Alvarado, S. [1993] “The San Salvador earthquake of 10 October 1986 and its historical context,” Bull. Seism. Soc. Am. 83(4), 1143–1154.
Harmsen, S., Perkins, D. and Frankel, A. [1999] “Deaggregation of probabilistic ground motions in the Central and Eastern United States,” Bull. Seism. Soc. Am. 89, 1–13.
Hofmann, R. B. [1996] “Individual faults can’t produce a Gutenberg-Richter earthquake recurrence,” Engrg. Geol. 43, 5–9.
Ida, Y. [1973] “The maximum acceleration of seismic ground motion,” Bull. Seism. Soc. Am. 63(3), 959–968.
Jackson, D. D. [1996] “The case for huge earthquakes,” Seism. Res. Lett. 67(1), 3–5.
Jackson, J. [2001] “Living with earthquakes: know your faults,” J. Earthq. Engrg. 5 (Special Issue 1), 5–123.
Kameda, H. [1994] “Generation of hazard-consistent ground motion,” Soil Dyn. Earthq. Engrg. 13(4).
Kameda, H. and Nojima, N. [1988] “Simulation of risk-consistent earthquake motion,” Earthq. Engrg. Struct. Dyn. 16, 1007–1019.
Kijko, A. and Graham, G. [1998] “‘Parametric-historic’ procedure for probabilistic seismic hazard analysis,” Pure Appl. Geophys.
Kiremidjian, A. S. and Anagnos, T. [1984] “Stochastic slip-predictable models for earthquake occurrence,” Bull. Seism. Soc. Am. 74, 739–755.
Kiremidjian, A. S. and Suzuki, S. [1987] “A stochastic model for site ground motions from temporally independent earthquakes,” Bull. Seism. Soc. Am. 77, 1110–1126.
Kramer, S. L. [1996] Geotechnical Earthquake Engineering, Prentice Hall.
Krinitzsky, E. L. [1993] “Earthquake probability in engineering — Part 2: Earthquake recurrence and limitations of Gutenberg-Richter b-values for the engineering of critical structures,” Engrg. Geol. 36, 1–52.
Krinitzsky, E. L. [1995a] “Problems with Logic Trees in earthquake hazard evaluation,” Engrg. Geol. 39, 1–3.
Krinitzsky, E. L. [1995b] “Deterministic versus probabilistic seismic hazard analysis for critical structures,” Engrg. Geol. 40, 1–7.
Krinitzsky, E. L. [2001] “How to obtain earthquake ground motions for engineering design,” Engrg. Geol., submitted.
Krinitzsky, E. L., Gould, J. P. and Edinger, P. H. [1993] Fundamentals of Earthquake Resistant Construction, John Wiley.
Leyendecker, E. V., Hunt, R. J., Frankel, A. D. and Rukstales, K. S. [2000] “Development of maximum considered earthquake ground motion maps,” Earthq. Spectra 16(1), 21–40.
Main, I. G. [1987] “A characteristic earthquake model of the seismicity preceding the eruption of Mount St. Helens on 18 May 1980,” Phys. Earth Planetary Interiors 49, 283–293.
Makropoulos, K. C. and Burton, P. W. [1986] “HAZAN: a FORTRAN program to evaluate seismic hazard parameters using Gumbel’s theory of extreme value statistics,” Comput. Geosci. 12, 29–46.
Mallard, D. J. [1993] “Uncertainty and conservatism in UK seismic hazard assessment,” Nuclear Energy 32(4), 199–205.
McGuire, R. K. [1976] “FORTRAN computer program for seismic risk analysis,” US Geological Survey Open-File Report 76–67.
McGuire, R. K. [1995] “Probabilistic seismic hazard analysis and design earthquakes: closing the loop,” Bull. Seism. Soc. Am. 85, 1275–1284.
McGuire, R. K. [2001] “Deterministic vs. probabilistic earthquake hazard and risks,” Soil Dyn. Earthq. Engrg. 21, 377–384.
Mualchin, L. [1996] “Development of Caltrans deterministic fault and earthquake hazard map of California,” Engrg. Geol. 42, 217–222.
Musson, R. M. W. [1999] “Determination of design earthquakes in seismic hazard analysis through Monte Carlo simulation,” J. Earthq. Engrg. 3(4), 463–474.
Musson, R. M. W. and Winter, P. W. [1997] “Seismic hazard maps for the UK,” Natural Hazards 14, 141–154.
Newmark, N. M. [1965] “Effects of earthquakes on dams and embankments,” Géotechnique 15(2), 139–160.
Newmark, N. M. and Hall, W. J. [1969] “Seismic design criteria for nuclear reactor facilities,” Proceedings Fourth World Conference on Earthquake Engineering, Santiago de Chile, vol. 2, B5.1–B5.12.
Orozova, I. M. and Suhadolc, P. [1999] “A deterministic-probabilistic approach for seismic hazard assessment,” Tectonophysics 312, 191–202.
Ove Arup and Partners [1993] “Earthquake hazard and risk in the UK,” Her Majesty’s Stationery Office, London.
Panza, G. F., Vaccari, F., Costa, G., Suhadolc, P. and Fäh, D. [1996] “Seismic input modelling for zoning and microzoning,” Earthq. Spectra 12, 529–566.
Parsons, T., Toda, S., Stein, R. S., Barka, A. and Dieterich, J. H. [2000] “Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation,” Science 288, 661–665.
Peek, R., Berrill, J. B. and Davis, R. O. [1980] “A seismicity model for New Zealand,” Bull. New Zealand Nat. Soc. Earthq. Engrg. 13(4), 355–364.
Radulian, M., Vaccari, F., Mandrescu, N., Panza, G. F. and Moldoveanu, C. L. [2000] “Seismic hazard of Romania: deterministic approach,” Pure Appl. Geophys. 157, 221–248.
Reiter, L. [1990] Earthquake Hazard Analysis: Issues and Insights, Columbia University Press.
Restrepo-Vélez, L. F. [2001] “Explorative study of the scatter in strong-motion attenuation equations for application to seismic hazard assessment,” Master Dissertation, ROSE School, Università di Pavia.
Romeo, R. and Prestininzi, A. [2000] “Probabilistic versus deterministic hazard analysis: an integrated approach for siting problems,” Soil Dyn. Earthq. Engrg. 20, 75–84.
Schwartz, D. P. and Coppersmith, K. J. [1984] “Fault behaviour and characteristic earthquakes: examples from the Wasatch and San Andreas faults,” J. Geophys. Res. 89, 5681–5698.
SEAOC [1995] Vision 2000 — A Framework for Performance Based Design, Structural Engineers Association of California, Sacramento, California.
Shepherd, J. B. and Tanner, J. G. [1993] “Revised estimates of the levels of ground acceleration and velocity with 10% probability of exceedance in any 50-year period for the Trinidad and Tobago region,” Caribbean Conference on Earthquakes, Volcanoes, Windstorms and Floods, Port of Spain, Trinidad, 11–15 October.
Singh, S. K. and Suárez, G. [1988] “The Mexico earthquake of September 19, 1985 — model for generation of subduction earthquakes,” Earthq. Spectra 4(3), 481–498.
Singh, S. K., Rodriguez, M. and Esteva, L. [1983] “Statistics of small earthquakes and frequency of occurrence of large earthquakes along the Mexican subduction zone,” Bull. Seism. Soc. Am. 73, 1779–1796.
Smit, P., Abrahamson, N., Sprecher, C., Tinic, S., Graf, R. and Scheider [2002] “Pegasos — a comprehensive probabilistic seismic hazard assessment for nuclear power plants in Switzerland,” Twelfth European Conference on Earthquake Engineering, London.
Somerville, P. G., Smith, N. F., Graves, R. W. and Abrahamson, N. A. [1997] “Modification of empirical strong ground motion attenuation relations to include the amplitude and duration effects of rupture directivity,” Seism. Res. Lett. 68(1), 199–222.
Speidel, D. H. and Mattson, P. H. [1995] “Questions on the validity and utility of b-values: an example from the Central Mississippi Valley,” Engrg. Geol. 40, 9–27.
SSHAC [1997] “Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and the use of experts,” NUREG/CR-6372, Senior Seismic Hazard Analysis Committee, Washington, DC.
Stepp, J. C., Wong, I., Whitney, J., Quittmeyer, R., Abrahamson, N., Toro, G., Youngs, R., Coppersmith, K., Savy, J., Sullivan, T. and Yucca Mountain PSHA Project Members [2001] “Probabilistic seismic hazard analyses for ground motions and fault displacements at Yucca Mountain, Nevada,” Earthq. Spectra 17(1), 113–151.
Toro, G. R., Abrahamson, N. A. and Schneider, J. F. [1997] “Model of strong ground motions from earthquakes in Central and Eastern North America: best estimates and uncertainties,” Seism. Res. Lett. 68(1), 41–57.
Veneziano, D., Cornell, C. A. and O’Hara, T. [1984] “Historical method of seismic hazard analysis,” Electrical Power Research Institute Report NP-3438, Palo Alto, California.
Wahlström, R. and Grünthal, G. [2000] “Probabilistic seismic hazard assessment (horizontal PGA) for Sweden, Finland and Denmark using different logic tree approaches,” Soil Dyn. Earthq. Engrg. 20, 45–58.
White, R. A. and Harlow, D. H. [1993] “Destructive upper-crustal earthquakes of Central America since 1900,” Bull. Seism. Soc. Am. 83(4), 1115–1142.
Whitman, R. V. (ed.) [1989] “Workshop on ground motion parameters for seismic hazard mapping,” Technical Report NCEER-89-0038, National Center for Earthquake Engineering Research, State University of New York at Buffalo.
Woo, G. [1996] “Kernel estimation methods for seismic hazard area source modelling,” Bull. Seism. Soc. Am. 86, 353–362.