
Engineering Fracture Mechanics

Volume 78, Issue 9, June 2011, Pages 2070–2081

Simple distribution-free statistical assessment of structural integrity material property data

Kim R.W. Wallin

VTT Materials for Power Engineering, P.O. Box 1000, FI-02044 VTT, Espoo, Finland

Received 10 June 2010. Revised 29 November 2010. Accepted 6 April 2011. Available online 12 April 2011.

http://dx.doi.org/10.1016/j.engfracmech.2011.04.002

Abstract
The assessment of structural integrity data requires a statistical treatment. However, most statistical
analysis methods make some assumption regarding the underlying distribution. Here, a new distribution-free
statistical assessment method based on a combination of Rank and Binomial probability estimates is
presented and shown to result in consistent estimates of different probability quantiles. The method is
applicable to any data set expressed as a function of two parameters. Data for more than two parameters
can always be expressed as different subsets varying only two parameters. In principle, this makes the
method applicable to the analysis of more complex data sets. The strength of the statistical analysis
method presented lies in the objectiveness of the result. There is no need to make any subjective
assumptions regarding the underlying distribution, or the relationship between the parameters considered.

Keywords

Rank probability;

Binomial probability;

Statistical assessment;

Objective estimates;

Lower bound estimates

Nomenclature
B0           normalisation thickness defined in standard ASTM E1921
C(T)         compact tension specimen
i            order number
j            order number
KJC          elastic-plastic fracture toughness based on the J-integral
Kmin         theoretical lower bound fracture toughness defined in standard ASTM E1921
KJClimit     specimen measuring capacity criterion defined in standard ASTM E1921
n            total number of data values
nfail        number of values failing the run-out criterion for a specific stress level
nrunout      number of values fulfilling the run-out criterion for a specific stress level
Nc           run-out criterion for fatigue life
Nf           fatigue life
Nf_5%        fatigue life corresponding to the 5% lower bound
P            cumulative probability
PB           Binomial probability
PB0.5        median Binomial probability estimate
Pconf        confidence level
Prank        cumulative Rank probability corresponding to Pconf
Prank0.5     median Rank probability estimate
P{X = i}     probability that the number of failures (X) is equal to i
s            fatigue stress level
s5%          fatigue stress level corresponding to the 5% lower bound
S–N          fatigue life data
T            temperature
T0           transition temperature defined in standard ASTM E1921
xi           parameter value corresponding to location i
δj           censoring parameter
Σn           sum of Σnrunout and Σnfail
Σnfail       total number of values failing the run-out criterion
Σnrunout     total number of values fulfilling the run-out criterion

1. Introduction

Structural integrity assessment relies on different material property data, such as fracture toughness,
fatigue and creep strength, and stress corrosion and corrosion fatigue crack growth rates. Common to all
these properties is that they can always be expressed in terms of two parameters. Fracture toughness, in
the case of brittle fracture, is usually expressed as a function of temperature; fatigue strength is usually
expressed in terms of the stress amplitude for a given number of cycles to failure; creep is usually
expressed in terms of time to rupture for a specific constant stress, etc.

A metallic material is not a continuum. It contains imperfections constituted by dislocation pile-ups, grains,
precipitates and inclusions. For a macroscopically homogeneous material, these imperfections are
approximately randomly distributed, making the material microscopically inhomogeneous but
macroscopically homogeneous. However, there is often an additional variation in the material (weld
beads, heat affected zones, texture, etc.), and the material is then more or less macroscopically
inhomogeneous as well. All the different microstructural features may affect the fracture process by
causing scatter. The macroscopic inhomogeneity can often be accounted for by careful specimen
sampling, but the microscopic inhomogeneity leads to fracture mechanism specific scatter. In order to
assess the material's fracture toughness properties properly with regard to structural integrity
assessment, knowledge of the nature of the scatter is required, combined with statistical methods by
which the scatter due to both microscopic and macroscopic inhomogeneity can be estimated.

Different micromechanisms respond differently to microscopic inhomogeneity. As a rule, the more local
the event controlling fracture is, the larger the scatter will be. In a normal tensile test, the whole specimen
is loaded rather evenly and a large material volume is involved in the deformation process, so the scatter
in tensile properties is generally small for macroscopically homogeneous materials. In the case of fracture
toughness, the situation is different. The material volume involved in the fracture process is much smaller
than in a tensile test and, in an extreme case, fracture may be controlled by a single material imperfection
in front of the macroscopic crack. In the case of cleavage fracture, the size of a volume element acting as
crack initiator may be of the order of one micrometer. This leads to a very pronounced micromechanism-
related scatter.

The assessment of structural integrity data thus requires a statistical treatment. However, most statistical
analysis methods make some assumption regarding the underlying distribution. The use of simple least-
squares fitting assumes that the scatter follows a normal (or log-normal) distribution. The maximum
likelihood method (of which the least-squares fit is a special case) always requires an assumption of the
underlying distribution. If the underlying distribution is unknown, the use of a distribution-free assessment
method is preferable to standard fits. Here, the combination of two different distribution-free assessment
methods is shown to produce good descriptions of the data scatter. The methods can be used to
determine which kind of distribution is most appropriate for the data, or they can be used directly to
develop statistically defined lower and upper bound estimates without requiring knowledge of the actual
distribution. The two methods are the Rank probability and Binomial probability methods. The Rank
probability, shown in Fig. 1, corresponds to the probability that, for a data set ordered by rank, the ith
value is equal to or less than xi. The Binomial probability, shown in Fig. 2, corresponds to the probability
that, for a data set ordered by rank, the lowest i values are equal to or less than xi. The definitions may
appear similar, but they contain two significant differences. First, the Rank probability is connected to a
specific test result, whereas the Binomial probability is connected to some freely chosen criterion xi.
Second, the Rank probability exists for the values i = 1 … n, whereas the Binomial probability exists for
the values i = 0 … n. These differences make the methods suitable for producing two independent
descriptions of the data scatter, as described in the following examples.

Fig. 1. Rank probability represents the probability that the ith rank ordered value is equal to or less than xi.
Fig. 2. Binomial probability represents the probability that the lowest i values are equal to or less than xi.

2. Rank probability

Rank probability is a popular way of visually analysing intermediate-size data sets. Since all test results
represent individual random probabilities, they follow the rules of order statistics (see e.g. [1]).

When test results are ordered by rank based on e.g. toughness, they can be assigned Rank probabilities,
which describe the cumulative probability distribution. The Rank probability estimates are not real
measured values, but estimates of the cumulative probability based on order statistics. Each data point
corresponds to a certain cumulative failure probability with a certain confidence. This can be expressed in
mathematical form based on the binomial distribution, as discussed e.g. in [2]. The equation for the
probability distribution of individual Rank probability estimates can, according to Johnson [3], be
expressed in the form of Eq. (1).

$$P_{\mathrm{conf}} = \sum_{j=i}^{n} \binom{n}{j}\, P_{\mathrm{rank}}^{\,j}\, (1 - P_{\mathrm{rank}})^{n-j} \qquad (1)$$

In Eq. (1), Pconf is the probability that the rank estimate corresponds to the cumulative probability Prank,
where n is the number of points and i is the rank number. Eq. (1) can be used to calculate e.g. rank
confidence estimates. The estimation requires solving Eq. (1) for Prank at a specific Pconf, which makes it
somewhat cumbersome. An example of the Rank probability estimate based on Eq. (1) is presented in
Fig. 3 for a data set consisting of 15 values. The estimates corresponding to the 5%, 50% and 95%
confidence levels are plotted. The figure clearly shows the degree of uncertainty in the Rank probability
for small data sets. Due to the slight inconvenience of using Eq. (1), simple approximations of the median
(Pconf = 0.5) or mean Rank probability estimate are usually preferred. One of the most accurate of the
simple analytical median Rank probability estimates has the form given in Eq. (2) (see e.g. [2]).
$$P_{\mathrm{rank}0.5} \approx \frac{i - 0.3}{n + 0.4} \qquad (2)$$

Fig. 3. Example of Rank probability estimates based on Eq. (1).

Eq. (2) can only be used, as such, for data sets where all results correspond to failure. It can also be used
with data sets where all values above a certain value have been censored e.g. due to non-failure or
exceeding the measuring capacity limit, but in this case the data set size, n, must refer to the total data
set including the censored data.
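
As a minimal computational sketch (assuming Python with SciPy; the helper names are illustrative), Eq. (1) can be inverted by noting that the cumulative probability of the ith of n rank-ordered values follows a Beta(i, n − i + 1) distribution, so Prank at a given confidence is obtained directly from the beta quantile function:

```python
# Rank probability estimates: exact confidence bounds from Eq. (1) via the
# beta quantile function, and the analytical median rank of Eq. (2).
from scipy.stats import beta

def rank_probability(i, n, p_conf=0.5):
    """Prank for rank i of n at confidence level p_conf (inverts Eq. (1))."""
    return beta.ppf(p_conf, i, n - i + 1)

def median_rank(i, n):
    """Simple analytical median Rank probability estimate, Eq. (2)."""
    return (i - 0.3) / (n + 0.4)

# 5%, 50% and 95% confidence estimates for a 15-point data set (cf. Fig. 3)
for i in range(1, 16):
    print(i, rank_probability(i, 15, 0.05), rank_probability(i, 15, 0.50),
          rank_probability(i, 15, 0.95), median_rank(i, 15))
```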

If the data set contains non-censored failure results at higher values than the lowest censored value, a
method of random censoring (often called the suspended items concept) is needed. In this case the order
number, i, in the rank estimation does not remain an integer. The effective order number can be
expressed in the form of Eq. (3) [2]. The censoring parameter δj is zero for censored data and one for
non-censored data. Even though Eq. (3) is applied to all values, only the non-censored values may be
used in the resulting analysis.
$$i_{j} = i_{j-1} + \delta_{j}\,\frac{n + 1 - i_{j-1}}{n + 2 - j} \qquad (3)$$
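
A short sketch of this recursion (Python; the form of Eq. (3) used here is the reconstruction given above, which reduces to the integer ranks 1 … n when no values are censored):

```python
# Effective order numbers for randomly censored data, Eq. (3).
def effective_order_numbers(delta):
    """delta: censoring indicators in ascending order of the measured values
    (1 = failure, 0 = censored). Returns the effective order numbers i_j."""
    n = len(delta)
    i_prev = 0.0
    order = []
    for j, d in enumerate(delta, start=1):
        i_prev += d * (n + 1 - i_prev) / (n + 2 - j)
        order.append(i_prev)
    return order

# With no censoring the recursion gives the integers 1..n; censoring the
# second value here produces fractional order numbers for later failures.
print(effective_order_numbers([1, 0, 1, 1]))
```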

3. Binomial probability
The binomial distribution shown in Eq. (4) is often used in proof-type testing, where a certain fraction of
the results fails a given criterion. It gives the probability that there are exactly i events in a set of size n,
when the discrete probability of the event is equal to PB.
$$P\{X = i\} = \binom{n}{i}\, P_{B}^{\,i}\, (1 - P_{B})^{n-i} \qquad (4)$$

The problem with Eq. (4) is that the probability of the event (PB) is assumed to be known. In a situation
where n tests have been made and i events have been found, the question is reversed to asking what the
discrete probability (PB) may be with some confidence (Pconf). A cumulative probability expression for the
confidence can be written in the form of Eq. (5), where Pconf represents the desired confidence level,
similarly to the Rank probability expression, Eq. (1).
$$P_{\mathrm{conf}} = 1 - \sum_{j=0}^{i} \binom{n}{j}\, P_{B}^{\,j}\, (1 - P_{B})^{n-j} \qquad (5)$$

The median Binomial probability estimate PB0.5 can also be expressed in a simple analytical form
analogous to the median Rank probability estimate in the form of Eq. (6). The equation has been obtained
through a simple curve fit to the median Binomial probability estimates from Eq. (5) accounting also for
the limiting values n = 1 and n = ∞.
$$P_{B0.5} \approx \frac{i + 0.7}{n + 1.4} \qquad (6)$$

Censored data values can be treated as uncensored when the censored value is higher than the criterion
used for PB; otherwise the value is disregarded. This is described in more detail in the examples.
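
A companion sketch for the Binomial estimates (again assuming Python with SciPy; the simple form of Eq. (6) used here is the reconstruction given above, which reproduces the tabulated PB0.5 values in Tables 1 and 2 to the printed precision):

```python
# Median Binomial probability estimate: exact solution of Eq. (5) by root
# finding, and the analytical approximation of Eq. (6).
from scipy.stats import binom
from scipy.optimize import brentq

def binomial_probability(i, n, p_conf=0.5):
    """Solve Eq. (5) for PB, given i events out of n at confidence p_conf."""
    return brentq(lambda p: (1.0 - binom.cdf(i, n, p)) - p_conf,
                  1e-12, 1.0 - 1e-12)

def median_binomial(i, n):
    """Analytical median Binomial probability estimate, Eq. (6)."""
    return (i + 0.7) / (n + 1.4)

# First row of Table 1: i = 4 failures out of n = 45 gives PB0.5 of about 0.10
print(binomial_probability(4, 45), median_binomial(4, 45))
```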

4. Difference in Rank and Binomial probabilities

The Rank probability corresponds to the actual measured value, whereas the Binomial probability only
contains the information that i values are less than or equal to x and (n − i) values are higher than x. The
Rank probability thus contains more information about the event. The two probabilities for a set of n = 15
are compared in Fig. 4. For large probabilities where nearly all results are less than x, the two
probabilities are nearly the same, but for small probabilities, where the information of a single event has a
higher impact, the Rank probability is less than the Binomial probability.
Fig. 4. Comparison of Rank probability and Binomial probability for a sample size of n = 15.

The Rank probability should be used to analyse data generated at a fixed value of some parameter, like
temperature or stress level. The Binomial probability is best suited to analyse data with respect to some
predefined criterion, like number of fatigue cycles or some desired level of toughness.
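
A self-contained illustration of this difference, using the median approximations of Eqs. (2) and (6) for n = 15:

```python
# Median rank (Eq. (2)) vs. median Binomial estimate (Eq. (6)) for n = 15:
# at low probabilities the rank estimate lies below the Binomial estimate,
# while at high probabilities the two nearly coincide (cf. Fig. 4).
n = 15
for i in range(1, n + 1):
    p_rank = (i - 0.3) / (n + 0.4)      # Eq. (2)
    p_binom = (i + 0.7) / (n + 1.4)     # Eq. (6)
    print(f"{i:2d}  {p_rank:.3f}  {p_binom:.3f}")
```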

5. Examples

The use of the two assessment methods to provide a combined single statistical estimate is highlighted
through the assessment of two quite different data sets. The first example describes the analysis of a
fatigue life (S–N) data set, whereas the second example describes the analysis of a brittle fracture
toughness data set. It is shown that, despite the completely different nature of the data sets, the
methodology is equally applicable to both.

5.1. Analysis of fatigue life data

A comparatively large S–N data set on a SAE 1050 steel, generated by Epremian and Mehl [4], has been
tabulated in [5]. The data set is shown graphically in Fig. 5. The original run-out limit was set at
Nc = 5·10⁷ cycles.
Fig. 5. Fatigue life data for a SAE 1050 steel [5].

The Rank probability analysis is quite straightforward. The fatigue lives (Nf) for each stress level (s) are
ordered by rank and assigned probabilities with Eq. (1) or Eq. (2). The resulting Rank probability
description is shown in Fig. 6 for the median Rank probability estimate. For each stress level, it is simple
to determine from the figure the Nf values corresponding to desired probabilities. This way one obtains
s − Nf data pairs corresponding to a specific probability, without having to make any assumptions
regarding the underlying distribution. It is of course also possible to fit a specific distribution to the whole
data set shown in Fig. 6, or to find a relation that would collapse the different probability traces onto one
curve. Since the present work is not focussed on the underlying fatigue mechanism, only three different
probability levels are considered (5%, 50% and 95%).
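
As a minimal sketch of this grouping step (Python; the stress-life pairs below are illustrative placeholders, not values from the data set of Fig. 5):

```python
# Per-stress-level rank analysis of S-N data: group fatigue lives by stress,
# rank each group, and assign median rank probabilities with Eq. (2).
from collections import defaultdict

data = [(276, 4.1e6), (276, 9.3e6), (276, 2.2e7),   # (s in MPa, Nf in cycles)
        (279, 1.5e6), (279, 3.8e6), (279, 7.0e6)]   # illustrative values only

by_stress = defaultdict(list)
for s, nf in data:
    by_stress[s].append(nf)

for s, lives in sorted(by_stress.items()):
    n = len(lives)
    for i, nf in enumerate(sorted(lives), start=1):
        print(s, nf, (i - 0.3) / (n + 0.4))          # s, Nf, median rank
```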

Fig. 6. Rank probability diagram for the data set in Fig. 5.

The estimation of the Binomial probability is somewhat more complicated than that of the Rank
probability. For the estimation, it is best to write the data in tabular form, as shown in Table 1. In the
table, the columns nrunout and nfail are the individual numbers of run-outs (as defined by the selected
fatigue criterion Nc) and failures corresponding to stress level s. The column Σnrunout contains all run-outs
corresponding to s or a higher stress level, and the column Σnfail all failures corresponding to stress level
s or lower. The column Σn is the sum of Σnrunout and Σnfail. The columns Σnfail and Σn are the basis for the
Binomial probability estimates, so that Σnfail represents i and Σn represents n in Eq. (6). The use of
Σnrunout and Σnfail makes the treatment of censored values simple: a censored value may contribute to
Σnrunout but not to Σnfail. The resulting probability diagram is presented in Fig. 7. Similarly to the Rank
probability diagram, it is possible to fit a specific distribution to the data or to try to collapse the data into a
single trend. Again, this is outside the scope of the present work, and here it is considered sufficient to
estimate Nf − s data pairs corresponding to the same probabilities as for the Rank probability.
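
The tabulation logic can be sketched as follows (Python; the function name and the short data list are illustrative placeholders):

```python
# Tabulation behind Table 1: for a run-out criterion Nc, a specimen at stress
# s is a "failure" if Nf < Nc and a "run-out" otherwise. Sum(nfail) gathers
# failures at stress s or lower, Sum(nrunout) gathers run-outs at s or higher,
# and PB0.5 follows from Eq. (6) with i = Sum(nfail) and n = Sum(n).
def binomial_table(data, nc):
    """data: list of (stress, Nf) pairs; nc: run-out criterion in cycles."""
    rows = []
    for s in sorted({si for si, _ in data}):
        cum_fail = sum(1 for si, nf in data if si <= s and nf < nc)
        cum_runout = sum(1 for si, nf in data if si >= s and nf >= nc)
        n = cum_fail + cum_runout
        rows.append((s, cum_fail, cum_runout, n, (cum_fail + 0.7) / (n + 1.4)))
    return rows

# Illustrative placeholder data, not the values behind Fig. 5:
for row in binomial_table([(276, 4.1e6), (276, 9.9e7), (279, 1.5e6)], nc=5e7):
    print(row)
```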

Table 1. Median Binomial probability estimates for data shown in Fig. 5.

Nc      s (MPa)  nfail  nrunout  Σnfail  Σnrunout  Σn   PB0.5
5·10⁷   276      4      13       4       41        45   0.101
5·10⁷   279      10     11       14      28        42   0.338
5·10⁷   283      14     7        28      17        45   0.619
5·10⁷   286      13     7        41      10        51   0.796
5·10⁷   290      16     2        57      3         60   0.940
5·10⁷   293      19     1        76      1         77   0.979
5·10⁷   296      20     0        96      0         96   0.992
1·10⁷   276      4      13       4       43        47   0.097
1·10⁷   279      8      13       12      30        42   0.292
1·10⁷   283      14     7        26      17        43   0.601
1·10⁷   286      13     7        39      10        49   0.788
1·10⁷   290      16     2        55      3         58   0.938
1·10⁷   293      19     1        74      1         75   0.978
1·10⁷   296      20     0        94      0         94   0.992
5·10⁶   276      4      13       4       48        52   0.088
5·10⁶   279      7      14       11      35        46   0.247
5·10⁶   283      12     9        23      21        44   0.522
5·10⁶   286      13     7        36      12        48   0.743
5·10⁶   290      14     4        50      5         55   0.899
5·10⁶   293      19     1        69      1         70   0.976
5·10⁶   296      20     0        89      0         89   0.992
2·10⁶   276      1      16       1       85        86   0.019
2·10⁶   279      5      16       6       69        75   0.087
2·10⁶   283      4      17       10      53        63   0.166
2·10⁶   286      5      15       15      36        51   0.299
2·10⁶   290      10     8        25      21        46   0.542
2·10⁶   293      13     7        38      13        51   0.739
2·10⁶   296      16     4        54      6         60   0.891
2·10⁶   303      19     2        73      2         75   0.965
2·10⁶   310      18     0        91      0         91   0.992
1·10⁶   286      0      20       0       78        78   0.009
1·10⁶   290      2      16       2       58        60   0.044
1·10⁶   293      5      15       7       42        49   0.152
1·10⁶   296      4      16       11      27        38   0.297
1·10⁶   303      12     9        23      11        34   0.670
1·10⁶   310      16     2        39      2         41   0.937
5·10⁵   290      0      18       0       83        83   0.009
5·10⁵   293      2      18       2       65        67   0.039
5·10⁵   296      0      20       2       47        49   0.055
5·10⁵   303      1      20       3       27        30   0.117
5·10⁵   310      11     7        14      7         21   0.657

Fig. 7. Binomial probability diagram for the data set in Fig. 5.

If only a simple lower bound estimate is desired, the data can be expressed e.g. in the form of Fig. 8,
which combines the Rank and Binomial s − Nf data pairs corresponding to a 5% lower bound probability.
A simple average fit to the individual estimates is sufficient because, even though the individual data
pairs include a significant uncertainty, the mean value theorem shows that the average fit follows the
mean (or median) estimate closely. The main value of the present method lies in its full objectivity: no
subjective assumptions regarding the underlying mechanism or distribution are needed. The lower bound
can of course represent any desired probability level; the 5% lower bound is only given here as an
example.
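
A sketch of such an average fit (Python with NumPy; the s − Nf pairs are placeholders, and the straight line in stress versus log10 of life is purely an illustrative choice of fitting function, not one prescribed by the method):

```python
# Combine the 5% lower-bound s-Nf pairs from the Rank and Binomial analyses
# and fit a single average trend through them by least squares.
import numpy as np

pairs = [(276, 1.2e6), (279, 6.0e5), (283, 3.1e5),   # from the Rank analysis
         (277, 1.0e6), (281, 4.5e5)]                 # from the Binomial analysis

s = np.array([p[0] for p in pairs], dtype=float)
log_nf = np.log10([p[1] for p in pairs])
slope, intercept = np.polyfit(s, log_nf, 1)          # simple average fit
print(f"log10(Nf_5%) = {slope:.4f}*s + {intercept:.2f}")
```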
Fig. 8. Lower bound estimate for the data set in Fig. 5.

By repeating the above procedure for other probabilities, it is possible to obtain a full description of the
scatter. This is highlighted by Fig. 9, where, besides the 5% lower bound, the 50% and 95% estimates
are also shown. As seen from the figure, the Rank and Binomial probability estimates complement each
other very well. This way, it is possible to obtain a full statistical characterisation of the data and the
relation between the parameters, without the need to make any subjective assumptions.

Fig. 9. Full statistical assessment of the data set in Fig. 5.

5.2. Analysis of cleavage fracture toughness data

One of the best documented cleavage fracture toughness data sets is the so-called EURO fracture
toughness data set [6]. The material is a forged, quenched and tempered pressure vessel steel, DIN
22NiMoCr37, which is widely used in nuclear power plants. Out of this data set, the data corresponding
to 25 mm thick C(T) specimens were selected for an example assessment. The data are shown in Fig.
10. The Rank probability analysis is again quite straightforward. The data for each separate temperature
are simply ordered by rank and the Rank probability is obtained from Eq. (1) or Eq. (2).
Fig. 10. 25 mm C(T) specimen data from the EURO fracture toughness data set [6].

The resulting Rank probability description is shown in Fig. 11 for the median Rank probability estimate.
For each temperature, it is simple to determine from the figure KJC values corresponding to desired failure
probabilities. This way, one obtains T − KJC data pairs corresponding to a specific probability, without
having to make any assumptions regarding the underlying distribution. It is of course also possible to fit a
specific distribution to the whole data set shown in Fig. 11, or to find a relation that would collapse the
different probability traces onto one curve. An exercise like this has been previously done for the EURO
data set [7]. In the present work, however, only three different probability levels are considered (5%, 50%
and 95%).

Fig. 11. Rank probability diagram for the data set in Fig. 10.

As with the fatigue data, estimation of the Binomial probability is more complicated than that of the Rank
probability. For the estimation, it is again best to write the data in tabular form, as shown in Table 2. In
the table, the column Σnrunout contains all results above the selected fracture toughness level at or below
the specific temperature, and the column Σnfail all values below the selected fracture toughness level at
or above the specific temperature. The column Σn is the sum of Σnrunout and Σnfail. The columns Σnfail and
Σn are the basis for the Binomial probability estimates, so that Σnfail represents i and Σn represents n in
Eq. (6). The use of Σnrunout and Σnfail makes the treatment of censored values straightforward: a censored
value may contribute to Σnrunout but not to Σnfail. The resulting probability diagram is presented in Fig. 12.
Similarly to the Rank probability diagram, it is possible to fit a specific distribution to the data or to try to
collapse the data into a single trend. Here, only three different probability levels are considered (5%, 50%
and 95%).

Table 2. Median Binomial probability estimates for data shown in Fig. 10.

KJC (MPa√m)  T (°C)  Σnfail  Σnrunout  Σn   PB0.5
600          20      1       48        49   0.033
600          0       6       39        45   0.144
600          −10     8       3         11   0.702
600          −20     38      0         38   0.983
500          20      0       56        56   0.011
500          0       2       46        48   0.054
500          −10     3       7         10   0.324
500          −20     30      3         33   0.893
500          −40     60      0         60   0.989
400          20      0       61        61   0.011
400          0       1       51        52   0.032
400          −10     1       11        12   0.126
400          −20     25      6         31   0.793
400          −40     58      0         58   0.988
300          −10     0       22        22   0.029
300          −20     14      17        31   0.454
300          −40     45      1         46   0.964
300          −60     78      0         78   0.991
200          −10     0       52        52   0.013
200          −20     3       47        50   0.072
200          −40     17      20        37   0.461
200          −60     48      2         50   0.948
200          −91     78      0         78   0.991
100          −60     0       50        50   0.013
100          −91     13      17        30   0.436
100          −154    39      0         39   0.983

Fig. 12. Binomial probability diagram for the data set in Fig. 10.

Fig. 13 shows the combined estimates for the 5%, 50% and 95% failure probabilities. As seen from the
figure, the two different estimates complement each other very well. Free-hand fits to the three estimates
are compared with the result of a standard ASTM E1921 Master Curve fit to the data, shown in Fig. 14.
Within the validity window of the Master Curve, the Rank and Binomial probability-based estimates
correspond almost perfectly with the Master Curve fit. Above the measuring capacity limit for the
25 mm C(T) specimen, the Rank and Binomial probability-based estimates for the 50% and 95%
probability levels rise above the Master Curve. This is due to a loss of constraint in the specimens. The
5% estimate rises above the Master Curve at a lower fracture toughness level, which indicates that, in
this region, the simple assumption of a constant lower bound fracture toughness, close to 20 MPa√m, is
no longer valid.
Fig. 13. Full statistical assessment of the data set in Fig. 10 compared to a standard Master Curve representation of the data.

Fig. 14. Standard ASTM E1921 analysis of the data in Fig. 10.

6. Discussion

The Rank probability is a well known and much used method for the statistical description of data sets.
The Binomial probability, however, has not previously been used in a similar fashion. The combination of
these two probability estimates must therefore be considered unique.

The strength of the above-described statistical analysis method lies in the objectiveness of the result.
There is no need to make any subjective assumptions regarding the underlying distribution or the
relationship between the parameters. The combination of Rank and Binomial probability estimates
doubles the number of independent point estimates, and is considered to make a mean fit to the data
more accurate with respect to the true values. This is especially important for smaller data sets, where
the uncertainty of an individual estimate may be considerable. The above examples cover only two
possible applications, but the method is applicable to any data set expressed as a function of two
parameters. Data for more than two parameters can always be expressed as different subsets varying
only two parameters. In principle, this makes the method applicable to more complex data sets.

The method can either be used as a simple tool to determine desired lower bound estimates objectively,
or it can be used to provide a full statistical description of the data. In the latter case, the result can be
used as such, or as a basis for the development and validation of micromechanical theoretical models
describing the event. This is shown by the second example, where the method may be used to help
validate the Master Curve in the temperature regime covered by the ASTM E1921 standard. Besides
validation, the method also provides a quantitative description of fracture toughness data in the regime
where the 25 mm C(T) specimens are affected by loss of constraint, and at higher temperatures, where
the assumption of a constant lower bound fracture toughness (Kmin), close to 20 MPa√m, is no longer
valid. The result would also enable an estimation of Kmin in the second region, and by comparing
estimates from different specimen sizes it would also be possible to quantify the resulting loss of
constraint in a way that is fully objective.

The above examples only consider three different probability levels, but any number of probability levels
may be used to obtain the desired accuracy of description of the complete distribution.

It should be pointed out that the method is only applicable to sufficiently large data sets, where the data
describe the underlying distribution. The method, however, effectively decreases the required data set
size by a factor of 2. The average number of specimens for both the Rank and Binomial probability
should preferably be n (Σn) ⩾ 8. An example of a sufficiently large data set is the fatigue data set used
as an example in the standard ASTM E468-90 [8]. The standard presentation is reproduced in Fig. 15.
This presentation contains two subjective assumptions: it assumes a specific functional dependence
between the stress amplitude and the fatigue life, and it assumes a specific scatter distribution for the
fitting of the data. Fig. 16 shows the 5%, 50% and 95% predictions for the same data set analysed
according to the new method presented here. The result is compared with the standard ASTM E468-90
presentation in Fig. 17. For this data set, the median estimate coincides closely with the standard
presentation, but the difference is that, for the new method, no subjective assumptions are required.
Objective estimates of the scatter bounds are also obtained. This is not possible with the standard
presentation, where a subjective assumption of the underlying distribution has to be made.
Fig. 15. Standard presentation of constant amplitude fatigue test results according to ASTM E468-90 [8].

Fig. 16. Objective estimation of the median and the 5% and 95% scatter bands for the ASTM E468-90 data set shown in Fig. 15.

Fig. 17. Comparison of new objective analysis with the standard ASTM E468-90 presentation.

7. Summary and conclusions


A distribution-free statistical assessment method based on a combination of Rank and Binomial
probability estimates has been presented and shown to result in consistent estimates of different
probability quantiles. The method is applicable to any data set expressed as a function of two
parameters. Data for more than two parameters can always be expressed as different subsets varying
only two parameters. In principle, this makes the method applicable to the analysis of more complex
data sets.

The Rank probability is a well known and much used method for the statistical description of data sets.
The Binomial probability, however, has not previously been used in a similar fashion. The combination of
these two probability estimates is therefore considered to be unique.

The strength in the statistical analysis method presented lies in the objectiveness of the result. There is
no need to make any subjective assumptions regarding the underlying distribution, or of the relationship
between the parameters considered.

Acknowledgements

This work is part of the author's Academy Professorship and is funded by the Academy of Finland,
decision 117700.

References

[1] David HA, Nagaraja HN. Order statistics. 3rd ed. Hoboken, NJ: John Wiley & Sons; 2003.

[2] Lipson C, Sheth NJ. Statistical design and analysis of engineering experiments. New York: McGraw-Hill; 1973.

[3] Johnson LG. The median ranks of sample values in their population with an application to certain fatigue studies. Ind Math 1951;2:1–9.

[4] Epremian E, Mehl RF. The statistical behavior of fatigue properties and the influence of metallurgical factors. ASTM STP 137. West Conshohocken, PA: ASTM International; 1953. p. 25–57.

[5] Yoshimoto I. Fatigue test by staircase method with small samples. Bull JSME 1962;5:211–21.

[6] Heerens J, Hellmann D. Development of the Euro fracture toughness dataset. Engng Fract Mech 2002;69:421–49.

[7] Wallin K. Master curve analysis of the "Euro" fracture toughness dataset. Engng Fract Mech 2002;69:451–81.

[8] ASTM E468-90. Standard practice for presentation of constant amplitude fatigue test results for metallic materials. Annual book of ASTM standards, vol. 03.01. West Conshohocken, PA: ASTM International; 2007. p. 601–6.

Copyright © 2011 Elsevier Ltd. All rights reserved.
