www.elsevier.com/locate/eswa
Abstract
Since the publication of the initial consultative proposal for a new Basel capital accord in June 1999 (and the latest proposal from summer 2004), the influence of the proposed changes on bank management has been discussed intensively. In particular, the possibility to forecast insolvencies is one of the most relevant questions in many empirical studies. In this paper, we present an evaluation methodology for quantitative rating systems. As an example, we use the well-known logistic regression model to demonstrate the proposed evaluation methodology, and we discuss the results obtained in detail. Any other method (statistical or artificial intelligence methods, e.g. neural networks, fuzzy logic) can be evaluated in the same manner. As a side effect, the approach proposed might lead to improved forecasting results.
© 2005 Elsevier Ltd. All rights reserved.
historical data in order to optimise their risk management. In principle, there are three techniques available to forecast a rating grade: a qualitative, a quantitative and a combined approach. Moody's and Standard and Poor's mainly use qualitative approaches, which means that they rely on the know-how of credit risk experts instead of statistical methods. Yet the qualitative approach has crucial disadvantages: it is very time consuming and expensive. Therefore, such an approach cannot be applied to rate the large number of small and medium-sized enterprises. Banks are interested in an automated process to forecast rating grades. Some banks use a statistical approach to calculate a rating grade from historical accounting data only. Other banks apply a combined approach: they use statistical methods to obtain a quantitatively based rating grade, which is finally refined by experts, resulting in an improved rating grade.

The major disadvantage of a pure quantitative approach is the lack of an additional judgement by credit risk experts, which might improve the forecast. Therefore, a quantitative rating grade forecast must be of high quality. Banks must be able to rely on the rating grade forecasting system, because the resulting probability of default is a crucial input parameter to credit risk portfolio models (see Carey and Hrycay, 2001, p. 197).

Many statistically based methods used to generate a rating grade are only asymptotically consistent (for logistic regression see, e.g. Fahrmeir and Kaufmann, 1985, 1986). Therefore, analysts need a large database of information about insolvencies and solvencies to achieve nearly consistent forecasts of rating grades. Very often, such a large database is not available. It is therefore questionable whether such statistical methods are unbiased, and it is unclear which effects may result. Furthermore, the whole portfolio credit risk management system of a bank is based on these probabilities of default, so the influence of a bias in the forecasting process of rating grades might be huge. Because of this small sample problem, the quality of statistical methods is a crucial question for modern portfolio credit risk management systems.

The major objective of many empirical studies is to compare different statistical methods to classify the default risk of borrowers (see, e.g. Altman, 1968; Desai, Crook, & Overstreet, 1996; Frerichs & Wahrenburg, 2003; Huang, Chen, Hsu, Chen, & Wu, 2004; Srinivasan & Kim, 1987). Using accounting data and other historical information about solvent and insolvent companies, a statistically based model enables experts to forecast whether a company will likely become insolvent or not. A possible qualitative assessment of such a model is, e.g., the proportion of correctly classified borrowers with respect to solvent vs. insolvent firms in an out-of-sample or out-of-time test (see, e.g. Altman, 1968; Anders, 1998; Desai et al., 1996; Frerichs & Wahrenburg, 2003; Huang et al., 2004; Leker & Schewe, 1998; Srinivasan & Kim, 1987). Borrowers are frequently classified with high accuracy in the cited empirical studies. Nevertheless, some borrowers must be analysed in detail in order to get an adequate classification. For example, the Deutsche Bundesbank (see Deutsche Bundesbank, 1999) decides based on the resulting scores whether a borrower should be examined in detail or not. To minimise this effort, banks need a high-quality statistical approach. This implies that a rating system has to provide an accurate dichotomous classification, so that it minimises the additional analysis of borrowers.

Besides the dichotomous classification, another characteristic of rating systems is also of significant importance: the forecasted rating grade. It makes quite a difference whether a borrower gets the rating grade AAA or BBB. The individual rating grade (in conjunction with the mapping from a rating grade to a probability of default) is the important input parameter to portfolio credit risk models. A classification system with an excellent polytomous classification rate is therefore essential. Overestimating or underestimating the correct grades is expensive and introduces a bias into risk judgements and the whole risk management process.

Therefore, in a first step banks need a powerful approach to forecast the dichotomous classification, because this decides about an acceptance, a rejection or a further detailed analysis of the borrower by credit experts. In a second step, banks require a powerful system to forecast detailed rating grades for correct capital allocation and risk management. Unfortunately, there is no possibility to observe true rating grades in a historical data sample; only the event of insolvency is observable. If a solvent borrower gets a rating grade of, e.g., BBB, it is impossible to find out whether this (forecasted) grade will prove to be correct or not (as long as the borrower stays solvent). A validation of the forecasted rating grades using a real data sample is therefore impossible. Nevertheless, forecasted rating grades will affect credit conditions, credit portfolio optimisation and the bank's success. In the following, we present an evaluation system to assess the quality of statistical methods based on a given (artificial) data sample. Based on this system, a bank is able to decide which method should be used to forecast rating grades.

3. Logistic regression and bootstrapping

In this section, we introduce the classical logistic regression and the integration of the bootstrap method. The logistic regression serves as an example in order to demonstrate the Monte-Carlo based evaluation system later on. The integration of the bootstrap method into the logistic regression is an example for a possible improvement of classical methods. The evaluation system can also be applied to modern methods like neural networks or support vector machines, but the evaluation of such modern methods is beyond the scope of this paper.

In logistic regression, a single outcome variable Y_k (where k = 1,…,N indicates the N objects) follows a Bernoulli distribution. It becomes one with probability p_k and zero with probability 1 - p_k. In the case of modelling insolvencies, the event one may describe the default of a company, and the probability p_k is then called the probability of default (shortly: PD) of the kth company. p_k varies over the different objects as described by an inverse logistic function (also called link function) of a linear predictor x_k^T b. Here, x_k contains all explanatory
A. Oelerich, T. Poddig / Expert Systems with Applications 30 (2006) 437–447 439
[Fig. 1. The bootstrap process. Phase 1, data management: original data set, data set for prediction, random selection of subsets. Phase 2, generation of resamplings: 200 runs, prediction of PD. Phase 3, classification: empirical distribution, empirical expected value, predictions.]
variables with respect to the kth company and b all model parameters (including the cutpoint).

p_k = exp(x_k^T b) / (1 + exp(x_k^T b))    (1)

The model parameters are estimated by the maximum-likelihood approach (see, e.g. Agresti, 1996). The maximum-likelihood analysis works by finding the value of b that maximises the log-likelihood function. This value is labelled b̂. The estimated value b̂ has important characteristics: it is asymptotically consistent and asymptotically normally distributed (see, e.g. Fahrmeir & Kaufmann, 1985, 1986; for more details and discussion see, e.g. Hosmer & Lemeshow, 2000).

As discussed above, banks are limited to small data samples in insolvency analysis regarding different sectors of borrowers. The asymptotic properties of the logistic regression might cause a bias in forecasting models. In some studies, the logistic regression showed a bias even in very simple models (one or two explanatory variables) when the sample size was small (see, e.g. Matin, 1994). Banks often use very complex models with many explanatory variables (see, e.g. Hayden, 2002). The number of variables is usually reduced by applying a selection algorithm that eliminates insignificant variables. Nevertheless, the sample size usually remains small in relation to model complexity. As a result, the predictions may be biased, which influences the entire risk management and decision process.

In order to eliminate such an inaccuracy in logistic regression facing such data problems, we propose to integrate the bootstrap method into the forecasting process to stabilise the predictions. The bootstrap method was developed to generate an empirical distribution of a variable in those cases in which proof of an analytical distribution is impossible (see in particular Efron & Tibshirani, 1993). We use this method to reduce the variability of predictions with the logistic regression model.

In many studies, the bootstrap is used to quantify, e.g., a standard error or a confidence interval. The bootstrap methodology is based on the following idea: a variation of a given data set allows an estimation of an empirical distribution of a random variable. Fig. 1 shows the bootstrap process.

The first phase includes all steps necessary to prepare the data set for the analysis, such as the reduction of multicollinearity and standardisation. The second phase includes the generation of the empirical distribution function. We use the bootstrap method to generate an empirical distribution of the probabilities of default p̂_k as follows. The design matrix X includes all design vectors x_k (with k = 1,…,N). These design vectors might represent real data (e.g. accounting information in an observed sample) or artificial data. To generate an empirical distribution of the forecasted probabilities of default, a number of subsets of X is needed. We take a random selection out of all N design vectors; in every selection, we choose a sample size of N. It is possible that a company appears more than once in a subset or is not selected at all. We generate 200 subsets and estimate for each subset the probability of default for all companies. Finally, an empirical distribution of probabilities of default for every company results. We use the mean of this empirical distribution for each company as the prediction of its probability of default (see Fig. 1) instead of the classical PD estimated by the simple logistic regression model (see Eq. (1)). The variation of the data set might help to reduce the errors due to the small sample size problems of classical logistic regression. In order to investigate whether bootstrapping improves the classical logistic regression, we compare the bootstrap method of forecasting insolvencies and rating grades with the classical logistic regression model (without bootstrapping). We run
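The resampling scheme described above (draw 200 subsets of size N with replacement, refit the model on each, and average the per-company PD forecasts) can be sketched in Python. This is an illustrative reconstruction, not the authors' code: the plain gradient-ascent fitting routine, all function and parameter names, and the step counts are our own simplifications of the maximum-likelihood estimation.

```python
import math
import random

def fit_logit(X, y, lr=0.1, steps=500):
    """Fit a logistic regression (cutpoint + weights) by gradient ascent on the
    log-likelihood -- a simple stand-in for maximum-likelihood estimation."""
    n, p = len(X), len(X[0])
    b = [0.0] * (p + 1)                      # b[0] is the cutpoint (intercept)
    for _ in range(steps):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            eta = b[0] + sum(bj * xij for bj, xij in zip(b[1:], xi))
            pk = 1.0 / (1.0 + math.exp(-eta))
            err = yi - pk                    # score contribution (y_k - p_k)
            grad[0] += err
            for j, xij in enumerate(xi):
                grad[j + 1] += err * xij
        b = [bj + lr * gj / n for bj, gj in zip(b, grad)]
    return b

def predict_pd(b, xi):
    """Eq. (1): PD of a company given fitted parameters."""
    eta = b[0] + sum(bj * xij for bj, xij in zip(b[1:], xi))
    return 1.0 / (1.0 + math.exp(-eta))

def bootstrap_pd(X, y, n_boot=200, seed=1):
    """Draw n_boot resamples of size N with replacement, refit on each, and
    return the mean PD per company (the empirical expected value in Fig. 1)."""
    rng = random.Random(seed)
    n = len(X)
    sums = [0.0] * n
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bb = fit_logit([X[i] for i in idx], [y[i] for i in idx], steps=300)
        for k in range(n):
            sums[k] += predict_pd(bb, X[k])
    return [s / n_boot for s in sums]
```

A company missing from a given resample still receives a forecast from that resample's model, so every company ends up with a full empirical distribution of 200 PD estimates.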
distinguishes between two errors. First, any statistical method includes a typical statistical error, which is called the unsystematic error. The judgement of any method should not be based on this error, because a quality judgement should describe a structural

[Fig. 3. Evaluation process of statistical methods: empirical distributions, evaluation of predictions.]
as sector, legal form of a company or quality of management. This vector can be written as x_k = (x_k^Q, x_k^M), where x_k^Q contains codings for qualitative information and x_k^M includes quantitative information about the kth company.

Sometimes the information in x_k^M is standardised before it is used in a model (see, e.g. Moody's Investors Service, 2001). The advantage of standardisation is the comparability of information measured on different scales. A standardised variable has an expected value of zero and a variance of one. For this reason, we generate artificial data using standard normal distributed random variables (x_k^M ~ N(0, V)). Here, V := cov(x_k^M) = I is assumed to be the identity matrix. For reasons of simplification, we use uncorrelated data. In reality, the data will not be perfectly uncorrelated; therefore, orthogonalisation methods might be used in data preprocessing to minimise or remove the correlation. Despite these problems concerning real data, our framework fulfils the theoretical assumptions of many statistical methods, e.g. the logistic regression model or the discriminant analysis. To generate data more similar to real-world data, an empirical covariance matrix should be used instead of the identity matrix. In this case, the covariance matrix V has to be replaced by the empirical matrix V̂. Our framework works with both a diagonal and a non-diagonal covariance matrix. To simplify the simulation system, we focus on a diagonal covariance matrix, assuming that multicollinearity is removed by data preprocessing.

Qualitative information like legal form, sector, management quality and so on allows to group companies. To simulate this kind of information, we define the number of qualitative factors, the corresponding number of levels and all resulting combinations. Then, we choose the number of companies for each combination. The data generating process including qualitative information is described in Oelerich and Poddig (2004). An artificial design vector might look like Eq. (2):

x_k = (1, 2, 0.06, 0.98, -0.876)    (2)

Here, the first element 1 describes the first level of the first qualitative factor, noted as A. The second element 2 corresponds to the second level of the second factor (noted as B). This information has to be coded with dummy variables by applying reference or effect coding in preprocessing (see, e.g. Hosmer & Lemeshow, 2000). In our simulation system, we use reference coding for qualitative variables, because this coding is easier to interpret than effect coding (see Hosmer & Lemeshow, 2000). The vector further includes three additional variables representing quantitative information. To summarise, we generate quantitative information as standard normal distributed random variables and use qualitative information as grouping variables.

Let Y_k ~ B(p_k) be the dependent variable, where B(p_k) describes the Bernoulli distribution with p_k as the probability for the event of bankruptcy. The logistic regression model is based on the model Eq. (3) (see, e.g. Hosmer & Lemeshow, 2000):

ln(p_k / (1 - p_k)) = a + x_k^T b.    (3)

Given a vector x_k and a vector b (and the cutpoint a), we calculate the probability p_k = (1 + exp(-a - x_k^T b))^(-1) for each company. A realisation of Y_k can be simulated by a Bernoulli experiment based on these probabilities p_k. Therefore, we only need a given cutpoint parameter a and a vector b of model parameters to define an artificial data generating process. We calculate for each company its probability of default using the parameters discussed above (see also Table A1) and the design vectors x_k. These probabilities are the inputs to generate Bernoulli random variables. Thereby, an artificial data sample corresponding to a given model equation results. Based on this artificial data, we know all information about every company (the matrix X), the model equation (a and vector b), and the true probabilities of default p_k. In order to investigate the small sample size properties of different statistical methods with respect to forecasting rating grades, different in-sample and out-of-sample sizes are generated.

To generate the true rating grade for any company, we orientate on the scale in Rolfes and Emse (2000). We divide the interval from 0 to 1 into nine classes (see Table 1). This rating scale is based on nine rating grades, where the grades 1-8 symbolise solvent grades and grade 9 describes insolvent ratings. Banks might use other definitions of rating grades and/or add more rating classes. Our focus is not to look for a perfect definition of rating grades, but a system to evaluate statistical methods. Typically, rating agencies use up to 21 rating grades. Such complex splittings can also be integrated into our framework, but they are not regarded here in detail.

Table 1
Probabilities of default and corresponding rating grades

Grade | Lower limit | Upper limit
1 | 0.00000 | 0.00025
2 | 0.00025 | 0.00055
3 | 0.00055 | 0.00115
4 | 0.00115 | 0.00405
5 | 0.00405 | 0.01335
6 | 0.01335 | 0.07705
7 | 0.07705 | 0.16995
8 | 0.16995 | 0.20000
9 | 0.20000 | 1.00000

A similar definition can be found in Rolfes and Emse (2000).

The evaluation system is based on four indexes. First, we define an index to quantify the quality of the probability estimation. We call this index upsilon (Υ). The upsilon index is based on the idea of mean squared errors, defined by:

MSE_p = (1/N) Σ_{k=1}^N (p̂_k - p_k)^2    (4)

Here, p_k describes the true probability of default of the kth company and p̂_k the corresponding prediction (by any kind of model). The index is equal to zero if all probabilities of default are forecasted correctly. The upper limit is 1 (0 ≤ MSE_p ≤ 1). The well-known hit rate (proportion of correct classifications) is interpreted in a similar, but reversed manner. So we use the
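The data generating process described above (standard-normal design vectors, true PDs from the logistic link in Eq. (3), insolvency as a Bernoulli draw, grades from Table 1) can be sketched as follows. The grade boundaries are taken from Table 1; all function and variable names are illustrative, and qualitative factors are omitted for brevity.

```python
import bisect
import math
import random

# Upper limits of the nine PD classes from Table 1; grade 9 is the insolvent class.
GRADE_UPPER = [0.00025, 0.00055, 0.00115, 0.00405, 0.01335, 0.07705, 0.16995, 0.20000, 1.0]

def pd_from_predictor(a, b, xk):
    """Eq. (3) solved for p_k: p_k = (1 + exp(-a - x_k^T b))^-1."""
    eta = a + sum(bj * xj for bj, xj in zip(b, xk))
    return 1.0 / (1.0 + math.exp(-eta))

def true_grade(pd):
    """Map a probability of default to a rating grade 1..9 (Table 1)."""
    return bisect.bisect_left(GRADE_UPPER, pd) + 1

def generate_sample(a, b, n, seed=0):
    """Artificial data generating process: standard-normal design vectors,
    true PDs from the logistic link, insolvency coding as a Bernoulli draw.
    Returns (design vector, true PD, true grade, insolvency indicator)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        xk = [rng.gauss(0.0, 1.0) for _ in b]   # uncorrelated, standardised
        pk = pd_from_predictor(a, b, xk)
        yk = 1 if rng.random() < pk else 0      # Bernoulli experiment
        data.append((xk, pk, true_grade(pk), yk))
    return data
```

Because the sample is generated this way, the "true" grade of every company is known by construction, which is exactly what makes the evaluation of grade forecasts possible.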
Table 2
Overview of evaluation indexes for rating systems

Υ_p: quantifies the quality of probability forecasts
Υ_D: quantifies the quality of the binary classification (hit rate)
Υ_P: quantifies the quality of rating grade predictions (hit rate)
Υ_R: quantifies the quality of rating grade predictions (distances)

Υ_D = 1 - (1/N) Σ_{k=1}^N (D̂_k - D_k)^2    (6)

describes the quality of forecasts with respect to a binary classification. The squared difference (D̂_k - D_k)^2 in Eq. (6), the distance between the true and the estimated class, is similar to the case of probabilities discussed above. In the binary case, this index is identical to the classical hit rate (proportion of correctly classified companies), because the distance equals one if the classification is wrong and zero if the classification is correct.

Rating grades are more difficult to evaluate than a binary classification in real-world applications, because the true rating grades are unknown and unobservable. To evaluate a statistical rating system using artificial data, we define an index for the predictive power of a method similar to the one usually used for binary classifications. We calculate the proportion of correctly classified companies as discussed for the binary case. Due to the artificial data, the true rating grade is known. Table 1 shows how the probability of default is mapped to rating grades in our simulations. The Υ_P index is defined as

Υ_P = 1 - (1/N) Σ_{k=1}^N P_k,    (7)

with P_k being zero if the estimated rating grade is equal to the true rating grade and one otherwise. The sum counts the

5. Results

Before presenting the results obtained, we describe the three data generating models used in our simulation studies. Table A1 shows the parameters of the different data generating models used in the simulation. A qualitative (main) factor is denoted by A, B, C, a quantitative factor by M1 to M10. The parameters b1 to b5 correspond to the five levels of factor A in model0. Sectors or branches, legal forms or ratings of management quality are typical examples of such factors. Interaction terms are denoted by the corresponding main factors, e.g. AB denotes the interaction between the main factors A and B. For simplification, no interaction between qualitative factors is assumed, so all parameters η_ij^AB are set to zero. In model0, the quantitative factors M1 to M5 have different effect sizes; M6 to M10 are assumed to have no effect. These artificial parameter settings have the same important characteristics as real data, e.g. some factors (e.g. AB, M4 and M5) have no effect and others have different effect sizes. The symbol '***' means that a factor (e.g. C, M5) is not used in the specific data generating model, e.g. model0 does not contain a qualitative factor C.

In order to investigate the small sample size properties of different statistical methods to forecast rating grades, different sample sizes are generated. Table 3 shows these sample sizes for all models. The proportion of solvent and insolvent companies in an artificial data set depends on the specific
Fig. 4. Empirical distribution of the Υ_D index (left side) and the Υ_P index (right side) for data generating model1 with the smallest sample size of 225 companies. The forecasting model is a logistic regression using a forward selection algorithm based on the Wald test and reference coding for qualitative factors. The figure shows the hit rates (x-axis) and their corresponding relative frequencies in percent (y-axis).
sample size, the quality of the rating grade forecasts becomes better. If a bank uses 21 rating classes, the quality will likely be lower than the results presented in this paper, because we use only nine classes. In many empirical studies, the sample sizes correspond to the sample sizes used in our simulation studies. We do not regard extremely small or extremely large sample sizes (see Table A3).

To quantify the difference between predicted and true rating grades, we propose the Υ_R index (see Table A3). Here, we observe similar results as for the Υ_p index. For the smallest sample sizes, the variability of the VM is extremely large. Using model1 and model2 as data generating processes, the variance is nearly ten times higher than that observed when applying the WM.

These results show that the variability of forecasts differs for different techniques within the same statistical model. Our simulation studies show the advantage of forecasting rating grades using a selection algorithm. Nevertheless, for small sample sizes the WM is biased. All in all, even the WM seems to provide unreliable forecasts when the sample size is small. The bootstrap method might result in improved forecasted probabilities, because its integration into the WM should reduce the variability of forecasts. This should lead to a more reliable predictive model. We discuss this for data generating model1 in detail.

In the second simulation study, we apply data generating model1 with the smallest (225 companies) and the largest (900 companies) sample size, due to the extremely time consuming simulation runs. We find two important results in this second study. First, the bootstrap method has the highest average in all predictive models (in our studies) in general. The forecast using the simple WM model is based on asymptotically consistent estimators (see also Hocke, 1974; Matin, 1994), but it seems to be less accurate than the bootstrap method when the sample size is small. The bootstrap method seemingly stabilises the predictions of probabilities. Note that the sample sizes applied in our studies are similar to those used in empirical studies (e.g. Anders, 1998; Huang et al., 2004; Leker & Schewe, 1998). Second, the variance is an important property of forecasting models in real-world applications. The bootstrap method has in all cases the smallest variances (see Table A4). For ratings generated by statistical methods, this robustness is very important, because ratings should be conservative and stable with respect to time (see Bank for International Settlement, 2003). Unreliable information in the database might have a significant influence on the forecast. For example, changes in the accounting data of one company could influence the model equation in such a way that another company gets a different estimate of its probability of default and is thereby assigned another rating grade. The bootstrap method might reduce this variation in rating grade forecasts. Note that the integration of the bootstrap is easy for most statistical methods (e.g. logistic regression) and can be realised with many statistical software solutions (e.g. SAS or SPSS).

To summarise the major results, the bootstrap method shows the highest average of all four indexes in our simulations in most cases. The variability of forecasts differs strongly: the variance of the VM is up to fifteen times higher than the variance obtained by the bootstrap method. The main advantage of the bootstrap method is its robustness; the results seem to be more reliable. However, with increasing sample size the classical methods become more powerful. For the largest sample size used, the variance of the bootstrap method is only about two times lower than that of the classical methods (for the Υ_p and Υ_R indexes).

Such simulation studies show the power of different methods or the improvements of modifications to traditional statistical methods. For example, the comparison of classical logistic regression with the integration of bootstrapping suggests the use of resampling approaches.

6. Conclusion

In this paper, we discuss the possibility to assess the quality of statistical methods to forecast insolvencies and rating grades. Whereas an evaluation of statistical methods predicting insolvencies does not pose difficult problems, it is impossible for rating grades, because they are unknown and unobservable. To solve this problem, we propose a simulation system using artificial data, wherein all necessary information is known: the information about the companies, the probability of default, the insolvency coding and the rating grade. Based on such data, we are able to evaluate rating processes that are based on statistical methods. We apply this simulation system to several logistic regression models and to the bootstrap method for one specific logistic regression model. There are two major results. First, a prototype of an evaluation system to quantify the quality of rating grade forecasting models could be studied. Second, we find that the integration of the bootstrap results in more confident forecasts. We call such a rating process 'robust', because it reduces the variability of the predictions of insolvencies and rating grades.

In order to reflect real-world data in a more realistic approach, it is recommended to use an empirical covariance matrix in the data generating process when building up the design vectors (the matrix X). This empirical covariance matrix could reflect interdependencies between real companies in the artificial data. The simulation system allows to assess different rating models and to identify optimal rating models with regard to a given real data sample.

Appendix A

See Tables A1-A4
Table A1
Parameters in three different data generating models

Table A2
Empirical distribution of the Υ_p and Υ_D indexes for the VM and the WM models. The Υ_D index is given in percent. All indexes are based on the data generating models shown in Table A1 in the out-of-sample test.

Table A3
Empirical distribution of the Υ_P and Υ_R indexes for the VM and the WM models. The Υ_P index is given in percent. All indexes are based on the data generating models shown in Table A1 in the out-of-sample test.

Table A4
Empirical distribution of all indexes for three different statistical models. VM is the logistic regression with all factors (without a selection algorithm). WM is the logistic regression using a forward selection algorithm based on the Wald test and reference coding for qualitative factors. BM is the WM with bootstrapping. All indexes are based on data generating model1 in Table A1.
References

Agresti, A. (1996). An introduction to categorical data analysis. New York: Wiley.

Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 4, 589-609.

Anders, U. (1998). Prognose von Insolvenzwahrscheinlichkeiten mit Hilfe logistischer neuronaler Netzwerke. Zeitschrift für betriebswirtschaftliche Forschung, 50, 892-915.

Bank for International Settlement. (2003). Consultative document: The new Basel capital accord. Zürich.

Carey, M., & Hrycay, M. (2001). Parameterizing credit risk models with rating data. Journal of Banking & Finance, 25, 197-270.

Desai, V. S., Crook, J. N., & Overstreet, A. (1996). A comparison of neural networks and linear scoring models in the credit union environment. European Journal of Operational Research, 95, 24-37.

Deutsche Bundesbank. (1999). Zur Bonitätsbeurteilung von Wirtschaftsunternehmen durch die Deutsche Bundesbank. Deutsche Bundesbank Monatsbericht 1999 (pp. 51-64).

Efron, B., & Tibshirani, R. J. (1993). An introduction to the bootstrap. London: Chapman & Hall.

Fahrmeir, L., & Kaufmann, H. (1985). Consistency and asymptotic normality of the maximum likelihood estimators in generalized linear models. The Annals of Statistics, 13(1), 342-368.

Fahrmeir, L., & Kaufmann, H. (1986). Correction: Consistency and asymptotic normality of the maximum likelihood estimators in generalized linear models. The Annals of Statistics, 14(4), 1643.

Foreman, R. D. (2003). A logistic analysis of bankruptcy within the US local telecommunication industry. Journal of Economics and Business, 55, 135-166.

Frerichs, H., & Wahrenburg, W. (2003). Evaluating internal rating systems depending on bank size. Working Paper 115. Frankfurt: Universität Frankfurt.

Hayden, E. (2002). Modelling an accounting-based rating system for Austrian firms. Dissertation. Fakultät für Wirtschaftswissenschaften und Informatik der Universität Wien.

Hocke, J. (1974). Der Einfluss der Multikollinearität auf die Kleinstichprobeneigenschaften diverser ökonometrischer Schätzmethoden. Eine Monte Carlo-Studie. Dissertation. Universität München.

Hosmer, D. W., & Lemeshow, S. (2000). Applied logistic regression (2nd ed.). New York: Wiley.

Huang, Z., Chen, H., Hsu, C., Chen, W., & Wu, S. (2004). Credit rating analysis with support vector machines and neural networks: A market comparative study. Decision Support Systems, 37, 543-558.

Leker, J., & Schewe, G. (1998). Beurteilung des Kreditausfallrisikos im Firmenkundengeschäft der Banken. Zeitschrift für betriebswirtschaftliche Forschung, 50, 877-891.

Matin, M. A. (1994). Small-sample properties of different tests and estimators of the parameters in the logistic regression model. Research Report 4. Uppsala, Sweden: Uppsala Universitet.

Moody's Investors Service. (2001). Moody's RiskCalc™ für nicht börsennotierte Unternehmen: Das deutsche Modell.

Oelerich, A., & Poddig, T. (2004). Modified Wald statistics for generalized linear models. Allgemeines Statistisches Archiv, 1, 23-34.

Poddig, T., & Oelerich, A. (2004). Evaluierung quantitativer Ratingverfahren. In D. Bayer, & C. Ortseifen (Eds.), SAS in Hochschule und Wirtschaft (pp. 195-212). Aachen: Shaker Verlag.

Rolfes, B., & Emse, C. (2000). Rating basierte Ansätze zur Bemessung der Eigenkapitalunterlegung von Kreditrisiken. ecfs-Forschungsbericht, Vol. 3.

Srinivasan, V., & Kim, H. (1987). Credit granting: A comparative analysis of classification procedures. Journal of Banking and Finance, XLII(3), 665-683.