
Issues in Credit Scoring

Model Development and Validation


Dennis Glennon
Risk Analysis Division, Economics Department
Office of the Comptroller of the Currency

The opinions expressed are those of the author and do not necessarily reflect those of the Office of the Comptroller of the Currency.

Model Development and Validation


Outline
1. Credit Risk vs. Model Risk
2. Model Risk Analysis
3. Model Purpose
   i. Classification
   ii. Prediction

Model Review
Scope of a Review
I. Credit Risk
The risk to earnings or capital arising from an obligor's failure to meet the terms of any contract with the bank or to otherwise perform as agreed.

II. Model Risk


Although model risk contributes to overall portfolio or credit risk, it represents a conceptually distinct exposure that arises when a model is interpreted too broadly or applied beyond the purpose for which it was developed.

Model Review
Scope of a Review
I. Credit Risk Analysis
   i. evaluate strategies
   ii. assess current portfolio performance

II. Model Risk Analysis
   i. evaluate model validity, reliability, and accuracy

Model Review
Model Risk Analysis
I. Are the models developed using valid statistical or industry-accepted methods?
   i. Appropriate sample design
      a. truncated/censored samples
      b. over-sample
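
To make the over-sample item concrete, here is a minimal sketch, not part of the original presentation, of one common way to correct for over-sampled bads at estimation time, assuming the over-sampling factor is known. The function name, the factor of 10, and the use of scikit-learn's LogisticRegression are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical inputs: X holds account attributes, y = 1 marks a bad.
# Bads are assumed to have been over-sampled by `oversample_factor`
# relative to their share of the through-the-door population.
def fit_with_oversample_correction(X, y, oversample_factor=10.0):
    # Down-weight the over-sampled bads so the fitted probabilities
    # reflect population odds rather than sample odds.
    weights = np.where(y == 1, 1.0 / oversample_factor, 1.0)
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=weights)
    return model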

Model Review
Model Risk Analysis (continued)
   ii. Valid model design
      a. satisfy minimum statistical requirements
      b. in-sample performance (including holdout sample)
      c. out-of-time performance

Model Review
Model Risk Analysis (continued)
II. Are the models used in ways that are consistent with the original purpose for which the model was developed?
   i. Model purpose
      a. classification
      b. prediction

Model Purpose
The underlying objective of a classification-based model is different from that of a prediction model. As such, a model should be evaluated within the scope of its primary objective.

Model Purpose
Models as Classification Tools
Banks are developing or purchasing models that are designed as classification tools. That is, the models are developed for the purpose of partitioning populations or portfolios into groups by their expected relative performance.
Modeling Objective: Maximize the divergence or separation between the distributions of good and bad accounts.

Classification Design: Example


A Comparison of Model Performance
[Figure: two "Performance Distribution" panels comparing the score distributions of good and bad accounts. In one panel the good and bad distributions are well separated (K-S = 64.0); in the other they overlap substantially (K-S = 26.5).]
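
A minimal sketch of how the K-S separation statistic reported in the figure might be computed. The simulated scores, sample sizes, and 0-100 scaling are illustrative assumptions, not the data behind the original figure.

import numpy as np

def ks_statistic(good_scores, bad_scores):
    # K-S separation: the largest vertical gap between the empirical
    # cumulative score distributions of good and bad accounts.
    cutoffs = np.sort(np.unique(np.concatenate([good_scores, bad_scores])))
    cdf_good = np.searchsorted(np.sort(good_scores), cutoffs, side="right") / len(good_scores)
    cdf_bad = np.searchsorted(np.sort(bad_scores), cutoffs, side="right") / len(bad_scores)
    return 100 * np.max(np.abs(cdf_bad - cdf_good))

# Illustrative data only: bads drawn to score lower than goods on average.
rng = np.random.default_rng(0)
goods = rng.normal(720, 40, size=250_000)
bads = rng.normal(650, 40, size=6_000)
print(f"K-S = {ks_statistic(goods, bads):.1f}")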

Model Purpose
Classification Objective
Interpretation: If, for example, the good/bad odds ratio for the 200-210 score interval is 30:1, then the odds ratios for the intervals above (below) 200-210 will be greater (less) than 30:1.
Result: A model that maintains its ability to rank-order performance is considered reliable.
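
A minimal sketch of how this rank-ordering condition could be checked on a development or validation sample. The column names, number of bands, and use of pandas are illustrative assumptions.

import pandas as pd

# Hypothetical sample: df has a 'score' column and a 'bad' column (1 = bad).
def rank_order_check(df, n_bands=10):
    # Good/bad odds by score band; reliable rank-ordering means the odds
    # rise monotonically as the score rises.
    bands = pd.qcut(df["score"], q=n_bands, duplicates="drop")
    perf = df.groupby(bands, observed=True)["bad"].agg(total="size", bads="sum")
    perf["odds"] = (perf["total"] - perf["bads"]) / perf["bads"]  # good:bad odds
    # Note: bands with zero observed bads would need separate handling.
    return perf, bool(perf["odds"].is_monotonic_increasing)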


Model Purpose
Classification-Based Models
Valid Purpose: models developed under this criterion are valid as decision tools if the objective is simply to identify segments of the population that, as a group, perform poorly.
Appropriate for identifying and excluding specific segments of the population -- a strategy that, in practice, often improves average portfolio performance relative to a random-selection method.

Classification Model
Log Odds Curve
[Figure: log odds curve, ln(good/bad) plotted against score bands (roughly 644-753), for the development sample (K-S = 32.1) and a validation sample (K-S = 34.3). Annotations: ln(20/1) = 3 corresponds to a bad rate of .05; ln(4/1) = 1.39 corresponds to a bad rate of .20.]
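
How the bad-rate annotations follow from the log odds (a worked restatement, not new material): the bad rate is bads / (goods + bads) = 1 / (1 + good/bad odds). So ln(good/bad) = 3 implies odds of 20:1 and a bad rate of 1/21, or about .05, while ln(good/bad) = 1.39 implies odds of 4:1 and a bad rate of 1/5 = .20.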

Model Purpose
Alternative Purpose: Predicting Performance.
Banks want models for risk-based pricing/re-pricing and profitability analysis -- models that are designed specifically to address the issue of trading risk for margin (i.e., return). For that purpose, banks need models that are accurate predictors of performance.

Model Selection: Which model is better?


[Figure: observed bad rate, #B / (#G + #B), by score band for two models, where y = 0 marks an observed good (G) and y = 1 an observed bad (B). Both models have K-S = 48. Bad rates across the five score bands: Model 1: .10, .30, .50, .70, .90; Model 2: .08, .45, .44, .67, .92.]

Model Purpose: Prediction


Models as Prediction Tools
Purpose: to predict the expected frequency at which accounts with similar attributes perform (e.g., respond, attrite, default). For example, predict the probability of default.
Modeling Objective: Minimize the difference between the predicted and actual percentage of defaults within each score range (i.e., maximize the goodness-of-fit).
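
A minimal sketch of how this goodness-of-fit objective might be checked on out-of-sample data by comparing predicted and actual default rates within score ranges. The column names, bin count, and use of pandas are illustrative assumptions.

import pandas as pd

# Hypothetical validation data: 'pd_hat' is the predicted probability of
# default and 'default' is the observed outcome (1 = defaulted).
def calibration_table(df, n_bins=10):
    # Bin accounts by predicted risk, then compare the average predicted
    # default rate with the actual default rate in each bin.
    bins = pd.qcut(df["pd_hat"], q=n_bins, duplicates="drop")
    table = df.groupby(bins, observed=True).agg(
        n=("default", "size"),
        predicted_rate=("pd_hat", "mean"),
        actual_rate=("default", "mean"),
    )
    table["abs_error"] = (table["predicted_rate"] - table["actual_rate"]).abs()
    return table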

Model Purpose: Prediction


Prediction-Based Models
Interpretation: If, within the 200-210 interval, the risk model predicts a probability of default of .04, then for every 100 accounts that score within that range, four should default. A model that satisfies this condition is considered to be accurate.
This is a much stronger condition than that associated with a classification objective (i.e., reliable).

Model Purpose: Prediction


Prediction-Based Models
Valid Purpose: models developed under this approach are valid as actuarial tools; as such, they are appropriate in situations in which the actual, not just the relative, measure of performance is required.


Model Purpose: Prediction


Limitations of a Prediction-Based Model
The model-development process is significantly more complex, especially when data on all aspects of the behavioral decision (i.e., individual, market, and industry) are limited.


Model Purpose
Conclusion: Models are developed for different purposes, e.g., classification or prediction. As such, the choices of sample design, modeling technique, and validation procedures are driven by the intended purpose for which the model will ultimately be used.


Model Purpose
Observation: The choice of modeling objective is important not only because it defines how we assess a model's validity, but also because it determines the set of technical estimation procedures used to select the best model under the chosen objective.


Issues in Credit Scoring


Model Development and Validation

The End

