Abstract
Models for estimating the probability of default are widely used throughout the lending process, starting as early as the application stage, where they play an important role in loan approval decisions. Ensuring adequate data quality is essential for model soundness and performance. Identifying outliers, analyzing their impact, and choosing the right method to treat them constitute a necessary preprocessing stage, one that is often overlooked in practice for a variety of reasons, an important one being insufficient data. Given the inherent imbalance of the loan portfolio with regard to default status, elimination of outliers is seldom feasible. The current widely accepted approach is based on binning and weight of evidence. Usually two types of binning are tested, namely bucket and quantile; while the latter is robust to the presence of outliers, the former is not. Both approaches discretize the continuous variable to which they are applied. This causes information loss, both in the variation carried by individual values and in the distances between observations on a given variable. In the present paper, we explore alternative methods for dealing with outliers and describe their advantages and disadvantages in the context of probability-of-default estimation for credit risk. We conclude that, aside from quantile binning, winsorizing is also effective, as is leaving outliers untreated in the case of very large datasets. More importantly, several methods should be considered and tested for each variable in order to find the optimal balance between altering the data and reducing variance.
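
To make the contrast concrete, the Python sketch below (ours, not taken from the paper) implements the two outlier-robust treatments the abstract compares: quantile binning with weight of evidence, and winsorizing. The function names, the bin count, and the 1st/99th percentile caps are illustrative assumptions, not choices prescribed by the paper.

```python
import numpy as np
import pandas as pd

def quantile_bin_woe(x: pd.Series, y: pd.Series, n_bins: int = 5) -> pd.Series:
    """Quantile-bin a continuous predictor and compute weight of evidence
    (WoE) per bin. Quantile bin edges depend on ranks rather than raw
    magnitudes, which is what makes this scheme robust to outliers."""
    bins = pd.qcut(x, q=n_bins, duplicates="drop")
    counts = pd.crosstab(bins, y)  # rows: bins, columns: 0 (good), 1 (default)
    # 0.5 smoothing guards against empty bins producing log(0) or 1/0.
    dist_good = (counts[0] + 0.5) / (counts[0].sum() + 0.5)
    dist_bad = (counts[1] + 0.5) / (counts[1].sum() + 0.5)
    return np.log(dist_good / dist_bad)

def winsorize(x: pd.Series, lower: float = 0.01, upper: float = 0.99) -> pd.Series:
    """Cap extreme values at the chosen percentiles instead of discretizing,
    keeping the variable continuous between the caps and so avoiding the
    information loss that binning incurs."""
    return x.clip(lower=x.quantile(lower), upper=x.quantile(upper))

# Illustrative usage on synthetic data: an income variable with a heavy
# right tail and an imbalanced default flag (about 5% defaults).
rng = np.random.default_rng(0)
income = pd.Series(rng.lognormal(mean=10, sigma=1, size=10_000))
default = pd.Series(rng.binomial(1, 0.05, size=10_000))
print(quantile_bin_woe(income, default))
print(winsorize(income).describe())
```

Note how the two sketches trade off differently: quantile binning discards within-bin variation but yields a monotone, outlier-proof encoding, while winsorizing only alters values beyond the caps.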
