Szepannek, Gero and Lübke, Karsten (2021) Facing the Challenges of Developing Fair Risk Scoring Models. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212
pubmed-zip/versions/3/package-entries/frai-04-681915-r2/frai-04-681915.pdf - Published Version
Abstract
Algorithmic scoring methods have been widely used in the finance industry for several decades to prevent risk and to automate and optimize decisions. Regulatory requirements, as given by the Basel Committee on Banking Supervision (BCBS) or the EU data protection regulations, have led to increasing interest and research activity on understanding black-box machine learning models by means of explainable machine learning. Even though this is a step in the right direction, such methods are not able to guarantee fair scoring, as machine learning models are not necessarily unbiased and may discriminate against certain subpopulations, such as a particular race, gender, or sexual orientation, even if the variable itself is not used for modeling. This is also true for white-box methods like logistic regression. In this study, a framework is presented that allows analyzing and developing models with regard to fairness. The proposed methodology is based on techniques of causal inference, and some of the methods can be linked to methods from explainable machine learning. A definition of counterfactual fairness is given, together with an algorithm that results in a fair scoring model. The concepts are illustrated by means of a transparent simulation and a popular real-world example, the German Credit data, using traditional scorecard models based on logistic regression and weight-of-evidence variable pre-transformation. In contrast to previous studies in the field, a corrected version of the data is presented and used in this study. With the help of the simulation, the trade-off between fairness and predictive accuracy is analyzed. The results indicate that unfairness can be removed without a strong decrease in performance, provided the correlation between the discriminatory attributes and the other predictor variables in the model is not too strong.
In addition, the challenge of explaining the resulting scoring model and the associated fairness implications to users is discussed.
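As a rough illustration of the modeling setup the abstract describes — weight-of-evidence (WoE) coding of a predictor followed by a logistic-regression scorecard — the following sketch uses synthetic toy data, not the actual German Credit data, and all variable names (`purpose`, `default`) are hypothetical. It is a minimal sketch of the general technique, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for the German Credit data:
# a categorical predictor ("purpose") and a binary default flag.
purpose = rng.choice(["car", "furniture", "business"], size=500)
default = rng.integers(0, 2, size=500)

def weight_of_evidence(cats, target):
    """Map each category to WoE = log(%good / %bad), lightly smoothed."""
    n_good = (target == 0).sum()
    n_bad = (target == 1).sum()
    woe = {}
    for c in np.unique(cats):
        mask = cats == c
        dist_good = ((target[mask] == 0).sum() + 0.5) / n_good
        dist_bad = ((target[mask] == 1).sum() + 0.5) / n_bad
        woe[c] = np.log(dist_good / dist_bad)
    return woe

woe = weight_of_evidence(purpose, default)
x = np.array([woe[c] for c in purpose])

# Scorecard-style model: univariate logistic regression on the
# WoE-coded predictor, fitted by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - default) * x)
    b -= 0.5 * np.mean(p - default)
```

In a real scorecard, each predictor would be binned and WoE-coded this way before fitting, which keeps the final model a simple weighted sum of interpretable per-attribute scores.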
| Item Type: | Article |
| --- | --- |
| Subjects: | European Repository > Multidisciplinary |
| Depositing User: | Managing Editor |
| Date Deposited: | 15 Mar 2023 08:53 |
| Last Modified: | 12 Jun 2024 11:58 |
| URI: | http://go7publish.com/id/eprint/1014 |