Kernel Discrimination and Explicative Features: an Operative Approach

Liberati, Caterina
2012

Abstract

Kernel-based methods such as SVMs and LS-SVMs have been successfully used for solving various supervised classification and pattern recognition problems in machine learning. Unfortunately, they depend heavily on the choice of the kernel function and on the tuning of its parameters, and their solutions suffer from a complete lack of interpretability in terms of the input variables. This is not a trivial problem, especially when the learning task concerns a critical asset of a business, such as credit scoring, where the derivation of a classification rule must comply with international regulations. The following strategy is proposed for problems with categorical predictors: replace the predictors by components obtained from Multiple Correspondence Analysis (MCA), choose the best kernel among several candidates (linear, RBF, Laplace, Cauchy, etc.), and approximate the resulting classifier with a linear model. The loss of performance due to this approximation is balanced by better interpretability for the end user, who can use the linear model to understand and rank the influence of each category of the input variables on the prediction. The strategy has been applied to real credit-risk data on small enterprises. The Cauchy kernel was found to be the best and leads to a score considerably more efficient than classical ones, even after approximation.
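
As an illustration, the sketch below shows one way the three-step strategy described above could be implemented in Python with NumPy, pandas, and scikit-learn. It is only a minimal sketch under simplifying assumptions: the data are synthetic, a plain SVD of the centred indicator matrix stands in for proper MCA, the candidate kernels are hand-coded callables, and the linear approximation is a logistic regression fitted on the kernel SVM's predictions. None of these choices is taken from the paper itself.

```python
# Minimal sketch of: (1) categorical predictors -> factorial components,
# (2) kernel selection among several candidates, (3) linear approximation
# of the kernel classifier for interpretability. Illustrative only.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.metrics.pairwise import euclidean_distances, manhattan_distances

rng = np.random.default_rng(0)

# Synthetic categorical "credit" data (stand-in for the real data set)
n = 400
X_cat = pd.DataFrame({
    "sector": rng.choice(["trade", "services", "manufacturing"], n),
    "region": rng.choice(["north", "centre", "south"], n),
    "size":   rng.choice(["micro", "small"], n),
})
risk = (X_cat["sector"] == "trade").astype(float) + (X_cat["region"] == "south").astype(float)
y = (risk + rng.normal(0, 0.5, n) > 1.0).astype(int)   # 1 = default

# Step 1: replace categorical predictors by factorial components.
# Here an SVD of the centred indicator matrix is used as a rough stand-in for MCA.
dummies = pd.get_dummies(X_cat)
Z = dummies.to_numpy(dtype=float)
U, s, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
F = U[:, :5] * s[:5]                      # keep the first 5 components

# Step 2: choose the best kernel among several candidates by cross-validation.
def rbf(X, Y, gamma=0.5):
    return np.exp(-gamma * euclidean_distances(X, Y) ** 2)

def laplace(X, Y, gamma=0.5):
    return np.exp(-gamma * manhattan_distances(X, Y))

def cauchy(X, Y, gamma=0.5):
    return 1.0 / (1.0 + gamma * euclidean_distances(X, Y) ** 2)

kernels = {"linear": "linear", "rbf": rbf, "laplace": laplace, "cauchy": cauchy}
scores = {name: cross_val_score(SVC(kernel=k), F, y, cv=5).mean()
          for name, k in kernels.items()}
best = max(scores, key=scores.get)
svm = SVC(kernel=kernels[best]).fit(F, y)

# Step 3: approximate the kernel classifier by a linear model.
# The logistic coefficients, mapped back to the dummy categories via Vt,
# give a ranking of each category's influence on the score.
approx = LogisticRegression().fit(F, svm.predict(F))
category_weights = pd.Series(Vt[:5].T @ approx.coef_.ravel(),
                             index=dummies.columns).sort_values()
print("best kernel:", best)
print(category_weights)
```

In this sketch the interpretability comes from the last step: the kernel classifier's decisions are re-expressed as a linear score on the factorial components, whose weights can be projected back onto the original categories and ranked, at the cost of some loss of accuracy relative to the kernel machine itself.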
Type: paper
Keywords: Credit scoring, Kernel machines, Disqual, Model selection
Language: English
Conference: COMPSTAT 2012 (2012)
Editors: Colubi, A.; Fokianos, K.; González-Rodríguez, J.; Kontoghiorghes, E.J.
Published in: Proceedings of COMPSTAT 2012
ISBN: 978-90-73592-32-2
Year: 2012
Pages: 507-518
URL: http://cedric.cnam.fr/fichiers/art_2627.pdf
DOI: none
Citation: Liberati, C., Camillo, F., Saporta, G. (2012). Kernel Discrimination and Explicative Features: an Operative Approach. In Proceedings of COMPSTAT 2012 (pp. 507-518). International Statistical Institute (ISI).
Use this identifier to cite or link to this document: https://hdl.handle.net/10281/36109