
Consolo, A., Manno, A., Amaldi, E. (2026). Binary kernel logistic regression: A sparsity-inducing formulation and a convergent decomposition training algorithm. COMPUTERS & OPERATIONS RESEARCH, 188(April 2026) [10.1016/j.cor.2025.107363].

Binary kernel logistic regression: A sparsity-inducing formulation and a convergent decomposition training algorithm

Consolo, Antonio (first author)
2026

Abstract

Kernel logistic regression (KLR) is a widely used supervised learning method for binary and multi-class classification, which provides estimates of the conditional probabilities of class membership for the data points. Unlike other kernel methods such as Support Vector Machines (SVMs), KLR models are generally not sparse. Previous attempts to deal with sparsity in KLR include a heuristic method referred to as the Import Vector Machine (IVM) and ad hoc regularizations such as the ℓ1/2-based one. Achieving a good trade-off between prediction accuracy and sparsity is still a challenging issue with a potentially significant impact from the application point of view. In this work, we revisit binary KLR and propose an extension of the training formulation of Keerthi et al. which induces sparsity in the trained model while maintaining good testing accuracy. To efficiently solve the dual of this formulation, we devise a decomposition algorithm of Sequential Minimal Optimization type that exploits second-order information, and for which we establish global convergence. Numerical experiments conducted on 12 datasets from the literature show that the proposed binary KLR approach achieves a competitive trade-off between accuracy and sparsity with respect to IVM, ℓ1/2-based regularization for KLR, and SVM, while retaining the advantage of providing informative estimates of the class membership probabilities.
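For readers unfamiliar with KLR, the sketch below illustrates the basic (non-sparse) model the abstract refers to: a decision function f(x) = Σᵢ αᵢ K(xᵢ, x) trained by minimizing a regularized logistic loss, with class probabilities obtained through the sigmoid. This is a minimal plain-gradient-descent illustration, not the paper's sparsity-inducing formulation or its SMO-type decomposition algorithm; the RBF kernel, function names, and hyperparameters are all illustrative choices.

```python
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    # Pairwise RBF kernel matrix: K[i, j] = exp(-gamma * ||X1[i] - X2[j]||^2).
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def train_klr(X, y, gamma=1.0, lam=1e-2, lr=1.0, n_iter=1000):
    # Fit coefficients alpha of f(x) = sum_i alpha_i K(x_i, x) by gradient
    # descent on the regularized logistic loss
    #   L(alpha) = (1/n) sum_j log(1 + exp(-y_j f(x_j))) + (lam/2) alpha^T K alpha,
    # with labels y in {-1, +1}.
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    alpha = np.zeros(n)
    for _ in range(n_iter):
        margins = y * (K @ alpha)
        # Derivative of the logistic loss w.r.t. f(x_j): -y_j * sigmoid(-y_j f(x_j)).
        g_loss = -(y / (1.0 + np.exp(margins))) / n
        alpha -= lr * (K @ g_loss + lam * (K @ alpha))
    return alpha

def predict_proba(alpha, X_train, X_new, gamma=1.0):
    # Estimated probability of the positive class: sigmoid(f(x)).
    f = rbf_kernel(X_new, X_train, gamma) @ alpha
    return 1.0 / (1.0 + np.exp(-f))
```

Note that every training point typically receives a nonzero αᵢ here, which is exactly the lack of sparsity the paper addresses.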
Journal article - Scientific article
Decomposition methods; Kernel logistic regression; Machine learning; Nonlinear programming; Sparsity;
Language: English
24-Dec-2025
Year: 2026
Volume: 188
Issue: April 2026
Article number: 107363
Open access
Files in this record:
File: Consolo et al-2025-Computers and Operations Research-VoR.pdf
Open access
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 3.64 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/590166
Citations
  • Scopus: 0