
Talpini, J., Sartori, F., Savi, M. (2024). Enhancing trustworthiness in ML-based network intrusion detection with uncertainty quantification. JOURNAL OF RELIABLE INTELLIGENT ENVIRONMENTS, 10, 501-520 [10.1007/s40860-024-00238-8].

Enhancing trustworthiness in ML-based network intrusion detection with uncertainty quantification

Talpini J. (first author); Sartori F. (second author); Savi M. (last author)
2024

Abstract

Intrusion Detection Systems (IDSs) play a crucial role in the security of modern networks: they are security devices designed to identify and mitigate attacks on network infrastructure. Data-driven approaches based on Machine Learning (ML) have gained increasing popularity for the classification tasks required by signature-based IDSs. However, typical ML models adopted for this purpose do not properly account for the uncertainty associated with their predictions. This poses significant challenges, as they tend to produce misleadingly high classification scores both for misclassified inputs and for inputs belonging to unknown classes (e.g., novel attacks), limiting the trustworthiness of existing ML-based solutions. In this paper, we argue that ML-based IDSs should always provide accurate uncertainty quantification to avoid overconfident predictions. Indeed, uncertainty-aware classification would enhance closed-set classification performance, make it possible to carry out Active Learning, and help recognize inputs of unknown classes as truly unknown, unlocking open-set classification capabilities and Out-of-Distribution (OoD) detection. To verify this claim, we compare various ML-based methods for uncertainty quantification and open-set classification, either specifically designed for or tailored to the domain of network intrusion detection. Moreover, we develop a custom model based on Bayesian Neural Networks that stands out for its OoD detection capabilities and robustness, with lower variance in its results across different scenarios than other baselines, thus showing how proper uncertainty quantification can significantly enhance the trustworthiness of ML-based IDSs.
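As a concrete illustration of the idea summarized above, uncertainty-aware rejection of unknown inputs is commonly implemented by thresholding the predictive entropy computed over posterior (or ensemble) samples of a classifier. The following minimal NumPy sketch is illustrative only, not the authors' model; the function names and the entropy threshold are assumptions made for the example:

```python
import numpy as np

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution over posterior samples.

    prob_samples: array of shape (S, C) -- S softmax vectors (one per
    posterior/ensemble sample) over C classes, for a single input.
    """
    mean_probs = prob_samples.mean(axis=0)
    eps = 1e-12  # numerical safety for the logarithm
    return float(-(mean_probs * np.log(mean_probs + eps)).sum())

def flag_unknown(prob_samples, threshold):
    """Treat an input as 'unknown' (a possible novel attack / OoD sample)
    when its predictive entropy exceeds a calibrated threshold."""
    return predictive_entropy(prob_samples) > threshold

# Posterior samples that agree confidently -> low entropy -> in-distribution
confident = np.array([[0.97, 0.02, 0.01]] * 10)
# Disagreeing samples -> high entropy -> flagged as unknown
uncertain = np.array([[0.9, 0.05, 0.05], [0.1, 0.8, 0.1], [0.2, 0.1, 0.7]])

print(flag_unknown(confident, threshold=0.5))  # -> False
print(flag_unknown(uncertain, threshold=0.5))  # -> True
```

In practice, the threshold would be calibrated on held-out in-distribution traffic (e.g., to meet a target false-rejection rate), and the samples would come from a Bayesian posterior, MC dropout, or a deep ensemble.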
Journal article - Scientific article
Active learning; Network intrusion detection; Out-of-distribution detection; Trustworthy machine learning; Uncertainty quantification;
English
4 Sep 2024
2024
Volume: 10
Pages: 501-520
open
Files in this item:
File Size Format
Talpini-2024-JRIE-VoR.pdf

open access

Description: CC BY 4.0. This article is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Attachment type: Publisher's Version (Version of Record, VoR)
License: Creative Commons
Size: 1.85 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/509767
Citations
  • Scopus 0
  • Web of Science: not available