
Ferretti, C., Saletta, M. (2021). Deceiving neural source code classifiers: Finding adversarial examples with grammatical evolution. In GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion (pp.1889-1897). Association for Computing Machinery, Inc [10.1145/3449726.3463222].

Deceiving neural source code classifiers: Finding adversarial examples with grammatical evolution

Ferretti, C.; Saletta, M.
2021

Abstract

This work presents an evolutionary approach for assessing the robustness of a system trained to detect software vulnerabilities. By applying a Grammatical Evolution genetic algorithm, and using the output of the system under assessment as the fitness function, we show how the classification decision (i.e., vulnerable or not vulnerable) for a given instance can easily be changed by injecting evolved features that in no way affect the functionality of the program. By means of the same technique, that is, by simply modifying the program instances, we also show how the accuracy of the whole system on the test dataset can be significantly decreased. Finally, we remark that these methods can easily be customized for applications in different domains, and that the underlying ideas can be exploited for other purposes, such as exploring the behaviour of a generic neural system.
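The attack described in the abstract can be sketched as follows. The BNF grammar, the stand-in "neural" classifier, and every identifier below are illustrative assumptions, not the authors' actual grammar, model, or GA configuration: the grammar derives only dead code (so injections preserve program semantics), and the classifier's vulnerability score is minimised as the fitness function.

```python
import random

# Toy grammar of semantics-preserving injections: every derivable snippet
# is dead code, so inserting it cannot change program behaviour.
# (Illustrative stand-in, not the paper's actual grammar.)
GRAMMAR = {
    "<inject>": [["<stmt>"], ["<stmt>", " ", "<inject>"]],
    "<stmt>": [["int _pad", "<id>", " = 0;"],
               ["if (0) { ", "<stmt>", " }"]],
    "<id>": [["0"], ["1"], ["2"]],
}

def ge_map(genome, start="<inject>", budget=200):
    """Standard GE genotype-to-phenotype mapping: each codon, modulo the
    number of productions, picks the expansion of the leftmost nonterminal."""
    out, stack, i, steps = [], [start], 0, 0
    while stack:
        sym = stack.pop(0)
        if sym not in GRAMMAR:
            out.append(sym)
            continue
        prods = GRAMMAR[sym]
        steps += 1
        # Once the expansion budget is spent, force the first (non-recursive)
        # production of each rule so the mapping always terminates.
        j = 0 if steps >= budget else genome[i % len(genome)] % len(prods)
        i += 1
        stack = list(prods[j]) + stack
    return "".join(out)

def toy_classifier(code):
    """Stand-in for the neural vulnerability detector under attack: returns
    a score in [0, 1], higher = "more vulnerable". Purely illustrative."""
    score = 0.9 if "strcpy" in code else 0.1
    # The fictional model is distracted by the injected padding features.
    score -= 0.05 * code.count("_pad")
    return max(0.0, score)

def fitness(genome, program, classifier):
    """Classifier output on the modified program is the fitness to minimise."""
    return classifier(program.replace("/*INJECT*/", ge_map(genome)))

def attack(program, classifier, gens=300, seed=0):
    """(1+1)-style evolutionary search for an injection that lowers the
    classifier's score (a hill-climbing sketch, not the paper's exact GA)."""
    rng = random.Random(seed)
    genome = [rng.randrange(256) for _ in range(20)]
    best = fitness(genome, program, classifier)
    for _ in range(gens):
        cand = [rng.randrange(256) if rng.random() < 0.2 else g for g in genome]
        f = fitness(cand, program, classifier)
        if f <= best:
            genome, best = cand, f
    return ge_map(genome), best

# Hypothetical instance the classifier flags as vulnerable (strcpy into a
# fixed buffer), with a placeholder marking the injection point.
PROGRAM = "void f(char *s) { char b[8]; strcpy(b, s); /*INJECT*/ }"
```

Running `attack(PROGRAM, toy_classifier)` evolves a dead-code snippet whose injection lowers the toy model's vulnerability score below that of the unmodified program, mirroring how the paper uses the assessed system's own output as the fitness signal.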
Type: paper
Keywords: adversarial examples; computer security; deep learning; grammatical evolution; security assessment
Language: English
Conference: 2021 Genetic and Evolutionary Computation Conference, GECCO 2021
Conference year: 2021
Published in: GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion
ISBN: 9781450383516
Publication year: 2021
Pages: 1889-1897
Rights: reserved
Files in this item:
File: 3449726.3463222.pdf
Access: archive administrators only (request a copy)
Attachment type: Publisher's Version (Version of Record, VoR)
License: All rights reserved
Size: 875.81 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/324422
Citations
  • Scopus: 3