
vEEGNet: A New Deep Learning Model to Classify and Generate EEG

Zancanaro, A.; Zoppis, I.F.; Manzoni, S.L.; Cisotto, G.
2023

Abstract

The classification of EEG during motor imagery (MI) is a challenging task in neuro-rehabilitation. In 2016, a deep learning (DL) model called EEGNet, based on a convolutional neural network (CNN), and its variants attracted much attention for their ability to reach 80% accuracy in 4-class MI classification. However, these models can poorly explain their output decisions, preventing them from definitively addressing questions related to inter-subject variability, generalization, and optimal classification. In this paper, we propose vEEGNet, a new model based on EEGNet whose objective is two-fold: it classifies MI, but it also reconstructs (and, eventually, generates) EEG signals. The work is still preliminary, but we show that vEEGNet classifies 4 types of MI with state-of-the-art performance and, more interestingly, that the reconstructed signals are consistent with the so-called motor-related cortical potentials, very specific and well-known motor-related EEG patterns. Thus, jointly training vEEGNet to both classify and reconstruct EEG might, in the future, reduce inter-subject performance variability and also generate new EEG samples to augment small datasets and improve classification, with a consequent strong impact on neuro-rehabilitation.
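For readers unfamiliar with this class of models, the PyTorch sketch below illustrates one plausible way to combine an EEGNet-style convolutional encoder with a variational latent space, a 4-class classification head, and a decoder that reconstructs the EEG trial, i.e. the joint classify-and-reconstruct idea the abstract describes. It is a minimal sketch under assumed settings (22 channels, 1000 samples per trial, the layer sizes, the fully connected decoder, and the loss weighting are all illustrative choices), not the authors' actual vEEGNet implementation; the published paper specifies the exact architecture and training details.

```python
# Hypothetical sketch, NOT the authors' released code: an EEGNet-style encoder
# feeding a variational latent space, with a classifier head (4 MI classes)
# and a decoder that reconstructs the EEG trial. All sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EEGNetEncoder(nn.Module):
    """EEGNet-like stack: temporal conv, depthwise spatial conv, separable conv."""
    def __init__(self, n_channels=22, n_samples=1000, F1=8, D=2, F2=16):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),             # temporal filters
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),      # spatial (depthwise)
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Conv2d(F1 * D, F2, (1, 16), padding=(0, 8), groups=F1 * D, bias=False),  # separable
            nn.Conv2d(F2, F2, 1, bias=False),                                    # pointwise
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
        )
        with torch.no_grad():  # infer flattened feature size from a dummy trial
            self.flat_dim = self.block(torch.zeros(1, 1, n_channels, n_samples)).numel()

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        return self.block(x).flatten(1)        # (batch, flat_dim)


class vEEGNetSketch(nn.Module):
    """VAE-style latent shared by a classification head and a reconstruction head."""
    def __init__(self, n_channels=22, n_samples=1000, latent_dim=64, n_classes=4):
        super().__init__()
        self.encoder = EEGNetEncoder(n_channels, n_samples)
        self.to_mu = nn.Linear(self.encoder.flat_dim, latent_dim)
        self.to_logvar = nn.Linear(self.encoder.flat_dim, latent_dim)
        self.classifier = nn.Linear(latent_dim, n_classes)
        # Simple fully connected decoder back to the raw EEG shape (illustrative only).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ELU(),
            nn.Linear(512, n_channels * n_samples),
        )
        self.out_shape = (n_channels, n_samples)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        recon = self.decoder(z).view(-1, 1, *self.out_shape)
        return self.classifier(z), recon, mu, logvar


def joint_loss(logits, recon, mu, logvar, x, labels, beta=1.0):
    """Joint objective: classification + reconstruction + KL regularisation."""
    ce = F.cross_entropy(logits, labels)
    mse = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return ce + mse + beta * kl


if __name__ == "__main__":
    model = vEEGNetSketch()
    x = torch.randn(8, 1, 22, 1000)            # 8 trials, 22 channels, 1000 samples
    y = torch.randint(0, 4, (8,))
    logits, recon, mu, logvar = model(x)
    print(logits.shape, recon.shape, joint_loss(logits, recon, mu, logvar, x, y).item())
```

The design point the abstract highlights is the shared objective: the same latent code z feeds both the classifier and the decoder, so the model is pushed to keep class-discriminative and signal-reconstructing information in a single representation; sampling z from the prior would then be the natural route to generating new EEG trials for data augmentation.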
Type: poster + paper
Keywords: AI, deep learning, variational autoencoder, EEG, machine learning, brain, classification, latent space, inter-subject variability
Language: English
Conference: 9th International Conference on Information and Communication Technologies for Ageing Well and e-Health (ICT4AWE)
Conference year: 2023
Editors: Lozano Pérez, MD; Mulvenna, MD; Ziefle, M
Proceedings: Proceedings of the 9th International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE 2023, Prague, Czech Republic, April 22-24, 2023
ISBN: 978-989-758-645-3
Publication year: 2023
Volume: 2023-April
Pages: 245-252
Access: open
Zancanaro, A., Zoppis, I., Manzoni, S., Cisotto, G. (2023). vEEGNet: A New Deep Learning Model to Classify and Generate EEG. In Proceedings of the 9th International Conference on Information and Communication Technologies for Ageing Well and e-Health, ICT4AWE 2023, Prague, Czech Republic, April 22-24, 2023 (pp.245-252). Setúbal : Science and Technology Publications [10.5220/0011990800003476].
Files in this record:

Zancanaro2023-ICT4AWE-VoR.pdf
  Access: open access
  Description: Conference contribution
  Attachment type: Publisher's Version (Version of Record, VoR)
  License: Creative Commons
  Size: 397.33 kB
  Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/423178
Citations
  • Scopus: 1
  • Web of Science: not available