
Marconi, L., Matamoros Aragon, R., Zoppis, I., Manzoni, S., Mauri, G., Epifania, F. (2020). Approaching explainable recommendations for personalized social learning: the current stage of the educational platform "WhoTeach". In Italian Workshop on Explainable Artificial Intelligence 2020 (pp. 104-111). CEUR-WS.

Approaching explainable recommendations for personalized social learning: the current stage of the educational platform "WhoTeach"

Marconi L.;Matamoros Aragon R. A.;Zoppis I.;Manzoni S.;Mauri G.;Epifania F.
2020

Abstract

Learning and training processes are increasingly affected by the diffusion of Artificial Intelligence (AI) techniques and methods. AI can be exploited in various ways to support education, although deep learning (DL) models in particular typically suffer from some degree of opacity and lack of interpretability. Explainable AI (XAI) aims at creating a set of new AI techniques able to deliver their outputs or decisions with greater transparency and interpretability. Deep attentional mechanisms have proved particularly effective at identifying relevant communities and relationships in a given input network, which can be exploited to provide useful information for interpreting the suggested decision process. In this paper we present the first stages of our ongoing research project, aimed at significantly empowering the recommender system of the educational platform "WhoTeach" by means of explainability, to help teachers and experts create and manage high-quality courses for personalized learning. The presented model is our first attempt to include explainability in the system. As shown, the model has strong potential to provide relevant recommendations. Moreover, it opens the possibility of implementing effective techniques to fully achieve explainability.
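To illustrate the kind of deep attentional mechanism the abstract refers to (this is a hedged sketch, not code from the paper): a minimal single-head graph-attention coefficient computation in the style of Graph Attention Networks, one of the paper's keywords. All names, dimensions, and weight values below are hypothetical; the point is only that the resulting coefficients weight each node's neighbors and can therefore be inspected as interpretability signals.

```python
import numpy as np

def gat_attention(h, W, a, adj):
    """Single-head GAT-style attention coefficients (illustrative sketch).

    h:   (N, F) node feature matrix
    W:   (F, F') shared linear transform
    a:   (2*F',) attention weight vector
    adj: (N, N) binary adjacency matrix, self-loops included
    Returns an (N, N) matrix of coefficients, row-normalized over neighbors.
    """
    z = h @ W                                    # transformed features (N, F')
    n = z.shape[0]
    # e_ij = LeakyReLU(a^T [z_i || z_j]) for every ordered pair (i, j)
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([z[i], z[j]])
            e[i, j] = s if s > 0 else 0.2 * s    # LeakyReLU, slope 0.2
    # mask non-edges, then softmax over each node's neighborhood
    e = np.where(adj > 0, e, -np.inf)
    e = e - e.max(axis=1, keepdims=True)         # numerical stability
    exp_e = np.exp(e)
    return exp_e / exp_e.sum(axis=1, keepdims=True)
```

Each row of the returned matrix sums to 1, and non-neighbors receive exactly zero weight, so the coefficients can be read directly as the relative importance the mechanism assigns to each neighbor.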
slide + paper
Graph Attention Networks; Social Networks; Social Recommendations; WhoTeach;
English
Italian Workshop on Explainable Artificial Intelligence 2020, 25-26 November
2020
Cataldo Musto, Daniele Magazzeni, Salvatore Ruggieri, Giovanni Semeraro
Italian Workshop on Explainable Artificial Intelligence 2020
2020
2742
104
111
none
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/300662
Citations
  • Scopus: 1