
Marconi, L., Matamoros Aragon, R., Fossati, S., Zoppis, I., Actis Grosso, R., Manzoni, S., et al. (2020). Towards Explainable AI for Personalized Teaching: results on experimental activities on the “WhoTeach” educational platform. In Atti Convegno Nazionale DIDAMATiCA 2020 (pp. 186-197).

Towards Explainable AI for Personalized Teaching: results on experimental activities on the “WhoTeach” educational platform

Marconi, L.; Matamoros Aragon, R.; Zoppis, I.; Actis Grosso, R.; Manzoni, S.; Mauri, G.; Epifania, F.
2020

Abstract

Nowadays, learning and training processes are increasingly affected by the diffusion of Artificial Intelligence (AI) techniques and methods. Despite their potential, AI models, and in particular deep learning (DL) models, typically suffer from some degree of opacity and lack of interpretability. Explainable AI (XAI) aims at developing new AI techniques whose outputs and decisions are more transparent and interpretable. Among these techniques, deep attention mechanisms can both improve model performance and explain why a specific output is produced. In this paper we describe the current state of explainability for a collaborative-filtering recommender system (RS) of the “WhoTeach” educational platform, which supports the design of new didactic programs and courses. We draw in particular on the results of an experiment performed with a selected set of users following a cooperative evaluation approach. A key result of the experiment highlights the need to improve explainability and suggests a development direction based on attention mechanisms for the WhoTeach functionalities devoted to suggesting educational resources according to user needs and profiles. Such models can justify the recommendations they provide by means of attention weights, which have also been statistically tested.
paper
Social Networks, WhoTeach, Social Recommendations, Graph Attention Networks
English
DIDAMATiCA 2020 "Smarter School for Smart Cities"
2020
Atti Convegno Nazionale DIDAMATiCA 2020
978-8-89-809161-4
2020
186
197
none
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/325225