Rouhi, A., Spitale, M., Catania, F., Cosentino, G., Gelsomini, M., & Garzotto, F. (2019). Emotify: Emotional game for children with autism spectrum disorder based-on machine learning. In IUI '19 Companion: Companion Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 31-32). Association for Computing Machinery. https://doi.org/10.1145/3308557.3308688
Emotify: Emotional game for children with autism spectrum disorder based-on machine learning
Garzotto F.
2019
Abstract
Children with Autism Spectrum Disorder (ASD) often face challenges in detecting and expressing emotions: it is hard for them to recognize happiness, sadness, and anger in other people and to express their own feelings. This difficulty produces severe impairments in communication and social functioning. The paper proposes a spoken educational game, exploiting Machine Learning techniques, that helps children with ASD learn how to correctly identify and express emotions. The game focuses on four emotional states (happiness, sadness, anger, and neutrality) and is divided into two levels of increasing difficulty: in the first, the child learns how to recognize and express feelings; in the second, the user's emotional skills are assessed and evaluated. The application integrates a multilingual emotion recognizer based on the pitch of the voice.
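To make the pitch-based recognition idea concrete, the sketch below shows one plausible way such a component could be structured: extract simple pitch (F0) statistics from a voice clip and feed them to a four-class classifier. This is only an illustrative sketch, not the authors' implementation; the choice of librosa's pyin pitch tracker, the SVM classifier, the feature set, and the random placeholder training data are all assumptions made here for demonstration.

```python
# Illustrative sketch only: NOT the paper's implementation.
# Assumes librosa for pitch (F0) extraction and scikit-learn for a
# four-class classifier (happiness, sadness, anger, neutrality).
import numpy as np
import librosa
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "anger", "neutrality"]

def pitch_features(y, sr):
    """Summarize the pitch contour of a voice clip as a small feature vector."""
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    f0 = f0[voiced_flag]  # keep voiced frames only (unvoiced frames are NaN)
    if f0.size == 0:
        return np.zeros(4)
    return np.array([f0.mean(), f0.std(), f0.min(), f0.max()])

# Placeholder training data: random feature vectors with random labels,
# standing in for pitch features extracted from labelled children's speech.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))
y_train = rng.integers(0, len(EMOTIONS), size=200)

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# At play time, a recorded utterance would be classified along these lines:
sr = 16000
clip = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s synthetic tone
pred = clf.predict(pitch_features(clip, sr).reshape(1, -1))[0]
print("predicted emotion:", EMOTIONS[pred])
```

In a multilingual setting, relying on prosodic features such as pitch statistics (rather than language-specific lexical content) is one common rationale for this kind of design, which is consistent with the abstract's description of a pitch-based, multilingual recognizer.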


