Emotion Recognition from EEG and Facial Expressions: A Multimodal Approach
Date
2018-01-01
Authors
Chaparro V.
Gomez A.
Salgado A.
Quintero O.L.
Lopez N.
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
The understanding of a psychological phenomenon such as emotion is of paramount importance for psychologists, since it allows them to recognize a pathology and to prescribe appropriate treatment for a patient. In approaching this problem, mathematicians and computational science engineers have proposed different unimodal techniques for emotion recognition from voice, electroencephalography, facial expressions, and physiological data. It is also well known that identifying emotions is a multimodal process. The main goal of this work is to train a computer to do so. In this paper we present our first approach to multimodal emotion recognition via data fusion of electroencephalography and facial expressions. The selected strategy was a feature-level fusion of both electroencephalography and facial micro-expressions, and the classification schemes used were a neural network model and a random forest classifier. The experimental setup was carried out with the balanced multimodal database MAHNOB-HCI. Results are promising compared to those of other authors, reaching 97% accuracy. The feature-level fusion approach used in this work improves on our unimodal techniques by up to 12% per emotion. Therefore, we may conclude that our simple but effective approach improves the overall accuracy. © 2018 IEEE.
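To make the feature-level fusion strategy concrete, the following is a minimal sketch of the general idea: unimodal EEG and facial-expression feature vectors are concatenated into a single representation before a random forest classifier is trained. This is not the authors' implementation; the feature dimensions, trial counts, and synthetic data below are assumptions for illustration only and do not reproduce the MAHNOB-HCI features or the reported results.

```python
# Sketch of feature-level fusion for multimodal emotion recognition.
# Assumes precomputed per-trial EEG and facial feature vectors;
# synthetic stand-ins are generated here so the example runs end to end.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

n_samples = 500        # number of trials (hypothetical)
n_eeg_features = 160   # e.g., band-power features per channel (assumed)
n_face_features = 40   # e.g., micro-expression descriptors (assumed)
n_emotions = 4         # e.g., quadrants of the valence-arousal space

eeg_features = rng.standard_normal((n_samples, n_eeg_features))
face_features = rng.standard_normal((n_samples, n_face_features))
labels = rng.integers(0, n_emotions, size=n_samples)

# Feature-level fusion: concatenate the unimodal feature vectors
# into one joint representation before classification.
fused = np.hstack([eeg_features, face_features])

X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

The design choice illustrated here is that fusion happens before the classifier sees the data, so a single model can exploit cross-modal correlations; a decision-level alternative would instead train one classifier per modality and combine their outputs.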