
Browsing by Subject "Databases, Factual"

Showing 1 - 2 of 2
  • Item
    Emotion Recognition from EEG and Facial Expressions: A Multimodal Approach
    (Institute of Electrical and Electronics Engineers Inc., 2018-01-01) Chaparro V.; Gomez A.; Salgado A.; Quintero O.L.; Lopez N.; Villa L.F.; Universidad EAFIT. Departamento de Ciencias; Modelado Matemático
    The understanding of psychological phenomena such as emotion is of paramount importance for psychologists, since it allows them to recognize a pathology and prescribe due treatment for a patient. While approaching this problem, mathematicians and computational science engineers have proposed different unimodal techniques for emotion recognition from voice, electroencephalography, facial expressions, and physiological data. It is also well known that identifying emotions is a multimodal process. The main goal of this work is to train a computer to do so. In this paper we present our first approach to multimodal emotion recognition via data fusion of electroencephalography and facial expressions. The selected strategy was feature-level fusion of electroencephalography and facial microexpressions, and the classification schemes used were a neural network model and a random forest classifier. The experimental setup was carried out with the balanced multimodal database MAHNOB-HCI. Results are promising compared with those of other authors, reaching 97% accuracy. The feature-level fusion approach used in this work improves on our unimodal techniques by up to 12% per emotion. We therefore conclude that our simple but effective approach improves overall accuracy. © 2018 IEEE.
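    The feature-level fusion described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's method: the EEG and facial feature values and labels are invented, and a simple nearest-centroid rule stands in for the neural network and random forest classifiers the authors actually used.

    ```python
    import numpy as np

    def fuse_features(eeg_feats, face_feats):
        """Feature-level fusion: concatenate per-sample EEG and facial feature vectors."""
        return np.concatenate([eeg_feats, face_feats], axis=1)

    # toy data: 4 samples, 3 EEG features + 2 facial features each (hypothetical values)
    eeg = np.array([[0.1, 0.2, 0.3],
                    [0.9, 0.8, 0.7],
                    [0.2, 0.1, 0.3],
                    [0.8, 0.9, 0.6]])
    face = np.array([[0.0, 0.1],
                     [1.0, 0.9],
                     [0.1, 0.0],
                     [0.9, 1.0]])
    X = fuse_features(eeg, face)      # fused matrix, shape (4, 5)
    y = np.array([0, 1, 0, 1])        # toy emotion labels

    # stand-in classifier on the fused features: nearest class centroid
    centroids = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    pred = np.argmin(((X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    print(X.shape, (pred == y).mean())  # (4, 5) 1.0
    ```

    The key point is that fusion happens before classification: each sample becomes a single concatenated vector, so any classifier can be trained on the joint representation.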
  • No hay miniatura disponible
    Ítem
    Recognition and regionalization of emotions in the arousal-valence plane
    (Institute of Electrical and Electronics Engineers Inc., 2015-01-01) Bustamante, P.A.; Lopez Celani, N.M.; Perez, M.E.; Quintero Montoya, O.L.; Universidad EAFIT. Departamento de Ciencias; Modelado Matemático
    Emotion recognition systems have become important because of the diversity of their applications. Several methodologies have been proposed based on how emotions are reflected in biological systems, such as facial expressions, the activity of the nervous system, or the prosody of the voice. Detecting emotions by voice processing is a noninvasive approach that produces results with an acceptable detection rate. In this work, a feature-extraction algorithm was developed that efficiently classifies different emotional states. Thus, emotions that have not been trained can be associated with a trained emotion belonging to the same region of the valence-arousal plane.
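    The region-based association in the last sentence can be sketched as follows. This is a hypothetical illustration only: the emotion names, their (valence, arousal) coordinates, and the quadrant-plus-nearest-neighbor rule are assumptions, not the authors' actual regionalization.

    ```python
    import math

    # hypothetical (valence, arousal) coordinates for trained emotions
    trained = {
        "joy":     ( 0.8,  0.6),
        "anger":   (-0.7,  0.7),
        "sadness": (-0.6, -0.5),
        "calm":    ( 0.5, -0.6),
    }

    def region(v, a):
        """Quadrant of the valence-arousal plane, as a (sign, sign) pair."""
        return (v >= 0, a >= 0)

    def associate(v, a):
        """Map an untrained emotion's coordinates to the trained emotion
        sharing its region (the nearest one, if several share it)."""
        candidates = [(name, coords) for name, coords in trained.items()
                      if region(*coords) == region(v, a)]
        return min(candidates, key=lambda kv: math.dist((v, a), kv[1]))[0]

    print(associate(0.9, 0.2))    # 'joy': positive valence, positive arousal
    print(associate(-0.2, -0.9))  # 'sadness': negative valence, negative arousal
    ```

    The design choice is that the classifier only needs to place a new emotion in the right region of the plane; the label is then inherited from a trained emotion in that region.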
