International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 179 - Issue 49
Published: Jun 2018
Authors: Mona M. Elamir, Walid Al-Atabany, Mohamed A. Aldosouky
Mona M. Elamir, Walid Al-Atabany, Mohamed A. Aldosouky. Phase Space Density Matrix for Emotion Recognition. International Journal of Computer Applications. 179, 49 (Jun 2018), 37-41. DOI=10.5120/ijca2018917289
@article{ 10.5120/ijca2018917289, author = { Mona M. Elamir and Walid Al-Atabany and Mohamed A. Aldosouky }, title = { Phase Space Density Matrix for Emotion Recognition }, journal = { International Journal of Computer Applications }, year = { 2018 }, volume = { 179 }, number = { 49 }, pages = { 37-41 }, doi = { 10.5120/ijca2018917289 }, publisher = { Foundation of Computer Science (FCS), NY, USA } }
%0 Journal Article %D 2018 %A Mona M. Elamir %A Walid Al-Atabany %A Mohamed A. Aldosouky %T Phase Space Density Matrix for Emotion Recognition %J International Journal of Computer Applications %V 179 %N 49 %P 37-41 %R 10.5120/ijca2018917289 %I Foundation of Computer Science (FCS), NY, USA
Detecting human emotions from physiological signals is an active research area in the development of intelligent human-machine interface systems. Emotions can be expressed either verbally, through emotional vocabulary, or non-verbally, through intonation of voice, facial expressions, gestures, and physiological signals. This paper aims to recognize human emotions from electroencephalogram (EEG) signals by studying the nonlinear behavior of brain activity. A phase space density matrix is generated from the reconstructed EEG phase space, and features are then extracted from it using the gray level co-occurrence matrix (GLCM) method. A one-way ANOVA test is used to select the features that contribute most significantly to emotion classification. Three supervised classifiers (KNN, SVM, and CART) are used to classify emotions into three classes along each of the two basic emotional dimensions. The proposed approach achieves promising preliminary results, with an average accuracy of 95.8% for the arousal dimension and 93.9% for the valence dimension, confirming its robustness as a practical tool for emotion recognition.
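The feature-extraction pipeline described in the abstract — time-delay phase space reconstruction, a density (occupancy) matrix over the reconstructed trajectory, and GLCM-style texture features — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding dimension, delay, bin count, gray-level count, and the single horizontal GLCM offset are all assumed values, and the toy signal stands in for real EEG channels.

```python
import numpy as np

def delay_embed(x, dim=2, tau=4):
    """Time-delay embedding: reconstruct a 2-D phase space trajectory
    from a 1-D signal using delayed copies of the signal as coordinates."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def density_matrix(traj, bins=16):
    """Grid the reconstructed phase space into a bins x bins matrix of
    visit counts (the phase space density matrix)."""
    H, _, _ = np.histogram2d(traj[:, 0], traj[:, 1], bins=bins)
    return H

def glcm_features(img, levels=8):
    """Quantize the matrix to `levels` gray levels, build a co-occurrence
    matrix for a horizontal offset of 1 pixel, and return three
    Haralick-style texture features."""
    q = np.floor(levels * img / (img.max() + 1)).astype(int)  # 0..levels-1
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):  # offset (0, 1)
        glcm[a, b] += 1
    p = glcm / glcm.sum()                  # normalize to joint probabilities
    i, j = np.indices(p.shape)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + (i - j) ** 2))),
        "energy": float(np.sum(p ** 2)),
    }

# Toy oscillatory signal with noise as a stand-in for one EEG channel.
rng = np.random.default_rng(0)
t = np.arange(2000) / 128.0
sig = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)

traj = delay_embed(sig)          # (1996, 2) trajectory
dm = density_matrix(traj)        # 16 x 16 density matrix
feats = glcm_features(dm)        # feature vector for a classifier
print(dm.shape, sorted(feats))
```

In the paper's setting, the resulting feature vectors would then be filtered by one-way ANOVA and fed to the KNN, SVM, or CART classifier.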