International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 177 - Issue 24
Published: Dec 2019
Authors: Mohammad Almasi, Hamed Fathi, Sayed Adel Ghaeinian, Samaneh Samiee
Mohammad Almasi, Hamed Fathi, Sayed Adel Ghaeinian, Samaneh Samiee. Human Action Recognition through the First-Person Point of view, Case Study Two Basic Task. International Journal of Computer Applications. 177, 24 (Dec 2019), 19-23. DOI=10.5120/ijca2019919703
@article{ 10.5120/ijca2019919703, author = { Mohammad Almasi and Hamed Fathi and Sayed Adel Ghaeinian and Samaneh Samiee }, title = { Human Action Recognition through the First-Person Point of view, Case Study Two Basic Task }, journal = { International Journal of Computer Applications }, year = { 2019 }, volume = { 177 }, number = { 24 }, pages = { 19-23 }, doi = { 10.5120/ijca2019919703 }, publisher = { Foundation of Computer Science (FCS), NY, USA } }
%0 Journal Article %D 2019 %A Mohammad Almasi %A Hamed Fathi %A Sayed Adel Ghaeinian %A Samaneh Samiee %T Human Action Recognition through the First-Person Point of view, Case Study Two Basic Task %J International Journal of Computer Applications %V 177 %N 24 %P 19-23 %R 10.5120/ijca2019919703 %I Foundation of Computer Science (FCS), NY, USA
In this study, a human motion dataset of indoor and outdoor actions is built and developed using a head-mounted camera together with an Xsens system for motion tracking. The key point in structuring the dataset is that it is used to train a deep neural network that orders the sequence of frames within each performed task (washing, eating, etc.). Finally, a 3D model of the person is proposed at every frame, centered on a structure comparable to that of the first network. The dataset comprises more than 120,000 frames taken from 7 different people, each acting out different tasks in diverse indoor and outdoor scenarios. The frames of every video sequence were 3D-synchronized and segmented into 23 body parts.
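The abstract describes turning a stream of 3D-synchronized, 23-segment pose frames into labeled sequences for a deep network. A minimal sketch of that preprocessing step is shown below; this is not the authors' code, and the window length, stride, and action label names ("washing", "eating") are illustrative assumptions.

```python
import numpy as np

def make_windows(frames, labels, window=30, stride=15):
    """Cut a frame stream into overlapping fixed-length clips,
    labeling each clip with the majority action label of its frames.

    frames: array of shape (n_frames, 23, 3) -- one (23 segments x 3
    coordinates) pose per frame, matching the 23-part segmentation
    mentioned in the abstract. labels: one action label per frame.
    """
    clips, clip_labels = [], []
    for start in range(0, len(frames) - window + 1, stride):
        clip = frames[start:start + window]
        window_labels = labels[start:start + window]
        # Majority vote over the per-frame action labels.
        values, counts = np.unique(window_labels, return_counts=True)
        clips.append(clip)
        clip_labels.append(values[np.argmax(counts)])
    return np.stack(clips), np.array(clip_labels)

# Synthetic stand-in data: 100 frames of 23 segments x 3 coordinates.
frames = np.random.rand(100, 23, 3)
labels = np.array(["washing"] * 60 + ["eating"] * 40)
clips, clip_labels = make_windows(frames, labels)
print(clips.shape)      # (5, 30, 23, 3)
print(clip_labels[0])   # washing
```

Each resulting clip of shape (window, 23, 3) would then be fed to a sequence model; the choice of a 50% window overlap here is a common default, not something specified in the paper.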