International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 122 - Issue 7
Published: July 2015
Authors: Inkyu Sa, Ho Seok Ahn
Inkyu Sa, Ho Seok Ahn. Visual 3D Model-based Tracking toward Autonomous Live Sports Broadcasting using a VTOL Unmanned Aerial Vehicle in GPS-Impaired Environments. International Journal of Computer Applications. 122, 7 (July 2015), 1-7. DOI=10.5120/21709-4825
@article{10.5120/21709-4825,
  author    = {Inkyu Sa and Ho Seok Ahn},
  title     = {Visual 3D Model-based Tracking toward Autonomous Live Sports Broadcasting using a VTOL Unmanned Aerial Vehicle in GPS-Impaired Environments},
  journal   = {International Journal of Computer Applications},
  year      = {2015},
  volume    = {122},
  number    = {7},
  pages     = {1-7},
  doi       = {10.5120/21709-4825},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2015
%A Inkyu Sa
%A Ho Seok Ahn
%T Visual 3D Model-based Tracking toward Autonomous Live Sports Broadcasting using a VTOL Unmanned Aerial Vehicle in GPS-Impaired Environments
%J International Journal of Computer Applications
%V 122
%N 7
%P 1-7
%R 10.5120/21709-4825
%I Foundation of Computer Science (FCS), NY, USA
This paper presents a novel approach to autonomous live sports broadcasting using visual 3D model-based tracking and a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV), such as a quadcopter or hexacopter, in GPS-impaired environments. To achieve this level of autonomy, position estimation is essential, and it is a highly challenging problem with a monocular camera due to scale ambiguity. In this paper, we track a tennis court, whose dimensions are standardized, using a moving edge-based tracker, and recover the metric scale from prior knowledge of the fixed playing field. Experimental results are demonstrated in three different environments: static scenes, real broadcast video, and indoor flight. We also evaluate the proposed approach against ground truth provided by a motion capture system and achieve position estimation with an error standard deviation of less than 0.02 m.
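The core idea of recovering metric scale from the known dimensions of the playing field can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: it takes two endpoints of a court line from an up-to-scale monocular reconstruction, compares their distance to the known metric width of a doubles tennis court (10.97 m), and applies the resulting scale factor to a camera position estimate.

```python
import numpy as np

# Known metric dimension used to fix the monocular scale.
# Assumption for illustration: the doubles court width of 10.97 m;
# the paper's tracker uses the full 3D court model.
COURT_WIDTH_M = 10.97

def recover_scale(p_left, p_right, court_width_m=COURT_WIDTH_M):
    """Return the scale factor mapping reconstruction units to metres.

    p_left, p_right: 3D endpoints of the court's baseline expressed in the
    arbitrary units of the up-to-scale reconstruction (hypothetical inputs,
    e.g. taken from the tracked court model).
    """
    reconstructed_width = np.linalg.norm(np.asarray(p_right) - np.asarray(p_left))
    return court_width_m / reconstructed_width

def to_metric(position, scale):
    """Apply the recovered scale to an up-to-scale camera/UAV position."""
    return scale * np.asarray(position)

if __name__ == "__main__":
    # Hypothetical up-to-scale baseline endpoints and camera position.
    left, right = [0.0, 0.0, 0.0], [2.194, 0.0, 0.0]
    scale = recover_scale(left, right)        # -> 5.0
    print(to_metric([0.5, 1.2, 0.8], scale))  # position estimate in metres
```

Any structure of known, fixed size in the scene could serve the same purpose; the tennis court is convenient because its line dimensions are standardized and its edges are easy to track.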