International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 130 - Issue 16
Published: November 2015
Authors: Jai Vardhan Singh, Girijesh Prasad
DOI: 10.5120/ijca2015907194
Jai Vardhan Singh, Girijesh Prasad. Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry. International Journal of Computer Applications. 130, 16 (November 2015), 16-22. DOI=10.5120/ijca2015907194
@article{ 10.5120/ijca2015907194,
author = { Jai Vardhan Singh, Girijesh Prasad },
title = { Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry },
journal = { International Journal of Computer Applications },
year = { 2015 },
volume = { 130 },
number = { 16 },
pages = { 16-22 },
doi = { 10.5120/ijca2015907194 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2015
%A Jai Vardhan Singh
%A Girijesh Prasad
%T Enhancing an Eye-Tracker based Human-Computer Interface with Multi-modal Accessibility Applied for Text Entry
%J International Journal of Computer Applications
%V 130
%N 16
%P 16-22
%R 10.5120/ijca2015907194
%I Foundation of Computer Science (FCS), NY, USA
In the natural course of events, human beings usually combine multiple sensory modalities to communicate effectively and to carry out day-to-day tasks efficiently. During verbal conversation, for instance, we use voice, eye gaze, and various body gestures. Effective human-computer interaction likewise involves hands, eyes, and voice, where available. By combining multi-sensory modalities, we can therefore make the whole interaction more natural and ensure enhanced performance, even for disabled users. Towards this end, we have developed a multi-modal human-computer interface (HCI) by combining an eye-tracker with a soft-switch, which may be considered as representing another modality. This multi-modal HCI is applied to text entry using a virtual keyboard designed in-house to facilitate enhanced performance. Our experimental results demonstrate that multi-modal text entry through the virtual keyboard is more efficient and less strenuous than a single-modality system, and that it also solves the Midas-touch problem inherent in eye-tracker based HCI systems in which dwell time alone is used to select a character.
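The contrast the abstract draws, between dwell-time-only selection and gaze pointing confirmed by a soft-switch press, can be sketched in code. The following is a minimal, purely illustrative simulation: the function names, the 800 ms dwell threshold, and the sample data are assumptions for demonstration, not details taken from the paper.

```python
# Illustrative sketch: dwell-time-only selection (prone to the Midas-touch
# problem) versus gaze pointing confirmed by a soft-switch press.
# All names, thresholds, and data below are hypothetical.

DWELL_THRESHOLD_MS = 800  # assumed dwell time needed to "type" a key

def dwell_select(gaze_samples):
    """Select a key once gaze has rested on it for DWELL_THRESHOLD_MS.

    gaze_samples: list of (timestamp_ms, key) pairs.
    Every sufficiently long fixation triggers a selection, so merely
    looking at the keyboard can type unintended characters (Midas touch).
    """
    selections = []
    current_key, start = None, None
    for t, key in gaze_samples:
        if key != current_key:
            current_key, start = key, t          # fixation moved: restart timer
        elif t - start >= DWELL_THRESHOLD_MS:
            selections.append(key)               # dwell threshold reached
            current_key, start = None, None      # reset after selection
    return selections

def switch_confirm_select(gaze_samples, switch_presses):
    """Select the currently fixated key only when the soft-switch is pressed.

    switch_presses: list of timestamps (ms). Gaze does the pointing; the
    switch does the selecting, so idle gaze never types anything.
    """
    selections = []
    for press_t in switch_presses:
        # Take the key under gaze at (or just before) the press time.
        fixated = [k for t, k in gaze_samples if t <= press_t]
        if fixated:
            selections.append(fixated[-1])
    return selections

# A user glances at "A", lingers on "B" while reading the layout,
# then deliberately picks "C" with a switch press.
gaze = [(0, "A"), (300, "B"), (600, "B"), (900, "B"), (1200, "B"),
        (1500, "C"), (1800, "C")]
print(dwell_select(gaze))                   # ['B'] : lingering gaze types an unintended key
print(switch_confirm_select(gaze, [1800]))  # ['C'] : only the intended key
```

In the dwell-only run, the lingering fixation on "B" is typed even though the user was only reading the keyboard, while the intended "C" never reaches the threshold; with the switch as a second modality, dwell time is no longer overloaded as both pointing and selection, which is exactly how the Midas-touch problem is avoided.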