International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 66
Published: December 2025
Authors: Leena Chandrashekar, Sanjay S.B., Raghavendra M. Hegde, Samarth Shinnur, Samrudh B.R.
DOI: 10.5120/ijca2025926116
Leena Chandrashekar, Sanjay S.B., Raghavendra M. Hegde, Samarth Shinnur, Samrudh B.R. Towards Adaptive Real-Time Hand Sign Language Interfaces Addressing User Independence, Pose and Occlusion Variability with Machine Learning. International Journal of Computer Applications. 187, 66 (December 2025), 41-51. DOI=10.5120/ijca2025926116
@article{10.5120/ijca2025926116,
  author    = { Leena Chandrashekar and Sanjay S.B. and Raghavendra M. Hegde and Samarth Shinnur and Samrudh B.R. },
  title     = { Towards Adaptive Real-Time Hand Sign Language Interfaces Addressing User Independence, Pose and Occlusion Variability with Machine Learning },
  journal   = { International Journal of Computer Applications },
  year      = { 2025 },
  volume    = { 187 },
  number    = { 66 },
  pages     = { 41-51 },
  doi       = { 10.5120/ijca2025926116 },
  publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2025
%A Leena Chandrashekar
%A Sanjay S.B.
%A Raghavendra M. Hegde
%A Samarth Shinnur
%A Samrudh B.R.
%T Towards Adaptive Real-Time Hand Sign Language Interfaces Addressing User Independence, Pose and Occlusion Variability with Machine Learning
%J International Journal of Computer Applications
%V 187
%N 66
%P 41-51
%R 10.5120/ijca2025926116
%I Foundation of Computer Science (FCS), NY, USA
Human-Computer Interaction (HCI) has become a critical component of navigating and connecting with the digital world as technology has advanced. Hand gesture recognition has received substantial interest as a natural and intuitive communication interface. This paper describes the design and implementation of a real-time hand gesture detection system for supporting HCI, with a special focus on assisting people with hearing and speech impairments. The proposed system captures hand gestures with a real-time video camera and builds a bespoke dataset made robust by accounting for user, pose, and occlusion variability. A Convolutional Neural Network (CNN) extracts features, identifying 21 key features for each hand gesture. These features are then classified with a Random Forest classifier, which achieves an overall accuracy of 94.58% across multiple instances. Recognized gestures are translated into text and speech, enabling efficient and convenient communication. The system also allows individual gestures to be combined into complete sentences of the kind used in everyday interactions. Performance assessment under different lighting conditions yields a PSNR of 3 to 4.27 dB, suggesting robustness to illumination fluctuations. A graphical user interface (GUI) with a feedback system supports seamless two-way interaction, improving usability and accessibility.
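
To make the recognition pipeline concrete, the sketch below pairs per-frame extraction of 21 hand features with a Random Forest classifier, as the abstract describes. The abstract credits a CNN with producing the 21 features but does not name the implementation; MediaPipe Hands, whose underlying CNN outputs exactly 21 landmarks per hand, is an assumed stand-in here, and the camera index, hyperparameters, and training arrays X and y are illustrative rather than the authors' settings.

import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def landmark_vector(frame_bgr, hands):
    """Extract 21 hand landmarks from a BGR frame; return a flat vector or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None  # no hand detected in this frame
    pts = result.multi_hand_landmarks[0].landmark  # 21 normalized landmarks
    return np.array([[p.x, p.y] for p in pts], dtype=np.float32).ravel()

def train_classifier(X, y):
    """Fit a Random Forest on landmark vectors X with gesture labels y."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, y)
    return clf

if __name__ == "__main__":
    hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
    cap = cv2.VideoCapture(0)  # real-time capture from the default camera
    ok, frame = cap.read()
    if ok:
        vec = landmark_vector(frame, hands)
        # vec is appended to the dataset during collection, or passed to
        # clf.predict([vec]) at inference time to obtain a gesture label.
    cap.release()

Using normalized (x, y) coordinates keeps the feature vector independent of frame resolution, which is one way to address the user and pose variability the paper targets.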
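
The gesture-to-speech stage can be approximated with any offline text-to-speech engine; pyttsx3 below is an assumption, since the abstract does not name the engine used, and the example sentence is illustrative.

import pyttsx3

def speak(sentence):
    """Voice a recognized gesture sequence as audible speech (offline TTS)."""
    engine = pyttsx3.init()
    engine.say(sentence)
    engine.runAndWait()

# Gestures mapped to words can be buffered into a sentence first, e.g.:
# speak("good morning")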
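
The illumination-robustness figure quoted above is a peak signal-to-noise ratio, PSNR = 10 * log10(MAX^2 / MSE), computed between frames of the same scene under different lighting. A minimal implementation follows; pairing a reference frame with a re-lit frame is an assumed evaluation protocol, as the abstract does not spell it out.

import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# e.g. psnr(frame_normal_light, frame_low_light) -> value in dB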