International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 31
Published: August 2025
Authors: Hitesh Kumar Gupta
DOI: 10.5120/ijca2025925560
Hitesh Kumar Gupta. When Better Eyes Lead to Blindness: A Diagnostic Study of the Information Bottleneck in CNN-LSTM Image Captioning Models. International Journal of Computer Applications. 187, 31 (August 2025), 1-9. DOI=10.5120/ijca2025925560
@article{ 10.5120/ijca2025925560,
author = { Hitesh Kumar Gupta },
title = { When Better Eyes Lead to Blindness: A Diagnostic Study of the Information Bottleneck in CNN-LSTM Image Captioning Models },
journal = { International Journal of Computer Applications },
year = { 2025 },
volume = { 187 },
number = { 31 },
pages = { 1-9 },
doi = { 10.5120/ijca2025925560 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2025
%A Hitesh Kumar Gupta
%T When Better Eyes Lead to Blindness: A Diagnostic Study of the Information Bottleneck in CNN-LSTM Image Captioning Models
%J International Journal of Computer Applications
%V 187
%N 31
%P 1-9
%R 10.5120/ijca2025925560
%I Foundation of Computer Science (FCS), NY, USA
Image captioning, situated at the intersection of computer vision and natural language processing, requires a sophisticated understanding of both visual scenes and linguistic structure. While modern approaches are dominated by large-scale Transformer architectures, this paper documents the systematic, iterative development of foundational image captioning models, progressing from a simple CNN-LSTM encoder-decoder to a competitive attention-based system. It presents a series of five models, beginning with Genesis and concluding with Nexus, an advanced model featuring an EfficientNetV2B3 backbone and a dynamic attention mechanism. The experiments chart the impact of each architectural enhancement and demonstrate a key finding within the classic CNN-LSTM paradigm: merely upgrading the visual backbone without a corresponding attention mechanism can degrade performance, because the single-vector bottleneck cannot transmit the richer visual detail. This insight validates the architectural shift to attention. Trained on the MS COCO 2017 dataset, the final model, Nexus, achieves a BLEU-4 score of 31.4, surpassing several foundational benchmarks and validating the iterative design process. This work provides a clear, replicable blueprint for understanding the core architectural principles that underpin modern vision-language tasks.
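To make the abstract's central contrast concrete, the sketch below illustrates (in PyTorch) how a Bahdanau-style additive attention module lets the decoder re-weight spatial CNN features at every step, versus the single pooled vector that forms the bottleneck. This is a minimal illustrative sketch, not the paper's actual implementation: the module name, the 7x7 grid, and the 1536-channel feature dimension are assumptions chosen to match a typical EfficientNetV2B3-style backbone.

```python
# Illustrative sketch (not the paper's code): single-vector bottleneck
# vs. additive attention over spatial CNN features.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Bahdanau-style attention: the decoder re-weights CNN grid regions
    at each step instead of consuming one globally pooled vector."""
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.w_feat = nn.Linear(feat_dim, attn_dim)      # project each region
        self.w_hidden = nn.Linear(hidden_dim, attn_dim)  # project decoder state
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, features: torch.Tensor, hidden: torch.Tensor):
        # features: (batch, regions, feat_dim) -- flattened CNN feature grid
        # hidden:   (batch, hidden_dim)        -- current LSTM hidden state
        energy = torch.tanh(self.w_feat(features) + self.w_hidden(hidden).unsqueeze(1))
        alpha = torch.softmax(self.score(energy), dim=1)  # (batch, regions, 1)
        context = (alpha * features).sum(dim=1)           # (batch, feat_dim)
        return context, alpha.squeeze(-1)

# Bottleneck baseline: global pooling collapses the grid into one vector,
# so extra detail from a stronger backbone never reaches the decoder.
features = torch.randn(2, 49, 1536)  # hypothetical 7x7 grid, 1536 channels
pooled = features.mean(dim=1)        # (2, 1536): the single-vector bottleneck

# Attention alternative: a fresh, state-dependent context at each step.
attn = AdditiveAttention(feat_dim=1536, hidden_dim=512, attn_dim=256)
context, alpha = attn(features, torch.randn(2, 512))
print(context.shape, alpha.shape)    # torch.Size([2, 1536]) torch.Size([2, 49])
```

Under this framing, upgrading the backbone enlarges what `features` encodes, but `pooled` stays a fixed-size summary; only the attended `context`, recomputed per decoding step, can exploit the added spatial detail.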