International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 63
Published: December 2025
Authors: Sadeen Ghaleb Alsabbagh, Suhair Amer
DOI: 10.5120/ijca2025926054
Sadeen Ghaleb Alsabbagh and Suhair Amer. Algorithmic and Empirical contributions in Linguistic Architectures for Grounding Hallucinating Models. International Journal of Computer Applications 187, 63 (December 2025), 60-67. DOI=10.5120/ijca2025926054
@article{10.5120/ijca2025926054,
  author    = {Sadeen Ghaleb Alsabbagh and Suhair Amer},
  title     = {Algorithmic and Empirical contributions in Linguistic Architectures for Grounding Hallucinating Models},
  journal   = {International Journal of Computer Applications},
  year      = {2025},
  volume    = {187},
  number    = {63},
  pages     = {60--67},
  doi       = {10.5120/ijca2025926054},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2025
%A Sadeen Ghaleb Alsabbagh
%A Suhair Amer
%T Algorithmic and Empirical contributions in Linguistic Architectures for Grounding Hallucinating Models
%J International Journal of Computer Applications
%V 187
%N 63
%P 60-67
%R 10.5120/ijca2025926054
%I Foundation of Computer Science (FCS), NY, USA
This paper examines hallucination in large language models (LLMs) through the lens of linguistic grounding. Hallucinations—plausible yet inaccurate outputs—undermine reliability, interpretability, and trust in generative systems. Existing mitigation strategies, including retrieval-augmented generation, fact-checking, and reinforcement learning from human feedback (RLHF), vary in effectiveness but share a reliance on post-hoc correction rather than representational grounding. By comparing algorithmic approaches that optimize model behavior with empirical methods that depend on observed or human-guided validation, this study reveals a structural gap: current systems lack a semantic foundation to constrain generative drift. To address this, the paper introduces linguistic frames—structured templates capturing meaning, roles, and contexts—as a pathway for embedding semantic constraints directly into model architectures. Framed grounding offers a route toward architectures that balance fluency with truthfulness, positioning semantic representation as central to sustainable hallucination mitigation.
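To make the frame idea concrete, the sketch below models a linguistic frame as a typed template of semantic roles and validates a generated claim's role bindings against it. This is a minimal illustration, not the architecture described in the paper: the field names (predicate, roles, contexts), the entity-type lookup, and the validation rule are all assumptions introduced for exposition.

```python
from dataclasses import dataclass, field


@dataclass
class LinguisticFrame:
    """A structured template pairing a core meaning with its semantic
    roles and admissible contexts (hypothetical field names)."""
    predicate: str                 # core meaning, e.g. "acquire"
    roles: dict[str, str]          # role name -> required entity type
    contexts: list[str] = field(default_factory=list)  # discourse contexts where the frame applies

    def satisfied_by(self, bindings: dict[str, str],
                     entity_types: dict[str, str]) -> bool:
        """Return True iff every required role is bound to an entity of
        the required type; an unbound role or a type mismatch fails."""
        for role, required_type in self.roles.items():
            filler = bindings.get(role)
            if filler is None or entity_types.get(filler) != required_type:
                return False
        return True


# Toy usage: check a generated claim "Alice acquired MegaCorp" against the frame.
acquire = LinguisticFrame(
    predicate="acquire",
    roles={"agent": "Organization", "theme": "Organization"},
    contexts=["business-news"],
)
entity_types = {"Alice": "Person", "MegaCorp": "Organization", "TinyCo": "Organization"}

print(acquire.satisfied_by({"agent": "Alice", "theme": "MegaCorp"}, entity_types))   # False: agent is a Person
print(acquire.satisfied_by({"agent": "TinyCo", "theme": "MegaCorp"}, entity_types))  # True
```

In the grounded architectures the abstract argues for, a check of this kind would act during generation, constraining which continuations are admissible, rather than serving as the post-hoc filter shown in this toy example.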