Research Article

Algorithmic and Empirical Contributions in Linguistic Architectures for Grounding Hallucinating Models

by Sadeen Ghaleb Alsabbagh, Suhair Amer
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 63
Published: December 2025
DOI: 10.5120/ijca2025926054

Sadeen Ghaleb Alsabbagh, Suhair Amer. Algorithmic and Empirical Contributions in Linguistic Architectures for Grounding Hallucinating Models. International Journal of Computer Applications. 187, 63 (December 2025), 60-67. DOI=10.5120/ijca2025926054

@article{10.5120/ijca2025926054,
  author    = {Sadeen Ghaleb Alsabbagh and Suhair Amer},
  title     = {Algorithmic and Empirical Contributions in Linguistic Architectures for Grounding Hallucinating Models},
  journal   = {International Journal of Computer Applications},
  year      = {2025},
  volume    = {187},
  number    = {63},
  pages     = {60-67},
  doi       = {10.5120/ijca2025926054},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2025
%A Sadeen Ghaleb Alsabbagh
%A Suhair Amer
%T Algorithmic and Empirical Contributions in Linguistic Architectures for Grounding Hallucinating Models
%J International Journal of Computer Applications
%V 187
%N 63
%P 60-67
%R 10.5120/ijca2025926054
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This paper examines hallucination in large language models (LLMs) through the lens of linguistic grounding. Hallucinations (plausible yet inaccurate outputs) undermine reliability, interpretability, and trust in generative systems. Existing mitigation strategies, including retrieval-augmented generation, fact-checking, and reinforcement learning from human feedback, vary in effectiveness but share a reliance on post-hoc correction rather than representational grounding. By comparing algorithmic approaches that optimize model behavior with empirical methods that depend on observed or human-guided validation, this study reveals a structural gap: current systems lack a semantic foundation to constrain generative drift. To address this, the paper introduces linguistic frames, structured templates capturing meaning, roles, and contexts, as a pathway for embedding semantic constraints directly into model architectures. Frame-based grounding offers a route toward architectures that balance fluency with truthfulness, positioning semantic representation as central to sustainable hallucination mitigation.
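
The full text is not reproduced on this page, so the following Python fragment is only a minimal, hypothetical sketch of the abstract's central idea (all names and the type-checking function are illustrative assumptions, not code or an API from the paper): a linguistic frame modeled as a structured template with named semantic roles, where a candidate generation is accepted only if every role filler satisfies the frame's type constraints.

from dataclasses import dataclass

# Hypothetical illustration (not code from the paper): a linguistic
# frame as a structured template pairing a frame name with named
# semantic roles and per-role type constraints.

@dataclass
class Frame:
    name: str                      # e.g. "Commerce_buy", FrameNet-style
    roles: dict[str, str]          # role name -> filler from generated text
    allowed_types: dict[str, set]  # role name -> admissible semantic types

    def is_grounded(self, type_of) -> bool:
        """Return True only if every role filler has an admissible type.

        `type_of` maps a filler string to a coarse semantic type,
        e.g. via an entity linker or tagger supplied by the caller.
        """
        return all(
            type_of(filler) in self.allowed_types.get(role, set())
            for role, filler in self.roles.items()
        )

# Usage: keep a candidate generation only if its role fillers
# satisfy the frame's constraints.
frame = Frame(
    name="Commerce_buy",
    roles={"Buyer": "the company", "Goods": "a patent"},
    allowed_types={"Buyer": {"AGENT"}, "Goods": {"ARTIFACT"}},
)
print(frame.is_grounded(lambda f: "AGENT" if "company" in f else "ARTIFACT"))

Under this reading of the abstract, the semantic check runs before a generation is emitted rather than as post-hoc correction, which is the structural shift the paper argues for.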

Index Terms
Computer Science
Information Sciences
Keywords

Large Language Models (LLMs), Hallucination Mitigation, Linguistic Frames, Frame Semantics, Knowledge Grounding
