Research Article

Proactive Anomaly Detection in Nuclear Power Plants Using Deep Autoencoders: Enhancing Explainability with LLMs

by Tapendra Baduwal
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 65
Published: December 2025
Authors: Tapendra Baduwal
DOI: 10.5120/ijca2025926084

Tapendra Baduwal. Proactive Anomaly Detection in Nuclear Power Plants Using Deep Autoencoders: Enhancing Explainability with LLMs. International Journal of Computer Applications. 187, 65 (December 2025), 1-9. DOI=10.5120/ijca2025926084

    @article{10.5120/ijca2025926084,
      author    = {Tapendra Baduwal},
      title     = {Proactive Anomaly Detection in Nuclear Power Plants Using Deep Autoencoders: Enhancing Explainability with LLMs},
      journal   = {International Journal of Computer Applications},
      year      = {2025},
      volume    = {187},
      number    = {65},
      pages     = {1-9},
      doi       = {10.5120/ijca2025926084},
      publisher = {Foundation of Computer Science (FCS), NY, USA}
    }

    %0 Journal Article
    %D 2025
    %A Tapendra Baduwal
    %T Proactive Anomaly Detection in Nuclear Power Plants Using Deep Autoencoders: Enhancing Explainability with LLMs
    %J International Journal of Computer Applications
    %V 187
    %N 65
    %P 1-9
    %R 10.5120/ijca2025926084
    %I Foundation of Computer Science (FCS), NY, USA
Abstract

In real-world applications such as nuclear power plants, failure data are often scarce. Because supervised learning requires labeled failure examples, an unsupervised deep autoencoder is employed instead: the model is trained on a normal operational dataset, its reconstruction error is calculated, and a threshold is set for analyzing new, unseen data. Any sample exceeding this threshold is classified as abnormal, and the five features with the highest reconstruction errors are identified as the top contributors. The proposed deep autoencoder uses Leaky Rectified Linear Unit (LeakyReLU) and Exponential Linear Unit (ELU) activation functions to mitigate the 'dying neurons' problem and to capture complex, non-linear correlations between features. To enhance explainability, large language models (LLMs) are leveraged to analyze potential accident types and highlight likely areas of concern. Experiments were conducted on nuclear power plant accident data (NPPAD) generated with the widely adopted PCTRAN simulation software. Comparative evaluations against Principal Component Analysis (PCA), Isolation Forest, and a ReLU-based autoencoder show that the proposed deep autoencoder achieves the best performance. Together, these methods form a proactive anomaly detection pipeline that enables plant operators to detect potential accidents, identify their root causes, and make data-driven decisions, thereby improving safety, security, and timely maintenance.
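
To make the detection stage concrete, the sketch below shows one way the approach described above could be implemented in PyTorch. The layer sizes, the 95th-percentile threshold rule, and all function and variable names (DeepAutoencoder, fit_threshold, detect) are illustrative assumptions rather than the paper's exact configuration; the abstract states only that a threshold is set on the reconstruction error and that the top five contributing features are reported.

    import torch
    import torch.nn as nn

    class DeepAutoencoder(nn.Module):
        """Symmetric autoencoder with LeakyReLU/ELU activations
        to mitigate 'dying neurons' (illustrative layer sizes)."""
        def __init__(self, n_features: int):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_features, 64), nn.LeakyReLU(0.01),
                nn.Linear(64, 32), nn.ELU(),
                nn.Linear(32, 16), nn.ELU(),
            )
            self.decoder = nn.Sequential(
                nn.Linear(16, 32), nn.ELU(),
                nn.Linear(32, 64), nn.LeakyReLU(0.01),
                nn.Linear(64, n_features),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.decoder(self.encoder(x))

    def fit_threshold(model: nn.Module, normal: torch.Tensor, q: float = 0.95) -> float:
        # Threshold = q-th percentile of per-sample reconstruction MSE on
        # normal training data (the percentile rule is an assumption here).
        model.eval()
        with torch.no_grad():
            err = ((model(normal) - normal) ** 2).mean(dim=1)
        return torch.quantile(err, q).item()

    def detect(model: nn.Module, x: torch.Tensor, threshold: float,
               feature_names: list[str], top_k: int = 5):
        # Flag samples whose reconstruction MSE exceeds the threshold and
        # report the top-k features with the largest squared error.
        model.eval()
        with torch.no_grad():
            per_feature = (model(x) - x) ** 2     # shape: (batch, n_features)
            per_sample = per_feature.mean(dim=1)  # shape: (batch,)
        reports = []
        for sample_err, feat_err in zip(per_sample, per_feature):
            if sample_err.item() > threshold:
                top = torch.topk(feat_err, k=top_k).indices.tolist()
                reports.append({"error": sample_err.item(),
                                "top_features": [feature_names[i] for i in top]})
            else:
                reports.append(None)              # within normal range
        return reports

A report produced this way, the per-sample reconstruction error together with its top five contributing sensor features, is the kind of structured summary that could then be passed to an LLM prompt for the accident-type analysis described in the abstract.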

References
  • Sepideh Pashami, Slawomir Nowaczyk, Yuantao Fan, Jakub Jakubowski, Nuno Paiva, Narjes Davari, Szymon Bobek, Samaneh Jamshidi, Hamid Sarmadi, Abdallah Alabdallah, et al. Explainable predictive maintenance. arXiv preprint arXiv:2306.05120, 2023.
  • George Bereznai. Nuclear power plant systems and operation. University of Ontario Institute of Technology, Oshawa, 2005.
  • World Nuclear Association. Nuclear power in the world today, September 2024. URL https://world-nuclear.org/information-library/current-and-future-generation/nuclear-power-in-the-world-today.
  • Ramadass Sathya, Annamma Abraham, et al. Comparison of supervised and unsupervised learning algorithms for pattern classification. International Journal of Advanced Research in Artificial Intelligence, 2(2):34–38, 2013.
  • Ritu Sharma, Kavya Sharma, and Apurva Khanna. Study of supervised learning and unsupervised learning. International Journal for Research in Applied Science and Engineering Technology, 8(6):588–593, 2020.
  • Naoya Takeishi. Shapley values of reconstruction errors of PCA for explaining anomaly detection. In 2019 International Conference on Data Mining Workshops (ICDMW), pages 793–798. IEEE, 2019.
  • Ben Qi, Xingyu Xiao, Jingang Liang, Li-chi Cliff Po, Liguo Zhang, and Jiejuan Tong. An open time-series simulated dataset covering various accidents for nuclear power plants. Scientific Data, 9(1):766, 2022.
  • International Atomic Energy Agency. PCTRAN generic pressurized water reactor simulator exercise handbook. IAEA, Vienna, 2019.
  • Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher, Xavier Amatriain, and Jianfeng Gao. Large language models: A survey. arXiv preprint arXiv:2402.06196, 2024.
  • Humza Naveed, Asad Ullah Khan, Shi Qiu, Muhammad Saqib, Saeed Anwar, Muhammad Usman, Naveed Akhtar, Nick Barnes, and Ajmal Mian. A comprehensive overview of large language models. arXiv preprint arXiv:2307.06435, 2023.
  • Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, 2017.
  • Hongzuo Xu, Guansong Pang, Yijie Wang, and Yongjun Wang. Deep isolation forest for anomaly detection. IEEE Transactions on Knowledge and Data Engineering, 35(12): 12591–12604, 2023.
  • Ben Qi, Jingang Liang, and Jiejuan Tong. Fault diagnosis techniques for nuclear power plants: a review from the artificial intelligence perspective. Energies, 16(4):1850, 2023.
  • Xiangyu Li, Tao Huang, Kun Cheng, Zhifang Qiu, and Sichao Tan. Research on anomaly detection method of nuclear power plant operation state based on unsupervised deep generative model. Annals of Nuclear Energy, 167:108785, 2022.
  • SA Cancemi, R Lo Frano, C Santus, and T Inoue. Unsupervised anomaly detection in pressurized water reactor digital twins using autoencoder neural networks. Nuclear Engineering and Design, 413:112502, 2023.
  • Abhishek Chaudhary, Junseo Han, Seongah Kim, Aram Kim, and Sunoh Choi. Anomaly detection and analysis in nuclear power plants. Electronics, 13(22):4428, 2024.
  • Adriano Liso, Angelo Cardellicchio, Cosimo Patruno, Massimiliano Nitti, Pierfrancesco Ardino, Ettore Stella, and Vito Renò. A review of deep learning-based anomaly detection strategies in Industry 4.0 focused on application fields, sensing equipment, and algorithms. IEEE Access, 2024. doi: 10.1109/ACCESS.2024.3424488.
  • Xingyu Xiao, Jingang Liang, Jiejuan Tong, and Haitao Wang. Emergency decision support techniques for nuclear power plants: Current state, challenges, and future trends. Energies, 17(10):2439, 2024.
  • Sidharth Prasad Mishra, Uttam Sarkar, Subhash Taraphder, Sanjay Datta, Devi Swain, Reshma Saikhom, Sasmita Panda, and Menalsh Laishram. Multivariate statistical data analysis: principal component analysis (PCA). International Journal of Livestock Research, 7(5):60–78, 2017.
  • Liton Chandra Paul, Abdulla Al Suman, and Nahid Sultan. Methodological analysis of principal component analysis (PCA) method. International Journal of Computational Engineering & Management, 16(2):32–38, 2013.
  • Mohammed Shaker Kareem and Lamia Muhammed. Anomaly detection in streaming data using isolation forest tree. PhD thesis, July 2024.
  • Fei Tony Liu, Kai Ming Ting, and Zhi-Hua Zhou. Isolation forest. In 2008 Eighth IEEE International Conference on Data Mining (ICDM), pages 413–422. IEEE, 2008. doi: 10.1109/ICDM.2008.17.
  • Zhaomin Chen, Chai Kiat Yeo, Bu Sung Lee, and Chiew Tong Lau. Autoencoder-based network anomaly detection. In 2018 Wireless Telecommunications Symposium (WTS), pages 1–5. IEEE, 2018.
  • Umberto Michelucci. An introduction to autoencoders. arXiv preprint arXiv:2201.03898, 2022.
  • Jinwon An and Sungzoon Cho. Variational autoencoder based anomaly detection using reconstruction probability. Special lecture on IE, 2(1):1–18, 2015.
  • PyTorch Team. torchvision.transforms.v2.GaussianNoise, November 2024. URL https://pytorch.org/vision/master/generated/torchvision.transforms.v2.GaussianNoise.html.
  • Eric W Weisstein. Normal distribution. https://mathworld.wolfram.com/, 2002.
  • Lucas BV de Amorim, George DC Cavalcanti, and Rafael MO Cruz. The choice of scaling technique matters for classification performance. Applied Soft Computing, 133:109924, 2023.
  • Sagar Sharma, Simone Sharma, and Anidhya Athaiya. Activation functions in neural networks. Towards Data Science, 6(12):310–316, 2017.
  • Lu Lu, Yeonjong Shin, Yanhui Su, and George Em Karniadakis. Dying relu and initialization: Theory and numerical examples. arXiv preprint arXiv:1903.06733, 2019.
  • Shiv Ram Dubey, Satish Kumar Singh, and Bidyut Baran Chaudhuri. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing, 503:92–108, 2022.
  • Juan Terven, Diana M Cordova-Esparza, Alfonzo Ramirez-Pedraza, and Edgar A Chavez-Urbiola. Loss functions and metrics in deep learning: A review. arXiv preprint arXiv:2307.02694, 2023.
  • Aryan Jadon, Avinash Patil, and Shruti Jadon. A comprehensive survey of regression-based loss functions for time series forecasting. In International Conference on Data Management, Analytics & Innovation, pages 117–147. Springer, 2024.
  • Twan Van Laarhoven. L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350, 2017.
  • Ž Vujović et al. Classification model evaluation metrics. International Journal of Advanced Computer Science and Applications, 12(6):599–606, 2021.
  • Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. The llama 3 herd of models. arXiv preprint arXiv:2407.21783, 2024.
  • Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
  • Hugging Face. SFT Trainer documentation, 2024. URL https://huggingface.co/docs/trl/en/sft_trainer.
  • KLU AI. Supervised fine-tuning glossary, 2024. URL https://klu.ai/glossary/supervised-fine-tuning.
  • Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, and Roy Ka-Wei Lee. LLM-Adapters: An adapter family for parameter-efficient fine-tuning of large language models. arXiv preprint arXiv:2304.01933, 2023.
  • Venkatesh Balavadhani Parthasarathy, Ahtsham Zafar, Aafaq Khan, and Arsalan Shahid. The ultimate guide to fine-tuning llms from basics to breakthroughs: An exhaustive review of technologies, research, best practices, applied research challenges and opportunities. arXiv preprint arXiv:2408.13296, 2024.
  • Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
  • Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. QLoRA: Efficient finetuning of quantized LLMs. Advances in Neural Information Processing Systems, 36, 2024.
Index Terms
Computer Science
Information Sciences
Keywords

Predictive maintenance, Anomaly detection, Unsupervised learning, Deep autoencoder, Reconstruction threshold, LLMs
