Research Article

Reducing Latency in Hybrid HPC Systems Through Containerization and Parallel GPU Processing

by Manju George
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 94
Published: March 2026
DOI: 10.5120/ijca2026926635

Manju George. Reducing Latency in Hybrid HPC Systems Through Containerization and Parallel GPU Processing. International Journal of Computer Applications. 187, 94 (March 2026), 55-60. DOI=10.5120/ijca2026926635

@article{10.5120/ijca2026926635,
  author    = {Manju George},
  title     = {Reducing Latency in Hybrid HPC Systems Through Containerization and Parallel GPU Processing},
  journal   = {International Journal of Computer Applications},
  year      = {2026},
  volume    = {187},
  number    = {94},
  pages     = {55-60},
  doi       = {10.5120/ijca2026926635},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2026
%A Manju George
%T Reducing Latency in Hybrid HPC Systems Through Containerization and Parallel GPU Processing
%J International Journal of Computer Applications
%V 187
%N 94
%P 55-60
%R 10.5120/ijca2026926635
%I Foundation of Computer Science (FCS), NY, USA
Abstract

This study examines how High-Performance Computing (HPC) and cloud-native environments can converge in scalable computing infrastructures. It measures how containerization and hardware acceleration optimize the computational efficiency of large-scale data processing. Using a synthetic dataset of 411 high-dimensional performance measurements, the modeling simulates different workload distributions across hybrid infrastructures. The principal tools are Kubernetes for orchestration, Docker for environment isolation, and dedicated software libraries for GPU acceleration and monitoring. The findings show that combining containerization with parallel processing can lower latency by a large margin while keeping hardware fully utilized. The study concludes that a unified architecture is needed to handle modern data-intensive workloads.
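The abstract's workload-distribution simulation can be illustrated with a minimal sketch. This is not the paper's actual model: the dataset generator, per-sample cost function, and greedy scheduler below are all hypothetical stand-ins, assuming only the stated dataset size (411 samples) and the idea of spreading work across parallel container replicas to reduce end-to-end latency.

```python
import random
from statistics import mean

def synthetic_dataset(n=411, dim=16, seed=42):
    """Hypothetical stand-in for the paper's 411-example dataset of
    high-dimensional performance measurements."""
    rng = random.Random(seed)
    return [[rng.uniform(1.0, 10.0) for _ in range(dim)] for _ in range(n)]

def task_cost(sample):
    """Assumed per-sample processing cost (ms): the mean of its features."""
    return mean(sample)

def makespan(samples, workers):
    """Greedy longest-processing-time assignment: each task goes to the
    least-loaded 'container'; makespan is the busiest worker's total load."""
    loads = [0.0] * workers
    for cost in sorted((task_cost(s) for s in samples), reverse=True):
        i = loads.index(min(loads))
        loads[i] += cost
    return max(loads)

data = synthetic_dataset()
serial = makespan(data, workers=1)      # one worker: latency = total work
parallel = makespan(data, workers=8)    # e.g. 8 GPU-backed container replicas
print(f"serial: {serial:.1f} ms, parallel (8 workers): {parallel:.1f} ms")
```

With many more tasks than workers, the greedy schedule balances loads closely, so the simulated latency drops by roughly the worker count; real speedups would be smaller once container startup and data-transfer overheads are modeled.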

Index Terms
Computer Science
Information Sciences
Keywords

Scalable Computing, GPU Acceleration, Containerization, Parallelism, Cloud AI
