International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 185 - Issue 25
Published: Jul 2023
Authors: Dimitrios Papakyriakou, Ioannis S. Barbounakis
Dimitrios Papakyriakou and Ioannis S. Barbounakis. High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster. International Journal of Computer Applications 185, 25 (Jul 2023), 11-19. DOI=10.5120/ijca2023923005
@article{10.5120/ijca2023923005,
  author    = {Dimitrios Papakyriakou and Ioannis S. Barbounakis},
  title     = {High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster},
  journal   = {International Journal of Computer Applications},
  year      = {2023},
  volume    = {185},
  number    = {25},
  pages     = {11-19},
  doi       = {10.5120/ijca2023923005},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2023
%A Dimitrios Papakyriakou
%A Ioannis S. Barbounakis
%T High Performance Linpack (HPL) Benchmark on Raspberry Pi 4B (8GB) Beowulf Cluster
%J International Journal of Computer Applications
%V 185
%N 25
%P 11-19
%R 10.5120/ijca2023923005
%I Foundation of Computer Science (FCS), NY, USA
This paper presents a High Performance Linpack (HPL) benchmarking analysis of a state-of-the-art Beowulf cluster built from 24 Raspberry Pi 4 Model B (8GB RAM) computers, each with a 64-bit quad-core ARMv8 Cortex-A72 CPU clocked at 1.5 GHz. In particular, it compares the HPL performance obtained when every RPi in the cluster uses its default microSD card (SDCS2 64GB microSDXC 100R A1 C10) against a set-up in which the master node uses a Samsung 980 1TB PCIe 3.0 NVMe M.2 SSD and each slave node uses a 256GB Patriot P300P256GM28 NVMe M.2 2280 SSD. Moreover, it presents the results of a multithreaded C++ pi-calculation program executed on one to four cores of a single RPi 4B (8GB) with the above-mentioned microSD. In addition, it presents the results of a C++ MPI pi-calculation program distributed across all 24 RPi 4B nodes with the same microSD cards. For the HPL benchmarking of the cluster with the NVMe M.2 SSDs, the RPi 4B's support for booting from an external disk is exploited: the entire NVMe SSD is used as a bootable external drive, with both the boot and root partitions (where HPL actually runs) hosted on the SSD. All nodes are connected over two Gigabit switches (TP-Link TL-SG1024D) and operate in parallel so as to build a supercomputer.
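The paper does not reproduce its source code here, but a minimal sketch gives a sense of what the multithreaded pi-calculation program described above might look like. The sketch below assumes a midpoint-rule integration of 4/(1+x²) over [0,1] with std::thread; the step count, default thread count, and all identifiers are illustrative, not the authors' code.

```cpp
#include <cstdlib>
#include <iostream>
#include <thread>
#include <vector>

int main(int argc, char* argv[]) {
    // Number of worker threads (1-4 on a quad-core RPi 4B); default is illustrative.
    const int nthreads = (argc > 1) ? std::atoi(argv[1]) : 4;
    const long steps = 100000000L;  // integration resolution (assumed value)
    const double h = 1.0 / static_cast<double>(steps);

    std::vector<double> partial(nthreads, 0.0);
    std::vector<std::thread> pool;

    for (int t = 0; t < nthreads; ++t) {
        pool.emplace_back([&partial, t, nthreads, steps, h] {
            double sum = 0.0;
            // Cyclic distribution: thread t handles slices t, t + nthreads, ...
            for (long i = t; i < steps; i += nthreads) {
                const double x = (static_cast<double>(i) + 0.5) * h;  // slice midpoint
                sum += 4.0 / (1.0 + x * x);  // integrand whose integral over [0,1] is pi
            }
            partial[t] = sum * h;
        });
    }
    for (auto& th : pool) th.join();

    double pi = 0.0;
    for (double p : partial) pi += p;
    std::cout << "pi ~= " << pi << " (" << nthreads << " threads)\n";
    return 0;
}
```

Compiled with `g++ -O2 -pthread`, varying the thread argument from 1 to 4 reproduces the kind of single-node scaling experiment the abstract describes.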
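Similarly, a minimal sketch of the MPI variant run across the 24 nodes might look as follows. The cyclic work split and MPI_Reduce aggregation are the standard pattern for this textbook example; the step count and naming are assumptions, not the paper's actual program.

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long steps = 1000000000L;  // integration resolution (assumed value)
    const double h = 1.0 / static_cast<double>(steps);

    double sum = 0.0;
    // Cyclic distribution: rank r handles slices r, r + size, r + 2*size, ...
    for (long i = rank; i < steps; i += size) {
        const double x = (static_cast<double>(i) + 0.5) * h;  // slice midpoint
        sum += 4.0 / (1.0 + x * x);
    }
    const double local = sum * h;

    double pi = 0.0;
    // Sum the per-rank partial results on rank 0.
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        std::printf("pi ~= %.12f (%d MPI ranks)\n", pi, size);
    }
    MPI_Finalize();
    return 0;
}
```

On a cluster like the one described, such a program would typically be built with `mpic++` and launched with something like `mpirun -np 96 --hostfile hosts ./pi_mpi` (96 = 24 nodes × 4 cores; the hostfile name is hypothetical).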