HPC is not a single workload but a family of workloads spanning two distinct categories. Many industries use high performance applications to analyze data in support of manufacturing, production, or product design; examples include oil & gas simulations, chip design, automotive simulations, genomics and bioinformatics, and even financial modeling. These commercial high performance applications require massive file stores that can scale to huge capacities, and their requirements center on massive throughput, IOPS, and connection scalability while maintaining a simple management experience. For these workloads, depending on the level of storage performance needed, either Shared External NVM Fabrics or the loosely coupled scale-out architecture is the correct architecture of choice.
The Lustre/Gluster use case is characterized by hundreds to thousands of compute cores performing massive amounts of computational work against a relatively small working data set. For these workloads, $/GB, $/IOPS, and very low latency are critical. In addition, because the data set is relatively small and Lustre/Gluster handle scaling at the file system layer rather than at the underlying block storage layer, the storage infrastructure must also be able to scale down as required. For these reasons, the scale-up/down architecture is the correct architecture choice for this workload.
The spider chart below shows the distribution and weighting of the primary workload requirements for this use case.