As the computational performance of modern supercomputers approaches the ExaFLOPS level (10^18 floating-point operations per second), the demands on the storage systems that feed these supercomputers continue to increase. On top of traditional HPC simulation workloads, more and more supercomputers are also designed to simultaneously serve new data-centric workflows with very different I/O characteristics.
Through their massive scale-out capabilities, current parallel file systems are more than able to provide the capacity and raw bandwidth needed at Exascale. But it can be difficult to achieve high performance for increasingly complex data access patterns, to manage billions of small files efficiently, and to continuously feed large GPU-based systems with the huge datasets that ML/AI workloads require.
This webcast will examine the different I/O workflows seen on supercomputers today, discuss the approaches the industry is taking to support the convergence of HPC and AI workflows, and highlight some of the innovations in both storage hardware and parallel file system software that will enable high-performance storage at Exascale and beyond. Topics include:
• Overview of typical use cases: Numerical simulation, sensor data ingest and analysis, ML/AI, etc.
• Advancements in HPC storage hardware: From HDDs to storage class memory
• Solution design: HPC storage fabrics, software stacks, heterogeneity, and tiering
• Workflows: How to ensure data is available in the right place at the right moment
• Realities of high-performance storage management: The perspectives of end users and storage administrators