The ‘Data-Centric’ era is driving change in data center architectures. The intersection of High Performance Computing (HPC), Cloud Computing and Artificial Intelligence (AI) applications mining big data has defined this era for compute, storage and networking. Transporting and transforming massive amounts of data in real time requires breakthroughs in computing systems, storage performance and network connectivity. The traditional approach of moving the data to the compute does not scale and cannot deliver results in a timely manner. New technologies such as RDMA, NVMe and distributed computing hardware running parallel software were created to address this need and demand low latency; however, they place additional stress on data center networks.
Cloud-scale high performance computing services are effectively massive supercomputers connected by a Performance Sensitive Network (PSN) that strives for the lowest possible latency, approaching that of an I/O or memory bus. To achieve such low latencies, the PSN needs new features in the Ethernet interconnect. Ethernet is the popular choice in computing and storage networks, but it still lags purpose-built technologies with respect to latency. This presentation proposes a number of new technologies for the Ethernet-based Performance Sensitive Network. Many of the ideas are introduced here for the first time; others have been described by the IEEE 802 “Network Enhancements for the Next Decade” Industry Connections Activity.