Building Smart Scalable Storage SoCs with Embedded PCIe Switching
As storage systems embrace NVMe and all-flash architectures, the need for scalability, coupled with constantly evolving application requirements, prompts SoC architects to look for ways to differentiate and future-proof their designs.
In this presentation we look at current storage architectures and propose an innovative way to design next-generation storage SoCs centered on embedded PCIe switching.
We then introduce PLDA’s PCIe switch IP along with some real-world use cases, and explore the IP features and capabilities that enable differentiation and future-proofing of SoCs in storage applications and beyond.
Presented by Stephane Hauradou, Product Marketing · Recorded Sep 29, 2020 · 46 mins
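As a quick illustration of the scalability argument above, here is a minimal sketch of the fan-out an embedded switch provides; the port counts are hypothetical examples, not PLDA IP figures:

    # Illustrative only: fan-out math for a storage SoC with an embedded PCIe switch.
    # An upstream x16 link can be shared across many downstream x4 NVMe ports;
    # the switch permits oversubscription when the drives are not all saturated at once.
    upstream_lanes = 16
    downstream_ports = 8       # hypothetical port count, not a PLDA figure
    lanes_per_port = 4
    oversubscription = (downstream_ports * lanes_per_port) / upstream_lanes
    print(f"Oversubscription ratio: {oversubscription:.1f}:1")  # 2.0:1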
Exponential data growth is driving the need for increased performance in enterprise and data center applications; it has spurred the emergence of new interconnect technologies such as CXL and CCIX, and a faster transition to PCIe 5.0 and PCIe 6.0. This presentation looks at these protocol technologies and introduces the joint PLDA and Alphawave Controller and PHY IP solution for Samsung advanced process nodes.
We describe the joint solution in terms of features and capabilities, and present the stringent verification and validation methodology in place to guarantee first-pass silicon success for Samsung Foundry customers.
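To put the PCIe 5.0-to-6.0 transition in perspective, a back-of-the-envelope calculation shows the raw per-direction bandwidth step for an x16 link; this is a simplified sketch that ignores FLIT framing, FEC, and protocol overheads:

    # Raw x16 link bandwidth per direction, illustrative only.
    # PCIe 5.0: 32 GT/s per lane with 128b/130b encoding.
    # PCIe 6.0: 64 GT/s per lane using PAM4 signaling and FLIT mode
    # (FEC/CRC framing overhead ignored here for simplicity).
    def x16_bandwidth_gbps(gt_per_s, encoding_efficiency):
        lanes = 16
        return gt_per_s * lanes * encoding_efficiency / 8  # bits to bytes

    print(f"PCIe 5.0 x16: {x16_bandwidth_gbps(32, 128 / 130):.0f} GB/s")  # ~63 GB/s
    print(f"PCIe 6.0 x16: {x16_bandwidth_gbps(64, 1.0):.0f} GB/s")        # ~128 GB/s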
As the Compute Express Link™ (CXL™) interconnect protocol gains popularity, driven mainly by the promise of higher performance and lower latency for CPU-to-device communication, many questions arise around the expected latency improvement.
In this presentation, we describe the data flow model for the three protocols that comprise CXL (CXL.io, CXL.cache, CXL.mem) in contrast to traditional PCI Express, and look at the implications for latency at the system level.
We then present a couple of specific use cases that would clearly benefit from a lower-latency CXL interconnect, and conclude with a look at the PLDA Controller IP for CXL and the design features that enable optimal CXL performance in silicon.
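One rough way to reason about the system-level latency question is to model a CPU-to-device read as traversing the controller and PHY stack in both directions; the sketch below uses hypothetical placeholder numbers, not PLDA measurements or CXL specification figures:

    # Illustrative-only latency model for a CPU-to-device read.
    # All values are hypothetical placeholders, not measured or specified figures.
    def read_latency_ns(tx_ns, wire_ns, rx_ns, device_ns):
        # The request travels CPU -> device and the data returns device -> CPU,
        # so the controller/PHY path is traversed twice per read.
        return 2 * (tx_ns + wire_ns + rx_ns) + device_ns

    # A leaner protocol path (e.g. the fixed flit framing of CXL.cache/CXL.mem
    # versus variable-size PCIe TLP processing) shrinks the per-traversal
    # tx/rx terms, and the saving is doubled over the round trip.
    pcie_like = read_latency_ns(tx_ns=30, wire_ns=5, rx_ns=30, device_ns=100)
    cxl_like = read_latency_ns(tx_ns=15, wire_ns=5, rx_ns=15, device_ns=100)
    print(pcie_like, cxl_like)  # 230 170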
PLDA is a developer and licensor of Semiconductor Intellectual Property (SIP) specializing in high-speed interconnect supporting multi-gigabit rates (2.5G, 5G, 8G, 16G, 25G, 32G, 56G, 112G), and protocols such as PCI Express, CCIX, CXL, and Gen-Z. PLDA has established itself as a leader in that space with over 3,200 customers and 6,400 licenses in 62 countries. PLDA is a global technology company with offices in Silicon Valley, France, Bulgaria, Taiwan, and China.