Protocol Analysis for High-Speed Fibre Channel Fabrics in the Data Center: Aka, Saving Your SAN (& Sanity)
The driving force behind adopting new tools and processes in test and measurement practices is the desire to understand, predict, and mitigate the impact of Sick But Not Dead (SBND) conditions in datacenter fabrics. The growth and centralization of mission-critical datacenter SAN environments has exposed the fact that many small, seemingly insignificant problems have the potential to become large-scale, impactful events unless properly contained or controlled.
Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass parts of the traditional network stack (e.g., FCoE, iWARP, NVMe over Fabrics) for expedited data delivery place additional analytical demands on the datacenter manager.
To be sure, all tools have limitations in their effectiveness and areas of coverage, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed. Recognizing and reducing the effect of those limitations is an essential part of that effort.
This webinar will introduce participants to Protocol Analysis tools and how they may be incorporated into the “best practices” application of SAN problem solving. We will review:
• The protocol of the PHY
• Use of “in-line” capture tools
• Benefits of purposeful error injection for developing and supporting today’s high-speed Fibre Channel storage fabrics
After the webcast, check out the Q&A blog at http://bit.ly/2P0hsqp
Recorded Oct 10, 2018 | 62 mins
Ryan Suzuki, Samsung; John Kim, NVIDIA, Tom Friend, Illuminosi
The automotive industry is effectively transforming the vehicle into a data center on wheels. Connectedness, autonomous driving, and media & entertainment are bringing more and more storage onboard and into networked data centers. But all the storage in (and for) a car is not created equal. There are tens if not hundreds of different processors in a car. Some are attached to storage and some are not. Each application demands different characteristics from the storage device. Let’s explore all of this in an informational journey with industry experts from both the storage and automotive worlds.
• What’s driving growth in automotive storage?
• Special requirements for autonomous vehicles
• Where is automotive data typically stored?
• Special use cases
• Vehicle networking & compute changes and challenges
David McIntyre, Samsung; Jon Toor, Cloudian; Alex McDonald, SNIA; Christine McMonigal, Intel
Storing objects has become commonplace. Object storage provides bulk, undifferentiated storage for unstructured data like photos, video & audio, DNA sequences, files, and backups, and it can even help protect against ransomware. Object access is also simplified because there are no built-in hierarchies or filesystems of objects, and no devices to manage that look like disks.
So, what’s new? Object storage has traditionally been implemented in the software stack and is now moving directly onto the media. In this presentation, we’ll highlight how this is happening and discuss:
• Object storage characteristics
• The differences and similarities between object and key value storage
• Security options unique to object storage including ransomware mitigation
• Why use object storage: Use cases and applications
• Object storage and containers: Why Kubernetes’ COSI (Container Object Storage Interface)?
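The flat, hierarchy-free access model described above is easiest to see in code. Below is a minimal sketch, assuming an S3-compatible object store reachable through boto3; the endpoint URL, credentials, bucket name, and object key are hypothetical placeholders, not details from the session.

import boto3

# Hypothetical S3-compatible endpoint and credentials; replace with real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store an object: the key is just a flat name, not a path in a filesystem.
with open("family-photo.jpg", "rb") as f:
    s3.put_object(
        Bucket="media-archive",             # hypothetical bucket
        Key="photos/2022/family-photo.jpg", # slashes are only a naming convention
        Body=f,
    )

# Retrieve it again: no directories to traverse, no block devices to manage.
obj = s3.get_object(Bucket="media-archive", Key="photos/2022/family-photo.jpg")
data = obj["Body"].read()

This put/get-by-key pattern is also a natural starting point for the object-versus-key-value comparison the presentation covers.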
Erik Smith, Dell Technologies; Fred Knight, NetApp; Curtis Ballard, HPE; Tom Friend, Illuminosi
NVMe® IP-based SANs (including TCP, RoCE, iWARP) have the potential to provide significant benefits in application environments ranging from the Edge to the Data Center. However, before we can fully unlock that potential, we first need to overcome the NVMe over Fabrics (NVMe-oF™) discovery problem. This discovery problem, specific to IP-based fabrics, can result in Host administrators having to explicitly configure each Host to access each of the NVM subsystems in their environment. In addition, any time an NVM subsystem interface is added or removed, the Host administrator may need to explicitly update the configuration of impacted hosts. This process does not scale when more than a few Host and NVM subsystem interfaces are in use, and the decentralized nature of the process adds complexity when trying to use NVMe IP-based SANs in environments that require a high degree of automation.
For these and other reasons, several companies have been collaborating on innovations that simplify and automate the discovery process used with NVMe IP-based SANs.
During this session we will explain:
• NVMe IP-based SAN discovery problem
• The types of network topologies that can support the automated discovery of NVMe-oF Discovery controllers
• Direct Discovery versus Centralized Discovery
• An overview of the discovery protocol
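To make the scaling problem concrete, here is a rough sketch, assuming a Linux host with nvme-cli installed, of the per-host manual configuration described above: the administrator keeps a list of every discovery controller address the host should reach and repeats the exercise whenever an interface is added or removed. The IP addresses below are hypothetical placeholders.

import subprocess

# Hypothetical NVMe/TCP discovery controller addresses for this one host.
# Without centralized or automated discovery, an administrator maintains a
# list like this on every host and updates it whenever a subsystem
# interface is added or removed.
discovery_controllers = ["192.168.10.11", "192.168.10.12", "192.168.20.11"]

for addr in discovery_controllers:
    # Ask the discovery controller which subsystems this host may access
    # (8009 is the default NVMe/TCP discovery service port).
    subprocess.run(["nvme", "discover", "-t", "tcp", "-a", addr, "-s", "8009"], check=True)

    # Connect to everything reported in the discovery log page.
    subprocess.run(["nvme", "connect-all", "-t", "tcp", "-a", addr, "-s", "8009"], check=True)

Multiplying this across dozens of hosts and subsystem interfaces is exactly the administrative burden that the automated discovery work described in this session aims to remove.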
Bill Martin, Samsung; Jason Molgaard, Arm; Oscar Pinto, Samsung; Scott Shadley, NGD Systems
SNIA develops a wide range of standards to enhance the interoperability of various storage systems. For new technologies like computational storage, however, standards do not yet exist. As companies develop solutions, questions arise. Should computational storage have standards for recommended behavior for hardware and software? Should an application programming interface be defined?
At SNIA, over 250 volunteers answered yes, and new work is being defined both within SNIA and in collaboration with other industry standards bodies. Join leaders of the Computational Storage Technical Work Group as they discuss how they define and develop standards with input from many different companies and users, what they perceive as important today and moving forward, and how you can participate.
Vincent Hsu, IBM; Andy Longworth, HPE; Chip Maurer, Dell Technologies
This talk will focus on the history of “Big Data” and how it has pushed the storage envelope, eventually resulting in a seemingly perfect relationship with Cloud Storage. But local storage is the 3rd wheel in this relationship, and won’t go down easy. Can this marriage survive when Big Data is being pulled in two directions? Should Big Data pick one, or can the three of them live happily ever after? This webcast will cover:
• The impact of edge computing
• The erosion of the data center
• Managing data-on-the-fly
• Grid management
• Next-gen Hadoop and related technologies
• Supporting AI workloads
• Data gravity and distributed data
Kent Lusted, Intel; Brad Smith, NVIDIA; Sam Kocsis, Amphenol; Tim Lustig, NVIDIA
Modern data center systems consist of hundreds of sub-systems that are all connected with optical transceivers, copper cables, and industry standards-based connectors and cages. For interconnecting storage subsystems, two things are happening: speeds are radically increasing, shrinking the maximum reach of copper wire interconnects, while at the same time storage systems are growing larger and spreading much farther apart. This is making longer-reach optical technologies much more popular. However, optical interconnect technologies are more costly and complex than copper, with a plethora of new buzzwords and technology concepts.
The rate of change from the huge uptick in data demand is accelerating new product developments at an incredible pace. While much of the enterprise industry is still on 10G/40G/100GbE speeds, the next generation optics groups are already commercializing 800G with 1.6Tb transceivers in discussion! Today, it’s all about power, cost, and upgrade paths.
In this SNIA Network Storage Forum webinar we’ll cover the latest in the impressive array of data center infrastructure solutions designed to address expanding requirements for higher-bandwidth and lower-power. This will include next-generation solutions leveraging copper and optics to deliver high signal integrity, lower-latency, and lower insertion loss to achieve maximum efficiency, speed, and density.
AJ Casamento, Broadcom; Ed Mazurek, Cisco; John Kim, NVIDIA
Each SAN transport has its own way to initialize and transfer data. So how do initiators (hosts) and targets (storage arrays) communicate in Fibre Channel (FC) Storage Area Networks (SANs)?
Find out in this live webcast where Fibre Channel experts will answer:
• How do FC links activate?
• Is FC routable?
• What kind of flow control is present in FC?
• How do initiators find targets and set up their communication?
• Finally, how does actual data get transferred since that is the ultimate goal?
This session will introduce these concepts to demystify the FC SAN for the network professional.
After you watch the webcast, check out the Q&A blog at https://bit.ly/3Gh43RU
Patty Driever, IBM; Dave Peterson, Broadcom; Craig Carlson, Marvell; David Rodgers, Teledyne LeCroy
Fibre Channel (FC) is the storage networking protocol for enterprise data centers, with over 142 million ports shipped. Fibre Channel is purpose-built and engineered to meet the demands of enterprise data centers that require rock-solid reliability, high performance, and scalability, natively transporting the FC-SB-x, FCP, and FC-NVMe storage protocols.
This webcast explains how Fibre Channel is architected for unparalleled performance for storage protocols. If you are interested in this architecture, please join us to learn more about Fibre Channel’s:
• Functional levels and components
• Physical model
• Communication models
• Interconnect topologies
• Classes of service
• General Fabric model
• Generic Services
This all leads to a future webcast where we’ll dive into:
• Building blocks and their hierarchy (see the sketch after this list)
o Frames, Sequences, Exchanges, Protocols
• Segmentation and reassembly
• Error detection and recovery
• Current enhancements, e.g., Congestion Notifications
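As a preview of that building-block hierarchy, the containment is straightforward: an Exchange consists of one or more Sequences, and each Sequence consists of one or more Frames (a frame carries up to 2112 bytes of payload). The Python sketch below is purely illustrative; the class and field names are simplified stand-ins, not the actual FC-2 header layout.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    """Smallest unit placed on the wire: header plus up to 2112 bytes of payload."""
    payload: bytes

@dataclass
class Sequence:
    """A unidirectional set of one or more related frames."""
    frames: List[Frame] = field(default_factory=list)

@dataclass
class Exchange:
    """The basic container for an operation: one or more sequences between two ports."""
    sequences: List[Sequence] = field(default_factory=list)

# A 6000-byte transfer carried as three frames of one sequence within one exchange.
data = bytes(6000)
seq = Sequence(frames=[Frame(payload=data[i:i + 2048]) for i in range(0, len(data), 2048)])
xchg = Exchange(sequences=[seq])
print(len(xchg.sequences[0].frames))  # 3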
Michael McManus, Intel; Christopher Davidson, HPE; Torben Kling Petersen, HPE; Alex McDonald, SNIA CSTI Chair
The use of genomics in modern biology has revolutionized the speed of innovation for the discovery of medicines. The COVID pandemic response has quickened genetic research and driven the rapid development of vaccines. Genomics, however, requires a significant amount of compute and data storage to aid discovery. This session is for IT professionals who are faced with delivering and supporting IT solutions for the required compute and data storage for genomics workflows. It will feature viewpoints from both the bioinformatics and technology perspectives with a focus on some of these compute and data storage challenges.
We will discuss:
• How to best store and manage these large genomics datasets
• Methods for sharing these large datasets for collaborative analysis
• Legal and ethical implications of storing shareable data in the cloud
• Transferring large data sets and the impact on storage and networking
After you watch the presentation, check out the Q&A blog: https://bit.ly/SNIAGenomicQA
Erin Farr, IBM; Vincent Hsu, IBM; Jim Fister, The Decision Place
Data gravity has pulled computing to the Edge and enabled significant advances in hybrid cloud deployments. The ability to run analytics from the datacenter to the Edge, where the data is created and lives, also creates new use cases for nearly every industry and company. However, this movement of compute to the Edge is not the only pattern to have emerged. How might these other use cases impact your storage strategy?
This interactive webcast by the SNIA CSTI will focus on the following topics:
• Emerging patterns of data movement and the use cases that drive them
• Cloud Bursting
• Federated Learning across the Edge and Hybrid Cloud
• Considerations for distributed cloud storage architectures to match these emerging patterns
Steve Van Lare, Anjuna; Anand Kashyap, Fortanix; Michael Hoard, Intel
To counter the ever-increasing likelihood of catastrophic disruption and cost due to enterprise IT security threats, data center decision makers need to be vigilant in protecting their organization’s data. Confidential Computing is architected to provide security for data in use to meet this critical need for enterprises today.
This webcast provides insight into how data center, cloud and edge applications may easily benefit from cost-effective, real-world Confidential Computing solutions. This educational discussion will provide end-user examples, tips on how to assess systems before and after deployment, as well as key steps to complete along the journey to mitigate threat exposure. Presenting are Steve Van Lare (Anjuna), Anand Kashyap (Fortanix), and Michael Hoard (Intel), who will discuss:
• What would it take to build your own Confidential Computing solution?
• Emergence of easily deployable, cost-effective Confidential Computing solutions
• Real-world usage examples and key technical, business, and investment insights
After you watch the webcast, check out the Q&A blog at https://bit.ly/3DqFKj6
Moderator: Tim Lustig, NVIDIA; Panelists: Kfir Wolfson, Pliops and John F. Kim, NVIDIA
Thanks to big data, artificial intelligence (AI), the Internet of Things (IoT), and 5G, demand for data storage continues to grow significantly. The rapid growth is causing storage- and database-specific processing challenges within current storage architectures. New architectures, designed for millisecond latency and high throughput, offer in-network and in-storage computational processing to offload and accelerate data-intensive workloads.
Join technology innovators as they highlight how to drive value and accelerate SSD storage: specialized key value technology to remove inefficiencies, a Data Processing Unit (DPU) for hardware acceleration of the storage stack, and a hardware-enabled Storage Data Processor to accelerate compute-intensive functions.
By joining, you will learn why SSDs are a staple in modern storage architectures. These disaggregated systems use just a fraction of the computational load and power while unlocking the full potential of networked flash storage.
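To give a feel for the key value technology mentioned above, here is a purely hypothetical sketch of a key-value storage interface as seen by an application: values of arbitrary length are stored and retrieved by key, while placement and lookup are left to the device or the DPU-accelerated stack rather than to a host-side block mapping. The KVStore class and its methods are illustrative only, not a real device API.

class KVStore:
    """Hypothetical key-value storage interface (illustrative, not a real device API)."""

    def __init__(self):
        # Stand-in for media managed by the key-value device or offload engine.
        self._store = {}

    def put(self, key: bytes, value: bytes) -> None:
        # No logical block addresses: the device decides where the value lives.
        self._store[key] = value

    def get(self, key: bytes) -> bytes:
        return self._store[key]

    def delete(self, key: bytes) -> None:
        del self._store[key]

# Usage: the application never translates keys into blocks or files.
kv = KVStore()
kv.put(b"user:1001:profile", b'{"name": "example"}')
print(kv.get(b"user:1001:profile"))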
Paul O’Neill, Intel; Parviz Peiravi, Intel; Glyn Bowden, HPE
As noted in our panel discussion “What is Confidential Computing and Why Should I Care?,” Confidential Computing is an emerging industry initiative focused on helping to secure data in use. The efforts can enable encrypted data to be processed in memory while lowering the risk of exposing it to the rest of the system, thereby reducing the potential for sensitive data to be exposed while providing a higher degree of control and transparency for users.
As computing moves to span multiple environments from on-premises to public cloud to edge, organizations need protection controls that help safeguard sensitive IP and workload data wherever the data resides. In this live webcast we’ll cover:
• How Confidential Computing works in multi-tenant cloud environments
• How sensitive data can be isolated from other privileged portions of the stack
• Applications in financial services, healthcare industries, and broader enterprise applications
• Contributing to the Confidential Computing Consortium
John Kim, NVIDIA; Tom Friend, Illuminosi; Alex McDonald, Independent Consultant, Vice Chair SNIA NSF
So much of what we discuss in SNIA is the latest emerging technologies in storage. While it’s good to know all about the coming technologies, it’s also important to understand those technologies being sunsetted. In this series, we cover obsolete hardware, protocols, interfaces, and other aspects of storage.
In this second installment of our Storage Technologies & Practices Ripe for Refresh series, we will cover older HDD device interfaces and file systems. Advice will be given on how to replace these in production environments as well as why these changes are recommended. We will also cover protocols that you should consider removing from your networks, either older versions of protocols where only newer versions should be used, or protocols that have been supplanted by superior options and should be discontinued entirely.
Finally, we will look at physical networking interfaces and cabling that are popular today but face an uncertain future as networking speeds grow ever faster.
Mike Bursell, Co-founder, Enarx Project; David Kaplan, AMD; Ronald Perez, Intel; Jim Fister, The Decision Place
In the "arms race" of security, new defensive tactics are always needed. One significant approach is Confidential Computing: a technology that can isolate data and execution in a secure space on a system, which takes the concept of security to new levels. This SNIA Cloud Storage Technologies Initiative (CSTI) webcast will provide an introduction and explanation of Confidential Computing and will feature a panel of industry architects responsible for defining Confidential Compute. It will be a lively conversation on topics including:
• The basics of hardware-based Trusted Execution Environments (TEEs) and how they work to enable confidential computing
• How to architect solutions based around TEEs
• How this foundation fits with other security technologies
• Adjacencies to storage technologies
Moderator: Jim Fister, SNIA CMSI; Panelists: Eli Tiomkin, Chair, SNIA CS SIG; Nidish Kamath, KIOXIA; David McIntyre, Samsung
In modern analytics deployments, latency is the fatal flaw that limits the efficacy of the overall system. Solutions move at the speed of decision, and microseconds could mean the difference between success and failure against competitive offerings. Artificial Intelligence, Machine Learning, and In-Memory Analytics solutions have significantly reduced latency, but the sheer volume of data and its potential broad distribution across the globe prevent a single analytics node from efficiently harvesting and processing data. This panel discussion will feature industry experts discussing different approaches to distributed analytics in the network and storage nodes. How do the providers of HDDs and SSDs view data creation and movement between edge compute and the cloud? And how can computational storage be a solution that reduces data movement?
Claudio DeSanti, Dell; Nishant Lodha, Marvell; Hrishikesh Sathawane, Samsung; Eric Hibbard, Samsung; John Kim, NVIDIA
With ever-increasing threat vectors both inside and outside the data center, a compromised customer dataset can quickly result in a torrent of lost business data, eroded trust, significant penalties, and potential lawsuits. Vulnerabilities exist at every point when scaling out NVMe, requiring data to be secured every time it leaves a server or the storage media, not only when leaving the data center. NVMe over Fabrics is poised to be one of the most dominant transports of the future, and securing and validating the vast amounts of data that will traverse this fabric is not just prudent, but paramount.
Join the webcast to hear industry experts discuss current and future strategies to secure and protect your mission-critical data.
You will learn:
- Industry trends and regulations around data security
- Potential threats and vulnerabilities
- Existing security mechanisms and best practices
- How to secure NVMe in flight and at rest
- Ecosystem and market dynamics
- Upcoming standards
After you watch the presentation, check out the Q&A blog https://bit.ly/2Wnrk1Y
Christine McMonigal, Intel; John Kim, NVIDIA; Walt O'Brien, Dell; David McIntyre, Samsung
In the ongoing evolution of the datacenter, a popular debate involves how storage is allocated and managed. There are three competing visions about how storage should be done; those are Hyperconverged Infrastructure (HCI), Disaggregated Storage, and Centralized Storage.
IT architects, storage vendors, and industry analysts argue constantly over which is the best approach and even the exact definition of each. Isn’t Hyperconverged constrained? Is Disaggregated designed only for large cloud service providers? Is Centralized storage only for legacy applications?
Tune in to debate these questions and more:
• What is the difference between centralized, hyperconverged, and disaggregated infrastructure, when it comes to storage?
• Where does the storage controller or storage intelligence live in each?
• How and where can the storage capacity and intelligence be distributed?
• What is the difference between distributing the compute or application and distributing the storage?
• What is the role of a JBOF or EBOF (Just a Bunch of Flash or Ethernet Bunch of Flash) in these storage models?
• What are the implications for data center, cloud, and edge?
Join us for another SNIA Networking Storage Forum Great Storage Debate as leading storage minds converge to argue the definitions and merits of where to put the storage and storage intelligence.
After you watch the debate, check out the Q&A blog: https://bit.ly/3kcAwA3
Parmeshwr Prasad, Dell; Olga Buchonina, Chair SNIA Blockchain Storage Technical Work Group, ActionSpot
The storage industry is working on ways to meet the demand for the very high throughput required for the volume of transactions per second in Blockchain operations.
There have been numerous advancements in Field Programmable Gate Array (FPGA) and Application Specific Integrated Circuit (ASIC) logic to improve the number of transactions per second for Blockchain operations. But these FPGA/ASIC improvements will not be sufficient to meet the increasing demand for hardware resources required by Blockchain. Smart network interface cards (NICs) offload low-level functions from server CPUs, taking over network-related processing and dramatically increasing network and application performance.
In this webcast, you will learn:
• The features of a Smart Network Interface Card (SMART-NIC) and how this will improve Blockchain transactions
• Why storage class memory (SCM) is ideal for in-memory databases
• Advantages of direct data movement without involving filesystems
• Benefits of using SCM to improve Blockchain transactions
Kiran Ranabhor, Cisco; Mark Jones, Broadcom; Rupin Mohan, HPE; Nishant Lodha, Marvell; Howard Johnson, Broadcom
Fibre Channel (FC) networks run on a highly streamlined protocol designed to offer persistently high performance. The FC protocol has built-in feedback mechanisms to avoid congestion and to alleviate it if it occurs. There are many new technologies being developed to monitor and manage performance and availability issues that may arise from time to time. Moreover, many of these tools are available across the ecosystem and are part of the FC standard.
Come listen to Fibre Channel technology experts to understand:
• New technologies like Fabric Performance Impact Notifications (FPIN)
• Exciting new innovations coming to the FC network
• How to ensure predictable performance
• QoS considerations
• Why FC is the best transport protocol for storage environments
Updating the network infrastructure for the 21st century
With virtualization and cloud computing revolutionizing the data center, it's time the network had its own revolution. Join the Network Infrastructure channel for all the hottest topics for network and storage professionals, such as software-defined networking, WAN optimization, and more, to maintain performance and service in your infrastructure.