The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather to provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.
One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.
Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.
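The layering described above can be summarized in a short sketch. This is an educational simplification, not an exhaustive protocol model; the point is that each option carries storage traffic over Ethernet in a different way, and only iSER uses RDMA to avoid the extra data copies mentioned in the text.

```python
# Simplified view of how each Ethernet-based SAN option carries storage
# traffic. Educational sketch only; details (e.g. which RDMA transport
# iSER runs over) vary by deployment.
LAYERING = {
    "FCoE":  {"carries": "Fibre Channel frames", "over": "lossless Ethernet (DCB)", "rdma": False},
    "iSCSI": {"carries": "SCSI commands",        "over": "TCP/IP",                  "rdma": False},
    "iSER":  {"carries": "iSCSI PDUs",           "over": "RDMA transports",         "rdma": True},
}

def avoids_data_copies(protocol: str) -> bool:
    # RDMA lets the network adapter place data directly into application
    # buffers, avoiding the CPU-driven copies that plain TCP/IP incurs.
    return LAYERING[protocol]["rdma"]
```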
That leads to several questions about FCoE, iSCSI and iSER:
•If we can run various network storage protocols over Ethernet, what differentiates them?
•What are the advantages and disadvantages of FCoE, iSCSI and iSER?
•How are they structured?
•What software and hardware do they require?
•How are they implemented, configured and managed?
•Do they perform differently?
•What do you need to do to take advantage of them in the data center?
•What are the best use cases for each?
Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.
After you watch the webcast, check out the Q&A blog from our presenters http://bit.ly/2NyJKUM
Recorded: Jun 21 2018 | 62 mins
Jim Handy, Objective Analysis and Tom Coughlin, Coughlin Associates
Get prepared for SNIA’s Persistent Memory Summit with this webcast from Objective Analysis and Coughlin Associates. Following up on their 2018 groundbreaking report on emerging memories, Jim Handy and Tom Coughlin will update us on 2019 advances in support from SNIA, the launch of Optane memory on DIMMs, new MRAM types, and more. You won’t want to miss their analysis on the progress made, and their perspective on the groundwork that still needs to be covered to bring persistent memory to mainstream computing.
Thomas Rivera, Chair, SNIA Data Protection & Privacy Committee, Paul Talbut, SNIA
Failing to protect sensitive information can put many people at risk of exploitation by cybercriminals, and can expose a company to enormous legal penalties.
The way information is shared and stored can put the information at risk.
It is risky to store personal information on portable devices, which are easily lost or stolen.
In addition, the consequences of a data breach can be devastating. Identity theft could lead to financial losses, and a company could face lawsuits and legal penalties.
This presentation will cover what kinds of personal information must be protected, and offer guidelines for keeping that information safe.
After viewing this session, attendees should:
1. Understand how privacy is defined
2. Be aware of some of the privacy regulations from around the globe
3. Know what information to safeguard
4. Understand how these privacy regulations affect organizations that handle personal information
Ross Stenfort, Facebook; Lee Prewitt, Microsoft; J Metz, Cisco
What do Hyperscalers like Facebook and Microsoft have in common? They are cloud market leaders using NVMe SSDs in their architectures. Get a close up look into their application requirements and challenges, why they chose NVMe flash for their storage, and how they are successfully deploying NVMe to fuel their businesses.
Mark Rogov, Dell EMC; Brandon Hoff, Broadcom; J Metz, Cisco
One of Fibre Channel’s greatest strengths is its ability to scale to thousands and thousands of nodes, while providing predictable performance. So, when we say that Fibre Channel has unmatched scalability, what does that actually mean? And how does it work?
We often hear about “designed from the ground up,” but in this case it’s actually true. From each individual link, to the overall domain architecture, each step along the way is intended to be approached in a scalable fashion.
In this webinar, we’ll be breaking down the pieces of the puzzle that give Fibre Channel its robustness in fabrics of 10,000 nodes or more. We’ll be talking about:
•What a deterministic storage network is
•Fabric management principles
•Negotiated credit transfers (buffer-to-buffer credits)
•Network Engineering/Design Principles
•Oversubscription and Fan-In Ratios
•Topologies that help scale
•Domains and Fabric limits
•Consistency of performance at scale
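The “buffer-to-buffer credits” item above lends itself to a back-of-the-envelope calculation: a link stays fully utilized only if enough credits (in-flight frames) cover its round-trip time. The constants below are nominal rules of thumb (light in fiber at roughly 5 µs/km, a full Fibre Channel frame of 2148 bytes); real designs should use the switch vendor’s sizing tools.

```python
import math

# Rough buffer-to-buffer (BB) credit estimate for a Fibre Channel link.
# Hedged sketch using nominal rules of thumb, not a vendor sizing tool.
PROPAGATION_US_PER_KM = 5.0   # light in fiber: ~5 microseconds per km
FULL_FRAME_BYTES = 2148       # maximum FC frame, headers included

def bb_credits_needed(distance_km: float, data_rate_gbps: float) -> int:
    """Credits required to keep the link full: enough frames must be in
    flight to cover the round-trip time of the link."""
    serialization_us = FULL_FRAME_BYTES * 8 / (data_rate_gbps * 1000)
    round_trip_us = 2 * distance_km * PROPAGATION_US_PER_KM
    return max(1, math.ceil(round_trip_us / serialization_us))
```

For example, a 10 km link at a nominal 8 Gbps needs roughly 47 credits; this is why long-distance FC links require far deeper credit pools than local ones.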
Along the way, we’ll be talking about some of the ways that Fibre Channel differs from other popular storage networks as they approach large-scale environments, and how it handles issues that arise in such cases.
Please join us on November 6th at 10:00 am PT/1:00 pm ET for another educational webinar on Fibre Channel!
Ted Vojnovich, Lenovo; Fred Bower, Lenovo; Tim Lustig, Mellanox
Software defined storage, or SDS, is growing in popularity in both cloud and enterprise accounts. But what makes it different from traditional storage arrays? Does it really save money? Is it more complicated to support? Is it more scalable or higher-performing? And does it have different networking requirements than traditional storage appliances?
Watch this SNIA webcast to learn:
•How software-defined storage differs from integrated storage appliances
•Whether SDS supports block, file, object, or all three types of storage access
•Potential issues or pitfalls with deploying SDS
•How SDS affects storage networking
•Scale-up vs. scale-out vs. hyperconverged vs. cloud
After you watch the webcast, check out the Q&A blog http://bit.ly/SDS-Q-A
Robin Gareiss, President and Founder, Nemertes Research
Intelligent Customer Engagement Series [Ep.5]: CX Success Stories Require Technology, Leadership, Data
A great story requires more than a compelling narrative. Marketing teams can significantly elevate their success with the right combination of leadership, technology, and data derived from well-planned customer interviews.
Crafting that perfect story requires an expanded mindset about what comprises “marketing.”
In this webinar, join Nemertes Research President Robin Gareiss, who recently completed detailed research with 518 companies on how they use advanced technologies and reshape their organizational structure to improve customer experience. Based on this research and her experience as a journalist, marketing content developer, and CX advisor, she will cover:
1. Organizational overhaul: Why a Chief Customer Officer is vital, and how the CMO and CCO work together for joint success.
2. Technology leverage: What are the key technologies and contact-center initiatives that result in measurable CX success—ultimately delivering crucial data to marketing teams that support their success stories?
3. The perfect story: How to conduct interviews that get real-world data to support your mission.
Pierre Mouallem, Lenovo; John Kim, Mellanox; J Metz, Cisco; Steve Vanderlinden, Lenovo
What does it mean to be protected and safe? You need the right people and the right technology. This presentation provides a broad introduction to security principles, including definitions of the terms you must know if you hope to have a good grasp of what makes something secure or not. We’ll be talking about the scope of security, including threats, vulnerabilities, and attacks – and what that means in real storage terms. In this live webcast we will cover:
•Protecting the data (Keeping “the bad” out)
•Threat landscape, Bad actors/hackers
•Attack vectors, attack surfaces, vulnerabilities
•Physical security issues
•Layers of protection (encryption – last line of defense)
•Remediation after a breach/incident
After you watch the webcast, check out the Q&A blog: http://bit.ly/2JQ1s5L
Fibre Channel has long been known to be a very secure protocol for storage. Even so, there is no such thing as a “perfectly secure” technology, and for that reason it’s important to constantly update and protect against threats.
The sheer variety of environments in which Fibre Channel fabrics are deployed makes it very difficult to rely on physical security alone. In practice, different users can access different storage systems, even when the fabric spans several sites. Fibre Channel provides security services to specifically address these concerns and prevent misconfigurations or access to data by unauthorized people and machines.
This webcast is going to dive deep into the guts of security aspects of Fibre Channel, looking closely at the protocols used to implement security in a Fibre Channel fabric. In particular, we’re going to look at:
•The definitions of the protocols to authenticate Fibre Channel devices
•What are the different classes of threats, and what are the mechanisms to protect against them
•What are session keys and how to set them up
•How Fibre Channel negotiates these parameters to ensure frame-by-frame integrity and confidentiality
•How Fibre Channel establishes and distributes policies across a fabric
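The device-authentication item above rests on the challenge-response idea behind DH-CHAP, the authentication protocol defined for Fibre Channel fabrics in FC-SP. The sketch below is a generic CHAP-style illustration, not the exact FC-SP computation: real DH-CHAP additionally performs a Diffie-Hellman exchange to derive session keys, and the hash choice here (SHA-256) is for illustration.

```python
import hashlib, hmac, os

# Generic challenge-response sketch (illustrative, not the FC-SP wire format).
def make_challenge() -> bytes:
    return os.urandom(16)  # the authenticator sends an unpredictable nonce

def chap_response(ident: int, secret: bytes, challenge: bytes) -> bytes:
    # Both sides compute hash(id || shared-secret || challenge); only a
    # device holding the shared secret can produce the matching response,
    # and the secret itself never crosses the wire.
    return hashlib.sha256(bytes([ident]) + secret + challenge).digest()

def verify(ident: int, secret: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(chap_response(ident, secret, challenge), response)
```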
Please join us to learn more about the technical considerations that Fibre Channel brings to the table to secure and protect your data and information.
Robin Gareiss, President & Founder, Nemertes Research
Intelligent Customer Engagement Series [Ep.1] AI Drives Measurable Success in Customer Engagement
Nearly 50% more companies are using or planning to use AI in their customer engagement initiatives.
Nemertes recently studied how 518 companies are using AI and analytics to improve their customer experiences. This webinar details:
• how these companies use AI
• what measurable improvements resulted.
Ingo Fuchs, NetApp; Paul Burt, NetApp, Mike Jochimsen, Kaminario
Kubernetes is great for running stateless workloads, like web servers. It’ll run health checks, restart containers when they crash, and do all sorts of other wonderful things. So, what about stateful workloads?
This webcast will take a look at when it’s appropriate to run a stateful workload inside the cluster or outside of it. We’ll discuss the best options for running a workload like a database in the cloud or in the cluster, and what’s needed to set that up:
•Running a database on a VM and connecting it to Kubernetes as a service
•Running a database in Kubernetes using a `StatefulSet`
•Running a database in Kubernetes using an Operator
•Running a database on a cloud managed service
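As a minimal sketch of the StatefulSet option above, a manifest might look like the following. All names, the image, and the storage size are hypothetical placeholders; the point is that each replica gets a stable network identity and its own PersistentVolumeClaim that survives pod restarts.

```yaml
# Minimal StatefulSet sketch (hypothetical names and image).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db         # headless service gives pods stable DNS names
  replicas: 1
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
      - name: postgres
        image: postgres:13     # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PVC per replica, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```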
After you watch the webcast, check out our Kubernetes Links & Resources blog at http://bit.ly/KubeLinks and our webcast Q&A blog at http://bit.ly/KubeQuestions
Alex McDonald, Vice-Chair SNIA EMEA, and Office of the CTO, NetApp; Paul Talbut, General Manager, SNIA EMEA
The SNIA EMEA Storage Developer Conference (SDC) will return to Tel Aviv in early February 2020.
SDC EMEA is organised by SNIA, the non-profit industry association responsible for data storage standards and education, and the conference is designed to provide an open and independent platform for technical education and knowledge sharing amongst the local storage development community.
SDC is built by developers – for developers.
This session will offer a preview of what is planned for the 2020 agenda ahead of the call for presentations and will also give potential sponsors the information they need to be able to budget for their participation in the event. If you have attended previously as a delegate, this is a great opportunity to learn more about how you can raise your profile as a speaker or get your company involved as a sponsor. There will be time allocated during the webcast to ask questions about the options available. Companies who have significant storage development teams will learn why this conference is valuable to the local technical community and why they should be directly engaged.
Michelle Tidwell, Program Director, IBM; Tom Clark, Distinguished Engineer, IBM; Matt Levan, Storage Solutions Architect, IBM
As enterprises move to a hybrid multi-cloud world, they are faced with many challenges. Deciding which technologies to use is one, but they are also seeing a transformation in traditional IT roles. Storage admins are asked to be more cloud savvy, while new roles of cloud admins are emerging to handle the complexities of deploying simple and efficient clouds. Meanwhile, both of these roles are asked to architect a self-service environment so that application developers can get the resources needed to develop cutting-edge apps not in weeks, days or hours, but in minutes.
In part one of this three-part series, we covered the high-level aspects of Kubernetes. This presentation will discuss key capabilities IT vendors are creating based on open source technologies such as Docker and Kubernetes to build self-service infrastructure to support hybrid multi-cloud deployments. We’ll cover:
•Persistent storage and how to specify it
•Ensuring application portability between Private and Public Clouds
•Building a self-service infrastructure (Helm, Operators)
•Selecting Block, File, Object (Traditional Storage, SDS)
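The “persistent storage and how to specify it” item above can be sketched with a PersistentVolumeClaim. The claim name and storage class below are placeholders: an application requests capacity and access mode, and the named StorageClass tells the cluster which backend provisions it, which is also what makes the claim portable between private and public clouds that define a class with the same name.

```yaml
# Hedged sketch of requesting persistent storage in Kubernetes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-block # placeholder class mapped to a block backend
  resources:
    requests:
      storage: 10Gi
```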
After you watch the webcast, check out the Q&A blog here: http://bit.ly/2M3IVpm
Anne Blanchard, Senior Director of Product Marketing, Nasuni and Robin Smith, Technical Sales - Gospel Technology
The benefits of a cloud-first storage strategy are well-known: scalability, flexibility, agility, avoiding lock-in and spreading risk to name a few. But defining your cloud-first storage strategy requires you to take a hard look at your ecosystem and address the challenges of cloud adoption head on.
Join this panel to hear experts discuss how the key challenges - including taking risks with data assets, ownership, integration, security and compliance - can be overcome so that you can unlock the rewards of going cloud-first.
Eden Kim, CEO, Calypso Systems; Jim Fister, SNIA Solid State Storage Initiative
Real-world digital workloads often behave very differently from what might be expected. The equipment used in a computing system may function differently than anticipated. Unknown quirks in complicated software and operations running alongside the workload may be doing more or less than the user initially supposed. To truly understand what is happening, the right approach is to test and monitor the systems’ behaviors as real code is executed. By using measured data, designers, vendors and service personnel can pinpoint the actual limits and bottlenecks that a particular workload is experiencing. Join the SNIA Solid State Storage Special Interest Group to learn how to be a part of the real-world workload revolution.
Swordfish School: Introduction to SNIA Swordfish™ Features and Profiles
Ready to ride the wave to what’s next in storage management? As part of an ongoing series of educational materials to help speed your SNIA Swordfish™ implementation, this Swordfish School webcast features storage standards expert Richelle Ahlvers (Broadcom Inc.), who will provide an introduction to the Features and Profiles concepts, describe how they work together, and talk about how to use both when implementing Swordfish.
Features are used by implementations to advertise to clients what functionality they support. Profiles describe, down to the individual property level, what functionality an implementation must provide in order to advertise a Feature. Profiles are used for in-depth analysis during development, making it easy for clients to determine which Features to require for different configurations; they are also used to determine certification/conformance requirements.
About SNIA Swordfish™
Designed with IT administrators and DevOps engineers in mind to provide simplified and scalable storage management for data center environments, SNIA Swordfish™ is a standard that defines the management of data storage and services as an extension to the Distributed Management Task Force’s (DMTF) Redfish application programming interface specification. Unlike proprietary interfaces, Swordfish is open and easy-to-adopt with broad industry support.
Your one stop shop for everything SNIA Swordfish is https://www.snia.org/swordfish.
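As a hedged sketch of how a client might consume the Feature advertisement described above: Swordfish extends the Redfish REST API, whose service root lives at `/redfish/v1`. The sample payload and the property name used below are illustrative only; consult the Swordfish schema for the exact structure an implementation uses to advertise its Features.

```python
import json

# Illustrative sample of a service-root response; real Swordfish services
# return richer payloads, and the "Features" property name here is an
# assumption for demonstration, not the authoritative schema.
SAMPLE_RESPONSE = json.loads("""
{
  "@odata.id": "/redfish/v1",
  "Name": "Storage Service Root",
  "Features": ["BlockStorage", "Replication"]
}
""")

def advertised_features(service_root: dict) -> list:
    # A client inspects what the implementation advertises before
    # deciding which operations to attempt.
    return service_root.get("Features", [])
```

In practice the payload would be fetched over HTTPS (e.g. with an authenticated GET of `https://<host>/redfish/v1`) rather than embedded as a string.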
Sathish Gnanasekaran, Broadcom; John Kim, Mellanox; J Metz, Cisco; Tim Lustig, Mellanox
For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.
The three main storage network transports (Fibre Channel, Ethernet, and InfiniBand) all have mechanisms to handle increased traffic, but they are not all affected or implemented the same way. For instance, placing a protocol such as NVMe over Fabrics can mean very different things when looking at one networking method in comparison to another.
Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput. Watch this webinar to learn:
•Typical storage traffic patterns
•What incast, head-of-line blocking, congestion, and slow drain are, and when they become problems on a network
•How Ethernet, Fibre Channel, InfiniBand handle these effects
•The proper role of buffers in handling storage network traffic
•Potential new ways to handle increasing storage traffic loads on the network
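The congestion risk described above often comes down to fan-in: when the aggregate bandwidth of hosts converging on a storage port exceeds that port’s bandwidth, the network must buffer or drop, and latency climbs. A back-of-the-envelope check (with hypothetical port counts and speeds) looks like this:

```python
# Simple fan-in / oversubscription check. Numbers are hypothetical; whether
# a given ratio is actually a problem depends on real traffic patterns.
def fan_in_ratio(host_ports: int, host_gbps: float, storage_gbps: float) -> float:
    """Aggregate host bandwidth divided by storage-port bandwidth,
    e.g. 24 hosts at 16G into one 32G port -> 12.0 (a 12:1 fan-in)."""
    return (host_ports * host_gbps) / storage_gbps

def congestion_possible(host_ports: int, host_gbps: float, storage_gbps: float) -> bool:
    # Ratios above 1:1 mean the storage port can be oversubscribed.
    return fan_in_ratio(host_ports, host_gbps, storage_gbps) > 1.0
```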
After you watch the webcast, check out the Q&A blog http://bit.ly/323kyNj
David Chalupsky, Intel; Craig Carlson, Marvell; Peter Onufryck, Microchip; John Kim, Mellanox
In the short period from 2014-2018, Ethernet equipment vendors have announced big increases in line speeds, shipping 25, 50, and 100 Gigabit-per-second (Gb/s) products and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors have launched 32GFC, 64GFC and 128GFC technology, while InfiniBand has reached 200 Gb/s (called HDR) speeds.
But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards keep up? And do the faster speeds come with different distance limitations?
Watch this SNIA Networking Storage Forum (NSF) webcast to learn how these new speeds are achieved, where they are likely to be deployed for storage, and what infrastructure changes are needed to support them.
After you watch the webcast, check out the Q&A blog at http://bit.ly/2ZPleUr
The hottest topics for storage and infrastructure professionals
The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.