Fibre Channel has long been regarded as a very secure protocol for storage. Even so, there is no such thing as a “perfectly secure” technology, and for that reason it’s important to continually update protections and guard against new threats.
The sheer variety of environments in which Fibre Channel fabrics are deployed makes it very difficult to rely on physical security alone. Different users may need access to different storage systems, even when a fabric spans several sites. Fibre Channel provides security services that specifically address these concerns, preventing misconfiguration and access to data by unauthorized people and machines.
This webcast dives deep into the security aspects of Fibre Channel, looking closely at the protocols used to implement security in a Fibre Channel fabric. In particular, we’re going to look at:
•The protocols used to authenticate Fibre Channel devices
•The different classes of threats, and the mechanisms that protect against them
•What session keys are and how to set them up
•How Fibre Channel negotiates these parameters to ensure frame-by-frame integrity and confidentiality
•How Fibre Channel establishes and distributes policies across a fabric
Please join us to learn more about the technical considerations that Fibre Channel brings to the table to secure and protect your data and information.
Michelle Tidwell, Program Director, IBM; Tom Clark, Distinguished Engineer, IBM; Matt Levan, Storage Solutions Architect, IBM
As enterprises move to a hybrid multi-cloud world, they face many challenges. Deciding which technologies to use is one, but they are also seeing a transformation in traditional IT roles. Storage admins are asked to be more cloud savvy, while new roles of cloud admins are emerging to handle the complexities of deploying simple and efficient clouds. Meanwhile, both roles are asked to ensure a self-service environment is architected so that application developers can get the resources they need to develop cutting-edge apps not in weeks, days or hours, but in minutes.
In part one of this three-part series, we covered the high-level aspects of Kubernetes. This presentation will discuss key capabilities IT vendors are creating based on open source technologies such as Docker and Kubernetes to build self-service infrastructure to support hybrid multi-cloud deployments. We’ll cover:
•Persistent storage and how to specify it
•Ensuring application portability between Private and Public Clouds
•Building a self-service infrastructure (Helm, Operators)
•Selecting Block, File, Object (Traditional Storage, SDS)
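As a concrete illustration of the first bullet, persistent storage in Kubernetes is requested declaratively through a PersistentVolumeClaim that a pod then mounts by name. The sketch below builds such a manifest as a plain Python dict; the claim name, the storage class `standard`, and the 10Gi size are all hypothetical examples, not values from the webcast:

```python
# A minimal sketch of how persistent storage is specified in Kubernetes:
# a PersistentVolumeClaim manifest expressed as a plain dict (the same
# structure you would write in YAML and feed to `kubectl apply`).
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},          # hypothetical claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],       # one node may mount read-write
        "storageClassName": "standard",         # hypothetical storage class
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# A pod mounts the claim by name, without knowing which backend
# (block, file, or SDS) actually satisfies it.
volume = {"name": "data", "persistentVolumeClaim": {"claimName": "app-data"}}
```

The point of the indirection is portability: the same claim can be satisfied by different storage backends in private and public clouds, which is exactly the application-portability concern in the second bullet.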
Anne Blanchard, Senior Director of Product Marketing, Nasuni and Robin Smith, Technical Sales - Gospel Technology
The benefits of a cloud-first storage strategy are well-known: scalability, flexibility, agility, avoiding lock-in and spreading risk to name a few. But defining your cloud-first storage strategy requires you to take a hard look at your ecosystem and address the challenges of cloud adoption head on.
Join this panel to hear experts discuss how the key challenges - including taking risks with data assets, ownership, integration, security and compliance - can be overcome so that you can unlock the rewards of going cloud-first.
Eden Kim, CEO, Calypso Systems; Jim Fister, SNIA Solid State Storage Initiative
Real-world digital workloads often behave very differently from what might be expected. The equipment used in a computing system may function differently than anticipated. Unknown quirks in complicated software and operations running alongside the workload may be doing more or less than the user initially supposed. To truly understand what is happening, the right approach is to test and monitor the systems’ behaviors as real code is executed. By using measured data, designers, vendors and service personnel can pinpoint the actual limits and bottlenecks that a particular workload is experiencing. Join the SNIA Solid State Storage Special Interest Group to learn how to be a part of the real-world workload revolution.
Swordfish School: Introduction to SNIA Swordfish™ Features and Profiles
Ready to ride the wave to what’s next in storage management? As part of an ongoing series of educational materials to help speed your SNIA Swordfish™ implementation, in this Swordfish School webcast storage standards expert Richelle Ahlvers (Broadcom Inc.) will provide an introduction to the Features and Profiles concepts, describe how they work together, and talk about how to use both Features and Profiles when implementing Swordfish.
Implementations use Features to advertise to clients what functionality they support. Profiles specify, down to the individual property level, what functionality an implementation must provide in order to advertise a Feature. Profiles are used for in-depth analysis during development, making it easy for clients to determine which Features to require for different configurations; they are also used to determine certification and conformance requirements.
About SNIA Swordfish™
Designed with IT administrators and DevOps engineers in mind to provide simplified and scalable storage management for data center environments, SNIA Swordfish™ is a standard that defines the management of data storage and services as an extension to the Distributed Management Task Force’s (DMTF) Redfish application programming interface specification. Unlike proprietary interfaces, Swordfish is open and easy-to-adopt with broad industry support.
Your one stop shop for everything SNIA Swordfish is https://www.snia.org/swordfish.
Sathish Gnanasekaran, Broadcom; John Kim, Mellanox; J Metz, Cisco; Tim Lustig, Mellanox
For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.
The three main storage network transports – Fibre Channel, Ethernet, and InfiniBand – all have mechanisms to handle increased traffic, but they are not all affected or implemented the same way. For instance, a protocol such as NVMe over Fabrics can behave very differently on one network transport than on another.
Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput. Watch this webinar to learn:
•Typical storage traffic patterns
•What Incast, head-of-line blocking, congestion, and slow drain are, and when they become problems on a network
•How Ethernet, Fibre Channel, InfiniBand handle these effects
•The proper role of buffers in handling storage network traffic
•Potential new ways to handle increasing storage traffic loads on the network
After you watch the webcast, check out the Q&A blog http://bit.ly/323kyNj
David Chalupsky, Intel; Craig Carlson, Marvell; Peter Onufryck, Microchip; John Kim, Mellanox
In the short period from 2014 to 2018, Ethernet equipment vendors announced big increases in line speeds, shipping 25, 50, and 100 gigabit-per-second (Gb/s) products and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors launched 32GFC, 64GFC and 128GFC technology, while InfiniBand reached 200Gb/s (called HDR) speeds.
But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards keep up? And do the faster speeds come with different distance limitations?
Watch this SNIA Networking Storage Forum (NSF) webcast to learn how these new speeds are achieved, where they are likely to be deployed for storage, and what infrastructure changes are needed to support them.
After you watch the webcast, check out the Q&A blog at http://bit.ly/2ZPleUr
Alan Bumgarner, Intel; Alex McDonald, NetApp; John Kim, Mellanox
Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.
We build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But currently fast memory has no persistence at all; it’s only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.
Memory and storage differ in other ways. For example, we load from memory into registers on the CPU, perform operations there, and then store the results back to memory using byte addresses. This load/store model is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface).
New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, though a lot faster than traditional disks or even flash-based SSDs, but we address them in bytes, as we do memory like DRAM, though more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:
•Traditional uses of storage and memory as a cache
•How can we build and use systems based on PM?
•What would a system with storage, persistent memory and DRAM look like?
•Do we need a new programming model to take advantage of PM?
•Interesting use cases for systems equipped with PM
•How we might take better advantage of this new technology
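The load/store versus block-API contrast described above can be sketched in a few lines of Python, using an ordinary file and `mmap` as a stand-in for a persistent memory region. This is a simplification for illustration only: real PM programming targets byte-addressable hardware directly, not a file.

```python
import mmap
import os
import tempfile

# A small file stands in for a storage device / persistent memory region.
fd, path = tempfile.mkstemp()
os.pwrite(fd, b"\x00" * 4096, 0)

# Block-style (storage API) access: move a whole fixed-size block per call.
BLOCK = 512
block = os.pread(fd, BLOCK, 0)        # read block 0 in one API call
os.pwrite(fd, b"A" * BLOCK, 0)        # rewrite block 0 in one API call

# Memory-style (load/store) access: map the region and address single bytes,
# the way persistent memory is addressed.
with mmap.mmap(fd, 4096) as pm:
    pm[0] = ord("B")                  # "store" a single byte
    first = pm[0]                     # "load" a single byte

os.close(fd)
os.unlink(path)
```

The asymmetry is the point: with the block API every transfer is a whole block through a call, while the mapped region lets the program touch individual bytes, which is the memory-semantics-plus-persistence combination PM offers.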
Scott Sinclair, ESG; Michelle Tidwell, IBM; Mike Jochimsen, Kaminario; Eric Lakin, Univ. of Michigan; Alex McDonald, NetApp
Has hybrid cloud reached a tipping point? According to research from the Enterprise Strategy Group (ESG), IT organizations today are struggling to strike the right balance between public cloud and their on-premises infrastructure. In this SNIA webcast, ESG senior analyst, Scott Sinclair, will share research on current cloud trends, covering:
•Key drivers behind IT complexity
•IT spending priorities
•Multi-cloud & hybrid cloud adoption drivers
•When businesses are moving workloads from the cloud back on-premises
•Top security and cost challenges
•Future cloud projections
The research will be followed by a panel discussion with Scott Sinclair and SNIA cloud experts Alex McDonald, Michelle Tidwell, Mike Jochimsen and Eric Lakin.
Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology
In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics” experts covered the basics on protocol analysis tools and how to incorporate them into the “best practices” application of SAN problem solving.
Our experts return for this 201 course, which provides a deeper dive into how to interpret the output and results from the protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions so you can formulate real-time solutions.
Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that depart from the traditional network stack (e.g. FCoE, iWARP, NVMe over Fabrics) complicate analysis, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.
Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
•Inter switch links (ISL) – How to measure and minimize fabric congestion
•Post-capture analysis – Graphing, Trace reading, Performance metrics
•Benefits of purposeful error injection
•More Layer 2-3 and translation layers debug
•Link Services and Extended Link Services – LRR (Link Reset Response)
You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
Alex McDonald, Vice-Chair SNIA Europe, and Office of the CTO, NetApp; Paul Talbut, SNIA Europe General Manager
We’re all accustomed to transferring money from one bank account to another; a credit to the payer becomes a debit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. We’re also aware of how today we can order goods online, or reserve an airline seat over the Internet. Or even simpler, we can update a photograph on Facebook. Can these applications use the same models, or are new techniques required?
One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances.
And yes, we’ll explain all the acronyms and nomenclature too.
You will learn:
• A brief history of transactional systems from banking to Facebook
• How the Internet and distributed systems have changed how we view transactions
• An explanation of the terminology, from ACID to CAP and beyond
• How applications, networks & particularly storage have changed to meet these demands
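The classic bank-transfer transaction described above can be sketched with Python’s built-in sqlite3 module, which demonstrates atomicity and rollback; the table layout, account names and amounts are invented for the example:

```python
import sqlite3

# Two accounts; a transfer must be atomic: both updates happen or neither
# (the "A" in ACID).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("payer", 100), ("payee", 0)])
con.commit()

def transfer(con, src, dst, amount):
    try:
        with con:  # opens a transaction; commits on success, rolls back on error
            con.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                        (amount, src))
            con.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                        (amount, dst))
            # Consistency check: no overdrafts allowed.
            (bal,) = con.execute("SELECT balance FROM accounts WHERE name = ?",
                                 (src,)).fetchone()
            if bal < 0:
                raise ValueError("insufficient funds")
        return True
    except ValueError:
        return False

ok = transfer(con, "payer", "payee", 60)    # succeeds and commits
bad = transfer(con, "payer", "payee", 500)  # fails; both updates roll back
balances = dict(con.execute("SELECT name, balance FROM accounts"))
```

The failed second transfer leaves both balances exactly as the first transfer left them, which is the debit/credit guarantee the abstract opens with; the webcast’s question is how to preserve that guarantee once the “database” is spread over unreliable, dispersed systems.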
Alex McDonald, SNIA SSSI Co-Chair (Moderator), Tom Coughlin, Coughlin Associates, Motti Beck, Mellanox Technologies
Join SNIA Solid State Storage Initiative (SSSI) Education Chair and leading analyst Tom Coughlin and SSSI member Motti Beck of Mellanox Technologies for a journey into the requirements and trends in worldwide data storage for entertainment content acquisition, editing, archiving, and digital preservation. This webcast will cover capacity and performance trends and media projections for direct attached storage, cloud, and near-line network storage. It will also include results from a long-running digital storage survey of media and entertainment professionals. Learn what is needed for digital cinema, broadcast, cable, and internet applications and more.
This webcast will present an overview of scale-out file system architectures. To meet the increasingly higher demand on both capacity and performance in large cluster computing environments, the storage subsystem has evolved toward a modular and scalable design. The scale-out file system is one implementation of the trend, in addition to scale-out object and block storage solutions. This presentation will provide an introduction to scale-out file systems and cover:
•General principles when architecting a scale-out file system storage solution
•Hardware and software design considerations for different workloads
•Storage challenges when serving a large number of compute nodes, e.g. namespace consistency, distributed locking, data replication, etc.
•Use cases for scale-out file systems
•Common benchmark and performance analysis approaches
After you watch the webcast, check out the Q&A blog at http://bit.ly/2EWqXQO
Don Deel, NetApp, SNIA; Moderated by Richelle Ahlvers, Broadcom, SNIA
Tools for speeding your implementation of the next-generation storage management standard
The SNIA Swordfish™ specification for the management of storage systems and data services is an extension of the DMTF Redfish® specification. Together, these specifications provide a unified approach for the management of servers and storage in converged, hyper-converged, hyperscale and cloud infrastructure environments.
To help speed your Swordfish development efforts, SNIA has produced open source storage management tools available now on GitHub for your use. Join this session for an overview of these open source tools, which include a Swordfish API Emulator, a Swordfish Basic Web Client, an example Swordfish plugin for the Microsoft Power BI business analytics service, and an example Swordfish plugin for the Datadog monitoring service.
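Because Swordfish extends the Redfish REST/JSON model, a client navigates a service by following `@odata.id` links rather than hardcoding URIs. The sketch below parses canned JSON shaped like a Redfish service root and a storage collection; the payloads are illustrative stand-ins, not output from a real implementation or from the emulator mentioned above:

```python
import json

# Canned response shaped like a Redfish/Swordfish service root (illustrative;
# the "@odata.id" linking convention is how Redfish resources reference
# each other).
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "RedfishVersion": "1.6.0",
  "Storage": { "@odata.id": "/redfish/v1/Storage" }
}
""")

# Canned response shaped like the storage collection that link points to.
storage_collection = json.loads("""
{
  "@odata.id": "/redfish/v1/Storage",
  "Members": [
    { "@odata.id": "/redfish/v1/Storage/1" }
  ],
  "Members@odata.count": 1
}
""")

# A client discovers resources by following links from the service root,
# then enumerating collection members.
storage_uri = service_root["Storage"]["@odata.id"]
member_uris = [m["@odata.id"] for m in storage_collection["Members"]]
```

In a live client each `@odata.id` would be fetched over HTTPS with a GET; tools like the Swordfish API Emulator exist precisely so this traversal can be exercised without real hardware.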
Containers are a big trend in application deployment. The landscape of containers is moving fast and constantly changing, with new standards emerging every few months. Learn what’s new, what to pay attention to, and how to make sense of the ever-shifting container landscape.
This live webcast will cover:
•Container storage types and Container Frameworks
•An overview of the various storage APIs for the container landscape
•How to identify the most important projects to follow in the container world
•The Container Storage Interface spec and Kubernetes 1.13
•How to get involved in the container community
After you watch the webcast, check out the Q&A blog at http://bit.ly/2GPkFET
Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technologies
FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilizes Fibre Channel as the underlying transport.
The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.
Join this live FCIA webcast where you’ll learn:
- How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
- The evolution of the FICON protocol optimizations
- How FICON adapts to new technologies
Christine McMonigal, Intel; J Metz, Cisco; Alex McDonald, NetApp
“Why can’t I add a 33rd node?”
One of the great advantages of hyperconverged infrastructure (also known as “HCI”) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI systems are the “Happy Meals” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.
In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.
In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:
•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
After you watch the webcast, check out the Q&A blog at http://bit.ly/2Va4wwH
When it comes to storage, a byte is a byte is a byte, isn’t it? One of the enduring truths about simplicity is that scale makes everything hard, and with scale comes complexity. And when we’re not processing the data, how do we store it and access it?
In this webcast, we will compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own set of use cases, and advantages and disadvantages. Each provides simple to sophisticated management of the data, and each makes different demands on storage devices and programming technologies.
Perhaps you’re comfortable with block and file, but are interested in investigating the more recent class of object storage and access. Perhaps you’re happy with your understanding of objects, but would really like to understand files a bit better, and what advantages or disadvantages they have compared to each other. Or perhaps you want to understand how file, block and object are implemented on the underlying storage systems – and how one can be made to look like the other, depending on how the storage is accessed. Join us as we discuss and debate:
•How different types of storage drive different management & access solutions
•Where everything is in fixed-size chunks
•SCSI and SCSI-based protocols, and how FC and iSCSI fit in
•When everything is a stream of bytes
•NFS and SMB
•When everything is a blob
•HTTP, key value and RESTful interfaces
•When files, blocks and objects collide
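The three access models in the bullets above can be caricatured in a few lines of Python: files as named byte streams, blocks as fixed-size chunks at numbered offsets, and objects as whole blobs in a flat key namespace. All names, keys and sizes here are invented for illustration:

```python
import os
import tempfile

# File access: a named hierarchy with byte-stream reads and writes
# (the model behind NFS and SMB).
dirpath = tempfile.mkdtemp()
fpath = os.path.join(dirpath, "report.txt")
with open(fpath, "w") as f:
    f.write("hello file")
with open(fpath) as f:
    file_data = f.read()

# Block access: fixed-size chunks at numbered offsets, no names at all
# (the model behind SCSI, FC and iSCSI).
device = bytearray(4 * 512)                            # a tiny pretend device
device[512:1024] = b"hello block".ljust(512, b"\x00")  # write block 1
block_data = bytes(device[512:1024]).rstrip(b"\x00")   # read block 1

# Object access: a flat namespace of keys mapping to whole blobs
# (the model behind HTTP PUT/GET in RESTful object stores).
bucket = {}
bucket["reports/2019/q1"] = b"hello object"   # PUT
object_data = bucket["reports/2019/q1"]       # GET
```

Note how each model layers on the one below: the file lives on blocks the filesystem manages, and an object store may in turn keep its blobs in files or blocks, which is how one can be made to look like another depending on how the storage is accessed.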
Cloud computing innovation will power enterprise transformation in 2018. Cloud growth is also driving a rapid rise in the big data storage market, exacerbating the enterprise challenge around storage cost and complexity.
Join this webinar with Kevin L. Jackson, CEO, GovCloud Network LLC and globally recognized cloud computing thought leader. He will show how Cloud Storage 2.0 can be used to address this proliferation of real-time data from the web, mobile devices, social media, sensors, log files, and transactional applications, and how all of these are affecting today's data centers.
Ian Smith, CEO and Reuben Thompson, VP Technology, Gospel Technology
Join this webcast with Ian Smith, CEO and Reuben Thompson, VP Technology at Gospel Technology, as they discuss:
- Private enterprise blockchains vs public ecosystems (i.e. crypto)
- Enabling data transactional trust without compromising speed
- How blockchain can be used to store and protect data
Gospel is an enterprise data platform built on blockchain, providing data storage for the distributed era, as well as enterprise data security and data breach avoidance.
About the speakers:
Ian is a serial entrepreneur and experienced enterprise technology executive, at one point holding a VP Product Management role for IBM Storage, and has been involved in solving some of the largest and most complex infrastructure and data problems in enterprise business.
Reuben is responsible for all Gospel platform development and has extensive experience managing large-scale software projects, building scalable, distributed, service-oriented software architectures, and satisfying complex and divergent compliance requirements (FCA, PCI, etc.).
The hottest topics for storage and infrastructure professionals
The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.