The SNIA Solid State Storage Initiative is partnering with SATA-IO and NVM Express to present a panel of experts from Objective Analysis, Micron, TE Connectivity, Intel, Calypso, and Coughlin Associates to give you the latest information on M.2, the new SSD card form factor. You will leave this webinar with an understanding of the M.2 market, M.2 cards and connection schemes, NVM Express, and M.2 performance; you’ll also be able to ask questions of the experts.
Recorded Jun 10 2014 | 79 mins
This webcast will present an overview of scale-out file system architectures. To meet ever-increasing demands on both capacity and performance in large cluster computing environments, the storage subsystem has evolved toward a modular and scalable design. The scale-out file system is one implementation of this trend, alongside scale-out object and block storage solutions. This presentation will provide an introduction to scale-out file systems and cover:
•General principles when architecting a scale-out file system storage solution
•Hardware and software design considerations for different workloads
•Storage challenges when serving a large number of compute nodes, e.g., namespace consistency, distributed locking, data replication, etc.
•Use cases for scale-out file systems
•Common benchmark and performance analysis approaches
Don Deel, NetApp, SNIA; Moderated by Richelle Ahlvers, Broadcom, SNIA
Tools for speeding your implementation of the next-generation storage management standard
The SNIA Swordfish™ specification for the management of storage systems and data services is an extension of the DMTF Redfish® specification. Together, these specifications provide a unified approach for the management of servers and storage in converged, hyper-converged, hyperscale and cloud infrastructure environments.
To help speed your Swordfish development efforts, SNIA has produced open source storage management tools available now on GitHub for your use. Join this session for an overview of these open source tools, which include a Swordfish API Emulator, a Swordfish Basic Web Client, an example Swordfish plugin for the Microsoft Power BI business analytics service, and an example Swordfish plugin for the Datadog monitoring service.
Containers are a big trend in application deployment. The landscape of containers is moving fast and constantly changing, with new standards emerging every few months. Learn what’s new, what to pay attention to, and how to make sense of the ever-shifting container landscape.
This live webcast will cover:
•Container storage types and container frameworks
•An overview of the various storage APIs for the container landscape
•How to identify the most important projects to follow in the container world
•The Container Storage Interface spec and Kubernetes 1.13
•How to get involved in the container community
Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is finer-grained sets of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:
•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon in 2 years? In 5 years?
•What it all means
Christine McMonigal, Intel; J Metz, Cisco; Alex McDonald, NetApp
“Why can’t I add a 33rd node?”
One of the great advantages of hyperconverged infrastructure (also known as “HCI”) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI systems are the “Happy Meals” of infrastructure, because you get compute and storage in the same box. All you need to do is add networking.
In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.
In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:
•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
When it comes to storage, a byte is a byte is a byte, isn’t it? One of the enduring truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?
In this webcast, we will compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own set of use cases, and advantages and disadvantages. Each provides simple to sophisticated management of the data, and each makes different demands on storage devices and programming technologies.
Perhaps you’re comfortable with block and file, but are interested in investigating the more recent class of object storage and access. Perhaps you’re happy with your understanding of objects, but would really like to understand files a bit better, and what advantages or disadvantages they have compared to each other. Or perhaps you want to understand how file, block and object are implemented on the underlying storage systems – and how one can be made to look like the other, depending on how the storage is accessed. Join us as we discuss and debate:
•How different types of storage drive different management & access solutions
•Where everything is in fixed-size chunks
•SCSI and SCSI-based protocols, and how FC and iSCSI fit in
•When everything is a stream of bytes
•NFS and SMB
•When everything is a blob
•HTTP, key value and RESTful interfaces
•When files, blocks and objects collide
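To make the three access models concrete before the session, here is a minimal, self-contained Python sketch (all paths, keys and sizes are hypothetical, and a flat file stands in for a block device while a dict stands in for an object store):

```python
import os
import tempfile

# --- File access: hierarchical namespace, byte-stream reads/writes (NFS/SMB style) ---
path = os.path.join(tempfile.mkdtemp(), "report.txt")
with open(path, "wb") as f:
    f.write(b"hello file")
with open(path, "rb") as f:
    assert f.read() == b"hello file"

# --- Block access: fixed-size chunks addressed by block number (SCSI/iSCSI style),
# simulated here with a flat file standing in for a block device ---
BLOCK = 512
dev = os.path.join(tempfile.mkdtemp(), "vdisk.img")
with open(dev, "wb") as f:
    f.truncate(BLOCK * 8)              # an 8-block "device"
with open(dev, "r+b") as f:
    f.seek(3 * BLOCK)                  # address block 3 directly, no filesystem
    f.write(b"hello block".ljust(BLOCK, b"\0"))
    f.seek(3 * BLOCK)
    assert f.read(11) == b"hello block"

# --- Object access: whole objects stored and retrieved by key over a flat
# namespace (HTTP/REST style), simulated with a dict as the object store ---
store = {}
store["bucket/report-v1"] = b"hello object"          # PUT
assert store["bucket/report-v1"] == b"hello object"  # GET
```

The sketch deliberately oversimplifies: real block access goes through a device driver, and real object access adds metadata, versioning and an HTTP API, but the addressing differences it shows are the ones the webcast contrasts.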
Sagi Grimberg, Lightbits; J Metz, Cisco; Tom Reu, Chelsio
In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – vendor-specific or vendor-neutral – and you’ll see NVMe touted as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).
TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?
In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
•What is NVMe/TCP
•How NVMe/TCP works
•What are the trade-offs?
•What should network administrators know?
•What kind of expectations are realistic?
•What technologies can make NVMe/TCP work better?
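As a small taste of how NVMe/TCP works on the wire, the sketch below packs and unpacks the 8-byte common header that prefixes every NVMe/TCP PDU. The field layout (PDU-Type, FLAGS, HLEN, PDO, and a 32-bit little-endian PLEN) follows the published NVMe/TCP transport specification; the sample field values are illustrative assumptions, not taken from the webcast:

```python
import struct

# NVMe/TCP Common Header (CH): 8 bytes prefixing every PDU, per the NVMe/TCP spec:
#   PDU-Type (1B) | FLAGS (1B) | HLEN (1B) | PDO (1B) | PLEN (4B, little-endian)
CH_FORMAT = "<BBBBI"

def pack_ch(pdu_type, flags, hlen, pdo, plen):
    """Serialize a common header into its 8-byte wire form."""
    return struct.pack(CH_FORMAT, pdu_type, flags, hlen, pdo, plen)

def unpack_ch(raw):
    """Parse the first 8 bytes of a PDU back into header fields."""
    return struct.unpack(CH_FORMAT, raw[:8])

# Illustrative values only: a header claiming a 72-byte PDU with no data offset.
hdr = pack_ch(pdu_type=0x04, flags=0, hlen=72, pdo=0, plen=72)
assert len(hdr) == 8
assert unpack_ch(hdr) == (0x04, 0, 72, 0, 72)
```

Running NVMe commands inside ordinary TCP segments like this is what lets NVMe/TCP reuse existing Ethernet networks, which is also the source of the trade-offs the webinar discusses.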
Cody Hosterman, Pure Storage; Jason Massae, VMware; J Metz, Cisco
With all the different storage arrays and connectivity protocols available today, knowing the best practices can help improve operational efficiency and ensure resilient operations. VMware’s global storage services team has reported on the most common service calls they receive. In this webcast, we will share those insights and lessons learned by discussing:
- Common mistakes when setting up storage arrays
- Why iSCSI is the number one storage configuration problem
- Configuring adapters for iSCSI or iSER
- How to verify your PSP matches your array requirements
- NFS best practices
- How to maximize the value of your array and virtualization
- Troubleshooting recommendations
After you watch the webcast, check out the Q&A blog at http://bit.ly/2WjmFJW
Raghu Kulkarni, SNIA PM & NVDIMM SIG member and Alex McDonald, SNIA SSSI Co-Chair
Kick off the new year with a new SNIA Persistent Memory and NVDIMM Special Interest Group webcast on how applications can take advantage of Persistent Memory today with NVDIMM - the go-to Persistent Memory technology for boosting performance for next generation storage platforms. NVDIMM standards have paved the way to simple, plug-n-play solutions. If you're a developer or integrator who hasn't yet realized the benefits of NVDIMMs in your products, you will want to attend to learn about NVDIMM functionality, applications, and benefits. You'll come away with an understanding of how NVDIMMs fit into the persistent memory landscape.
Moderator: Alex McDonald, SNIA SSSI Co-Chair; Presenters: Tom Coughlin, Coughlin Associates & Jim Handy, Objective Analysis
Join SSSI members and respected analysts Tom Coughlin and Jim Handy for a look into their new Emerging Memory and Storage Technologies Report. Tom and Jim will examine emerging memory technologies and their interaction with standard memories, how a new memory layer improves computer performance, and the technical advantages and economies of scale that contribute to the enthusiasm for emerging memories. They will provide an outlook on market projections and enabling and driving applications. The webcast is the perfect preparation for the 2019 SNIA Persistent Memory Summit January 24, 2019.
Mike Walker, former Chair SNIA SMI TWG and former IBM Engineer, Don Deel, SNIA SMI Board Chair, SMI TWG Chair, NetApp
If you’re a storage equipment vendor, management software vendor or end-user of the ISO approved SNIA Storage Management Initiative Specification (SMI-S), you won’t want to miss this presentation. Enterprise storage industry expert Mike Walker will provide an overview of new indications, methods, properties and profiles of SMI-S 1.7 and the newly introduced version, SMI-S 1.8. If you haven’t yet made the jump to SMI-S 1.7, Walker will explain why it’s important to go directly to SMI-S 1.8.
Daniel Sazbon, SNIA Europe Chair, IBM; Alex McDonald, SNIA Europe Vice Chair, NetApp
John Kim, Mellanox; Saqib Jang, Chelsio; Fred Zhang, Intel
Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.
But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicating with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:
•Scale-out storage solutions and what workloads they can address
•How your network may need to evolve to support scale-out storage
•Network considerations to ensure performance for demanding workloads
•Key considerations for all flash
After you watch the webcast, check out the Q&A blog: http://bit.ly/scale-out-q-a
Michelle Tidwell, IBM; Eric Lakin, University of Michigan; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
Building a cloud storage architecture requires storage vendors, cloud service providers and large enterprises alike to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture.
Cloud infrastructure is often procured by service providers and large enterprises in the traditional way – prepay for expected future storage needs and over provision for unexpected changes in demand. This requires large capital expenditures with slow cost recovery based on fluctuating customer adoption. Giving these cloud service providers flexibility in the procurement model for their storage allows them to more closely align the expenditure on infrastructure resources with the cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.
Clouds inherently require often unpredictable scalability – both up and down. Building a storage architecture with the ability to rapidly allocate resources for a specific customer need, and to reallocate resources as customer requirements change, allows the cloud service provider to optimize storage capacity and performance pools in the data center without compromising responsiveness. Such an architecture should also align with the data-center-level orchestration system to allow for an even higher level of resource optimization and flexibility.
In this webcast, you will learn:
•How modern storage technology allows you to build this infrastructure
•The role of software defined storage
•How to model cloud costs for new applications and for re-engineering existing applications
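The procurement trade-off described above can be illustrated with a toy cost model. All prices and capacities here are made-up numbers for illustration only, not figures from the webcast:

```python
# Hypothetical numbers, for illustration only.
years = 3
peak_tb = 1000          # capacity prepaid up front for the expected peak
prepaid_per_tb = 30     # $/TB in a single large capital purchase
prepaid_capex = peak_tb * prepaid_per_tb

# Flexible model: buy capacity incrementally as demand actually materializes,
# paying a per-TB premium for on-demand procurement.
demand_tb = [300, 550, 800]   # actual capacity needed at the end of each year
flex_per_tb = 36              # assumed 20% on-demand premium
increments = [demand_tb[0]] + [demand_tb[i] - demand_tb[i - 1] for i in range(1, years)]
flex_spend = sum(inc * flex_per_tb for inc in increments)

# Despite the premium, incremental purchasing costs less here ($28,800 vs
# $30,000) because actual demand undershot the prepaid peak.
print(prepaid_capex, flex_spend)
```

The point of the sketch is the shape of the comparison, not the numbers: when demand is uncertain, aligning expenditure with actual adoption can beat prepaying for an over-provisioned peak even at a higher unit price.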
Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox
For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.
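The persistence gap described above can be sketched with a toy model. The class and method names below are hypothetical stand-ins, not the actual RDMA verbs API: the simulated target acknowledges an RDMA Write as soon as data lands in a volatile buffer, and only an explicit flush makes it durable.

```python
# Toy model of the PMoF persistence gap: completion of an RDMA Write at the
# sender does not imply the data has reached persistence at the target.
class Target:
    def __init__(self):
        self.volatile_buf = {}   # data placed by RDMA Write, not yet durable
        self.pmem = {}           # durable persistent memory

    def rdma_write(self, addr, data):
        self.volatile_buf[addr] = data
        return "completion"      # the sender sees a completion here...

    def flush(self):
        self.pmem.update(self.volatile_buf)  # ...but durability needs this step
        self.volatile_buf.clear()
        return "persist-ack"

t = Target()
t.rdma_write(0x1000, b"record")
assert 0x1000 not in t.pmem          # completion != persistence
t.flush()
assert t.pmem[0x1000] == b"record"   # only now is the data durable
```

The protocol extensions the webcast outlines effectively standardize that second step, so a sender can know when remote data is persistent and can order successive writes to different memories at the target.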
After you watch, check-out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
Greg McSorley, Amphenol; Rick Kutcipal, Broadcom; Kevin Marks, Dell; Jeremiah Tussey, Microsemi
The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.
The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics:
· How the use of storage for archiving has evolved in ten years
· What type of information is now being retained and for how long
· Changes in corporate practices
· Impact of technology changes such as Cloud
John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco
In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.
Recently, distributed storage has become more popular, where storage lives in multiple locations but can still be shared. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage. Attend this webcast to learn about:
•Pros and cons of centralized vs. distributed storage
•Typical use cases for centralized and distributed storage
•How distributed works for SAN, NAS, parallel file systems, and object storage
•How hyperconverged has introduced a new way of consuming storage
After the webcast, please check out our Q&A blog http://bit.ly/2xSajxJ
Tim Lustig, Mellanox; Fred Zhang, Intel; John Kim, Mellanox
Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.
In the next webcast in the SNIA ESF “Great Storage Debates” series, we’ll be examining two well-known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.
The goal of this presentation is to provide a solid foundation on both RDMA technologies in a vendor-neutral setting that discusses the capabilities and use cases for each so that attendees can become more informed and make educated decisions.
Join to hear the following questions addressed:
•Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
•What are the use cases for RoCE and iWARP, and what differentiates them?
•UDP/IP and TCP/IP: which uses which and what are the advantages and disadvantages?
•What are the software and hardware requirements for each?
•What are the performance/latency differences of each?
Join our SNIA experts as they answer all these questions and more in this next Great Storage Debate.
After you watch the webcast, check out the Q&A blog http://bit.ly/2OH6su8
What new security requirements apply to Persistent Memory (PM)? While many existing security practices such as access control, encryption, multi-tenancy and key management apply to persistent memory, new security threats may result from the differences between PM and storage technologies. The SNIA PM security threat model provides a starting place for exposing system behavior, protocol and implementation security gaps that are specific to PM. This in turn motivates industry groups such as TCG and JEDEC to standardize methods of completing the PM security solution space.
The Storage Networking Industry Association (SNIA) is a non-profit organization made up of member companies spanning information technology. A globally recognized and trusted authority, SNIA’s mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.
All About M.2 SSDs | SNIA Solid State Storage Initiative | Experts: Jim Handy, Jon Tanguy, Jaren May, David Akerson, Eden Kim and Tom Coughlin | 78 mins