
SNIA Webcasts

  • Networking Requirements for Scale-Out Storage
    John Kim, Mellanox; Saqib Jang, Chelsio; Fred Zhang, Intel | Recorded: Nov 14, 2018 | 44 mins
    Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

    But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other while also communicating with clients. Because of these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. A rough bandwidth sketch follows the list below. Join this webinar to learn:

    • Scale-out storage solutions and what workloads they can address
    • How your network may need to evolve to support scale-out storage
    • Network considerations to ensure performance for demanding workloads
    • Key considerations for all-flash storage
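    As a rough illustration of the bandwidth pressure described above, the sketch below estimates the per-node network load of a replicated scale-out cluster. All figures (10 Gb/s of client writes, 3-way replication, 5 Gb/s of rebuild traffic) are invented assumptions for illustration, not numbers from the webcast.

    ```python
    # Back-of-envelope estimate of the east-west traffic a scale-out
    # cluster adds on top of client I/O. All figures are illustrative.

    def required_node_bandwidth_gbps(client_write_gbps: float,
                                     replication_factor: int,
                                     rebuild_gbps: float = 0.0) -> float:
        """Each client write is re-sent to (replication_factor - 1) peers,
        so the network carries the original write plus its replicas, plus
        any background rebuild/migration traffic."""
        replica_traffic = client_write_gbps * (replication_factor - 1)
        return client_write_gbps + replica_traffic + rebuild_gbps

    # 10 Gb/s of client writes, 3-way replication, 5 Gb/s of rebuild:
    print(required_node_bandwidth_gbps(10, 3, 5))  # -> 35.0, saturating a 25 GbE link
    ```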
  • Create a Smarter and More Economic Cloud Storage Architecture
    Michelle Tidwell, IBM; Eric Lakin, University of Michigan; Mike Jochimsen, Kaminario; Alex McDonald, NetApp | Recorded: Nov 7, 2018 | 55 mins
    Building a cloud storage architecture requires storage vendors, cloud service providers, and large enterprises alike to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture.

    Economic:
    Cloud infrastructure is often procured by service providers and large enterprises in the traditional way: prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures with slow cost recovery based on fluctuating customer adoption. Giving cloud service providers flexibility in the procurement model for their storage allows them to align infrastructure expenditure more closely with cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.
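    To make the procurement trade-off concrete, here is a toy cost comparison; the capacities, prices, and the unit-price premium for flexible procurement are all invented for illustration.

    ```python
    # Toy comparison of prepaid over-provisioning vs. pay-as-you-grow
    # procurement. Prices and demand figures are made-up assumptions.

    def prepaid_cost(capacity_tb: float, price_per_tb: float) -> float:
        """Buy the whole expected capacity up front (pure CapEx)."""
        return capacity_tb * price_per_tb

    def pay_as_you_grow_cost(quarterly_added_tb, price_per_tb: float) -> float:
        """Buy only the capacity each quarter actually adds (closer to OpEx)."""
        return sum(added * price_per_tb for added in quarterly_added_tb)

    added = [100, 120, 90, 150]              # TB added per quarter (460 TB total)
    print(prepaid_cost(600, 30))             # 600 TB up front -> 18000
    print(pay_as_you_grow_cost(added, 35))   # small unit-price premium -> 16100
    ```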

    Technical:
    Clouds inherently require scalability that is often unpredictable, both up and down. Building a storage architecture that can rapidly allocate resources for a specific customer need, and reallocate them as requirements change, allows the cloud service provider to optimize storage capacity and performance pools in the data center without compromising responsiveness. Such an architecture should also align with the datacenter-level orchestration system to allow an even higher level of resource optimization and flexibility.
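    A minimal sketch of the elastic-pool idea described above; the class and its methods are hypothetical, not any product’s API.

    ```python
    # Minimal model of an elastic capacity pool: tenants grab and return
    # capacity, and utilization stays visible to a higher-level orchestrator.

    class CapacityPool:
        def __init__(self, total_tb: float):
            self.total_tb = total_tb
            self.allocations: dict[str, float] = {}

        def allocate(self, tenant: str, tb: float) -> bool:
            if self.free_tb() < tb:
                return False      # an orchestrator could add nodes here
            self.allocations[tenant] = self.allocations.get(tenant, 0.0) + tb
            return True

        def release(self, tenant: str, tb: float) -> None:
            self.allocations[tenant] = max(0.0, self.allocations.get(tenant, 0.0) - tb)

        def free_tb(self) -> float:
            return self.total_tb - sum(self.allocations.values())

    pool = CapacityPool(500.0)
    pool.allocate("customer-a", 200.0)
    pool.release("customer-a", 50.0)
    print(pool.free_tb())   # -> 350.0
    ```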

    In this webcast, you will learn:
    • How modern storage technology allows you to build this infrastructure
    • The role of software-defined storage
    • Accounting principles
    • How to model the cloud costs of new applications and/or of re-engineering existing applications
    • Performance considerations
  • Extending RDMA for Persistent Memory over Fabrics
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox | Recorded: Oct 25, 2018 | 60 mins
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
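    The gap between write completion and persistence can be modeled in a few lines. The sketch below captures the semantics only; it is not the verbs API, and the flush operation stands in for the proposed protocol extension.

    ```python
    # Conceptual model of why an RDMA Write completion does not imply
    # persistence, and what a flush/commit extension adds. Semantics only;
    # this is not the verbs API or any standardized wire format.

    class TargetNode:
        def __init__(self):
            self.volatile_cache: dict[int, bytes] = {}    # e.g. NIC or CPU cache
            self.persistent_memory: dict[int, bytes] = {}

        def rdma_write(self, addr: int, data: bytes) -> None:
            # The sender's completion fires once data is placed here --
            # it may still sit in a volatile buffer on the target.
            self.volatile_cache[addr] = data

        def flush(self, addrs) -> None:
            # The proposed extension: explicitly push prior writes into the
            # persistence domain before completing at the initiator.
            for a in addrs:
                self.persistent_memory[a] = self.volatile_cache.pop(a)

    target = TargetNode()
    target.rdma_write(0x1000, b"payload")
    # A power failure here loses the write despite a "successful" completion.
    target.flush([0x1000])
    # Only now is the data guaranteed durable at the target.
    ```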

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
  • Flash Storage with 24G SAS Leads the Way in Crunching Big Data
    Greg McSorley, Amphenol; Rick Kutcipal, Broadcom; Kevin Marks, Dell; Jeremiah Tussey, Microsemi | Recorded: Oct 24, 2018 | 49 mins
    The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.
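    For a sense of scale, the arithmetic below uses the commonly cited effective per-lane rates for 12G and 24G SAS; these are approximations, and actual throughput depends on encoding, protocol overhead, and topology.

    ```python
    # Rough per-port throughput arithmetic for SAS wide ports, using the
    # commonly cited effective rates per lane (approximations).

    EFFECTIVE_MBPS_PER_LANE = {
        "12G SAS": 1200,   # 12 Gbit/s line rate with 8b/10b encoding
        "24G SAS": 2400,   # 22.5 Gbit/s with 20b/24b encoding, marketed as 24G
    }

    def wide_port_mbps(generation: str, lanes: int = 4) -> int:
        """A SAS wide port aggregates several lanes (x4 is typical)."""
        return EFFECTIVE_MBPS_PER_LANE[generation] * lanes

    print(wide_port_mbps("12G SAS"))  # -> 4800 MB/s per x4 port
    print(wide_port_mbps("24G SAS"))  # -> 9600 MB/s per x4 port
    ```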
  • The 100-Year Archive Survey Results 2007-2017
    Sam Fineberg, Thomas Rivera, Bob Rogers | Recorded: Oct 10, 2018 | 60 mins
    The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics:
    • How the use of storage for archiving has evolved in ten years
    • What type of information is now being retained, and for how long
    • Changes in corporate practices
    • Impact of technology changes such as the cloud
  • Centralized vs. Distributed Storage
    John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco | Recorded: Sep 11, 2018 | 63 mins
    In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

    Recently, distributed storage has become more popular: storage lives in multiple locations but can still be shared. Advantages of distributed storage include the ability to scale performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage; a minimal data-placement sketch appears below. Attend this webcast to learn about:
    • Pros and cons of centralized vs. distributed storage
    • Typical use cases for centralized and distributed storage
    • How distributed works for SAN, NAS, parallel file systems, and object storage
    • How hyperconverged has introduced a new way of consuming storage

    After the webcast, please check out our Q&A blog: http://bit.ly/2xSajxJ
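    One common building block behind distributed storage is hash-based data placement, which lets any client compute where an object lives without consulting a central server. A minimal sketch follows; real systems add virtual nodes, replication, and rebalancing.

    ```python
    # Minimal consistent-hashing ring: objects map deterministically to
    # nodes, so every client computes the same placement independently.

    import hashlib
    from bisect import bisect

    class HashRing:
        def __init__(self, nodes):
            self.ring = sorted((self._h(n), n) for n in nodes)

        @staticmethod
        def _h(key: str) -> int:
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, object_key: str) -> str:
            keys = [h for h, _ in self.ring]
            i = bisect(keys, self._h(object_key)) % len(self.ring)
            return self.ring[i][1]

    ring = HashRing(["node-a", "node-b", "node-c"])
    print(ring.node_for("bucket/photo-0042.jpg"))  # same answer on every client
    ```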
  • RoCE vs. iWARP
    Tim Lustig, Mellanox; Fred Zhang, Intel; John Kim, Mellanox | Recorded: Aug 22, 2018 | 64 mins
    Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.

    In this SNIA ESF “Great Storage Debates” series webcast, we examine two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.

    The goal of this presentation is to provide a solid foundation on both RDMA technologies in a vendor-neutral setting that discusses the capabilities and use cases for each so that attendees can become more informed and make educated decisions.

    Join to hear the following questions addressed:

    • Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
    • What are the use cases for RoCE and iWARP, and what differentiates them?
    • UDP/IP and TCP/IP: which protocol uses which, and what are the advantages and disadvantages?
    • What are the software and hardware requirements for each?
    • What are the performance/latency differences of each?

    Join our SNIA experts as they answer all these questions and more on this next Great Storage Debate. A brief sketch of the two transports appears below.

    After you watch the webcast, check out the Q&A blog: http://bit.ly/2OH6su8
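    As a quick reference for the transport question above, the snippet below summarizes, in simplified form, where each Ethernet RDMA variant gets its delivery guarantees; this is condensed from the public specifications, not from the webcast itself.

    ```python
    # Compact, simplified cheat sheet of how the Ethernet RDMA variants
    # are transported and where loss recovery happens.

    RDMA_OVER_ETHERNET = {
        "RoCE v1": {"transport": "Ethernet layer 2 only (not routable)",
                    "loss_recovery": "expects a lossless fabric (e.g. PFC)"},
        "RoCE v2": {"transport": "UDP/IP (well-known UDP port 4791)",
                    "loss_recovery": "NIC/fabric mechanisms such as PFC and ECN"},
        "iWARP":   {"transport": "TCP/IP",
                    "loss_recovery": "standard TCP retransmission"},
    }

    for proto, traits in RDMA_OVER_ETHERNET.items():
        print(f"{proto}: runs over {traits['transport']}; "
              f"loss handled by {traits['loss_recovery']}")
    ```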
  • The SNIA Persistent Memory Security Threat Model
    Doug Voigt, Co-Chair, SNIA NVM Programming TWG and Distinguished Technologist, HPE | Recorded: Aug 21, 2018 | 56 mins
    What new security requirements apply to Persistent Memory (PM)? While many existing security practices such as access control, encryption, multi-tenancy and key management apply to persistent memory, new security threats may result from the differences between PM and storage technologies. The SNIA PM security threat model provides a starting place for exposing system behavior, protocol and implementation security gaps that are specific to PM. This in turn motivates industry groups such as TCG and JEDEC to standardize methods of completing the PM security solution space.
  • Cloud Mobility and Data Movement
    Eric Lakin, University of Michigan; Michelle Tidwell, IBM; Alex McDonald, NetApp | Recorded: Aug 7, 2018 | 60 mins
    We’re increasingly in a multi-cloud environment, with potentially multiple private, public and hybrid cloud implementations in support of a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another.

    That means simplifying the movement of data from cloud to cloud. Data movement and data liberation, the seamless transfer of data from one cloud to another, have become major requirements.
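    In practice, cloud-to-cloud movement usually reduces to streaming a copy and verifying it before the original is deleted. Below is a minimal sketch using stand-in in-memory file objects; a real migration would go through each provider’s SDK with multipart transfers, retries, and server-side checksums.

    ```python
    # Stream-copy with integrity verification: the digest is compared
    # against the source of truth before the original is removed.

    import hashlib, io

    def copy_and_verify(src, dst, chunk_size: int = 1 << 20) -> str:
        """Stream src to dst and return a SHA-256 digest of what was copied."""
        digest = hashlib.sha256()
        while chunk := src.read(chunk_size):
            digest.update(chunk)
            dst.write(chunk)
        return digest.hexdigest()

    # Stand-in "cloud" endpoints for the sketch:
    src, dst = io.BytesIO(b"object payload"), io.BytesIO()
    print(copy_and_verify(src, dst))
    ```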

    In this webcast, we’re going to explore some of these data movement and mobility issues with real-world examples from the University of Michigan. Register now for discussions on:

    • How do we secure data both at-rest and in-transit?
    • Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
    • How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
    • Should the application of the data influence how (and even if) we move the data?
    • How can data in the cloud be leveraged for multiple use cases?
  • FCoE vs. iSCSI vs. iSER
    J Metz, Cisco; Saqib Jang, Chelsio; Rob Davis, Mellanox; Tim Lustig, Mellanox | Recorded: Jun 21, 2018 | 62 mins
    The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

    One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

    Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.
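    A conceptual sketch of the encapsulation differences follows; the field layouts are deliberately simplified and are not the real wire formats of any of these protocols.

    ```python
    # Simplified model of how each option carries SCSI over Ethernet.

    from dataclasses import dataclass

    @dataclass
    class ScsiCommand:          # the payload every option ultimately carries
        cdb: bytes

    @dataclass
    class FCoEFrame:            # FCoE: a full Fibre Channel frame inside Ethernet
        ethernet_header: bytes
        fc_frame: bytes         # FC header + ScsiCommand, FC semantics preserved

    @dataclass
    class IscsiPdu:             # iSCSI: SCSI carried in a TCP/IP byte stream
        tcp_ip_headers: bytes
        basic_header_segment: bytes
        payload: ScsiCommand

    # iSER keeps the iSCSI login/control plane but hands bulk data transfer
    # to RDMA, so payloads move NIC-to-memory without intermediate copies.
    ```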

    That leads to several questions about FCoE, iSCSI and iSER:

    • If we can run various network storage protocols over Ethernet, what differentiates them?
    • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
    • How are they structured?
    • What software and hardware do they require?
    • How are they implemented, configured and managed?
    • Do they perform differently?
    • What do you need to do to take advantage of them in the data center?
    • What are the best use cases for each?

    Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

    After you watch the webcast, check out the Q&A blog from our presenters: http://bit.ly/2NyJKUM
