Storage

  • Networking Requirements for Scale-Out Storage
    John Kim, Mellanox; Saqib Jang, Chelsio; Fred Zhang, Intel. Recorded: Nov 14, 2018 (44 mins)
    Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

    But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicate with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:

    • Scale-out storage solutions and what workloads they can address
    • How your network may need to evolve to support scale-out storage
    • Network considerations to ensure performance for demanding workloads
    • Key considerations for all-flash storage
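    As a rough illustration of the replication and fan-out traffic described above, here is a minimal back-of-the-envelope sketch in Python; the node count, replication factor, and client throughput figures are invented assumptions, not numbers from the webcast.

```python
# Back-of-the-envelope estimate of per-node network load in a scale-out
# storage cluster. All inputs are hypothetical; replace with your own figures.

def per_node_network_load_gbps(client_write_gbps: float,
                               client_read_gbps: float,
                               nodes: int,
                               replicas: int) -> dict:
    """Estimate steady-state network traffic per storage node.

    Assumes client I/O is spread evenly across nodes and that every write is
    synchronously copied to (replicas - 1) peer nodes. Ignores metadata
    traffic, rebuilds, and protocol overhead.
    """
    write_in = client_write_gbps / nodes           # writes arriving from clients
    replication_out = write_in * (replicas - 1)    # copies sent to peer nodes
    replication_in = replication_out               # copies received from peers
    read_out = client_read_gbps / nodes            # reads served back to clients
    return {
        "ingress_gbps": write_in + replication_in,
        "egress_gbps": read_out + replication_out,
    }

if __name__ == "__main__":
    # Hypothetical cluster: 8 nodes, 3-way replication, 40 Gb/s of client
    # writes and 60 Gb/s of client reads across the whole cluster.
    load = per_node_network_load_gbps(40.0, 60.0, nodes=8, replicas=3)
    print(f"Per-node ingress: {load['ingress_gbps']:.1f} Gb/s")
    print(f"Per-node egress:  {load['egress_gbps']:.1f} Gb/s")
```

    Even in this simplified model, replication traffic rivals the client traffic itself, which is one reason the presenters argue that network architecture and speed matter for scale-out storage, especially flash-based systems.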
  • Create a Smarter and More Economic Cloud Storage Architecture
    Michelle Tidwell, IBM; Eric Lakin, University of Michigan; Mike Jochimsen, Kaminario; Alex McDonald, NetApp. Recorded: Nov 7, 2018 (55 mins)
    Building a cloud storage architecture requires storage vendors, cloud service providers, and large enterprises to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture.

    Economic:
    Cloud infrastructure is often procured by service providers and large enterprises in the traditional way: prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures with slow cost recovery based on fluctuating customer adoption. Giving cloud service providers flexibility in the procurement model for their storage allows them to more closely align expenditure on infrastructure resources with cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.

    Technical:
    Clouds inherently require scalability that is often unpredictable, both up and down. Building a storage architecture with the ability to rapidly allocate resources for a specific customer need, and to reallocate resources as customer requirements change, allows the cloud service provider to optimize storage capacity and performance pools in the data center without compromising responsiveness to changing needs. Such an architecture should also align with the datacenter-level orchestration system to allow an even higher level of resource optimization and flexibility.

    In this webcast, you will learn:
    • How modern storage technology allows you to build this infrastructure
    • The role of software-defined storage
    • Accounting principles
    • How to model the cloud costs of new applications and/or of re-engineering existing applications
    • Performance considerations
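    To make the cost-modeling bullet above concrete, here is a minimal sketch that compares a traditional prepay-and-over-provision purchase with a consumption-based model for a hypothetical, steadily growing workload; every price, growth rate, and term below is an invented assumption, not a figure from the webcast.

```python
# Toy comparison of two procurement models for cloud storage capacity.
# All prices, growth rates, and terms are hypothetical assumptions.

def prepay_cost(peak_capacity_tb: float, capex_per_tb: float) -> float:
    """Traditional model: buy (and over-provision for) peak capacity up front."""
    return peak_capacity_tb * capex_per_tb

def pay_as_you_grow_cost(monthly_demand_tb: list[float],
                         opex_per_tb_month: float) -> float:
    """Flexible model: pay each month only for the capacity actually consumed."""
    return sum(tb * opex_per_tb_month for tb in monthly_demand_tb)

if __name__ == "__main__":
    # Hypothetical demand: start at 100 TB and grow 10 TB per month for 3 years.
    months = 36
    demand = [100 + 10 * m for m in range(months)]

    capex = prepay_cost(peak_capacity_tb=max(demand) * 1.3,   # 30% over-provision
                        capex_per_tb=250.0)
    opex = pay_as_you_grow_cost(demand, opex_per_tb_month=9.0)

    print(f"Prepaid CapEx for over-provisioned peak: ${capex:,.0f}")
    print(f"Consumption-based OpEx over {months} months: ${opex:,.0f}")
```

    Which model comes out cheaper depends entirely on the assumed prices and growth curve; the point of the webcast is that aligning expenditure with actual consumption changes how CapEx and OpEx budgets are used.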
  • Extending RDMA for Persistent Memory over Fabrics
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox. Recorded: Oct 25, 2018 (60 mins)
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
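    The persistence gap described above can be pictured with a toy model: a remote write may complete at the sender while the data still sits in a volatile buffer at the target, and only an explicit flush/commit confirms durability. The sketch below illustrates the concept only; it is not the RDMA wire-protocol extension discussed in the webcast, and all names in it are hypothetical.

```python
# Toy model of the PMoF persistence problem: a remote "write" can complete at
# the initiator while the data still sits in a volatile buffer at the target.
# This illustrates the concept only, not the actual RDMA wire protocol.

class TargetNode:
    def __init__(self) -> None:
        self.volatile_buffer: dict[int, bytes] = {}    # e.g. NIC/PCIe/cache buffering
        self.persistent_memory: dict[int, bytes] = {}  # survives power loss

    def rdma_write(self, addr: int, data: bytes) -> None:
        """Models an RDMA Write: placement at the target, but not persistence."""
        self.volatile_buffer[addr] = data

    def flush(self) -> None:
        """Models a hypothetical commit/flush extension: drain buffered writes to
        persistent memory and only then acknowledge, so the initiator knows the
        data is durable (and earlier writes are ordered before the flush)."""
        self.persistent_memory.update(self.volatile_buffer)
        self.volatile_buffer.clear()

    def power_failure(self) -> None:
        """Volatile contents are lost on a crash or power loss."""
        self.volatile_buffer.clear()

if __name__ == "__main__":
    target = TargetNode()
    target.rdma_write(0x1000, b"record-1")   # write 'completes' at the sender...
    target.power_failure()                   # ...but a crash before a flush loses it
    print("after crash without flush:", target.persistent_memory)  # {}

    target.rdma_write(0x1000, b"record-1")
    target.flush()                           # flush confirms persistence
    target.power_failure()
    print("after flush then crash:   ", target.persistent_memory)  # data survived
```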
  • Flash Storage with 24G SAS Leads the Way in Crunching Big Data
    Greg McSorley, Amphenol; Rick Kutcipal, Broadcom; Kevin Marks, Dell; Jeremiah Tussey, Microsemi. Recorded: Oct 24, 2018 (49 mins)
    The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.
  • Protocol Analysis for High-Speed Fibre Channel Fabrics
    David Rodgers, Teledyne LeCroy; Yamini Shastry, Viavi Solutions; Joe Kimpler, ATTO. Recorded: Oct 10, 2018 (62 mins)
    Protocol Analysis for High-Speed Fibre Channel Fabrics in the Data Center: Aka, Saving Your SAN (& Sanity)

    The driving force behind adopting new tools and processes in test and measurement practices is the desire to understand, predict, and mitigate the impact of Sick But Not Dead (SBND) conditions in datacenter fabrics. The growth and centralization of mission-critical datacenter SAN environments has exposed the fact that many small, seemingly insignificant problems can become large-scale, impactful events unless properly contained or controlled.

    Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass or supplant portions of the traditional network stack for expedited data delivery (e.g., FCoE, iWARP, NVMe over Fabrics) place additional analytical demands on the datacenter manager.
    All tools have limitations in their effectiveness and areas of coverage, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed; recognizing and reducing the effect of those limitations is essential.

    This webinar will introduce participants to Protocol Analysis tools and how they may be incorporated into the “best practices” application of SAN problem solving. We will review:
    • The protocol of the PHY (physical layer)
    • Use of “in-line” capture tools
    • Benefits of purposeful error injection for developing and supporting today’s high-speed Fibre Channel storage fabrics

    After the webcast, check out the Q&A blog at http://bit.ly/2P0hsqp
  • The 100-Year Archive Survey Results 2007-2017
    Sam Fineberg, Thomas Rivera, Bob Rogers. Recorded: Oct 10, 2018 (60 mins)
    The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition, the presentation will cover the following topics:
    • How the use of storage for archiving has evolved over ten years
    • What types of information are now being retained, and for how long
    • Changes in corporate practices
    • Impact of technology changes such as the cloud
  • [Ep.22] Ask the Expert: The Challenges of Unstructured Data
    Erik Ottem, Director of Product Marketing, Western Digital; Erin Junio, Content Manager, BrightTALK. Recorded: Sep 27, 2018 (41 mins)
    This webinar is part of BrightTALK's Ask the Expert Series.

    Many organizations are drowning in unstructured data, which can break traditional storage infrastructure. We'll take a look at different ways to handle unstructured data with particular emphasis on object storage.

    Join this live Q&A with Erik Ottem, Director of Product Marketing at Western Digital, to:

    - Understand the definition of unstructured data
    - Review storage infrastructure block/file/object strengths and weaknesses
    - Discuss data integrity and system availability
    - Learn about data management at scale
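    As a small illustration of the object-storage approach the webcast emphasizes, the sketch below stores and retrieves an unstructured blob (plus its metadata) through an S3-compatible API using boto3. The endpoint, bucket, and key names are placeholders, and the webcast itself is not tied to any particular object API.

```python
# Minimal object-storage example: store and fetch an unstructured blob plus
# user metadata through an S3-compatible API. Bucket, key, and endpoint are
# placeholders; credentials are assumed to be configured in the environment.

import boto3

def store_and_fetch() -> bytes:
    s3 = boto3.client("s3", endpoint_url="https://objects.example.com")  # hypothetical endpoint

    # Objects live in a flat namespace (bucket + key) and carry their own
    # metadata, which is what makes object storage a natural fit for
    # unstructured data at scale.
    s3.put_object(
        Bucket="media-archive",               # placeholder bucket
        Key="videos/2018/interview-042.mp4",  # placeholder key
        Body=b"...raw video bytes...",
        Metadata={"camera": "A7", "location": "lab-3"},
    )

    response = s3.get_object(Bucket="media-archive",
                             Key="videos/2018/interview-042.mp4")
    return response["Body"].read()

if __name__ == "__main__":
    data = store_and_fetch()
    print(f"retrieved {len(data)} bytes")
```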
  • Centralized vs. Distributed Storage
    John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco. Recorded: Sep 11, 2018 (63 mins)
    In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

    Recently, distributed storage, in which storage lives in multiple locations but can still be shared, has become more popular. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage. Attend this webcast to learn about:
    • Pros and cons of centralized vs. distributed storage
    • Typical use cases for centralized and distributed storage
    • How distributed storage works for SAN, NAS, parallel file systems, and object storage
    • How hyperconverged infrastructure has introduced a new way of consuming storage

    After the webcast, please check out our Q&A blog at http://bit.ly/2xSajxJ
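    To make the scale-up-together claim above concrete, here is a minimal sketch showing how a distributed cluster’s usable capacity and aggregate throughput grow as nodes are added; the per-node figures and replication factor are invented for illustration.

```python
# Minimal sketch: capacity and performance both grow as nodes are added to a
# distributed storage cluster. Per-node figures and replication factor are
# invented for illustration only.

def cluster_profile(nodes: int,
                    raw_tb_per_node: float = 50.0,
                    gbps_per_node: float = 25.0,
                    replicas: int = 3) -> dict:
    """Usable capacity shrinks by the replication factor; throughput aggregates
    across nodes (ignoring network limits, rebuilds, and hot spots)."""
    return {
        "nodes": nodes,
        "usable_tb": nodes * raw_tb_per_node / replicas,
        "aggregate_gbps": nodes * gbps_per_node,
    }

if __name__ == "__main__":
    for n in (3, 6, 12):
        p = cluster_profile(n)
        print(f"{p['nodes']:>2} nodes: {p['usable_tb']:>6.1f} TB usable, "
              f"{p['aggregate_gbps']:>5.0f} Gb/s aggregate")
```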
  • Fibre Channel Interoperability
    Barry Maskas, HPE; Tim Sheehan, University of New Hampshire Interoperability Lab; David Rodgers, Teledyne LeCroy. Recorded: Aug 23, 2018 (68 mins)
    Interoperability is a primary basis for the predictable behavior of a Fibre Channel (FC) SAN. FC interoperability implies standards conformance by definition. Interoperability also implies exchanges between a range of products, or similar products from one or more different suppliers, or even between past and future revisions of the same products. Interoperability may be developed as a special measure between two products, excluding the rest, and still be standards conformant. When a supplier is forced to adapt its system to a system that is not based on standards, that is not interoperability but merely compatibility.

    Every FC hardware and software supplier publishes an interoperability matrix and per-product conformance statements based on validated conformance, compatibility, and interoperability. Interoperability has many dimensions: the physical layer, optics, and cables; port type and protocol; server, storage, and switch fabric operating system versions; standards and feature implementation compatibility; and use-case topologies based on the connectivity protocol (F-port, N-Port, NP-port, E-port, TE-port, D-port).

    In this session we will delve into the many dimensions of FC interoperability, discussing:

    • Standards and conformance
    • Validation of conformance and interoperability
    • FC-NVMe conformance and interoperability
    • Interoperability matrices
    • Multi-generational interoperability
    • Use case examples of interoperability

    After you watch the webcast, check out the FC Interoperability Q&A blog https://fibrechannel.org/a-qa-on-fibre-channel-interoperability/
  • RoCE vs. iWARP
    Tim Lustig, Mellanox; Fred Zhang, Intel; John Kim, Mellanox. Recorded: Aug 22, 2018 (64 mins)
    Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.

    In this next webcast in the SNIA ESF “Great Storage Debates” series, we’ll examine two well-known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the CPU overhead of transferring data among servers and storage systems.

    The goal of this presentation is to provide a solid, vendor-neutral foundation on both RDMA technologies, discussing the capabilities and use cases of each so that attendees can make informed, educated decisions.

    Join to hear the following questions addressed:

    • Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
    • What are the use cases for RoCE and iWARP, and what differentiates them?
    • UDP/IP and TCP/IP: which protocol uses which, and what are the advantages and disadvantages?
    • What are the software and hardware requirements for each?
    • What are the performance and latency differences?

    Join our SNIA experts as they answer all these questions and more in this next Great Storage Debate.

    After you watch the webcast, check out the Q&A blog http://bit.ly/2OH6su8
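    For readers who want a quick orientation before watching, the sketch below lists a few generally known differences between the two protocols (for example, RoCEv2 runs over UDP/IP while iWARP runs over TCP/IP). It reflects common background guidance, not conclusions drawn from this particular debate.

```python
# Commonly cited differences between the two Ethernet RDMA protocols.
# This is general background, not a summary of the webcast's conclusions.

RDMA_OVER_ETHERNET = {
    "RoCEv2": {
        "transport": "UDP/IP (RoCEv1 is Ethernet layer-2 only)",
        "loss_handling": "typically deployed with lossless Ethernet (PFC/ECN)",
        "standards_body": "InfiniBand Trade Association",
    },
    "iWARP": {
        "transport": "TCP/IP",
        "loss_handling": "relies on TCP retransmission; runs on ordinary lossy networks",
        "standards_body": "IETF",
    },
}

def suggest_protocol(lossless_ethernet_available: bool) -> str:
    """Very rough rule of thumb, not a substitute for the debate itself."""
    return "RoCEv2" if lossless_ethernet_available else "iWARP"

if __name__ == "__main__":
    for name, props in RDMA_OVER_ETHERNET.items():
        print(name)
        for key, value in props.items():
            print(f"  {key}: {value}")
    print("suggestion without PFC/ECN configured:", suggest_protocol(False))
```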
