2017 European Storage Research and Trends

ESG, in cooperation with SNIA Europe, will present the key findings from the 2017 European Storage Research Report. The content will focus on key areas of technology spending and forecasts, as well as highlighting customer reaction to the adoption of new storage technologies.
Recorded Dec 11 2017 56 mins
Presented by
Alex McDonald, SNIA Europe and Mark Peters, Enterprise Strategy Group

  • Networking Requirements for Hyperconvergence Feb 5 2019 6:00 pm UTC 75 mins
    Christine McMonigal, Intel; Saqib Jang, Chelsio; Alex McDonald, NetApp
    “Why can’t I add a 33rd node?”

    One of the great advantages of hyperconverged infrastructure (HCI) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, it is the “Happy Meal” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.

    In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.

    In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:

    •The impact of metadata on the network
    •What happens as we add additional nodes
    •How to right-size the network for growth
    •Tricks of the trade from the networking perspective to make your HCI work better
    •And more…

    Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
  • Virtualization and Storage Networking Best Practices Jan 17 2019 6:00 pm UTC 75 mins
    Cody Hosterman, Pure Storage; Jason Massae, VMware; J Metz, Cisco
    With all the different storage arrays and connectivity protocols available today, knowing the best practices can help improve operational efficiency and ensure resilient operations. VMware’s global storage services team has reported on many of the common service calls they receive. In this webcast, we will share those insights and lessons learned by discussing:
    •Common mistakes when setting up storage arrays
    •Most valuable configurations
    •How to maximize the value of your array and vSphere
  • Emerging Memory Poised to Explode Dec 11 2018 7:00 pm UTC 75 mins
    Moderator: Alex McDonald, SNIA SSSI Co-Chair; Presenters: Tom Coughlin, Coughlin Associates & Jim Handy, Objective Analysis
    Join SSSI members and respected analysts Tom Coughlin and Jim Handy for a look into their new Emerging Memory and Storage Technologies Report. Tom and Jim will examine emerging memory technologies and their interaction with standard memories, how a new memory layer improves computer performance, and the technical advantages and economies of scale that contribute to the enthusiasm for emerging memories. They will provide an outlook on market projections and enabling and driving applications. The webcast is the perfect preparation for the 2019 SNIA Persistent Memory Summit on January 24, 2019.
  • Take the Leap to SNIA’s Storage Management Initiative Specification 1.8 Recorded: Dec 5 2018 36 mins
    Mike Walker, former Chair SNIA SMI TWG and former IBM Engineer, Don Deel, SNIA SMI Board Chair, SMI TWG Chair, NetApp
    If you’re a storage equipment vendor, management software vendor or end-user of the ISO approved SNIA Storage Management Initiative Specification (SMI-S), you won’t want to miss this presentation. Enterprise storage industry expert Mike Walker will provide an overview of new indications, methods, properties and profiles of SMI-S 1.7 and the newly introduced version, SMI-S 1.8. If you haven’t yet made the jump to SMI-S 1.7, Walker will explain why it’s important to go directly to SMI-S 1.8.
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Dec 4 2018 39 mins
    Daniel Sazbon, SNIA Europe Chair, IBM; Alex McDonald, SNIA Europe Vice Chair, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish including the new functionality added in version 1.0.6 released in March, 2018.
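
    As a rough illustration of the RESTful/JSON approach described above, a management client simply issues HTTP requests and parses JSON responses. The short Python sketch below assumes a hypothetical management host, omits authentication for brevity, and uses “Storage” only as an illustrative collection name; it is not an excerpt from the specification.

    import requests

    BASE = "https://storage-mgmt.example.com"   # hypothetical management endpoint

    # Redfish (and therefore Swordfish) exposes a JSON service root at /redfish/v1/
    service_root = requests.get(f"{BASE}/redfish/v1/").json()

    # Follow a collection link and list its members; each member is itself a RESTful resource URI
    storage = requests.get(f"{BASE}/redfish/v1/Storage").json()
    for member in storage.get("Members", []):
        print(member["@odata.id"])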
  • Networking Requirements for Ethernet Scale-Out Storage Recorded: Nov 14 2018 44 mins
    John Kim, Mellanox; Saqib Jang, Chelsio; Fred Zhang, Intel
    Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

    But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicate with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:

    •Scale-out storage solutions and what workloads they can address
    •How your network may need to evolve to support scale-out storage
    •Network considerations to ensure performance for demanding workloads
    •Key considerations for all flash

    After you watch the webcast, check out the Q&A blog: http://bit.ly/scale-out-q-a
  • Create a Smarter and More Economic Cloud Storage Architecture Recorded: Nov 7 2018 55 mins
    Michelle Tidwell, IBM; Eric Lakin, University of Michigan; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
    Building a cloud storage architecture requires storage vendors, cloud service providers and large enterprises to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture.

    Economic:
    Cloud infrastructure is often procured by service providers and large enterprises in the traditional way – prepay for expected future storage needs and over-provision for unexpected changes in demand. This requires large capital expenditures with slow cost recovery based on fluctuating customer adoption. Giving these cloud service providers flexibility in the procurement model for their storage allows them to more closely align the expenditure on infrastructure resources with the cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.

    Technical:
    Clouds inherently require often-unpredictable scalability – both up and down. Building a storage architecture with the ability to rapidly allocate resources for a specific customer need, and to reallocate resources as customer requirements change, allows the cloud service provider to optimize storage capacity and performance pools in the data center without compromising responsiveness to those changes. Such an architecture should also align with the datacenter-level orchestration system to allow for an even higher level of resource optimization and flexibility.

    In this webcast, you will learn:
    •How modern storage technology allows you to build this infrastructure
    •The role of software defined storage
    •Accounting principles
    •How to model cloud costs of new applications and/or of re-engineering existing applications
    •Performance considerations
  • Extending RDMA for Persistent Memory over Fabrics Recorded: Oct 25 2018 60 mins
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
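
    To make the ordering problem concrete: the pattern these protocol extensions enable is “write, then explicitly confirm persistence, then acknowledge.” The sketch below is only an abstract illustration of that pattern in Python; the function names are hypothetical placeholders, not a real RDMA API.

    def rdma_write(payload: bytes, remote_addr: int) -> None:
        """Hypothetical placeholder: post an RDMA Write. Completion at the sender
        only means the local buffer may be reused, not that the data is persistent
        at the target."""
        ...

    def rdma_flush(remote_addr: int, length: int) -> None:
        """Hypothetical placeholder for a flush/commit extension: completes only
        once previously written data is durable in the target's persistent memory."""
        ...

    def commit_record(record: bytes, remote_addr: int) -> None:
        rdma_write(record, remote_addr)        # 1. push the data to remote PM
        rdma_flush(remote_addr, len(record))   # 2. confirm persistence before acknowledging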

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check-out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
  • Flash Storage with 24G SAS Leads the Way in Crunching Big Data Recorded: Oct 24 2018 49 mins
    Greg McSorley, Amphenol; Rick Kutcipal, Broadcom; Kevin Marks, Dell; Jeremiah Tussey, Microsemi
    The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.
  • The 100-Year Archive Survey Results 2007-2017 Recorded: Oct 10 2018 60 mins
    Sam Fineberg, Thomas Rivera, Bob Rogers
    The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics:
    · How the use of storage for archiving has evolved in ten years
    · What type of information is now being retained and for how long
    · Changes in corporate practices
    · Impact of technology changes such as Cloud
  • Centralized vs. Distributed Storage Recorded: Sep 11 2018 63 mins
    John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco
    In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

    Recently, distributed storage – where storage lives in multiple locations but can still be shared – has become more popular. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and, in the hyperconverged use case, to use each node (server) for both compute and storage. Attend this webcast to learn about:
    •Pros and cons of centralized vs. distributed storage
    •Typical use cases for centralized and distributed storage
    •How distributed works for SAN, NAS, parallel file systems, and object storage
    •How hyperconverged has introduced a new way of consuming storage

    After the webcast, please check out our Q&A blog http://bit.ly/2xSajxJ
  • RoCE vs. iWARP Recorded: Aug 22 2018 64 mins
    Tim Lustig, Mellanox; Fred Zhang, Intel; John Kim, Mellanox
    Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.

    In this SNIA ESF “Great Storage Debates” series webcast, we’ll examine two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.

    The goal of this presentation is to provide a solid foundation on both RDMA technologies in a vendor-neutral setting that discusses the capabilities and use cases for each so that attendees can become more informed and make educated decisions.

    Join to hear the following questions addressed:

    •Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
    •What are the use cases for RoCE and iWARP, and what differentiates them?
    •UDP/IP and TCP/IP: which uses which and what are the advantages and disadvantages?
    •What are the software and hardware requirements for each?
    •What are the performance/latency differences of each?

    Join our SNIA experts as they answer all these questions and more on this next Great Storage Debate.

    After you watch the webcast, check out the Q&A blog http://bit.ly/2OH6su8
  • The SNIA Persistent Memory Security Threat Model Recorded: Aug 21 2018 56 mins
    Doug Voigt, Co-Chair, SNIA NVM Programming TWG and Distinguished Technologist, HPE
    What new security requirements apply to Persistent Memory (PM)? While many existing security practices such as access control, encryption, multi-tenancy and key management apply to persistent memory, new security threats may result from the differences between PM and storage technologies. The SNIA PM security threat model provides a starting place for exposing system behavior, protocol and implementation security gaps that are specific to PM. This in turn motivates industry groups such as TCG and JEDEC to standardize methods of completing the PM security solution space.
  • Cloud Mobility and Data Movement Recorded: Aug 7 2018 60 mins
    Eric Lakin, University of Michigan; Michelle Tidwell, IBM; Alex McDonald, NetApp
    We’re increasingly in a multi-cloud environment, with potentially multiple private, public and hybrid cloud implementations in support of a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another.

    That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – has become a major requirement.

    In this webcast, we’re going to explore some of these data movement and mobility issues with real-world examples from the University of Michigan. Register now for discussions on:

    •How do we secure data both at-rest and in-transit?
    •Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
    •How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
    •Should the application of the data influence how (and even if) we move the data?
    •How can data in the cloud be leveraged for multiple use cases?
  • FCoE vs. iSCSI vs. iSER Recorded: Jun 21 2018 62 mins
    J Metz, Cisco; Saqib Jang, Chelsio; Rob Davis, Mellanox; Tim Lustig, Mellanox
    The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. As in past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather to provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

    One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

    Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

    That leads to several questions about FCoE, iSCSI and iSER:

    •If we can run various network storage protocols over Ethernet, what differentiates them?
    •What are the advantages and disadvantages of FCoE, iSCSI and iSER?
    •How are they structured?
    •What software and hardware do they require?
    •How are they implemented, configured and managed?
    •Do they perform differently?
    •What do you need to do to take advantage of them in the data center?
    •What are the best use cases for each?

    Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

    After you watch the webcast, check out the Q&A blog from our presenters http://bit.ly/2NyJKUM
  • Everything You Wanted To Know...But Were Too Proud To Ask - Storage Controllers Recorded: May 15 2018 48 mins
    Peter Onufryk, Microsemi, Craig Carlson, Cavium, Chad Hintz, Cisco, John Kim, Mellanox, J Metz, Cisco
    Are you a control freak? Have you ever wondered what the difference is between a storage controller, a RAID controller, a PCIe controller, or a metadata controller? What about an NVMe controller? Aren’t they all the same thing?

    In part Aqua of the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we’re taking the unusual step of focusing on a term that is used constantly but often has different meanings. A controller that manages hardware has very different requirements from one that manages an entire system-wide control plane. From the outside looking in, it may be easy to get confused. You can even have controllers managing other controllers!
    Here we’ll be revisiting some of the pieces we talked about in Part Chartreuse [https://www.brighttalk.com/webcast/663/215131], but with a bit more focus on the variety we have to play with:
    •What do we mean when we say “controller?”
    •How are the systems being managed different?
    •How are controllers used in various storage entities: drives, SSDs, storage networks, software-defined storage
    •How do controller systems work, and what are the trade-offs?
    •How do storage controllers protect against Spectre and Meltdown?
    Join us to learn more about the workhorse behind your favorite storage systems.

    After you watch the webcast, check out the Q&A blog at http://bit.ly/2JgcHlM
  • What’s Next in Storage: Analysts and Experts Share their Predictions Recorded: May 2 2018 43 mins
    Greg McSorley, SNIA Technical Council (non-voting); Rick Kutcipal, President, STA; Don Jeanette of TRENDFOCUS
    You won’t want to miss the opportunity to hear leading data storage experts provide their insights on prominent technologies that are shaping the market. With the exponential rise in demand for high-capacity, secure storage systems, it’s critical to understand the key factors influencing adoption and where the highest growth is expected. From SSDs and HDDs to storage interfaces and NAND devices, get the latest information you need to shape key strategic directions and remain competitive.
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Apr 19 2018 62 mins
    Richelle Ahlvers, Broadcom; Don Deel, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish including the new functionality added in version 1.0.6 released in March, 2018.
  • File vs. Block vs. Object Storage Recorded: Apr 17 2018 67 mins
    Mark Carlson, Toshiba, Alex McDonald, NetApp, Saqib Jang, Chelsio, John Kim, Mellanox
    When it comes to storage, a byte is a byte is a byte, isn’t it? One of the truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?

    The only way to manage large quantities of data is to make it addressable in larger pieces, above the byte level. For that, we’ve designed sets of data management protocols that help us do several things: address large lumps of data by some kind of name or handle, organize it for storage on external storage devices with different characteristics, and provide protocols that allow us to programmatically write and read it.

    In this webcast, we'll compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own use cases, and advantages and disadvantages; each provides simple to sophisticated data management; and each makes different demands on storage devices and programming technologies.
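
    To make the comparison concrete before the discussion points below, the short Python sketch that follows shows roughly how each access method addresses data. The block device path, file path and object URL are hypothetical examples, not references to any particular product.

    import requests  # third-party HTTP client, used only for the object example

    BLOCK_SIZE = 4096

    # Block: data is addressed by device plus fixed-size block offset
    with open("/dev/sdb", "rb") as dev:                 # hypothetical block device
        dev.seek(100 * BLOCK_SIZE)                      # seek to logical block 100
        block = dev.read(BLOCK_SIZE)

    # File: data is addressed by a hierarchical path and read as a stream of bytes
    with open("/mnt/share/report.csv", "rb") as f:      # hypothetical NFS/SMB mount
        stream = f.read()

    # Object: data is addressed by key over an HTTP/RESTful interface
    resp = requests.get("https://objects.example.com/bucket1/report.csv")  # hypothetical
    blob = resp.content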

    Join us as we discuss and debate:

    Storage devices
    - How different types of storage drive different management & access solutions
    Block
    - Where everything is in fixed-size chunks
    - SCSI and SCSI-based protocols, and how FC and iSCSI fit in
    File
    - When everything is a stream of bytes
    - NFS and SMB
    Object
    - When everything is a blob
    - HTTP, key value and RESTful interfaces
    - When files, blocks and objects collide

    After you watch the webcast, check out the Q&A blog: https://wp.me/p1kTSa-bh
  • Containers and Persistent Memory Recorded: Apr 17 2018 33 mins
    Arthur Sainio, Co-Chair, SNIA Persistent Memory and NVDIMM Special Interest Group
    Containers can make it easier for developers to know that their software will run, no matter where it is deployed. What do customers, storage developers, and the industry want to see to fully unlock the potential of persistent memory in a container environment? This presentation will discuss how persistent memory is a revolutionary technology that will boost the performance of next-generation applications and libraries packaged into containers.

    You’ll learn:
    •What SNIA is doing to advance persistent memory
    •What the ecosystem enablement efforts are around persistent memory solutions
    •How NVDIMMs are paving the way for plug-n-play adoption into container environments

    About the presenter:
    Arthur is Co-Chair of the SNIA Persistent Memory and NVDIMM Special Interest Group, which accelerates the awareness and adoption of Persistent Memories and NVDIMMs for computing architectures.

    As a Director of Product Marketing at SMART Modular Technologies, Arthur has been driving new product launches and business development activities at SMART since 1998. Prior to SMART, Arthur worked as a product manager at Hitachi Semiconductor America, where his focus was on DRAM, SRAM and Flash technologies.

    Arthur holds an MBA from San Francisco State University and an MS from Arizona State University.
SNIA
The Storage Networking Industry Association (SNIA) is a non-profit organization made up of member companies spanning information technology. A globally recognized and trusted authority, SNIA’s mission is to lead the storage industry in developing and promoting vendor-neutral architectures, standards and educational services that facilitate the efficient management, movement and security of information.
