What’s Next: Software-Defined Storage as a Service Without Legacy Limitations

Storage is where your data lives, and it is needed to run your workloads on premises, in a colocation facility, in the cloud, or on the move. Many advances have been made in data management capabilities, such as virtualizing the storage software layer and using cloud and hyper-converged hosting platforms. However, the software used to run storage systems continues to rely on traditional RAID and data management methods.

This approach continues to impose limitations on storage performance and agility. Next-generation storage software must be built from the ground up to provide better performance and flexibility.

Madison Cloud has teamed with StorONE to deliver new software-defined storage capabilities with an entirely new software stack. Imagine using the same drive pool to simultaneously deliver block, file and object storage services. Forget the complexity of managing RAID groups and lengthy rebuilds. Access the full IOPS potential of your NVMe/SSD/HDD drives. Mix & match different drive types and sizes in a single system. Take as many snapshots as you like, without suffering from performance degradation and consuming valuable drive space. Use the largest available disk drives without RAID overhead and collapse your data center footprint in weeks, not years.

Simplify storage procurement by flattening complex pricing structures into a single pricing tier that provides petabyte-scale block, file and object storage in a single platform, for as little as $0.01/GB/month.
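
As a rough back-of-the-envelope check of that rate (a sketch only; actual pricing depends on the Madison Cloud and StorONE offering, and the capacity figure below is just an assumed example):

```python
# Hypothetical illustration of the quoted $0.01/GB/month rate.
rate_per_gb_month = 0.01       # USD per GB per month, as quoted above
capacity_gb = 1_000_000        # 1 PB expressed in (decimal) gigabytes

monthly_cost = rate_per_gb_month * capacity_gb
print(f"1 PB at ${rate_per_gb_month}/GB/month ≈ ${monthly_cost:,.0f}/month")
# prints: 1 PB at $0.01/GB/month ≈ $10,000/month
```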

Your new simplified data management fabric now delivers what you need, when you need it, in a 100% utility model.

Join us to learn how Madison Cloud and StorONE can deliver the data management platform you always wanted.
Recorded: Apr 21 2020 59 mins
Presented by
Tom Bendien, Gal Naor, Guy Loewenberg, Randall van Allen

  • Not Again! Data Deduplication for Storage Systems Nov 10 2020 6:00 pm UTC 75 mins
    Abhishek Rajimwale, Dell; John Kim, NVIDIA; Alex McDonald, SNIA NSF Vice Chair
    Organizations inevitably store multiple copies of the same data. Users and applications store the same files over and over, intentionally or inadvertently. Developers, testers and analysts keep many similar copies of the same data. And backup programs copy the same or similar files daily, often to multiple locations or storage devices. It’s not unusual to end up with some data replicated thousands of times.

    So how do we stop the duplication madness? Join this webcast where we’ll discuss how to reduce the number of copies of data that get stored, mirrored, or backed up as we discuss:

    •Should I eliminate duplicates at the desktop, server, storage or backup device?
    •Dedupe technology
    •Local vs. global deduplication
    •Avoiding or reducing data copies (non-dupe)
    •Block-level vs. file- or object-level deduplication
    •In-line vs. post-process deduplication
    •More efficient backup techniques

    Register today (but only once please) for this webcast so you can start saving space and end the extra data replication.
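
    For a concrete feel of the block-level deduplication idea described in this abstract, here is a minimal sketch (the 4 KiB chunk size and helper names are assumptions; real systems add variable-size chunking, collision handling and persistent on-disk indexes):

    ```python
    import hashlib

    CHUNK = 4096                      # fixed-size chunks (bytes); an assumption
    store = {}                        # hash -> chunk: the single-instance store

    def write_dedup(data: bytes):
        """Return the list of chunk hashes ("recipe") that represents `data`."""
        recipe = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha256(chunk).hexdigest()
            store.setdefault(digest, chunk)   # store each unique chunk only once
            recipe.append(digest)
        return recipe

    def read_dedup(recipe):
        """Reassemble the original data from its chunk hashes."""
        return b"".join(store[d] for d in recipe)

    # Writing the same payload twice consumes (almost) no extra space.
    payload = b"the same report, emailed and backed up " * 1000
    r1 = write_dedup(payload)
    r2 = write_dedup(payload)
    assert read_dedup(r1) == read_dedup(r2) == payload
    print(f"logical bytes: {2 * len(payload)}, unique chunks stored: {len(store)}")
    ```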
  • Storage Implications at the Velocity of 5G Streaming Oct 21 2020 5:00 pm UTC 75 mins
    Steve Adams, Intel; Chip Maurer, Dell; Michael Hoard, Intel
    The broad adoption of 5G, Internet of things (IoT) and edge computing will reshape the nature and role of enterprise and cloud storage over the next several years. What building blocks, capabilities and integration methods are needed to make this happen?

    Join this webcast for a discussion on:

    •With 5G, IoT and edge computing, how much data are we talking about?
    •What will be the first applications leading to collaborative data-intelligence streaming?
    •How can low-latency microservices and AI quickly extract insights from large amounts of data?
    •What are the emerging requirements for scalable stream storage, from peta to zetta?
    •How do yesterday’s object-based batch analytic processing (Hadoop) and today’s streaming messaging capabilities (Apache Kafka and RabbitMQ) work together?
    •What are the best approaches for getting data from the edge to the cloud?
  • What’s New in FC-NVMe-2? Oct 15 2020 6:00 pm UTC 75 mins
    Marcus Thordal, Broadcom; Craig Carlson, Marvell; Mark Jones, Broadcom
    Why do we need enhanced error recovery? And, how does it work? In this webcast we explore the fact that “bit errors happen” and how that occurs. We also do a deep dive into the mechanism of the enhanced error recovery added to FC-NVMe-2. Join FCIA experts as they guide you through the intricacies of error detection and recovery to provide the most reliable NVMe over Fibre Channel deployment possible.
  • Technology Implications of Internet of Payments Oct 14 2020 5:00 pm UTC 75 mins
    Glyn Bowden, HPE; Richard George, Health Life Prosperity Shared Ltd; Jim Fister, The Decision Place
    Electronic payments, once the purview of a few companies, have expanded to include a variety of financial and technology companies. Internet of Payment (IoP) enables payment processing over many kinds of IoT devices and has also led to the emergence of the micro-transaction. The growth of independent payment services offering e-commerce solutions, such as Square, and the entry of new ways to pay, such as Apple Pay, mean that a variety of devices and technologies have also come into wide use.

    Along with the rise and dispersal of the payment eco-system, more of the assets we exchange for payment are becoming digitized as well. When digital ownership is equivalent to physical ownership, security and scrutiny of those digital platforms and methods take a leap forward in significance.

    Assets and funds are now widely distributed across multiple organizations. Physical asset ownership is even being shared among many stakeholders, resulting in more ownership opportunities for less investment, but in a distributed way.

    In this talk we look at the impact of all of these new principles across multiple use cases and how they affect not only the consumers driving this behavior but also the underlying infrastructure that supports and enables it. We will look particularly at:

    •The cloud network, applications and storage implications of Internet of Payments
    •Use of emerging blockchain capabilities for payment histories and smart contracts
    •Identity and security challenges at the device in addition to point of payment
    •Considerations on architecting IoP solutions for future scale
  • Using Data Literacy to Drive Insight Recorded: Sep 17 2020 48 mins
    Glyn Bowden, HPE; Jim Fister, The Decision Place
    The pandemic has taught data professionals one essential thing: data is like water; when it escapes, it reaches every aspect of the community it inhabits. This fact becomes apparent when the general public has access to statistics, assessments, analysis and even medical journals related to the pandemic, at a scale never seen before.
    Insight is understanding information in context to the degree that you can go beyond the facts presented and make reasonable predictions and suppositions about new instances of that data.
    Having access to data does not automatically grant the reader knowledge of how to interpret that data or the ability to derive insight from it. It is challenging even to judge the accuracy or value of that data.
    The skill required is known as data literacy, and in this presentation, we will look at how access to one data source will inevitably drive the need to access more. We will examine:
    •How data literacy is defined by the ability to interpret and apply context
    •What supporting information is needed
    •How a data scientist approaches new data sources and the questions they ask of it
    •How to seek out supporting or challenging data to validate its accuracy and value for providing insight
    •How this impacts underlying information systems, and how data platforms need to adjust to a multi-purpose data eco-system where data sources are no longer single use
  • Optimizing NVMe-oF Performance with Different Ethernet Transports: Host Factors Recorded: Sep 16 2020 62 mins
    Fred Zhang, Intel; Eden Kim, Calypso Systems; David Woolf, UNH-IOL; Tom Friend, Illuminosi
    NVMe over Fabrics technology is gaining momentum and more traction in data centers, but there are three kinds of Ethernet-based NVMe over Fabrics transports: iWARP, RoCEv2 and TCP. How do we optimize NVMe over Fabrics performance with the different Ethernet transports?

    Setting aside considerations of network infrastructure, scalability, security requirements and the complete solution stack, this webcast will explore the performance of the different Ethernet-based transports for NVMe over Fabrics at the micro-benchmark level. We will show three key performance indicators: IOPS, throughput, and latency, with different workloads including sequential read/write, random read/write and 70% read/30% write, at different data sizes. We will compare the results of the three Ethernet-based transports: iWARP, RoCEv2 and TCP.

    Further, we will dig a little deeper to talk about the variables that impact the performance of the different Ethernet transports. There are many variables you can tune, and they affect each transport to a different extent. We will cover these variables:
    1. How many CPU cores are needed (and how many am I willing to give)?
    2. Optane SSD or 3D NAND SSD?
    3. How deep should the Q-Depth be?
    4. Why do I need to care about MTU?

    This discussion won’t tell you which transport is best. Instead we unfold the performance of each transport and tell you what it would take for each to reach its best performance, so that you can choose the right transport for your NVMe over Fabrics solution.
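
    The abstract above describes IOPS/latency micro-benchmarking across transports. The toy sketch below only shows the general shape of such a measurement against a local file (the file name and sizes are arbitrary assumptions); a real NVMe-oF comparison would run a tool such as fio against the actual remote block device for each transport:

    ```python
    import os, random, time

    PATH = "testfile.bin"       # hypothetical local test file (not an NVMe-oF target)
    IO_SIZE = 4096              # 4 KiB random reads
    DURATION = 5.0              # seconds to run

    # Create a 256 MiB test file on first run.
    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(os.urandom(256 * 1024 * 1024))

    size = os.path.getsize(PATH)
    fd = os.open(PATH, os.O_RDONLY)

    ops, latencies = 0, []
    deadline = time.perf_counter() + DURATION
    while time.perf_counter() < deadline:
        offset = random.randrange(0, size - IO_SIZE)
        t0 = time.perf_counter()
        os.pread(fd, IO_SIZE, offset)          # one 4 KiB random read
        latencies.append(time.perf_counter() - t0)
        ops += 1
    os.close(fd)

    # Note: these reads mostly hit the page cache, so the numbers are optimistic;
    # real benchmarks use direct I/O and purpose-built tools.
    iops = ops / DURATION
    avg_lat_us = sum(latencies) / len(latencies) * 1e6
    print(f"~{iops:,.0f} IOPS, ~{iops * IO_SIZE / 1e6:.1f} MB/s, "
          f"avg latency {avg_lat_us:.1f} µs")
    ```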
  • Composable Infrastructure and Computational Storage Recorded: Sep 15 2020 53 mins
    Moderator: Alex McDonald, SNIA CMSI Co-Chair; Presenters: Eli Tiomkin, NGD Systems; Philip Kufeldt, Seagate Technology
    In this webcast, SNIA experts will discuss what composable infrastructure is, what prompted its development, solutions, enabling technologies, standards/products and how computational storage fits in.
  • RAID on CPU: RAID for NVMe SSDs without a RAID Controller Card Recorded: Sep 9 2020 60 mins
    Fausto Vaninetti, Cisco Systems and SNIA EMEA Board Advisor; Igor Konopko, Intel; Paul Talbut, SNIA EMEA
    RAID on CPU is an enterprise RAID solution specifically designed for NVMe-based solid state drives (SSDs). This innovative technology provides the ability to directly connect NVMe-based SSDs to PCIe lanes and build RAID arrays using those SSDs without a RAID Host Bus Adapter (HBA). As a result, customers gain NVMe SSD performance and data availability without the need for a traditional RAID HBA.

    This webcast will recall key concepts for NVMe SSDs and RAID levels and will take a deep dive into RAID on the CPU technology and the way it compares to traditional Software and Hardware RAID solutions. Learn more about this new technology and how it is implemented, and gain a clear insight into the advantages of RAID on the CPU.
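
    As a refresher on the RAID concepts the webcast revisits, here is a tiny, self-contained illustration of RAID 5-style parity (the XOR of the data blocks) and single-block rebuild; it is a conceptual sketch only, not how RAID on CPU itself is implemented:

    ```python
    from functools import reduce

    def xor_blocks(blocks):
        """XOR equal-length byte blocks together, byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    # Three data blocks striped across drives (shrunk to 8 bytes for illustration).
    data = [b"blk-AAAA", b"blk-BBBB", b"blk-CCCC"]
    parity = xor_blocks(data)                  # what the parity drive would hold

    # Simulate losing the second drive and rebuilding its block from the rest.
    survivors = [data[0], data[2], parity]
    rebuilt = xor_blocks(survivors)
    assert rebuilt == data[1]
    print("rebuilt block:", rebuilt)
    ```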
  • Compression: Putting the Squeeze on Storage Recorded: Sep 2 2020 52 mins
    John Kim, NVIDIA; Brian Will, Intel; Ilker Cebeli, Samsung
    Everyone knows data volumes are exploding faster than IT budgets. And customers are increasingly moving to flash storage, which is faster and easier to use than hard drives, but still more expensive. To cope with this conundrum and squeeze more efficiency from storage, storage vendors and customers can turn to data reduction techniques such as compression, deduplication, thin provisioning and snapshots. This webcast will focus specifically on data compression, which can be done at different times, at different stages in the storage process, and using different techniques. We’ll discuss:

    •Where compression can be done: at the client, on the network, on the storage controller, or within the storage devices
    •What types of data should be compressed
    •When to compress: real-time compression vs. post-process compression
    •Different compression techniques
    •How compression affects performance

    Tune in to this compact and informative SNIA webcast, which packs in copious content.
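
    To make the compression trade-offs above tangible, here is a small stdlib-only sketch (the sample data and compression levels are arbitrary assumptions) showing that repetitive data compresses well while already-random data barely shrinks, and that a higher level costs more CPU time:

    ```python
    import os, time, zlib

    samples = {
        "repetitive log-like text": b"GET /index.html 200 0.003s\n" * 40_000,
        "random (incompressible)": os.urandom(1_000_000),
    }

    for name, data in samples.items():
        for level in (1, 9):                       # fast vs. maximum compression
            t0 = time.perf_counter()
            out = zlib.compress(data, level)
            dt = (time.perf_counter() - t0) * 1000
            print(f"{name:28s} level {level}: "
                  f"{len(data)} -> {len(out)} bytes ({dt:.1f} ms)")
    ```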
  • The Key to Value: Understanding the NVMe Key-Value Standard Recorded: Sep 1 2020 66 mins
    Bill Martin, Samsung; John Kim, NVIDIA
    The storage industry has many applications that rely on storing data as objects. In fact, it’s the most popular way that unstructured data is accessed. At the drive level, however, the devil is in the details. Normally, storage devices store information as blocks, not objects. This means that there is some translation that goes on between the data as it is consumed (i.e., objects) and the data that is stored (i.e., blocks).

    Naturally, being efficient means that there are performance boosts, and simplicity means that there are fewer things that can go wrong. Moving towards storing key-value pairs, and away from the traditional block storage paradigm, makes it easier and simpler to access objects.

    Both the NVM Express™ group and SNIA have done quite a bit of work in standardizing this approach:

    •NVM Express™ has completed standardization of the Key Value Command Set
    •SNIA has standardized a Key Value API
    •Spoiler alert: these two work very well together!

    What does this mean? And why should you care? That’s what this webinar is going to cover! This presentation will discuss the benefits of Key Value storage, present the major features of the NVMe-KV Command Set and how it interacts with the NVMe standards. It will also cover the SNIA KV-API and open source work that is available to take advantage of Key Value storage.

    We’ll be going deep under the covers to discuss:
    •How this approach is different than traditional block-based storage
    •Why doing this makes sense for certain types of data (and, of course, why doing this may not make sense for certain types of data)
    •How this simplifies the storage stack
    •Who should care about this, why they should care about this, and whether or not you are in that group
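
    To illustrate the difference this webinar covers between block addressing and a key-value interface, here is a toy, in-memory stand-in (it is not the actual NVMe-KV command set or the SNIA KV API): with blocks, something above the device must track which LBAs hold which object; with key-value, the device resolves the key itself.

    ```python
    BLOCK_SIZE = 512
    disk = bytearray(BLOCK_SIZE * 1024)          # pretend block device

    def block_write(lba: int, data: bytes):
        # The caller (a filesystem or KV layer) must remember which LBAs hold what.
        assert len(data) <= BLOCK_SIZE
        disk[lba * BLOCK_SIZE: lba * BLOCK_SIZE + len(data)] = data

    def block_read(lba: int, length: int) -> bytes:
        return bytes(disk[lba * BLOCK_SIZE: lba * BLOCK_SIZE + length])

    # Block model: the object-to-block translation lives above the device.
    allocation = {}                               # key -> (lba, length)
    block_write(7, b"hello object")
    allocation["greeting"] = (7, len(b"hello object"))
    print(block_read(*allocation["greeting"]))

    # Key-value model: the device (here, just a dict) owns the mapping.
    kv_drive = {}
    kv_drive["greeting"] = b"hello object"        # Store(key, value)
    print(kv_drive["greeting"])                   # Retrieve(key)
    ```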
  • Does Your Storage Need a Cyber Insurance Tune-Up? Recorded: Aug 27 2020 60 mins
    Eric Hibbard, SNIA Security Technical Work Group Chair; Casey Boggs, ReputationUS; Paul Talbut, SNIA EMEA
    Protection against cyber threats is recognized as a necessary component of an effective risk management approach, typically based on a well-known cybersecurity framework. A growing area to further mitigate risks and provide organizations with the high level of protection they need is cyber insurance. However, it’s not as simple as buying a pre-packaged policy.

    This webcast will provide an overview of how cyber insurance fits in a risk management program. It will identify key terms and conditions that should be understood and carefully negotiated. Cyber insurance policies may not cover all types of losses, so it is critical to identify what risks and conditions are excluded from a cyber insurance policy before you buy.

    Join this webcast to learn:
    •General threat tactics, risk management approaches, cybersecurity frameworks
    •How cyber insurance fits within an enterprise data security strategy
    •Nuances of cyber insurance – exclusions, exemption, triggers, deductibles and payouts
    •Challenges associated with data stored in the cloud
  • Data Center Scalability Made Easy with Fibre Channel Services Recorded: Aug 26 2020 62 mins
    David Peterson, Broadcom; Barry Maskas, HPE; Kiran Ranabhor, Cisco
    Fibre Channel Services such as the Fabric Login Server, Fabric Controller, and Name Server are used to support management and operation of a Fibre Channel Fabric by providing a method of registering and maintaining devices connected in the network. As the need for additional Fabric Services, such as traffic flow analysis and congestion management, has surfaced, Fibre Channel continues to evolve to ensure easy data center scalability.

    In this webcast, FCIA experts will provide context for the terminology and dive into Fibre Channel Services, including device and topology discovery, zoning, security, clock synchronization and management. They will also decode some common acronyms like FC-CT, FC-GS-9, and FC-SW-8.

    Join us to learn:

    •What are Fabric Services?
    •Overview of long-standing Fabric Services and what the newer Fabric Services provide
    •What is FC-CT? And how does it relate to Fibre Channel Fabric Services?
    •Fibre Channel Generic Services and Switch Fabric functionality
  • Everything You Wanted to Know...But Were Too Proud to Ask: Data Reduction Recorded: Aug 18 2020 61 mins
    John Kim, NVIDIA; Alex McDonald, NetApp
    Everyone knows data volumes are growing rapidly, far faster than IT budgets, which range from flat to minimal annual growth. One of the drivers of such rapid data growth is storing multiple copies of the same data. Developers copy data for testing and analysis. Users email and store multiple copies of the same files. Administrators typically back up the same data over and over, often with minimal to no changes.

    To avoid a budget crisis and paying more than once to store the same data, storage vendors and customers want to use data reduction techniques such as deduplication, compression, thin provisioning and snapshots.

    This webcast will specifically focus on the fundamentals of data reduction, which can be performed in different places and at different stages of the data lifecycle. Like most technologies, there are related means to do this, but with enough difference to cause confusion. For that reason, we’re going to be looking at:

    •How companies end up with so many copies of the same data
    •Difference between deduplication and compression – when should you use one vs. the other?
    •Where to reduce data: application-level, networked storage, backups, and during data movement
    •When to collapse the copies: real-time vs. post-process deduplication
    •Performance considerations

    Tune in to this efficient and educational SNIA webcast, which covers valuable concepts with minimal repetition or redundancy.
  • Storage Networking Security Series: Applied Cryptography Recorded: Aug 5 2020 59 mins
    John Kim, NVIDIA; Eric Hibbard, SNIA Security TWG Chair; Olga Buchonina, SNIA Blockchain TWG Chair; Alex McDonald, NetApp
    The rapid growth in infrastructure to support the real-time and continuous collection and sharing of data to make better business decisions has led to an age of unprecedented information access and storage. This proliferation of data sources and of high-density data storage has put volumes of data at one’s fingertips. While the collection of large amounts of data has increased knowledge and efficiencies for businesses, it has also made attacks upon that information (theft, modification, or holding it for ransom) more tempting and easier. Cryptography is often used to protect valuable data.

    This webcast will present an overview of applied cryptography techniques for the most popular use cases. We will discuss ways of securing data, the factors and trade-offs that must be considered, as well as some of the general risks that need to be mitigated, including:

    •Encryption techniques for authenticating users
    •Encrypting data—either at rest or in motion
    •Using hashes to authenticate
    •Information coding and data transfer methodologies
    •Cryptography for Blockchain
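
    As a small, stdlib-only taste of two of the techniques listed above: hashing to detect modification, and an HMAC to authenticate data with a shared key. (Encrypting data at rest or in motion would normally rely on a vetted library or protocol such as AES-GCM or TLS, not hand-rolled code; the data and key below are made-up examples.)

    ```python
    import hashlib, hmac, secrets

    data = b"quarterly results, do not alter"

    # Integrity: any change to the data changes the digest.
    digest = hashlib.sha256(data).hexdigest()
    print("sha256:", digest)

    # Authentication: only holders of the shared key can produce a valid tag.
    key = secrets.token_bytes(32)
    tag = hmac.new(key, data, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())
    print("HMAC verified")
    ```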
  • Enterprise and Data Center SSD Form Factor - the end of the 2.5-inch disk era? Recorded: Aug 4 2020 78 mins
    J. Hands, SSD SIG; B. Lynn, Dell; R. Stenfort, Facebook; P. Kaler, HPE; J. Geldman, Kioxia; J. Hinkle, Lenovo; J. Adrian, Microsoft
    The Enterprise and Data Center SSD Form Factor (EDSFF) is designed natively for data center NVMe SSDs to improve thermal, power, performance, and capacity scaling. EDSFF has different variants for flexible and scalable performance, dense storage configurations, general purpose servers, and improved data center TCO. At the 2020 Open Compute Virtual Summit, OEMs, cloud service providers, hyperscale data center operators, and SSD vendors showcased products and their vision for how this new family of SSD form factors solves real data challenges.

    Join this SNIA Compute Memory and Storage Initiative webcast where expert panelists from companies that have been involved in EDSFF since the beginning discuss how they will use the EDSFF form factor. OEMs will discuss their goals for E3 and the newly updated version of the E3 specification (SFF-TA-1008). Hyperscale data center and cloud service providers will discuss how E1.S (SFF-TA-1006) helps solve performance scalability, serviceability, capacity, and thermal challenges for future NVMe SSDs and persistent memory in 1U servers.
  • AIOPs for Integrated Infrastructure Management Recorded: Jul 16 2020 65 mins
    Chandrasekar Balasubramanian, Associate Technical Director, GAVS Technologies Ltd.
    Multiple tools are used for infrastructure management across the various towers such as network, server, storage and applications. These tools sit in different geographical locations within an organization and cater to multiple infrastructure monitoring needs. But this does not lead to a single source of truth, because the monitoring data is siloed across different tools and locations.

    So there is a definite need to unify these data sources into a single view. Once unified, we have the prerequisite for integrated infrastructure management. AIOps means applying artificial intelligence to IT operations and infrastructure management; it encompasses unification, noise suppression, correlation, prediction and remediation. After unifying the monitoring data and processing alerts with an AIOps platform, the next step is noise suppression to remove false positives. Then correlations can be drawn between the various alerts and events.

    The culmination of all this is prediction: advanced AI/ML algorithms anticipate what will happen in the future based on past and current events and alerts.

    Finally, automation remediates the problems: monitored events are triaged automatically, logged as tickets and remediated without manual intervention. That, in summary, is integrated infrastructure management using AIOps. In this talk we will get into the details of how an AIOps-based platform leads to proactive and integrated infrastructure management.
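
    A highly simplified sketch of the unify-then-suppress-noise flow described in this abstract (the alert records, window size and same-host correlation rule are made-up assumptions; real AIOps platforms use ML models and far richer topology data):

    ```python
    from collections import defaultdict

    alerts = [  # unified feed from hypothetical network/server/storage tools
        {"t": 100, "host": "db01", "msg": "disk latency high"},
        {"t": 102, "host": "db01", "msg": "disk latency high"},   # duplicate
        {"t": 105, "host": "db01", "msg": "query timeouts"},
        {"t": 300, "host": "web02", "msg": "cpu high"},
    ]

    WINDOW = 60  # seconds within which identical alerts count as noise
    seen, deduped = {}, []
    for a in sorted(alerts, key=lambda a: a["t"]):
        key = (a["host"], a["msg"])
        if key in seen and a["t"] - seen[key] < WINDOW:
            continue                     # suppress the repeat (noise)
        seen[key] = a["t"]
        deduped.append(a)

    # Naive correlation: alerts on the same host are grouped into one incident
    # that can be triaged and ticketed together.
    incidents = defaultdict(list)
    for a in deduped:
        incidents[a["host"]].append(a["msg"])
    print(dict(incidents))
    ```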
  • Point Break: When to Archive Content On Prem or in the Cloud Recorded: Jul 15 2020 33 mins
    John Bell, Sr. Consultant, Caringo, Inc.
    While the cloud should undoubtedly be part of your remote workflow strategy, should it be used for your content archiving needs? What are the “breaking points”, from a technical and business requirement perspective, that you need to look at when evaluating cloud versus on-premises archiving, and how do you evaluate these criteria?

    In this session, learn about:
    - The increasing importance of “accessible” archives for remote workflows
    - The differences between cloud storage and on-premises object storage
    - The benefits of on-premises object storage and hybrid storage solutions
    - How to determine which storage solution is right for your archive needs
  • [Panel] How to Overcome Enterprise Storage Challenges Recorded: Jul 15 2020 44 mins
    Greg Schulz, Storage IO | Anne Blanchard, Nasuni | Jon Toor, Cloudian
    In 2020, data management, security, scalability and cost control are fast becoming the challenges that are dominating enterprise storage.

    And amid the explosion of data, storage is the protector of one of the modern enterprise’s most valuable assets, making it crucial that businesses’ storage infrastructure is robust, well-managed, scalable, efficient and flexible.

    Join this expert panel, where storage leaders get together to discuss and share insights into how to maximise storage performance across the ecosystem to ensure business success, including:

    - How IT leaders can overcome their storage security issues
    - How to marry capacity and performance as unstructured data continues to grow
    - What tools can help simplify the task of storage and data management

    Panel:
    Greg Schulz, Senior Advisory Analyst, Storage IO (Moderator)
    Anne Blanchard, Senior Director of Product Marketing, Nasuni
    Jon Toor, CMO, Cloudian
  • Cloud native apps – why container storage and not legacy storage Recorded: Jul 14 2020 29 mins
    Jonathan Kong, CEO, Storidge
    Companies are developing and running software in much smaller packages called containers. These small containers collectively deliver services as a cloud native application. Using container technology to run cloud native applications can dramatically lower costs, improve efficiency and speed workflow ... but only if all of the pieces fit together properly.

    Containerized apps have a fundamental impact on storage infrastructure that is not obvious. A key challenge is how to operate storage for stateful applications without adding unnecessary complexity.

    Legacy applications run in an environment that is:
    - Manually operated
    - Built on siloed storage systems
    - Static

    Modern apps operate in a new model that is:
    - Orchestrated
    - Scalable
    - Dynamic, mobile and portable

    This sets expectations that legacy infrastructure can’t address. The result is a disruptive, once-in-25-years shift that requires revolutionary changes to storage infrastructure.

    Cloud native apps need container storage that is:
    - Automated: infrastructure as a service, orchestration integrated, with automatic data locality, failover, high availability and data recovery
    - Horizontally and vertically scalable
    - Automatically mobile
    - Developer centric
    - Able to run on any platform
    - Delivered as storage-as-a-service
  • Storage Wars: Object vs. File vs. Block Recorded: Jul 14 2020 77 mins
    Alex McDonald, Tom Christensen, Anne Blanchard, Sanjay Jagad, Bill Martin
    Research firm IDC projects that by the end of 2020, the world’s data will have grown to 44 zettabytes (44 trillion gigabytes). Data storage, therefore, is no longer simple, and in the age of cloud maturity and Digital Transformation, storage infrastructure strategies are shifting. With object stores having a very different set of characteristics than file and block storage, it can be confusing to know which is the right strategy that will keep up with an expanding digital universe.

    Even small businesses struggle to manage the ever-growing pile of files stored on various networks and systems, so the challenges for enterprise companies can be hard to wrap your head around.

    Join this panel of object, file and block storage experts as they discuss:

    - The best way to manage unstructured data
    - Are there deceptive costs in a file storage strategy?
    - Use cases for object storage

    Speakers:
    Alex McDonald - SNIA Networking Storage Forum and Office of the CTO, NetApp (Moderator)
    Tom Christensen, CTO & Customer Advocacy - Northern EMEA, Hitachi Vantara
    Anne Blanchard, Sr. Director of Product Marketing, Nasuni
    Sanjay Jagad, Senior Director of Products and Solutions, Cloudian
    Bill Martin, SNIA Technical Council Co-Chair and Samsung
The hottest topics for storage and infrastructure professionals
The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.

  • Title: What’s Next: Software-Defined Storage as a Service Without Legacy Limitations
  • Live at: Apr 21 2020 10:00 am
  • Presented by: Tom Bendien, Gal Naor, Guy Loewenberg, Randall van Allen