The increasing use of the cloud for data protection, including backup, disaster recovery, archiving and long term preservation, brings many benefits to IT shops and also some challenges. The webcast will consist of a presentation covering the current state of the industry and use cases related to data protection in the cloud. The presentation will be followed by a panel discussion with audience participation.
Recorded May 22, 2012 · 59 mins
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco
Welcome to this first part of the webcast series, where we’re going to take an irreverent, yet still informative, look at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
• What an initiator is
• What a target is
• What a storage controller is
• What a RAID is, and what a RAID controller is
• What a Volume Manager is
• What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
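As a small taste of the fundamentals this series covers, here is an illustrative sketch (not webcast material) of the XOR parity idea behind RAID levels 4 and 5: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Illustrative sketch of RAID-4/5-style parity. The stripe layout and
# block sizes here are simplified for clarity.

def parity(blocks):
    """XOR all blocks together to form (or recover) a block."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing one data block and rebuilding it from the rest + parity.
lost = data[1]
rebuilt = parity([data[0], data[2], p])
assert rebuilt == lost
```

Because XOR is its own inverse, the same `parity` function both generates the parity block and reconstructs a missing one.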
Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on Ethernet. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving towards higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps, making it time to re-introduce Ethernet networked storage.
This live Webcast will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:
• The evolution of storage devices – spinning media to NVM
• New standards: NVMe and NVMe over Fabrics
• A retrospective of traditional networked storage, including SAN and NAS
• How new storage devices and new standards will impact Ethernet networked storage
• Ethernet-based software-defined storage and the hyper-converged model
• A look ahead at new Ethernet technologies optimized for networked storage in the future
Register today for this live Webcast where our experts will be on hand to answer your questions.
Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. Join this Webcast to learn about the major challenges faced by businesses that leverage services from multiple cloud providers or move from one cloud provider to another.
The SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data interoperability between clouds. SNIA and Tata Consultancy Services (TCS) have partnered to create a SNIA CDMI Conformance Test Program to help cloud storage providers achieve CDMI conformance.
As interoperability becomes critical, end user companies should include the CDMI standard in their RFPs and demand conformance to CDMI from vendors.
Join us on July 19th to learn:
• Critical challenges that the cloud storage industry is facing
• Issues in a multi-cloud provider environment
• Addressing cloud storage interoperability challenges
• How the CDMI standard works
• Benefits of CDMI conformance testing
• Benefits for end user companies
Nancy Bennis, Director of Alliances, Cleversafe (an IBM Company); Alex McDonald, Chair, SNIA Cloud Storage Initiative, NetApp
Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day.
Many organizations, like large service providers, have already begun to leverage software-defined object storage to support new application development and DevOps projects. Meanwhile, legacy enterprise companies are in the early stages of exploring the benefits of object storage for their particular business and are searching for how they can use cloud object storage to modernize their IT strategies, store and protect data while dramatically reducing the costs associated with legacy storage sprawl.
This Webcast will highlight the market trends towards the adoption of object storage, the definition and benefits of object storage, and the use cases that are best suited to leverage an underlying object storage infrastructure.
In this webcast you will learn:
• How to accelerate the transition from legacy storage to a cloud object architecture
• The benefits of object storage
• Primary use cases
• How object storage can enable your private, public or hybrid cloud strategy without compromising security, privacy or data governance
Sam Fineberg, Distinguished Technologist, HPE, Ben Swartzlander, OpenStack Architect, NetApp, Thomas Rivera, SNIA DPCO Chair
This Webcast will focus on the data protection capabilities of the OpenStack Mitaka release, which includes multiple resiliency features. Join Dr. Sam Fineberg, Distinguished Technologist (HPE), and Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), as they discuss:
- Storage-related features of Mitaka
- Data protection capabilities – Snapshots and Backup
- Manila share replication
- Live migration
- Rolling upgrades
- HA replication
Our experts will be on hand to answer your questions.
This Webcast is co-sponsored by two groups within the Storage Networking Industry Association (SNIA): the Cloud Storage Initiative (CSI), and the Data Protection & Capacity Optimization Committee (DPCO).
Computer architecture is undergoing cataclysmic change. New flash tiers have been added to storage, and SSD caching has brought DAS back into servers. Storage Class Memory looms on the horizon, and with it come new storage protocols, new DIMM formats, and even new processor instructions. Meanwhile, new chip technologies are phasing in, like 3D NAND flash and 3D XPoint Memory, new storage formats are being proposed, including the Open-Channel SSD and Storage Intelligence, and all-flash storage is rapidly migrating into those applications that are not moving into the cloud. The future promises to bring us computing functions embedded within the memory array, learning systems permeating all aspects of computing, and adoption of architectures that are very different from today’s standard von Neumann machines. In this presentation we will examine these technical changes and reflect on ways to avoid designing systems and software that limit our ability to migrate from today’s technologies to those of tomorrow.
Mark Carlson, Principal Engineer, Industry Standards, Toshiba
Cloud computing and storage are maturing, but where are enterprises in their adoption of the cloud? Are they increasingly adopting public cloud? Are they setting up their own private clouds? How successful are they in doing so?
This panel will discuss the issues these customers are facing and how various products, services and data management techniques are addressing those issues.
Michelle Tidwell, SNIA Board Member, IBM Systems Storage, Business Line Manager, Software Defined Storage
We've heard it said that data is the new natural resource. In today's extremely dynamic, fast-growing and interconnected world, businesses need more agile IT infrastructure to handle larger, faster-moving, and increasingly varied oceans of data. The rise of cloud and hybrid cloud infrastructures, along with the common practice of server virtualization for efficiency and flexibility, requires storage infrastructure that is equally flexible: able to deliver, manage and protect data with superior performance, and to keep businesses operational through any data disruption or disaster. The IBM System Storage session will examine the IBM technologies that help address the challenges and pain points IT professionals are experiencing in delivering dynamic insights for businesses and governments worldwide. The session includes examples of how clients today are deploying Flash, Object and Software Defined Storage to rapidly and effectively monetize data.
Hyperconverged Infrastructures (HCIs) are popular solutions for a wide range of computing applications in small and medium-sized businesses. Their ease of deployment and operation, plus the ability to consolidate less efficient infrastructures into a comprehensive solution from a single vendor, have made them a good fit in these organizations. However, in the enterprise, companies with over 1000 employees, HCI adoption has been more limited. Certain use cases, such as providing a turnkey infrastructure for an enterprise’s remote and branch offices, are becoming more common. But what other usage scenarios are these larger companies looking at for hyperconverged appliances?
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up. Using DNA to archive data is an attractive possibility because it is extremely dense, with a raw limit of 1 exabyte/mm3 (10^9 GB/mm3), and long-lasting, with observed half-life of over 500 years.
This work presents an architecture for a DNA-based archival storage system. It is structured as a key-value store, and leverages common biochemical techniques to provide random access. We also propose a new encoding scheme that offers controllable redundancy, trading off reliability for density. We demonstrate feasibility, random access, and robustness of the proposed encoding with wet lab experiments. Finally, we highlight trends in biotechnology that indicate the impending practicality of DNA storage.
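The controllable-redundancy idea can be illustrated with a toy encoder. To be clear, this is a simplification for intuition, not the paper's actual encoding scheme: two bits map to one of the four nucleotides, and whole-strand repetition trades density for reliability.

```python
# Toy illustration of DNA data encoding with controllable redundancy.
# NOTE: a simplified sketch, not the encoding scheme from the paper.

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {v: k for k, v in BASE_FOR_BITS.items()}

def encode(data: bytes, copies: int = 1) -> str:
    """Map each pair of bits to a nucleotide; repeat the whole strand
    `copies` times (higher copies = more reliable, less dense)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    strand = "".join(BASE_FOR_BITS[bits[i:i + 2]]
                     for i in range(0, len(bits), 2))
    return strand * copies

def decode(strand: str, copies: int = 1) -> bytes:
    """Recover the bytes from the first copy of the strand."""
    strand = strand[: len(strand) // copies]
    bits = "".join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

payload = b"DNA"
strand = encode(payload, copies=3)
assert decode(strand, copies=3) == payload
```

The `copies` parameter is the knob: a real scheme uses error-correcting codes rather than plain repetition, but the density-vs-reliability trade-off is the same in spirit.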
In the era of data explosion in Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. The ever-increasing data rates and the demand to reduce time to insights will require an integrated approach to data ingest, processing and storage that delivers reduced end-to-end latency, much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage to handle highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service of scalable performance, intelligent manageability, high energy efficiency, and enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the ’90s, the storage architectures of SAN and NAS have been well understood and deployed with a focus on efficiency. With cloud-like applications, the massive scale of data and analytics, and the introduction of solid state and HPC-type applications hitting the data center, these architectures are changing rapidly. It is a time of incredible change and opportunity for businesses and the IT staff that support the change. Welcome to the new world of enterprise data storage.
Moderator: Thomas Rivera, HDS; Panelists: Tony Cox, Cryptsoft; Eric Hibbard, HDS; Walt Hubis, Hubis Tech Ass; Tim Hudson, Cryptsoft
This Webcast will cover the basics of encryption and key management as they relate to storage systems, as well as some of the related best practices.
This Webcast will explore the fundamental concepts of implementing secure enterprise storage using current technologies, and will focus on the implementation of a practical secure storage system. The high-level requirements that drive the implementation of secure storage for the enterprise will be explored, including legal issues, key management, currently available technologies, and fiscal considerations.
There will also be implementation examples illustrating how these requirements are applied to actual system implementations.
There will be a Q&A at the end for the audience to ask questions of the panelists.
There are many permutations of technologies, interconnects and application-level approaches in play with solid state storage today. It is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. In this webcast, Doug Voigt, chair of the SNIA NVM Programming Model, will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically.
Fred Knight, Standards Technologist, NetApp, Andy Banta, Storage Janitor, SolidFire/NetApp, David Fair, Chair, SNIA-ESF
iSCSI is an Internet Protocol standard for transferring SCSI commands across an Ethernet network, enabling hosts to link to storage devices wherever they may be. In this Webcast, we will discuss the evolution of iSCSI including iSER, which is iSCSI technology that takes advantage of various RDMA fabric technologies to enhance performance. Register now to hear:
• A brief history of iSCSI
• How iSCSI works
• IETF refinements to the specification
• Enhancing iSCSI performance with iSER
The Webcast will be live, so please bring your questions for our experts.
Jeff Chang, AgigA Tech; Arthur Sainio, SMART Modular; Doug Voigt, HP-E; Mat Young, Netlist
The IT industry has made tremendous progress innovating up and down the computing stack to enable and take advantage of non-volatile memory (NVM). But questions still remain on where NVM plays in the memory stack, how it will evolve in the CPU architecture, and where operating systems will need to be enhanced. Join the SNIA NVDIMM Special Interest Group to learn about the latest developments in NVDIMM, understand how the SNIA NVM Programming Model can be applied in NVM development work, and find your NVM answers!
Wayne Adams, SNIA Board of Directors; Mark Carlson, SNIA Technical Council; Camberley Bates, Evaluator Group General Mgr
This webcast will feature an interactive discussion with the subject matter experts who have organized the Data Storage Innovation Conference, planned for June 13-15, 2016.
Get an overview of the Conference agenda that addresses the most pressing data storage and cloud trends spanning storage class memory, data security, data protection, cloud development and management, new hyper-converged storage systems, big data and analytics, storage networks and protocols, file-systems, technology standards, software defined storage and best practices as they apply to networked storage, data management and data protection.
Attendees will also become aware of conference highlights, including Hot Topic sessions, a state-of-the-market research study on Enterprise Hyper-converged Storage Deployment, and recently released solutions featured in the Innovation Spotlight. Webcast attendees will be encouraged to follow SNIA developments and webcasts live on June 13-14, as well as attend the Conference.
Alex McDonald, SNIA-ESF Vice Chair, Chad Hintz, SNIA-ESF Board Member
The popular and ubiquitous Network File System (NFS) is a standard protocol that allows applications to store and manage data on a remote computer or server. NFS provides two services: a network part that connects users or clients to a remote system or server, and a file-based view of the data. Together these provide a seamless environment that masks the differences between local files and remote files.
This SNIA Ethernet Storage Forum Webcast is an introduction and overview presentation to NFS for technologists and tech managers interested in understanding:
• NFS history and development
• The facilities and services NFS provides
• Why NFS rose in popularity to dominate file-based services
• Why NFS continues to be important in the cloud
Originally presented at SNIA’s 2015 Storage Developer Conference, this webcast will discuss how Facebook’s massive and continuously growing corpus of photos, videos, and other Binary Large OBjects (BLOBs) needs to be reliably stored and quickly accessed.
As the footprint of BLOBs increases, storing them in Facebook’s traditional storage system, Haystack, is becoming increasingly inefficient. To increase storage efficiency, measured as the effective-replication-factor of BLOBs, Facebook examines the underlying access patterns of BLOBs and identifies temperature zones that include hot BLOBs, which are accessed frequently, and warm BLOBs, which are accessed far less often.
Facebook’s overall BLOB storage system is designed to isolate warm BLOBs and enable them to use a specialized warm BLOB storage system, f4. f4 is a new system that lowers the effective-replication-factor of warm BLOBs while remaining fault tolerant and able to support the lower throughput demands.
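The temperature split driving this design can be sketched in a few lines. The access-rate threshold below is a hypothetical stand-in; the effective-replication-factors are roughly those reported for triple-replicated Haystack (3.6) and erasure-coded f4 (2.1), but treat them as illustrative here.

```python
# Illustrative sketch of temperature-based BLOB placement; the threshold
# is hypothetical, and the ERF values are approximations, not exact
# production parameters.

HOT_THRESHOLD = 100.0  # accesses/day above which a BLOB is kept "hot"

# Effective-replication-factor: physical bytes stored per logical byte.
ERF = {"hot": 3.6, "warm": 2.1}

def temperature(accesses_per_day: float) -> str:
    """Classify a BLOB as hot (Haystack-style) or warm (f4-style)."""
    return "hot" if accesses_per_day >= HOT_THRESHOLD else "warm"

def effective_storage(blobs) -> float:
    """Total physical bytes: logical size x ERF for each BLOB's zone."""
    return sum(size * ERF[temperature(rate)] for size, rate in blobs)

# (size_bytes, accesses_per_day): one hot BLOB, two warm ones.
blobs = [(1_000_000, 500.0), (1_000_000, 2.0), (1_000_000, 0.1)]
print(effective_storage(blobs))  # hot costs 3.6x, each warm only 2.1x
```

Moving a BLOB from the hot to the warm zone lowers its physical footprint, which is exactly the efficiency gain f4 is built to capture.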
Eric Slack, Sr. Analyst, Evaluator Group, Alex McDonald, Chair, SNIA Cloud Storage, Glyn Bowden, SNIA Cloud Storage Board
A Software Defined Data Center (SDDC) is a compute facility in which all elements of the infrastructure - networking, storage, CPU and security - are virtualized and removed from proprietary hardware stacks. Deployment, provisioning and configuration as well as the operation, monitoring and automation of the entire environment is abstracted from hardware and implemented in software.
The results of this software-defined approach include maximizing agility and minimizing cost, benefits that appeal to IT organizations of all sizes. In fact, understanding SDDC concepts can help IT professionals in any organization better apply these software-defined concepts to storage, networking, compute and other infrastructure decisions.
If you’re interested in Software-Defined Data Centers and how such a thing might be implemented – and why this concept is important to IT professionals who aren’t involved with building data centers – then please join us on March 15th. Eric Slack, Sr. Analyst with Evaluator Group, will explain what “software-defined” really means and why it’s important to all IT organizations, and will join a discussion with Alex McDonald, Chair of SNIA’s Cloud Storage Initiative, about how these concepts apply to the modern data center.
In this webinar we’ll be exploring:
• How an SDDC leverages this concept to make the private cloud feasible
• How we can apply SDDC concepts to an existing data center
• How to develop your own software-defined data center environment
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information.
The industry was surprised when Dell announced its intent to acquire EMC for $67 billion, the largest tech deal ever. Merging two large, stagnant companies with very different cultures and a high level of overlap in products can pose significant challenges.
Join this webinar to learn about:
- The acquisition implications and how it’ll affect your long-term storage investment
- The uncertainty around Dell and EMC’s roadmap and which products will continue to be invested in
- Alternate storage solutions that enable you to transform data into insights and value for your organization
With the relentless speed of innovation in data center technologies, how do you decide on your next step? Virtualization is being applied to every aspect of the data center, and it’s critical to first understand what makes sense for your business and why.
Join us for an open panel of NSX top influencers as they engage in a no-slide webcast conversation. They’ll kick off the session with a discussion on network virtualization and how it completes the virtualization infrastructure; how they see the evolution of the data center progressing; and the role of fluid architectures.
Don’t miss this opportunity to learn from industry experts as they share their valuable insights with the IT community.
Build a fundamentally more agile, efficient and secure application environment with VMware NSX network virtualization on powerful industry-standard infrastructure featuring Intel® Xeon® processors and Intel® Ethernet 10Gb/40Gb Converged Network Adapters.
Cities around the world are transforming their increasingly congested landscapes into safer, smarter, and more sustainable environments that better serve their residents and visitors alike. These “smart cities” are enabling a continuous exchange of information between devices, infrastructure, networks and people, creating immense possibilities for the broader Internet of Things (IoT) ecosystem and the communications industry.
Here in the U.S., recent partnerships between the Federal government and private industry are helping to advance smart city solutions and deployments including the U.S. Department of Transportation’s Smart City Challenge and the recently announced White House Advanced Wireless Research Initiative. We’re just scratching the surface of the innovations to come.
But what truly makes a city smart? What applications and solutions are currently being deployed and what more will be developed? What types of critical network infrastructure needs to exist in order to enable a more connected society? What role will fiber, sensors, LPWANs, densified small cells and DAS, massive MIMO, and other solutions play as these networks deploy? How can we protect the data being transmitted around the city? What lessons have been learned thus far and are there business opportunities and models to support expectations of market growth? And how best can local governments and citizens be educated to understand the importance of smart city initiatives and potential return on investment?
Speakers from AT&T and SAP will delve into these questions and more during the live webcast. We also welcome your questions, so get ready to bring them into the mix.
Michael Zeto, Director of Smart Cities, AT&T
Josh Waddell, Global Vice President, IoT Strategy, SAP
Steve Brumer, Partner, 151 Advisors
Limor Schafman, Director of Content Development, TIA
To survive in the age of digital transformation, a clear service orientation is indispensable. The focus is on the service to be delivered and the value it creates. The so-called “digital natives of the 21st century” such as amazon, Tesla and airbnb use the mechanisms of the digital age not only to open up new markets, but also to lastingly change the rules of traditional markets.
Anyone who wants to keep up can no longer follow the tried-and-true build-to-order approach. The companies that survive will be those that manage to deliver services quickly, with agility, and at low cost. Hybrid IT landscapes that have grown over the years do not make this task any easier.
In this webcast, learn what lies behind “Service Design Thinking” and how these challenges can be successfully mastered.
SD-WAN can dramatically reduce costs and increase the ability to rapidly bring new services online, connecting users to all types of applications and speeding up time to market. But the idea of re-architecting the WAN can be daunting, and the decision to adopt an SD-WAN solution can be a difficult one.
Join renowned network expert Ethan Banks, Co-Founder of Packet Pushers, and Rolf Muralt, VP Products Management SD-WAN at Silver Peak, in a webinar that discusses the SD-WAN market, lessons learned, and what features to be on the lookout for as you make your decision. They will discuss issues around technology selection and deployment, including:
· How a zero-touch, hybrid SD-WAN can leverage multiple forms of connectivity
· Ways to prioritize and route traffic across different connections
· Quality of Service (QoS), and how to maintain 100% uptime
· Best practices for transitioning with minimal impact on budget and resources
· Real customer examples that demonstrate different deployment stages and benefits
Traditional performance testing typically requires that all components of the application are “completed,” integrated and deployed into an appropriate environment. As a result, testing is not done until late in the delivery cycle, or is sometimes skipped entirely, which can then lead to a less than optimal user experience, expensive rework and potential loss of business.
Many organizations are adopting service virtualization to overcome the key challenges associated with performance testing. During this session see why and specifically how service virtualization:
• Enables you to do testing early in the dev cycle by simulating unavailable production systems and missing components
• Helps you control the inputs (like response times and 3rd party system responses) so you can do more negative and exploratory testing
• Provisions performance test environments “in a box” for on-demand testing
• Works with CA APM so that you can monitor an app during a load and performance test and see how the app reacts
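To make the idea concrete, here is a minimal, hypothetical sketch of the service-virtualization concept (not CA's actual product API): a stub stands in for an unavailable backend while letting the tester dial in its response time and payload.

```python
# Minimal sketch of a virtualized service: the class name, fields, and
# behavior are illustrative assumptions, not a real vendor API.

import time

class VirtualService:
    def __init__(self, response, delay_s=0.0):
        self.response = response  # canned payload the stub returns
        self.delay_s = delay_s    # simulated backend latency in seconds

    def call(self, request):
        """Answer like the real backend would, after the configured delay."""
        time.sleep(self.delay_s)
        return self.response

# Performance-test against the stub instead of the real third-party system:
slow_backend = VirtualService(response={"status": "ok"}, delay_s=0.05)
start = time.perf_counter()
result = slow_backend.call({"id": 42})
elapsed = time.perf_counter() - start
assert result == {"status": "ok"}
assert elapsed >= 0.05  # the injected latency is observable by the test
```

Raising `delay_s` or swapping `response` for an error payload is how such a stub supports the negative and exploratory testing described above, without touching the real system.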
Applications - the lifeblood of modern business - can be in a sorry state of affairs given today's forced alignment with server, OS, and storage boundaries. This not only causes deployment delays and complexity, but also results in underutilized hardware and inflated operational costs. There is a drive to embrace new technologies and methodologies in the enterprise, but this presents significant challenges. Limited application-awareness at the infrastructure level makes it nearly impossible to deliver on promised SLAs, and the tight coupling of applications and underlying operating software (OS or hypervisors) compromises application portability as well as developer productivity.
A growing number of enterprises are turning to application containers to support more efficient and effective development and deployment in an application-centric IT paradigm. By abstracting applications from the underlying infrastructure, containers can simplify application deployment, and enable seamless portability across machines and clouds. Containers can also enable significant cost savings by consolidating multiple applications per machine without compromising performance or predictability. Join us to learn more about container adoption in the enterprise and how a container-based server and storage virtualization environment can help take your software-defined datacenter transformation to the next level of an application-defined datacenter.
Featuring speakers from F5, Illumio, Nutanix, Rubrik, and Workspot. Compare and evaluate 4 leading hyperconverged platform-optimized solutions that expand the capabilities of the Nutanix enterprise cloud platform: F5 application delivery, Illumio adaptive security, Rubrik data protection, and Workspot VDI.
• Workspot's cloud-native, infinitely and instantly scalable orchestration architecture (aka VDI 2.0) enables enterprise-class VDI deployment in hours, in which you can use all your existing infrastructure (apps, desktops and data).
• Rubrik eliminates backup pain with automation, instant recovery, unlimited replication, and data archival at infinite scale -- with zero complexity.
• Visualization 2.0 from Illumio shows you a live, interactive map of all of your application traffic across your data centers and clouds, and identifies applications for secure migration to the Nutanix platform.
• F5 delivers your mission critical applications on an enterprise cloud that uniquely delivers the agility, pay-as-you-grow consumption, and operational simplicity of the public cloud without sacrificing the predictability, security, and control of on-premises infrastructure.
The constant barrage of application connectivity and security policy change requests, not to mention the relentless battle against cyber-attacks, has made the traditional approach to managing security untenable. In order to keep your business both agile and secure – across today’s highly complex and diverse enterprise networks – you must focus your security management efforts on what matters most: the applications that power your business.
Join Joe DiPietro, SE Director at AlgoSec, on Tuesday, July 26 at 11am EDT for a technical webinar, where he will discuss an application-centric, lifecycle approach to security policy management – from automatically discovering application connectivity requirements, through ongoing change management and proactive risk analysis, to secure decommissioning – that will help you improve your security maturity and business agility. During the webinar, Joe will explain how to:
• Understand the security policy management lifecycle and its impact on application availability, security and compliance
• Auto-discover and map business applications and their connectivity flows – and why it’s important
• Securely migrate business application connectivity and security devices to a new data center
• Get a single pane of glass that aligns application connectivity with your security device estate
• Identify risk and vulnerabilities and prioritize them based on business criticality