Trends in Data Protection and Restoration Technologies
Many disk technologies, both old and new, are being used to augment tried-and-true backup and data protection methodologies to deliver better information and application restoration performance. These technologies work in parallel with the existing backup paradigm.
This session will discuss many of these technologies in detail. Important considerations of data protection include performance, scale, regulatory compliance, recovery objectives and cost. Technologies include contemporary backup, disk based backups, snapshots, continuous data protection and capacity optimized storage.
Details of how these technologies interoperate will be provided, as well as best-practice recommendations for deployment in today's heterogeneous data centers.
Recorded Jul 30, 2009 (45 mins)
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco
Welcome to this first part of the webcast series, where we’re going to take an irreverent yet still informative look at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
•What an initiator is
•What a target is
•What a storage controller is
•What a RAID is, and what a RAID controller is
•What a Volume Manager is
•What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
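For a taste of one of these parts, consider the RAID controller. This illustrative Python sketch (ours, not from the webcast) shows the XOR parity idea behind RAID-5: the parity block lets the array rebuild any single failed disk's data from the survivors.

```python
from functools import reduce

def xor_parity(blocks):
    """Compute the parity block as the byte-wise XOR of all data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def reconstruct(surviving_blocks, parity):
    """Rebuild a single lost block by XORing the parity with the survivors."""
    return xor_parity(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
parity = xor_parity(data)            # stored on a fourth disk

# Disk 1 fails; its contents are recovered from the rest of the stripe.
recovered = reconstruct([data[0], data[2]], parity)
assert recovered == b"BBBB"
```

The same XOR property underlies why a RAID-5 array survives exactly one disk failure: with two blocks missing, the equation no longer has a unique solution.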
Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on Ethernet. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving towards higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps, making it time to re-introduce Ethernet Networked Storage.
This live Webcast will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:
•The evolution of storage devices - spinning media to NVM
•New standards: NVMe and NVMe over Fabrics
•A retrospective on traditional networked storage, including SAN and NAS
•How new storage devices and new standards will impact Ethernet networked storage
•Ethernet based software-defined storage and the hyper-converged model
•A look ahead at new Ethernet technologies optimized for networked storage in the future
Register today for this live Webcast where our experts will be on hand to answer your questions.
Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. Join this Webcast to learn the major challenges faced by businesses that leverage services from multiple cloud providers or move from one cloud provider to another.
The SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data interoperability between clouds. SNIA and Tata Consultancy Services (TCS) have partnered to create a SNIA CDMI Conformance Test Program to help cloud storage providers achieve CDMI conformance.
As interoperability becomes critical, end user companies should include the CDMI standard in their RFPs and demand conformance to CDMI from vendors.
Join us on July 19th to learn:
•Critical challenges that the cloud storage industry is facing
•Issues in a multi-cloud provider environment
•Addressing cloud storage interoperability challenges
•How the CDMI standard works
•Benefits of CDMI conformance testing
•Benefits for end user companies
Nancy Bennis, Director of Alliances, Cleversafe, an IBM Company; Alex McDonald, Chair, SNIA Cloud Storage Initiative, NetApp
Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day.
Many organizations, like large service providers, have already begun to leverage software-defined object storage to support new application development and DevOps projects. Meanwhile, legacy enterprise companies are in the early stages of exploring the benefits of object storage for their particular business and are searching for how they can use cloud object storage to modernize their IT strategies, store and protect data while dramatically reducing the costs associated with legacy storage sprawl.
This Webcast will highlight the market trends towards the adoption of object storage, the definition and benefits of object storage, and the use cases that are best suited to leverage an underlying object storage infrastructure.
In this webcast you will learn:
•How to accelerate the transition from legacy storage to a cloud object architecture
•The benefits of object storage
•Primary use cases
•How object storage can enable your private, public or hybrid cloud strategy without compromising security, privacy or data governance
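To make the object model concrete, here is an illustrative, vendor-neutral Python sketch (the key names are hypothetical) of the semantics that distinguish object storage from a file hierarchy: whole objects written and read by key in a flat namespace, each carrying its own user metadata.

```python
class ObjectStore:
    """Minimal in-memory model of object storage semantics:
    a flat namespace of keys, each holding immutable data plus metadata."""

    def __init__(self):
        self._objects = {}

    def put(self, key, data, metadata=None):
        # Objects are written whole; there is no in-place update, append,
        # or directory tree as in a filesystem.
        self._objects[key] = (data, dict(metadata or {}))

    def get(self, key):
        """Return (data, metadata) for a key."""
        return self._objects[key]

store = ObjectStore()
store.put("photos/2016/beach.jpg", b"\xff\xd8...",
          {"content-type": "image/jpeg"})
data, meta = store.get("photos/2016/beach.jpg")
assert meta["content-type"] == "image/jpeg"
```

Note that the slash-separated key only looks hierarchical; unlike a filesystem path, it is just a name in a flat namespace, which is what lets object stores scale out so simply.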
Sam Fineberg, Distinguished Technologist, HPE, Ben Swartzlander, OpenStack Architect, NetApp, Thomas Rivera, SNIA DPCO Chair
This Webcast will focus on the data protection capabilities of the OpenStack Mitaka release, which includes multiple resiliency features. Join Dr. Sam Fineberg, Distinguished Technologist (HPE), and Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), as they discuss:
- Storage-related features of Mitaka
- Data protection capabilities – Snapshots and Backup
- Manila share replication
- Live migration
- Rolling upgrades
- HA replication
Our experts will be on hand to answer your questions.
This Webcast is co-sponsored by two groups within the Storage Networking Industry Association (SNIA): the Cloud Storage Initiative (CSI), and the Data Protection & Capacity Optimization Committee (DPCO).
Computer architecture is undergoing cataclysmic change. New flash tiers have been added to storage, and SSD caching has brought DAS back into servers. Storage Class Memory looms on the horizon, and with it come new storage protocols, new DIMM formats, and even new processor instructions. Meanwhile new chip technologies are phasing in, like 3D NAND flash and 3D XPoint Memory, new storage formats are being proposed including the Open-Channel SSD and Storage Intelligence, and all-flash storage is rapidly migrating into those applications that are not moving into the cloud. The future promises to bring us computing functions embedded within the memory array, learning systems permeating all aspects of computing, and adoption of architectures that are very different from today’s standard von Neumann machines. In this presentation we will examine these technical changes and reflect on ways to avoid designing systems and software that limit our ability to migrate from today’s technologies to those of tomorrow.
Mark Carlson, Principal Engineer, Industry Standards, Toshiba
Cloud computing and storage are maturing, but where are enterprises in their adoption of the cloud? Are they increasingly adopting public cloud? Are they setting up their own private clouds? How successful are they in doing so?
This panel will discuss the issues these customers are facing and how various products, services and data management techniques are addressing those issues.
Michelle Tidwell, SNIA Board Member, IBM Systems Storage, Business Line Manager, Software Defined Storage
We've heard it said that data is the new natural resource. In today's extremely dynamic, fast-growing and interconnected world, businesses need more agile IT infrastructure to handle larger, faster, and increasingly varied Oceans of Data. The rise of cloud and hybrid cloud infrastructures, together with the common practice of server virtualization for efficiency and flexibility, requires storage infrastructure that is equally flexible, to deliver, manage and protect data with superior performance and to keep businesses operational through any data disruption or disaster. The IBM System Storage session will examine the IBM technologies that help address the challenges and pain points IT professionals are experiencing in delivering dynamic insights for businesses and governments worldwide. Included in the IBM session are examples of how clients today are deploying Flash, Object and Software Defined Storage to rapidly and effectively monetize data.
Hyperconverged Infrastructures (HCIs) are popular solutions for a wide range of computing applications in small and medium-sized businesses. Their ease of deployment and operation, plus the ability to consolidate less efficient infrastructures into a comprehensive solution from a single vendor, have made them a good fit in these organizations. However, in the enterprise, companies with over 1000 employees, HCI adoption has been more limited. Certain use cases, such as providing a turnkey infrastructure for an enterprise’s remote and branch offices, are becoming more common. But what other usage scenarios are these larger companies looking at for hyperconverged appliances?
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up. Using DNA to archive data is an attractive possibility because it is extremely dense, with a raw limit of 1 exabyte/mm3 (10^9 GB/mm3), and long-lasting, with observed half-life of over 500 years.
This work presents an architecture for a DNA-based archival storage system. It is structured as a key-value store, and leverages common biochemical techniques to provide random access. We also propose a new encoding scheme that offers controllable redundancy, trading off reliability for density. We demonstrate feasibility, random access, and robustness of the proposed encoding with wet lab experiments. Finally, we highlight trends in biotechnology that indicate the impending practicality of DNA storage.
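The paper's actual encoding uses Huffman coding with rotating nucleotides to avoid error-prone homopolymer runs; as a simplified illustration of the raw density ceiling only (not the paper's scheme), here is a naive two-bits-per-nucleotide mapping:

```python
# Naive mapping: 2 bits per nucleotide, the theoretical maximum density.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def bits_to_dna(bits):
    """Map each pair of bits to one nucleotide of a synthetic strand."""
    assert len(bits) % 2 == 0, "bit string must have even length"
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bits(strand):
    """Recover the original bit string from a sequenced strand."""
    return "".join(DECODE[nt] for nt in strand)

payload = "0110001011"
strand = bits_to_dna(payload)          # "CGAGT"
assert dna_to_bits(strand) == payload  # round-trips losslessly
```

Real encodings give up some of this density to sidestep sequences the chemistry handles poorly, which is exactly the reliability-versus-density trade-off the proposed redundancy scheme makes controllable.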
In the era of data explosion in Cloud-Mobile convergence and Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. The ever increasing data rates and the demand to reduce time to insights will require an integrated approach to data ingest, processing and storage to reduce end-to-end latency, much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage to handle highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service of scalable performance, intelligent manageability, high energy efficiency, and enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the ’90s, the storage architectures of SAN and NAS have been well understood and deployed with a focus on efficiency. With cloud-like applications, the massive scale of data and analytics, and the introduction of solid state and HPC-type applications hitting the data center, the architectures are changing rapidly. It is a time of incredible change and opportunity for business and for the IT staff that supports the change. Welcome to the new world of Enterprise Data Storage.
Moderator: Thomas Rivera, HDS; Panelists: Tony Cox, Cryptsoft; Eric Hibbard, HDS; Walt Hubis, Hubis Tech Ass; Tim Hudson, Cryptsoft
This Webcast will cover the basics of encryption and key management as they relate to storage systems, as well as some of the related best practices.
This Webcast will explore the fundamental concepts of implementing secure enterprise storage using current technologies, and will focus on the implementation of a practical secure storage system. The high-level requirements that drive the implementation of secure storage for the enterprise will be explored, including legal issues, key management, currently available technologies, and fiscal considerations.
There will also be implementation examples that will illustrate how these requirements are applied to actual system implementations.
There will be a Q&A at the end for the audience to ask questions of the panelists.
There are many permutations of technologies, interconnects and application level approaches in play with solid state storage today. It is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. In this webcast Doug Voigt, chair of the SNIA NVM Programming model will outline key architectural principals that may allow us to think about the application of networked solid state technologies more systematically.
Fred Knight, Standards Technologist, NetApp, Andy Banta, Storage Janitor, SolidFire/NetApp, David Fair, Chair, SNIA-ESF
iSCSI is an Internet Protocol-based standard for transferring SCSI commands over TCP/IP networks, enabling hosts to link to storage devices wherever they may be. In this Webcast, we will discuss the evolution of iSCSI including iSER, which is iSCSI technology that takes advantage of various RDMA fabric technologies to enhance performance. Register now to hear:
•A brief history of iSCSI
•How iSCSI works
•IETF refinements to the specification
•Enhancing iSCSI performance with iSER
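As a small aside on how iSCSI endpoints are identified, initiators and targets carry iSCSI Qualified Names (IQNs) of the form iqn.&lt;date&gt;.&lt;reversed-domain&gt;:&lt;name&gt;. This illustrative Python sketch (the example IQN is hypothetical) parses one into its parts:

```python
import re

# iqn.<yyyy-mm the naming authority registered its domain>.<reversed domain>
# optionally followed by ":" and an authority-chosen unique suffix.
IQN_RE = re.compile(r"^iqn\.(\d{4}-\d{2})\.([^:]+)(?::(.+))?$")

def parse_iqn(name):
    """Split an iSCSI Qualified Name into date, naming authority, and suffix."""
    m = IQN_RE.match(name)
    if not m:
        raise ValueError(f"not a valid IQN: {name!r}")
    date, authority, suffix = m.groups()
    return {"date": date, "authority": authority, "suffix": suffix}

parsed = parse_iqn("iqn.2016-04.com.example:storage.disk1")
assert parsed["authority"] == "com.example"
assert parsed["suffix"] == "storage.disk1"
```

Because the naming authority owns the suffix, IQNs stay globally unique without any central registry, which is part of what lets iSCSI devices be addressed "wherever they may be."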
The Webcast will be live, so please bring your questions for our experts.
Jeff Chang, AgigA Tech; Arthur Sainio, SMART Modular; Doug Voigt, HP-E; Mat Young, Netlist
The IT industry has made tremendous progress innovating up and down the computing stack to enable and take advantage of non-volatile memory (NVM). But questions still remain on where NVM plays in the memory stack, how it will evolve in the CPU architecture, and where operating systems will need to be enhanced. Join the SNIA NVDIMM Special Interest Group to learn about the latest developments in NVDIMM, understand how the SNIA NVM Programming Model can be applied in NVM development work, and find your NVM answers!
Wayne Adams, SNIA Board of Directors; Mark Carlson, SNIA Technical Council; Camberley Bates, Evaluator Group General Mgr
This webcast will feature an interactive discussion with the subject matter experts who have organized the Data Storage Innovation Conference, planned for June 13-15, 2016.
Get an overview of the Conference agenda that addresses the most pressing data storage and cloud trends spanning storage class memory, data security, data protection, cloud development and management, new hyper-converged storage systems, big data and analytics, storage networks and protocols, file-systems, technology standards, software defined storage and best practices as they apply to networked storage, data management and data protection.
Attendees will also become aware of conference highlights including Hot Topic sessions, state of the market research study on Enterprise Hyper-converged Storage Deployment, and recently released solutions featured in the Innovation Spotlight. Webcast attendees will be encouraged to follow SNIA developments and webcasts live on June 13-14, as well as attend the Conference.
Alex McDonald, SNIA-ESF Vice Chair, Chad Hintz, SNIA-ESF Board Member
The popular and ubiquitous Network File System (NFS) is a standard protocol that allows applications to store and manage data on a remote computer or server. NFS provides two services: a network part that connects users or clients to a remote system or server, and a file-based view of the data. Together these provide a seamless environment that masks the differences between local files and remote files.
This SNIA Ethernet Storage Forum Webcast is an introduction and overview presentation to NFS for technologists and tech managers interested in understanding:
•NFS history and development
•The facilities and services NFS provides
•Why NFS rose in popularity to dominate file-based services
•Why NFS continues to be important in the cloud
Originally presented at SNIA’s 2015 Storage Developer Conference, this webcast will discuss how Facebook’s massive and continuously growing corpus of photos, videos, and other Binary Large OBjects (BLOBs) need to be reliably stored and quickly accessed.
As the footprint of BLOBs increases, storing them in their traditional storage system, Haystack, is becoming increasingly inefficient. To increase Facebook’s storage efficiency, measured in the effective-replication-factor of BLOBs, they examine the underlying access patterns of BLOBs and identify temperature zones that include hot BLOBs that are accessed frequently and warm BLOBs that are accessed far less often.
Facebook’s overall BLOB storage system is designed to isolate warm BLOBs and enable them to use a specialized warm BLOB storage system, f4. f4 is a new system that lowers the effective-replication-factor of warm BLOBs while remaining fault tolerant and able to support the lower throughput demands.
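The effective-replication-factor is simply physical bytes stored divided by logical bytes of data. A back-of-the-envelope sketch shows why erasure coding helps warm data; the parameters below are illustrative, not Facebook's exact production configuration:

```python
def erf_replication(copies):
    """Plain replication: physical/logical equals the number of full copies."""
    return float(copies)

def erf_reed_solomon(data_blocks, parity_blocks, datacenter_copies=1):
    """Reed-Solomon(k, r) stores (k + r)/k bytes per logical byte,
    multiplied by however many datacenters each hold a full coded copy."""
    return datacenter_copies * (data_blocks + parity_blocks) / data_blocks

# Hot data: three full replicas for throughput and fault tolerance.
assert erf_replication(3) == 3.0

# Warm data: RS(10, 4) coding, with a full coded copy in two datacenters.
assert erf_reed_solomon(10, 4, datacenter_copies=2) == 2.8
```

The trade-off is that a coded read after a failure must reconstruct from many blocks, which costs throughput, and that is acceptable precisely because warm BLOBs are accessed far less often.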
Eric Slack, Sr. Analyst, Evaluator Group, Alex McDonald, Chair, SNIA Cloud Storage, Glyn Bowden, SNIA Cloud Storage Board
A Software Defined Data Center (SDDC) is a compute facility in which all elements of the infrastructure - networking, storage, CPU and security - are virtualized and removed from proprietary hardware stacks. Deployment, provisioning and configuration as well as the operation, monitoring and automation of the entire environment is abstracted from hardware and implemented in software.
The results of this software-defined approach include maximizing agility and minimizing cost, benefits that appeal to IT organizations of all sizes. In fact, understanding SDDC concepts can help IT professionals in any organization better apply these software-defined concepts to storage, networking, compute and other infrastructure decisions.
If you’re interested in Software-Defined Data Centers and how such a thing might be implemented – and why this concept is important to IT professionals who aren’t involved with building data centers – then please join us on March 15th. Eric Slack, Sr. Analyst with Evaluator Group, will explain what “software-defined” really means and why it’s important to all IT organizations, and will join a discussion with Alex McDonald, Chair of SNIA’s Cloud Storage Initiative, about how these concepts apply to the modern data center.
In this webinar we’ll be exploring:
•How a SDDC leverages this concept to make the private cloud feasible
•How we can apply SDDC concepts to an existing data center
•How to develop your own software-defined data center environment
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information.
Many studies have been done on the benefits of Predictive Analytics for customer engagement and changing customer behaviour. The less romanticized side, however, is the benefit to IT operations, as it is sometimes difficult to shift focus from direct revenue-impacting gains to the more indirect revenue gains that can come from optimization and proactive issue resolution.
I will be speaking, from an application operations engineer's perspective, on the benefits to the business of using Predictive Analytics to optimize applications.
I will summarize the stages of analytics maturity that lead an organization from traditional reporting (descriptive analytics: hindsight), through predictive analytics (foresight), and into prescriptive analytics (insight). The benefits of big data (especially high-variety data) will be demonstrated with simple examples that can be applied to significant use cases.
The goal of data science in this case is to discover predictive power and prescriptive power from your data collections, in order to achieve optimal decisions and outcomes.
Join this webinar to see how the CloudPhysics Public Cloud Planning Rightsizer identifies opportunities to lower your costs of running applications on the public cloud.
The Public Cloud Planning Rightsizer automatically identifies on-premises virtual machines (VMs) that are over-provisioned with more resources (such as CPU and memory) than they use. This lets you optimize instance matching to the ideal cloud instances. Rightsizing reveals the verifiable cost of running workloads in the cloud. Now you can answer the question, “will we save money by migrating applications to the cloud?”
This webinar shows how Public Cloud Planning Rightsizer collects resource utilization data from each VM on a fine-grained basis, and then analyzes those data across time to discover the VM’s actual resource needs. Imagine an on-premises VM configured with 8 vCPUs: if the Rightsizer shows that it has never used more than 2 vCPUs, you can Rightsize that VM to a smaller instance in the cloud, saving substantial funds.
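The Rightsizer's internals aren't public; as an illustrative sketch of the idea only (our own code, with made-up sample data), one can size a VM to a high percentile of its observed utilization plus some headroom:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of utilization samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def rightsize_vcpus(provisioned_vcpus, cpu_samples, pct=95, headroom=1.2):
    """Recommend a vCPU count from observed peak-ish usage plus headroom,
    never recommending more than is currently provisioned."""
    needed = math.ceil(percentile(cpu_samples, pct) * headroom)
    return min(provisioned_vcpus, max(1, needed))

# A VM provisioned with 8 vCPUs whose sampled usage never exceeded 2 vCPUs:
samples = [0.5, 1.1, 1.8, 1.6, 2.0, 1.2, 0.9, 1.7]
assert rightsize_vcpus(8, samples) == 3  # a much smaller cloud instance fits
```

Using a high percentile rather than the mean is the key design choice: it protects against undersizing for bursts while still exposing chronic over-provisioning.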
Many enterprise organizations are moving beyond antivirus software, adding new types of controls and monitoring tools to improve incident prevention, detection, and response on their endpoints. Unfortunately, some of these firms are doing so by adding tactical technologies that offer only incremental benefits.
So what’s needed?
A strategic approach that covers the entire ESG endpoint security continuum from threat prevention to incident response. A truly comprehensive solution will also include advanced endpoint security controls that reduce the attack surface and tight integration with network security, SIEM, and threat intelligence to improve threat detection and response processes.
Join ESG senior principal analyst Jon Oltsik, Intel Security, and Bufferzone on a webinar on July 21 at 10am PT/1pm ET to learn more about next-generation endpoint security requirements and strategies.
The ever changing Cloud Service Provider marketplace is filled with growing opportunities and increasing competition. Mike Slisinger, Cloud Solutions Architect at Nutanix, and Chris Feltham, Cloud Solution Sales Manager at Intel, will discuss how Nutanix and Intel collaborate on cloud technologies and solutions to help Cloud Service Providers solve infrastructure challenges and simplify operations. We will also discuss how current Nutanix and Intel powered Service Providers are building differentiated services that provide true business value to their customers.
Attending this webcast should provide Cloud Service Providers with a good understanding of how Intel and Nutanix can help reduce costs of offering cloud services while enabling and growing new revenue streams for business.
Hyperconverged infrastructures combine compute and storage components into a modular, scale-out platform that typically includes a hypervisor and some comprehensive management software. The technology is usually sold as self-contained appliance modules running on industry-standard server hardware with internal HDDs and SSDs. This capacity is abstracted and pooled into a shared resource for VMs running on each module or ‘node’ in the cluster. Hyperconverged infrastructures are sold as stand-alone appliances or as software that companies or integrators can use to build their own compute environments for private or hybrid clouds, special project infrastructures or departmental/remote office IT systems.
Understand what hyperconvergence is – and is not
Understand the capabilities this technology can bring
Discussion of where this technology is going
How and where it is being used in the Enterprise
DMTF’s Platform Management Components Intercommunications (PMCI) Working Group develops standards to address “inside the box” communication and functional interfaces between the components of the platform management subsystem such as management controllers, BIOS, and intelligent management devices. Presented by DMTF’s Senior VP of Technology, Hemal Shah, this webinar will provide an overview of PMCI standards including Management Component Transport Protocol (MCTP), Platform Level Data Model (PLDM) and Network Controller Sideband Interface (NC-SI).
Digital transformation is on the agenda of every company and creates a new focus on agile software development. Join us to learn how platform as a service for software developers and operations (DevOps) transforms the underlying infrastructure cloud. We will cover the IT requirements and the important role of scale-out infrastructure, infrastructure as code and containers for such clouds.
From May 2018, the EU rules on data protection are changing, and all companies with more than 250 employees will need to reassess their practices. What’s more, the penalties for non-compliance are changing too—so now’s the time to get prepared.
The days of each designer having a workstation under their desk are becoming less the norm. Many organizations, particularly in media and entertainment as well as architecture and engineering, are considering leveraging the cloud to provide workstations to solve common IT problems resulting from big data sets, a dispersed and flexible workforce, and increasing concern for data security.
Alex Herrera, a senior analyst with Jon Peddie Research, author, and consultant to the world’s leading computer graphics and semiconductor companies will provide guidance on how organizations can develop an IT strategy to deploy and support a secure cloud model, where pay-as-you-go is the norm.
This session will provide valuable insights including:
• Pros and cons of hosting workstations in the cloud
• How to effectively manage workflows
• Differences between private and public clouds
• Key considerations for cloud deployments
Teradici’s CTO will discuss how customers can effectively leverage Teradici PCoIP Workstation Access Software to securely deliver a seamless end user experience from the cloud.
Those who attend the webinar will receive a copy of the slide deck.
Q&A will follow at the end of the session.