Trends in Data Protection and Restoration Technologies
Many disk technologies, both old and new, are being used to augment tried-and-true backup and data protection methodologies to deliver better information and application restoration performance. These technologies work in parallel with the existing backup paradigm.
This session will discuss many of these technologies in detail. Important considerations of data protection include performance, scale, regulatory compliance, recovery objectives and cost. Technologies include contemporary backup, disk based backups, snapshots, continuous data protection and capacity optimized storage.
Details of how these technologies interoperate will be provided, as well as best-practices recommendations for deployment in today's heterogeneous data centers.
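Snapshots, one of the technologies covered in this session, can be illustrated with a short sketch. The class below is entirely hypothetical, not any vendor's implementation: it models copy-on-write snapshots, where a snapshot preserves a block's old contents only when that block is later overwritten, which is why snapshots are fast to create and space-efficient.

```python
class CowVolume:
    """Toy block volume with copy-on-write snapshots (illustrative only)."""

    def __init__(self):
        self.blocks = {}      # live data: block number -> contents
        self.snapshots = []   # per snapshot: preserved old contents

    def take_snapshot(self):
        self.snapshots.append({})       # starts empty: nothing copied yet
        return len(self.snapshots) - 1  # snapshot id

    def write(self, block, data):
        # Copy-on-write: before overwriting, save the current contents
        # into every snapshot that has not yet preserved this block.
        for snap in self.snapshots:
            if block not in snap and block in self.blocks:
                snap[block] = self.blocks[block]
        self.blocks[block] = data

    def read_snapshot(self, snap_id, block):
        # A snapshot's view: its preserved copy if the block changed
        # after the snapshot was taken, otherwise the live block.
        return self.snapshots[snap_id].get(block, self.blocks.get(block))

vol = CowVolume()
vol.write(0, "version 1")
snap = vol.take_snapshot()
vol.write(0, "version 2")   # only now is "version 1" copied aside
assert vol.read_snapshot(snap, 0) == "version 1"
assert vol.blocks[0] == "version 2"
```

Continuous data protection extends the same idea from discrete snapshots to a continuous journal of every write.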
Recorded Jul 30 2009 · 45 mins
Mark Rogov, EMC, Ken Cantrell, NetApp, Alex McDonald, NetApp
The storage performance benchmarking dynamic duo, Mark Rogov and Ken Cantrell, are back. Having covered storage performance benchmarking fundamentals, the system under test, and most recently block components, this fourth installment of the Webcast series will focus on File Components.
Register now to learn why the File World is different from the Block World. Mark and Ken will walk from basic filesystem theory to how filesystem data layout affects performance, covering:
• Why file?
• Local vs. shared filesystems
• Typical file performance use cases, compared and contrasted
Chad Thibodeau, Principal Product Manager, Veritas, Alex McDonald, Chair SNIA Cloud Storage Initiative
Containers are the latest in a line of new and innovative ways of packaging, managing and deploying distributed applications. In this webcast, we’ll introduce the concept of containers: what they are and the advantages they bring, illustrated by use cases; why you might want to consider them as an app deployment model; and how they differ from VMs or bare metal deployments.
We’ll follow up with a look at what is required from a storage perspective when using Docker, one of the leading systems that provides a lightweight, open and secure environment for the deployment of containers. Finally, we’ll round out our Docker introduction by presenting the takeaways from DockerCon, an industry event for makers and operators of distributed applications built on Docker, that took place in Seattle in June of this year.
Join us for this discussion on:
• Application deployment history
• Containers vs. virtual machines vs. bare metal
• Factors driving containers and common use cases
• Storage ecosystem and features
• Container storage table stakes
• Introduction to Docker
• Key takeaways from DockerCon 2016
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco, Fred Knight, NetApp
Welcome to this first part of the webcast series, where we’re going to take an irreverent, yet still informative look, at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
•What an initiator is
•What a target is
•What a storage controller is
•What a RAID is, and what a RAID controller is
•What a Volume Manager is
•What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
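To make one of those parts concrete: RAID, named above, protects data by striping blocks across drives and storing parity. The sketch below is a simplified illustration of RAID 5's XOR parity, not any controller's implementation; real arrays rotate parity across drives and operate at the block-device level.

```python
# RAID 5 idea: parity is the XOR of the data blocks in a stripe,
# so any single lost drive can be rebuilt by XOR-ing the survivors.

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # three data drives
parity = xor_blocks(data)                        # the parity drive

# Simulate losing drive 1 and rebuilding it from the others plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

The RAID controller is the component that performs exactly this bookkeeping on every write.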
Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on Ethernet. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving towards higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps, making it time to re-introduce Ethernet Networked Storage.
This live Webcast will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:
•The evolution of storage devices - spinning media to NVM
•New standards: NVMe and NVMe over Fabric
•A retrospective of traditional networked storage, including SAN and NAS
•How new storage devices and new standards will impact Ethernet networked storage
•Ethernet based software-defined storage and the hyper-converged model
•A look ahead at new Ethernet technologies optimized for networked storage in the future
Register today for this live Webcast where our experts will be on hand to answer your questions.
Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. Join this Webcast to learn about the major challenges faced by businesses leveraging services from multiple cloud providers or moving from one cloud provider to another.
The SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data interoperability between clouds. SNIA and Tata Consultancy Services (TCS) have partnered to create a SNIA CDMI Conformance Test Program to help cloud storage providers achieve CDMI conformance.
As interoperability becomes critical, end user companies should include the CDMI standard in their RFPs and demand conformance to CDMI from vendors.
Join us on July 19th to learn:
•Critical challenges that the cloud storage industry is facing
•Issues in a multi-cloud provider environment
•Addressing cloud storage interoperability challenges
•How the CDMI standard works
•Benefits of CDMI conformance testing
•Benefits for end user companies
Nancy Bennis, Director of Alliances, Cleversafe, an IBM Company, Alex McDonald, Chair, SNIA Cloud Storage Initiative, NetApp
Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day.
Many organizations, like large service providers, have already begun to leverage software-defined object storage to support new application development and DevOps projects. Meanwhile, legacy enterprise companies are in the early stages of exploring the benefits of object storage for their particular business and are searching for how they can use cloud object storage to modernize their IT strategies, store and protect data while dramatically reducing the costs associated with legacy storage sprawl.
This Webcast will highlight the market trends towards the adoption of object storage, the definition and benefits of object storage, and the use cases that are best suited to leverage an underlying object storage infrastructure.
In this webcast you will learn:
•How to accelerate the transition from legacy storage to a cloud object architecture
•The benefits of object storage
•Primary use cases
•How object storage can enable your private, public or hybrid cloud strategy without compromising security, privacy or data governance
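As a rough illustration of the model described above: an object store presents a flat namespace of keys, each holding an immutable blob plus user-defined metadata, rather than a filesystem's hierarchy of mutable files. The toy class below (all names hypothetical) sketches the basic put/get interface:

```python
class ObjectStore:
    """Toy flat-namespace object store (illustrative sketch only)."""

    def __init__(self):
        self._objects = {}  # key -> {"data": blob, "metadata": dict}

    def put(self, key, data, metadata=None):
        # Objects are written whole; metadata travels with the blob.
        self._objects[key] = {"data": data, "metadata": metadata or {}}

    def get(self, key):
        obj = self._objects[key]
        return obj["data"], obj["metadata"]

store = ObjectStore()
store.put("logs/2016-07-19.txt", b"app started",
          metadata={"content-type": "text/plain", "retention": "7y"})
data, meta = store.get("logs/2016-07-19.txt")
```

Note that `logs/2016-07-19.txt` is just a key string; the slash implies no directory, which is what lets object stores scale to billions of objects.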
Sam Fineberg, Distinguished Technologist, HPE, Ben Swartzlander, OpenStack Architect, NetApp, Thomas Rivera, SNIA DPCO Chair
This Webcast will focus on the data protection capabilities of the OpenStack Mitaka release, which includes multiple resiliency features. Join Dr. Sam Fineberg, Distinguished Technologist (HPE), and Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), as they discuss:
- Storage-related features of Mitaka
- Data protection capabilities – Snapshots and Backup
- Manila share replication
- Live migration
- Rolling upgrades
- HA replication
Our experts will be on hand to answer your questions.
This Webcast is co-sponsored by two groups within the Storage Networking Industry Association (SNIA): the Cloud Storage Initiative (CSI), and the Data Protection & Capacity Optimization Committee (DPCO).
Computer architecture is undergoing cataclysmic change. New flash tiers have been added to storage, and SSD caching has brought DAS back into servers. Storage Class Memory looms on the horizon, and with it come new storage protocols, new DIMM formats, and even new processor instructions. Meanwhile new chip technologies are phasing in, like 3D NAND flash and 3D XPoint Memory, new storage formats are being proposed including the Open-Channel SSD and Storage Intelligence, and all-flash storage is rapidly migrating into those applications that are not moving into the cloud. The future promises to bring us computing functions embedded within the memory array, learning systems permeating all aspects of computing, and adoption of architectures that are very different from today’s standard von Neumann machines. In this presentation we will examine these technical changes and reflect on ways to avoid designing systems and software that limit our ability to migrate from today’s technologies to those of tomorrow.
Mark Carlson, Principal Engineer, Industry Standards, Toshiba
Cloud computing and storage are maturing, but where are enterprises in their adoption of the cloud? Are they increasingly adopting public cloud? Are they setting up their own private clouds? How successful are they in doing so?
This panel will discuss the issues these customers are facing and how various products, services and data management techniques are addressing those issues.
Michelle Tidwell, SNIA Board Member, IBM Systems Storage, Business Line Manager, Software Defined Storage
We've heard it said that data is the new natural resource. In today's extremely dynamic, fast-growing and interconnected world, businesses need more agile IT infrastructure to handle larger, faster and more varied oceans of data. The rise of cloud and hybrid cloud infrastructures and the common practice of server virtualization for efficiency and flexibility require storage infrastructure that is equally flexible, to deliver, manage and protect data with superior performance and keep businesses operational through any data disruption or disaster. The IBM System Storage session will examine the IBM technologies that will help address the challenges and pain points that IT professionals are experiencing to deliver dynamic insights for businesses and governments worldwide. Included in the IBM session are examples of how clients today are deploying Flash, Object and Software Defined Storage to rapidly and effectively deliver monetization of data.
Hyperconverged Infrastructures (HCIs) are popular solutions for a wide range of computing applications in small and medium-sized businesses. Their ease of deployment and operation, plus the ability to consolidate less efficient infrastructures into a comprehensive solution from a single vendor, have made them a good fit in these organizations. However, in the enterprise, companies with over 1000 employees, HCI adoption has been more limited. Certain use cases, such as providing a turnkey infrastructure for an enterprise’s remote and branch offices, are becoming more common. But what other usage scenarios are these larger companies looking at for hyperconverged appliances?
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up. Using DNA to archive data is an attractive possibility because it is extremely dense, with a raw limit of 1 exabyte/mm3 (10^9 GB/mm3), and long-lasting, with observed half-life of over 500 years.
This work presents an architecture for a DNA-based archival storage system. It is structured as a key-value store, and leverages common biochemical techniques to provide random access. We also propose a new encoding scheme that offers controllable redundancy, trading off reliability for density. We demonstrate feasibility, random access, and robustness of the proposed encoding with wet lab experiments. Finally, we highlight trends in biotechnology that indicate the impending practicality of DNA storage.
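The core of the idea can be sketched simply: each DNA base carries 2 bits, so bytes map directly to strands of A/C/G/T. The snippet below shows only this naive base-4 mapping; the encoding proposed in the work adds addressing for random access and controllable redundancy on top of it.

```python
# Naive illustration: 2 bits per nucleotide, 4 bases per byte.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    """Map each byte to four bases, most-significant bit pair first."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data
                   for shift in (6, 4, 2, 0))

def decode(strand: str) -> bytes:
    """Invert encode(): rebuild each byte from four bases."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

assert decode(encode(b"hello")) == b"hello"
assert len(encode(b"hello")) == 20  # 4 bases per byte
```

Real encodings avoid long runs of identical bases (homopolymers), which are error-prone to synthesize and sequence; that constraint is one reason the practical density is below the raw 2-bits-per-base limit.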
In the era of data explosion in Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. Ever-increasing data rates and the demand to reduce time to insights will require an integrated approach to data ingest, processing and storage, one that reduces end-to-end latency while delivering much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage to handle highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service of scalable performance, intelligent manageability, high energy efficiency, and enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the ’90s the storage architectures of SAN and NAS have been well understood and deployed with a focus on efficiency. With cloud-like applications, the massive scale of data and analytics, and the introduction of solid state and HPC-type applications hitting the data center, the architectures are changing rapidly. It is a time of incredible change and opportunity for business and the IT staff that supports the change. Welcome to the new world of Enterprise Data Storage.
Moderator - Thomas Rivera, HDS; Panelists - Tony Cox, Cryptsoft; Eric Hibbard, HDS; Walt Hubis, Hubis Tech Ass; Tim Hudson, Cryptsoft
This WebCast will cover the basics of Encryption & Key Management as it relates to storage systems, as well as some of the related Best Practices.
This WebCast will explore the fundamental concepts of implementing secure enterprise storage using current technologies, and will focus on the implementation of a practical secure storage system. The high level requirements that drive the implementation of secure storage for the enterprise, including legal issues, key management, current available technologies, as well as fiscal considerations will be explored.
There will also be implementation examples that will illustrate how these requirements are applied to actual system implementations.
There will also be a Q&A at the end for the audience to ask questions of the panelists.
There are many permutations of technologies, interconnects and application-level approaches in play with solid state storage today. It is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. In this webcast Doug Voigt, chair of the SNIA NVM Programming model, will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically.
Fred Knight, Standards Technologist, NetApp, Andy Banta, Storage Janitor, SolidFire/NetApp, David Fair, Chair, SNIA-ESF
iSCSI is an Internet Protocol standard for transferring SCSI commands across an Ethernet network, enabling hosts to link to storage devices wherever they may be. In this Webcast, we will discuss the evolution of iSCSI including iSER, which is iSCSI technology that takes advantage of various RDMA fabric technologies to enhance performance. Register now to hear:
•A brief history of iSCSI
•How iSCSI works
•IETF refinements to the specification
•Enhancing iSCSI performance with iSER
The Webcast will be live, so please bring your questions for our experts.
Jeff Chang, AgigA Tech; Arthur Sainio, SMART Modular; Doug Voigt, HPE; Mat Young, Netlist
The IT industry has made tremendous progress innovating up and down the computing stack to enable and take advantage of non-volatile memory (NVM). But questions still remain on where NVM plays in the memory stack, how it will evolve in the CPU architecture, and where operating systems will need to be enhanced. Join the SNIA NVDIMM Special Interest Group to learn about the latest developments in NVDIMM, understand how the SNIA NVM Programming Model can be applied in NVM development work, and find your NVM answers!
Wayne Adams, SNIA Board of Directors; Mark Carlson, SNIA Technical Council; Camberley Bates, Evaluator Group General Mgr
This webcast will feature an interactive discussion with the subject matter experts who have organized the Data Storage Innovation Conference, planned for June 13-15, 2016.
Get an overview of the Conference agenda that addresses the most pressing data storage and cloud trends spanning storage class memory, data security, data protection, cloud development and management, new hyper-converged storage systems, big data and analytics, storage networks and protocols, file-systems, technology standards, software defined storage and best practices as they apply to networked storage, data management and data protection.
Attendees will also become aware of conference highlights including Hot Topic sessions, a state-of-the-market research study on Enterprise Hyper-converged Storage Deployment, and recently released solutions featured in the Innovation Spotlight. Webcast attendees will be encouraged to follow SNIA developments and webcasts live on June 13-14, as well as attend the Conference.
The Storage Networking Industry Association (SNIA) is a not-for-profit global organization, made up of some 400 member companies spanning virtually the entire storage industry. SNIA's mission is to lead the storage industry worldwide in developing and promoting standards, technologies, and educational services to empower organizations in the management of information.
Within the financial services industry, middle office analytics and simulations continue to grow in volume and complexity. Massive compute and storage demands cause strain on IT resources. While new technologies promise speed and scalability, evaluating this unique middle office environment requires a look at compliance, risk, and pricing analytics to determine potential gains and losses. In this webinar, IDC – Financial Insights Research Director, Bill Fearnley, looks at current middle office IT workflows supporting analytics, backtesting and financial modeling and evaluates a hybrid cloud infrastructure to support growing demands.
In this webinar, you’ll:
· Hear an IDC Analyst’s view on the current financial services IT environment
· Learn of common challenges and approaches to combat growing strain on compute and storage infrastructure
· Join in a discussion about the viability of enabling cloud services to expand compute and storage capacity
· Gain guidance on how large hedge funds and investment banks are overcoming inherent cloud challenges like latency, data accessibility, and cost management
There has been a great deal of interest in graphene. Some would call it hype. But with its flexibility and heat-conduction properties, this atom-thin layer of carbon, which has been touted as the strongest material ever measured, has enormous product and market potential for the ICT industry.
Because graphene is conductive at nano-scale layers, it can be used for lightweight, flexible yet durable display screens, electric circuits and solar cells. It is also currently being made into inks and 3D printable materials. Imagine what this can mean for the design of communications devices, or circuitry, or batteries. Imagine the impact on wearables, the design and development of IoT sensors, or large scale retail store windows. Graphene holds a great deal of promise.
Despite the potential graphene promises, it has taken longer than expected to transform research and development into commercialized product.
This webcast will explore both the tremendous potential harbored in those structured carbon atoms and the business reality. The focus will be on the use of the material for the ICT industry. We will also look at other use cases that may be the first steps on graphene’s path to commercial application.
- Dr. Stephen Hodge, Research Associate at the Cambridge Graphene Centre, Engineering Department, University of Cambridge
- Anthony Schiavo, Research Associate, Advanced Materials Team, Lux Research, Inc.
- Limor Schafman, Director of Content Development, TIA (Moderator)
Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of happened”. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.
Surveys show that 61% of companies have experienced slow applications after server virtualization with 77% pointing to I/O problems as the culprit.
Now, companies are looking to take the next step to fulfill their vision of consolidating and reducing the complexity of their infrastructure. But this will only happen if their applications get the I/O performance they need.
This is where DataCore’s Parallel I/O technology comes in. By processing I/Os in parallel leveraging multi-core, multi-processor systems, Parallel I/O delivers industry leading I/O response times as well as price/performance. The net benefit is that fewer storage nodes can provide much better performance, allowing you to reduce and simplify your infrastructure.
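The general principle, independent of DataCore's actual product, can be sketched in a few lines: instead of funneling requests through a single thread, independent I/Os are dispatched across a pool of workers so that multiple cores keep the storage busy concurrently. The example below uses a thread pool to write eight blocks in parallel; all names and block sizes are illustrative.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def write_block(args):
    """Write one block to its own file and report the block id."""
    path, block_id, payload = args
    with open(os.path.join(path, f"block_{block_id}.bin"), "wb") as f:
        f.write(payload)
    return block_id

with tempfile.TemporaryDirectory() as tmp:
    # Eight independent 4 KiB writes, dispatched across four workers.
    jobs = [(tmp, i, bytes([i]) * 4096) for i in range(8)]
    with ThreadPoolExecutor(max_workers=4) as pool:
        done = list(pool.map(write_block, jobs))
    assert sorted(done) == list(range(8))
```

The point of the sketch is the dispatch pattern, not the file I/O itself: with enough independent requests in flight, response time is bounded by device latency rather than by a serialized software path.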
Do you run a mix of virtualized and diverse workloads, including block storage? Are you looking to increase density and maintain blazingly fast speeds? If so, this webinar is for you!
In this webinar, speakers from DataCore and SanDisk will discuss the performance and economic advantages of combining software-defined-storage with all-flash storage. We’ll also share two customer stories on how they were able to:
- Achieve effortless and non-disruptive data migration from magnetic to flash storage
- Prevent storage-related downtime
- Dynamically control the movement of data from flash to high-capacity storage
- Strike the right economic balance between fast performance and low cost
Don’t let data growth and complex workloads slow you down. Attend this webinar and learn about new possibilities.
The use of broadband Internet connections in an SD-WAN environment has many benefits; however, for any enterprise, performance and reliability cannot be compromised. An SD-WAN solution must include all the functionality needed to meet these essential requirements, delivering outstanding performance and Quality of Service by:
•Actually improving the quality of the bandwidth you already have, instead of routing around it
•Enabling centralized control and administration of network-wide policies
•Providing detailed visibility into real-time and historical application and network trends
•Allowing for the modular deployment of WAN optimization to ensure performance when you need it, where you need it
This all adds up to an enterprise-grade, performance-centric offering that allows your SD-WAN to rapidly connect users to the applications they need. Deployment times are reduced significantly and enterprises enjoy enhanced performance, visibility and control over the entire network.
OPNFV is an open community project developing solutions for transforming to Network Functions Virtualization (NFV) and Software Defined Networking (SDN). Learn the progress the vendor and service provider communities are making to accelerate the transformation.
Network segmentation is an effective strategy for protecting access to key data assets and impeding the lateral movement of threats and cyber criminals inside your data center. With network virtualization, such as VMware NSX, now a reality, it is far easier and quicker to set up granular security policies for east-west traffic within the data center. Yet the added granularity of security policies creates significant complexity.
Presented by renowned industry expert Professor Avishai Wool, this technical webinar will provide strategies and best practices to help organizations migrate and manage security policies efficiently within a micro-segmented data center.
During the webinar Professor Wool will cover how to:
· Identify and securely migrate legacy applications to a micro-segmented data center
· Effectively define and enforce security policies for East-West traffic
· Manage the micro-segmented data center alongside traditional on-premise security devices
· Identify risk and manage compliance in a micro-segmented data center
· Use network segmentation to reduce the scope of regulatory audits
· Identify and avoid common network segmentation mistakes
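To illustrate what a granular east-west policy amounts to (the rule format here is hypothetical, not NSX's), a segmentation engine evaluates each flow against an ordered rule list with a default-deny fallback:

```python
# Hypothetical micro-segmentation rules: first match wins, default deny.
RULES = [
    {"src": "web", "dst": "app", "port": 8443, "action": "allow"},
    {"src": "app", "dst": "db",  "port": 5432, "action": "allow"},
]

def evaluate(src, dst, port):
    """Return the action for a flow between two workload groups."""
    for rule in RULES:
        if rule["src"] == src and rule["dst"] == dst and rule["port"] == port:
            return rule["action"]
    return "deny"  # default-deny: anything unmatched is blocked

assert evaluate("web", "app", 8443) == "allow"
assert evaluate("web", "db", 5432) == "deny"  # web may not reach db directly
```

The complexity the webinar addresses follows directly from this model: every legitimate east-west flow must be enumerated as a rule, or the default-deny breaks it.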
Are you a storage admin running business-critical workloads on vSphere? Replacing a traditional storage environment with hyper-converged infrastructure (HCI) solutions can give you a simpler, more efficient way to manage resources, and eliminate the guesswork that often leads to overprovisioning.
Learn what HCI can do to alleviate some pressure. We’ll discuss how VMware Virtual SAN 6.2 powers HCI with a new operational model for shared storage, including features that complement high-end SANs.
By offloading just one virtualized workload from SAN to Virtual SAN, you can reserve more expensive SAN or NAS capacity for higher-value workloads.
Topics in the webcast include:
- The advantages of VM-centric, policy-based storage
- Avoiding frequent storage requests for transient workloads
- Simplifying capacity planning by scaling compute and storage in tandem
- Focusing on optimizing production workloads
Easy to learn, and with the broadest set of consumption models, see how Virtual SAN hyper-converged storage powers radically simple HCI solutions that solve critical problems for storage admins.