Tackling the Storage Challenges of Rapid Data Growth
As data continues to grow at an alarming rate, IT needs to be as smart about how it stores data as about how much storage it purchases. In this panel, experts from HGST and Code42 will discuss how data growth is affecting the storage industry with regard to cold storage, HDDs, backup, and archiving, along with best practices for developing a comprehensive storage strategy.
Recorded May 16, 2013 | 60 mins
Meeting storage-related requirements has been a long-standing challenge for IT organizations, and added workload requirements from cloud- and software-defined architectures can quickly add to the burden. Common goals are to implement solutions that provide high availability and high performance with low capital and operational costs. The Windows Server 2016 platform includes a tremendous list of new and improved features that are available "out of the box", which makes the biggest barrier understanding how, when, and why you should implement these features.
This presentation will cover a wide array of different features in the Windows Server platform, including Storage Spaces and Storage Spaces Direct; SMB 3.x improvements; storage tiering; Storage QoS; Storage Replica; data de-duplication; and many others. When compared to the costs and administrative complexity of traditional SANs, these tools can provide ready solutions for environments of all sizes and types. The focus will be on technical details about the features and capabilities of the Windows Server platform, and how organizations can make best use of them.
Join Anil Desai, independent consultant with over 20 years of experience in architecting, implementing, and managing IT software and datacenter solutions. He has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a twelve-time Microsoft MVP Award recipient (currently Cloud/Datacenter Management).
Anil is the author of over 20 technical books focusing on the Windows Server platform, virtualization, databases, and IT management best practices. He is also a frequent contributor to IT publications and conferences.
Petros Koutoupis, Lead Linux Systems Developer, Cleversafe, an IBM Company; Creator of RapidDisk
While Software-Defined Storage (SDS) solutions promise us the world, they have recently begun to show their shortcomings, almost all of which center on the hardware. Not all commodity hardware is created equal, and not all SDS solutions are equipped to handle these variations. This gap becomes problematic and in many cases affects everything from overall functionality to performance.
Join us to discuss these shortcomings and how to not only resolve but also prevent them from both a hardware and software standpoint.
Greg Schulz, Founder/Sr. Advisory Analyst, Server StorageIO
Data Infrastructures exist to support applications and their underlying resource needs. Software-Defined Infrastructures (SDI) are what enable Software-Defined Data Centers, and at the heart of a SDI is storage that is software-defined. This spans cloud, virtual, and physical storage, and is a focal point of today's data centers. Join us in this session to discuss trends, technologies, tools, techniques and services around SDI and SDDC – today, tomorrow, and in the years to come.
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco
Welcome to this first part of the webcast series, where we’re going to take an irreverent, yet still informative, look at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
•What an initiator is
•What a target is
•What a storage controller is
•What a RAID is, and what a RAID controller is
•What a Volume Manager is
•What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
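One of those fundamental parts – the RAID controller – is easier to grasp with a concrete sketch. The snippet below is a toy Python illustration of RAID-5-style parity (not any vendor's implementation): the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors plus parity.

```python
# Toy illustration of RAID-5-style parity: the parity block is the XOR of the
# data blocks, so any single missing block can be reconstructed from the rest.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def make_parity(data_blocks):
    return xor_blocks(data_blocks)

def reconstruct(surviving_blocks, parity):
    """Rebuild the one missing data block from the survivors plus parity."""
    return xor_blocks(surviving_blocks + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)

# Simulate losing the middle block, then rebuild it:
rebuilt = reconstruct([data[0], data[2]], parity)
assert rebuilt == b"BBBB"
```

The same XOR identity is why a RAID-5 array survives exactly one drive failure: with two blocks gone, the equation no longer has a unique solution.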
A discussion and examination of Real World Storage Workloads and how different workloads affect Storage and SSD performance. Using free IO capture applets from www.TestMyWorkload.com, see what IO Captures look like and identify the key real world workload metrics.
An example 24-hour workload from a 2,000-outlet retail store web portal is examined using advanced data analytic tools. Overall 24-hour cumulative workloads and specific process segments are used to test three different SSDs and observe their relative performance. See how much performance and endurance different SSDs provide and how to select the best SSD for your application and use case.
IO Captures can be done easily and quickly on any laptop, desktop, server or data center. Join a community dedicated to the capture, analysis and accumulation of user workloads for the benefit of the storage industry.
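To make "key real world workload metrics" concrete, here is a minimal Python sketch of the kind of statistics an IO capture yields. The three-field trace format (operation, byte offset, size) is an assumption for illustration only – real capture tools record far richer per-IO detail.

```python
# Minimal sketch of common IO workload metrics computed from a trace of
# (op, offset_bytes, size_bytes) tuples. The trace format is assumed for
# illustration; real IO captures carry timestamps, queue depth, latency, etc.

def workload_metrics(trace):
    reads = [io for io in trace if io[0] == "R"]
    total = len(trace)
    return {
        "read_pct": 100.0 * len(reads) / total,              # read/write mix
        "avg_io_kib": sum(io[2] for io in trace) / total / 1024,
        "seq_pct": 100.0 * sum(                              # sequentiality
            1 for prev, cur in zip(trace, trace[1:])
            if prev[1] + prev[2] == cur[1]   # next IO starts where last ended
        ) / (total - 1),
    }

trace = [("R", 0, 4096), ("R", 4096, 4096),
         ("W", 1_000_000, 65536), ("R", 8192, 4096)]
m = workload_metrics(trace)
assert m["read_pct"] == 75.0    # 3 of 4 IOs are reads
assert m["avg_io_kib"] == 19.0  # (3*4096 + 65536) bytes over 4 IOs
```

Even these three numbers – read/write mix, average IO size, and sequentiality – go a long way toward explaining why two SSDs rank differently under the same nominal workload.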
Different workloads demand different attributes from their storage. These differences lead some to believe flash storage is only good for certain point use cases like accelerating databases. But the performance of flash systems lead others to claim a single flash system can support all workloads. The truth, as usual, is somewhere in the middle. Join Storage Switzerland and IBM for this live interactive webinar where we bust another flash myth and help you select the right flash for the right workload for the right reasons.
Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on it. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving toward higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps, making it time to re-introduce Ethernet networked storage.
This live Webcast will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:
•The evolution of storage devices - spinning media to NVM
•New standards: NVMe and NVMe over Fabric
•A retrospective on traditional networked storage, including SAN and NAS
•How new storage devices and new standards will impact Ethernet networked storage
•Ethernet based software-defined storage and the hyper-converged model
•A look ahead at new Ethernet technologies optimized for networked storage in the future
Register today for this live Webcast where our experts will be on hand to answer your questions.
Most organizations making an investment in NetApp Filers count on the system to store user data and host virtual machine datastores from an environment like VMware. In addition, these organizations want their NetApp systems to do more and become the repository for the next wave of unstructured data: data generated by machines. NetApp systems are bursting at the seams, so these organizations are trying to decide what to do next.
To help you find out what to do next, join Storage Switzerland and Caringo for our live webinar and learn:
1. The modern unstructured data use cases
2. The challenges NetApp faces in addressing its customers’ issues
3. Other solutions: can all-flash or object storage solve these challenges?
4. Making the move: how to migrate from NetApp to other systems
5. How to re-purpose, instead of replacing, your NetApp
Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. Join this Webcast to learn the major challenges facing businesses that leverage services from multiple cloud providers or move from one cloud provider to another.
The SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data interoperability between clouds. SNIA and Tata Consultancy Services (TCS) have partnered to create a SNIA CDMI Conformance Test Program to help cloud storage providers achieve CDMI conformance.
As interoperability becomes critical, end user companies should include the CDMI standard in their RFPs and demand conformance to CDMI from vendors.
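For a flavor of what CDMI conformance means on the wire, the sketch below builds (but does not send) a CDMI-style object read in Python. The endpoint URL is hypothetical; the header and media-type names are the ones defined by the CDMI specification.

```python
# Sketch of a CDMI-style object read. The endpoint is hypothetical; the
# X-CDMI-Specification-Version header and application/cdmi-object media
# type come from the SNIA CDMI specification.
import urllib.request

req = urllib.request.Request(
    "https://cloud.example.com/cdmi/container/report.txt",  # hypothetical URL
    method="GET",
    headers={
        "X-CDMI-Specification-Version": "1.1.1",
        "Accept": "application/cdmi-object",
    },
)

# The request is built but never sent; inspecting it shows the CDMI contract
# that a conformant cloud storage provider must honor.
assert req.get_method() == "GET"
assert req.get_header("Accept") == "application/cdmi-object"
```

Because every conformant provider honors the same verbs, headers, and media types, the same request works unchanged when data moves between clouds – which is exactly the interoperability the conformance test program verifies.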
Join us on July 19th to learn:
•Critical challenges that the cloud storage industry is facing
•Issues in a multi-cloud provider environment
•Addressing cloud storage interoperability challenges
•How the CDMI standard works
•Benefits of CDMI conformance testing
•Benefits for end user companies
Nancy Bennis, Director of Alliances, Cleversafe, an IBM Company; Alex McDonald, Chair, SNIA Cloud Storage Initiative, NetApp
Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day.
Many organizations, like large service providers, have already begun to leverage software-defined object storage to support new application development and DevOps projects. Meanwhile, legacy enterprise companies are in the early stages of exploring the benefits of object storage for their particular business and are searching for how they can use cloud object storage to modernize their IT strategies, store and protect data while dramatically reducing the costs associated with legacy storage sprawl.
This Webcast will highlight the market trends towards the adoption of object storage, the definition and benefits of object storage, and the use cases that are best suited to leverage an underlying object storage infrastructure.
In this webcast you will learn:
•How to accelerate the transition from legacy storage to a cloud object architecture
•The benefits of object storage
•Primary use cases
•How object storage can enable your private, public or hybrid cloud strategy without compromising security, privacy or data governance
NoSQL databases like Cassandra and Couchbase are quickly becoming key components of the modern IT infrastructure. But this modernization creates new challenges – especially for storage, in the broad sense. In-memory databases perform well when enough memory is available; however, when data sets grow too large and must access storage, application performance degrades dramatically. Moreover, even when enough memory is available, persistent client requests can bring the servers to their knees.
Join Storage Switzerland and Plexistor where you will learn:
1. What Cassandra and Couchbase are
2. Why organizations are adopting them
3. The storage challenges they create
4. How organizations attempt to work around these challenges
5. How to design a solution to these challenges instead of a workaround
Sam Fineberg, Distinguished Technologist, HPE, Ben Swartzlander, OpenStack Architect, NetApp, Thomas Rivera, SNIA DPCO Chair
This Webcast will focus on the data protection capabilities of the OpenStack Mitaka release, which includes multiple resiliency features. Join Dr. Sam Fineberg, Distinguished Technologist (HPE), and Ben Swartzlander, Project Team Lead OpenStack Manila (NetApp), as they discuss:
- Storage-related features of Mitaka
- Data protection capabilities – Snapshots and Backup
- Manila share replication
- Live migration
- Rolling upgrades
- HA replication
Our experts will be on hand to answer your questions.
This Webcast is co-sponsored by two groups within the Storage Networking Industry Association (SNIA): the Cloud Storage Initiative (CSI), and the Data Protection & Capacity Optimization Committee (DPCO).
Demand for data storage is growing exponentially, but the capacity of existing storage media is not keeping up. Using DNA to archive data is an attractive possibility because it is extremely dense, with a raw limit of 1 exabyte/mm3 (10^9 GB/mm3), and long-lasting, with observed half-life of over 500 years.
This work presents an architecture for a DNA-based archival storage system. It is structured as a key-value store, and leverages common biochemical techniques to provide random access. We also propose a new encoding scheme that offers controllable redundancy, trading off reliability for density. We demonstrate feasibility, random access, and robustness of the proposed encoding with wet lab experiments. Finally, we highlight trends in biotechnology that indicate the impending practicality of DNA storage.
In the era of data explosion in Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. Ever-increasing data rates and the demand to reduce time to insight will require an integrated approach to data ingest, processing, and storage, one that delivers lower end-to-end latency, much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage for highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service of scalable performance, intelligent manageability, high energy efficiency, and enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the ’90s, the storage architectures of SAN and NAS have been well understood and deployed with a focus on efficiency. With cloud-like applications, the massive scale of data and analytics, and the introduction of solid state and HPC-type applications hitting the data center, these architectures are changing rapidly. It is a time of incredible change and opportunity for business and for the IT staff that supports the change. Welcome to the new world of Enterprise Data Storage.
Organizations looking to move some or all of their workloads to the cloud will at some point look for a way to provide those applications with basic file services. In this live webinar Storage Switzerland and SoftNAS will lead an in-depth discussion of why organizations need cloud based file services and an analysis of the various file services solutions. In this webinar you will learn:
* Why organizations need cloud-based file services
* What the use cases for cloud file services are
* What cloud file services solutions are available
* The pros and cons of the various cloud file services solutions
There are many permutations of technologies, interconnects and application-level approaches in play with solid state storage today. It is becoming increasingly difficult to reason clearly about which problems are best solved by various permutations of these. In this webcast Doug Voigt, chair of the SNIA NVM Programming Model, will outline key architectural principles that may allow us to think about the application of networked solid state technologies more systematically.
Randy Kerns, Senior Strategist and Analyst, Evaluator Group
The movement away from electro-mechanical devices for primary storage to solid state technology continues. Flash technology has been immensely successful and continues to advance with greater density and lower cost. It is also driving changes in storage interfaces and protocols, from disk-based to memory-based. Additional solid state technologies are entering the market and will create a hierarchy of storage tiers with different performance and cost characteristics. This presentation will discuss some of these changes and their potential impacts.
The hottest topics for storage and infrastructure professionals
The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.
Today, companies are increasingly looking into HCI solutions as server virtualization becomes pervasive, the cost of server-side flash drops, and demand increases for operational efficiency without silos.
Join us to learn about HCI trends and VMware hyper-converged software. We’ll discuss how your environment can benefit, and how you can build a simple, efficient and very cost-effective hyper-converged infrastructure—without starting from scratch.
Enterprises are widely adopting hyperconverged infrastructure to transform the way they deliver IT services. At the same time, with dropping prices and increasing storage density, we’ve reached an inflection point that is transforming decisions around all-flash deployments as well. If HCI is the path to the future, shouldn’t your storage decisions reflect that? With emerging technologies such as NVMe and 3D XPoint rapidly coming into the market, this session will dig into the new realities for enterprise datacenters and what could be the ideal way to deploy flash.
Join us for this insightful look into object storage for developers with Caringo Product Manager Ryan Meek. Ryan will take a close look at best-of-breed object storage architectures and discuss best practices for product integration through the HTTP REST API and the upcoming Dart SDK module and Search API.
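As a taste of the HTTP REST style of object storage, here is a minimal Python sketch that builds (but does not send) an object PUT. The endpoint and bucket names are hypothetical and are not Caringo's actual API.

```python
# Minimal sketch of object storage's HTTP REST style: each object is a URL,
# and PUT/GET/DELETE are essentially the whole interface. The endpoint and
# bucket names below are hypothetical, not any vendor's actual API.
import urllib.request

payload = b"hello, object world"
req = urllib.request.Request(
    "http://storage.example.com/bucket/hello.txt",   # hypothetical endpoint
    data=payload,
    method="PUT",
    headers={"Content-Type": "text/plain"},
)

# Built but not sent: the point is that one URL plus one verb addresses
# one object, with metadata carried in plain HTTP headers.
assert req.get_method() == "PUT"
assert req.data == payload
```

That flat URL-per-object model, with no filesystem hierarchy or POSIX semantics in the way, is what makes object stores so straightforward to integrate from any language with an HTTP client.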
High Availability doesn’t trump Disaster Recovery and there is nothing simple about creating a recovery capability for your business – unless you have a set of data protection and business continuity services that can be applied intelligently to your workload, managed centrally, and tested non-disruptively. The good news is that developing such a capability, which traditionally required the challenge of selecting among multiple point product solutions then struggling to fit them into a coherent disaster prevention and recovery framework, just got a lot easier.
Join us and learn how DataCore’s Software-Defined and Hyper-Converged Storage platform provides the tools you need and a service management methodology you require to build a fully functional recovery strategy at a cost you can afford.
Worried that storage infrastructure can’t support petabyte growth or next-generation workloads? Do you want to move more workloads to the cloud to help reduce costs and enable new opportunities for your business? If so, this webinar is for you!
Red Hat Ceph Storage is a massively scalable (we’re talking petabytes and beyond), software-defined storage solution that delivers unified storage (block, file, object) for your cloud environment. The challenge at petabyte scale, however, is maintaining high performance and data center efficiency. That’s where Red Hat and SanDisk come into play!
Red Hat and SanDisk have partnered to deliver a Ceph-tested, Red Hat approved, and SanDisk flash-accelerated solution that delivers extreme performance, boundless scale, efficiency, and resiliency for Ceph and OpenStack environments. In this webinar Brent Compton of Red Hat and Venkat Kolli of SanDisk will discuss:
•Challenges faced within cloud environments
•Benefits of Red Hat Ceph for file, block and object storage
•Benefits of running Ceph on the InfiniFlash™ System
•Configuration and use-cases
Don't let limitations stop you, and imagine the impossible today. To petabytes and beyond!
The industry was surprised when Dell announced its intent to acquire EMC for $67 billion, the largest tech deal ever. Merging two large, stagnant companies with very different cultures and a high level of overlap in products can pose significant challenges.
Join this webinar to learn about:
- The acquisition implications and how it’ll affect your long-term storage investment
- The uncertainty on Dell and EMC’s roadmap and which products will continue to be invested in
- Alternative storage solutions that enable you to transform data into insights and value for your organization