Deduplication, the process of eliminating redundant data segments on a storage system, has become an expected feature in the disk backup market. It seemed logical that deduplication would be applied throughout the data center. For the most part, though, deduplication is only consistently available where low latency and massive scale are not required, namely backup and branch office WAN optimization.
Conspicuously missing in action is primary storage deduplication. Why? Attend this web meeting hosted by Storage Switzerland's Lead Analyst George Crump and Permabit Technology CTO Jered Floyd. Both storage vendors and IT managers will get significant value as they listen in on a conversation between Crump and Floyd in which they discuss the top four reasons that deduplication has not ubiquitously made its way to primary storage:
1) Unintelligent use of RAM and CPU resources for deduplication hash tables
2) Inefficient deduplication hash table size
3) The inefficiency of the hash lookup engine
4) The lack of experience in developing and supporting deduplication technology
After these weaknesses are covered, the webinar will conclude with a series of steps that developers can take to make sure their deduplication engines are up to the primary storage challenge and the questions that IT Managers should ask their prospective vendors to make sure they are getting a well vetted deduplication engine that will stand the test of time.
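The hash-table mechanics behind these weaknesses can be illustrated with a minimal sketch of a block-level deduplication index. This is a toy example under simplifying assumptions (fixed-size 4 KB blocks, SHA-256 fingerprints, and an in-memory Python dict standing in for the hash table); production engines use variable-size chunking, far more compact index structures, and careful RAM/disk placement of the table, which is precisely where the four weaknesses above arise.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real engines often chunk variably

class DedupIndex:
    """Toy deduplication index: maps block fingerprints to stored blocks."""

    def __init__(self):
        self.table = {}  # fingerprint -> block data (stand-in for a disk location)

    def write(self, data: bytes):
        """Split data into blocks and store only blocks not seen before.

        Returns (blocks_written, blocks_deduplicated).
        """
        written = deduped = 0
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).digest()  # the hash lookup on the write path
            if fp in self.table:
                deduped += 1  # redundant segment: store only a reference
            else:
                self.table[fp] = block
                written += 1
        return written, deduped
```

Note that every write incurs a fingerprint computation and a table lookup; on primary storage, where latency budgets are measured in microseconds, the size, placement and efficiency of that lookup dominate the design.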
Recorded Jun 21 2012 · 59 mins
If your organization is embarking on a refresh of your primary storage system this year, this is the one webinar that you will want to attend. IT evaluators who haven't looked at storage in a few years will be impressed by the use of flash to improve performance, and by the many systems that claim to integrate file (NFS, SMB) and block (iSCSI, Fibre Channel) protocols. These features, though, are now table stakes for the modern storage solution.
While all of these features are important (and not created equal), IT professionals should demand capabilities that solve today's organizational challenges, like eliminating shadow IT, improving multi-site productivity, enabling long-term data retention and meeting increasingly strict compliance standards. IT professionals should look for not only high performance and universal protocol support but also enterprise file sync and share, inter-data center sync and secure archive with compliance.
Disaster Recovery as a Service (DRaaS) is changing the way data center administrators think about disaster recovery. A DR site and equipment no longer need to be bought in advance. DRaaS instead enables DR to be on-demand. The concept is so appealing that vendors are racing into the market claiming to offer DRaaS capabilities. But DRaaS is more than a simple check mark for a DR requirement. Organizations need to realize that how well the service performs is critical to successfully surviving a disaster.
Join Storage Switzerland and Panzura in a live webinar on March 2nd as we cover why you should aggressively archive data, what the challenges of an aggressive strategy are and, most importantly, how to use cloud storage to overcome them.
Disaster Recovery as a Service (DRaaS) offers organizations one of the most viable recovery options to emerge in recent years. The ability to have an on-demand recovery site that is pre-seeded with your data should dramatically lower costs and improve recovery times, even in the worst of disasters. But IT professionals can't take for granted that DRaaS providers will continue to cover data protection basics while also providing a seamless recovery experience. In this webinar, join Storage Switzerland Lead Analyst George Crump and Carbonite Senior Solutions Engineer Jason Earley as they provide you with the 5 Critical Recovery Steps for using DRaaS.
W. Curtis Preston with StorageSwiss, Lei Yang and Bill Roth with Tintri
The recovery expectations of users and organizations are changing, and their tolerance for downtime is lower than ever. IT professionals can no longer rely on the traditional backup and recovery process to meet these new requirements. Primary storage needs to do more, and simple LUN-based replication is not going to get the job done. Instead, data centers need to look for primary storage that has advanced replication capabilities and can integrate with multiple hypervisors and existing data protection solutions to create a holistic disaster recovery strategy.
VMware's stretch cluster does an excellent job of protecting against a site failure. If your primary data center fails then it is easy to bring up VMs in the second site. But what if you need more? How can you extend VMware's stretch cluster capability from single site protection to multiple sites? What if you want to extend recovery into the cloud? A multi-site and multi-cloud data distribution strategy not only creates more resilient IT operations, it also empowers workload mobility.
In this live webinar Storage Switzerland and Hedvig will discuss the limitations of VMware stretch cluster, the possibilities of a more highly available approach and how to achieve complete IT resiliency.
They seem to solve the same problem – meeting the constantly growing performance and capacity demands of the enterprise. But Scale-out NAS and Distributed Storage are different. Join our next LIVE podcast to learn what Scale-out NAS and Distributed Storage are, how they differ and which one is right for you. Attendees of the live podcast will be able to ask questions and get answers in real-time. NO REGISTRATION REQUIRED!
Join us for our next live podcast. We will provide analysis on the latest news like:
* Carbonite buying DoubleTake
* Violin Memory being bought at auction
* StorageCraft buying Exablox
* Datto buying OpenMesh
We will also review our recent briefings, including:
In this episode’s deep dive our analyst team will discuss the pros and cons of Amazon and Google cloud storage. NO REGISTRATION REQUIRED!
Disaster Recovery as a Service (DRaaS) is a recovery option that is getting a lot of attention right now. In this live podcast, Storage Switzerland and Carbonite cover exactly what DRaaS is and whether or not your organization should consider it. Join us as we cover all things DRaaS. We'll even answer all your DRaaS-related questions.
George Crump, Curtis Preston of Storage Switzerland
Join Storage Switzerland's Analysts George Crump and Curtis Preston for our first live podcast of 2017. We will discuss the big news of the month: HPE buys SimpliVity, update on the newest storage products and companies, AND provide a deep-dive discussion on containers. Docker and container technology is the hot new thing, and Curtis and George will provide storage professionals with the information they need about this game-changing new technology. No registration required!
George Crump, Storage Switzerland; Doug Soltesz, Cloudian
While backup software vendors continue to innovate, hardware vendors have been resting on their deduplication laurels. In the meantime, the amount of data that organizations store continues to grow at an alarming pace and the backup and disaster recovery expectations of users are higher than ever. Most backup solutions today simply will not be able to keep pace with these realities. If organizations don't act now to address the weaknesses in their backup hardware, they will not be able to meet organizational demands by 2020. In this webinar, Cloudian and Storage Switzerland will discuss three areas where IT professionals need to expect more from their backup hardware and where they should demand less.
Four Reasons Why Backup Hardware Will Break by 2020:
1. Not Cost Effective Enough
2. Not Scalable Enough
3. Only Good for Backups - Not Enough Use Cases
4. Too Much Deduplication
Splunk does an excellent job of managing data, moving it between tiers, which it calls buckets. But assigning Splunk to data management tasks takes compute power away from the primary objective — rapid data analysis. In this live webinar Storage Switzerland and Tegile will discuss how the default Splunk architecture works today, what the challenges with that architecture are and how to design a storage architecture for Splunk that is both high performance and cost effective.
Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored on it has not been accessed for months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, eliminating a storage upgrade for years. Performing this analysis on a regular basis is called data management, and proper management of data can not only reduce costs, it can also improve data protection, retention and preservation.
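The analysis described above can be approximated with a simple access-time scan. This is a minimal illustrative sketch, assuming a POSIX filesystem where atime is meaningful (mount options such as noatime or relatime can make it unreliable); real data management products track access patterns far more robustly.

```python
import os
import time

def find_cold_files(root: str, days: int = 180):
    """Walk a directory tree and report files not accessed in `days` days,
    plus the capacity that moving them to a secondary tier would free."""
    cutoff = time.time() - days * 86400
    cold_files, cold_bytes, total_bytes = [], 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip files that vanish mid-scan
            total_bytes += st.st_size
            if st.st_atime < cutoff:  # last access predates the cutoff
                cold_files.append(path)
                cold_bytes += st.st_size
    return cold_files, cold_bytes, total_bytes
```

Comparing cold_bytes to total_bytes for a volume gives a rough measure of how much capacity an archive tier could reclaim, which is exactly the ratio (often around 90 percent) cited above.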
Storage Switzerland and HyperGrid recently held a webinar entitled "How and Why to Containerize Your Legacy Applications." It is one of our highest-attended webinars this year. The number one request from that webinar? "I want to see this thing work!" To answer that request, Storage Switzerland and HyperGrid are conducting another webinar where we will review how to containerize legacy applications, provide a couple of specific examples of how customers are using and benefiting from containerized legacy applications and, most importantly, have a live demo showing a legacy application's transformation into a modern, containerized application.
The key to ending NAS Sprawl is to fix the file system so it can offer cost effective, scalable, high performance storage. In this live webinar Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and the Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should be looking for in solutions to eliminate NAS Sprawl once and for all.
In this webinar learn:
1. What's Causing NAS Sprawl
2. How Vendors Are Putting Band-Aids on the Problem
3. What to Look for in a Unifying File System
Google has recently announced an expansion of its cloud storage service. It offers service levels similar to Amazon S3 and Glacier, but with simplified pricing. How viable is Google's cloud storage product for the average customer? How do its service levels compare to Amazon's and Azure's? What about pricing? Is it really that different? And what are some example use cases?
Other questions center on applications that support these offerings. If an application supports Amazon, will it be easy for it to support Google? Is there anything about Google cloud storage that makes it easier or harder for service providers to work with?
In this webinar Storage Switzerland and Caringo, providers of cloud and object storage, will discuss why preservation, distribution and delivery are so critical for M&E IT and also why they are so challenging to deliver. More importantly, we will discuss practical solutions to these challenges so IT departments can lead their organizations to more monetization opportunities.
High Performance or Capacity - Making the Right Choice
The flash market started out monolithically. Flash was a single media type (high-performance, high-endurance SLC flash). Flash systems also had a single purpose: accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high-performance flash and highly dense, medium-performance flash systems. At the same time, high-capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Before we had what we now call the cloud, services we now call cloud backup services had already been backing up customer data over the Internet for several years. Technology has come a long way, and many more people are now considering backing up their data to the cloud; they find themselves asking a number of questions.
Just how much data can you back up to the cloud? How does a cloud backup vendor handle large restores? Will it take weeks to restore my data? What about security? Does it make financial sense to back up to the cloud, or is it just in vogue to do so? Could I actually save money by backing up to the cloud?
Join Storage Switzerland, Micron and Nexenta for our live webinar "Modern Storage Infrastructures for Modernized Data Centers". We will discuss:
* How organizations are leveraging OpenStack, Docker, Splunk and Hadoop to make data centers more agile and competitive
* How default storage architectures limit the potential of modern applications
* How storage needs to change to deliver performance, scalability and flexibility
* How a modern storage architecture can propel modern applications into new use cases
Storage Switzerland - experts on storage, server virtualization, cloud
Tune into Storage Switzerland's channel to learn from this analyst firm focused on storage, virtualization and the cloud. Storage Switzerland’s goal is to provide unbiased evaluations and interview content on sponsoring and non-sponsoring companies through articles, public events and product reviews.