Storage Switzerland - experts on storage, server virtualization, cloud
Tune in to Storage Switzerland's channel to learn from this analyst firm focused on storage, virtualization and the cloud. Storage Switzerland's goal is to provide unbiased evaluations and interview content on sponsoring and non-sponsoring companies through articles, public events and product reviews.
George Crump, Storage Switzerland; Doug Soltesz, Cloudian
While backup software vendors continue to innovate, hardware vendors have been resting on their deduplication laurels. In the meantime, the amount of data that organizations store continues to grow at an alarming pace, and users' backup and disaster recovery expectations are higher than ever. Most backup solutions today simply will not be able to keep pace with these realities. If organizations don't act now to address the weaknesses in their backup hardware, they will not be able to meet organizational demands by 2020. In this webinar, Cloudian and Storage Switzerland will discuss three areas where IT professionals need to expect more from their backup hardware, and one where they should demand less.
Four Reasons Why Backup Hardware Will Break by 2020:
1. Not Cost Effective Enough
2. Not Scalable Enough
3. Only Good for Backups - Not Enough Use Cases
4. Too Much Deduplication
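The deduplication that appliance vendors have leaned on is, at its core, a simple idea: split data into chunks, fingerprint each chunk, and store each unique fingerprint only once. A minimal sketch in Python, assuming fixed-size chunking with SHA-256 fingerprints (real appliances typically use variable-size chunking and far more sophisticated indexing):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunks; real appliances often chunk variably


def dedupe_ratio(data: bytes) -> float:
    """Return logical size / stored size for a byte stream."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    unique = {hashlib.sha256(c).hexdigest() for c in chunks}
    logical = len(chunks) * CHUNK_SIZE
    stored = len(unique) * CHUNK_SIZE
    return logical / stored if stored else 1.0


# A "backup" of mostly repeated content, then an identical second copy:
backup = b"A" * CHUNK_SIZE * 4 + b"B" * CHUNK_SIZE
print(dedupe_ratio(backup))      # 5 chunks, 2 unique -> 2.5
print(dedupe_ratio(backup * 2))  # second copy adds no unique chunks -> 5.0
```

The sketch also hints at why "too much deduplication" is a real concern: the fingerprint index grows with unique data and must be consulted on every chunk, which is exactly the kind of overhead the webinar questions.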
Splunk does an excellent job of managing data, moving it between tiers, which it calls buckets. But assigning Splunk to data management tasks takes compute power away from its primary objective: rapid data analysis. In this live webinar, Storage Switzerland and Tegile will discuss how the default Splunk architecture works today, the challenges with that architecture, and how to design a storage architecture for Splunk that is both high performance and cost effective.
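The bucket model amounts to age-based tiering: the newest data lives on the fastest storage, and buckets roll to cheaper tiers as they age. A simplified illustration with hypothetical thresholds (Splunk's actual rolling is governed by its index configuration, by size as well as time, and is not this simple model):

```python
from datetime import timedelta

# Hypothetical age thresholds for illustration only; not Splunk's defaults.
TIERS = [
    ("hot",  timedelta(days=1)),    # newest events, fastest storage
    ("warm", timedelta(days=30)),
    ("cold", timedelta(days=365)),  # typically denser, cheaper storage
]


def tier_for(bucket_age: timedelta) -> str:
    """Map a bucket's age to a storage tier; anything older is frozen."""
    for name, limit in TIERS:
        if bucket_age < limit:
            return name
    return "frozen"  # archived or deleted per retention policy


print(tier_for(timedelta(hours=2)))  # hot
print(tier_for(timedelta(days=90)))  # cold
```

The storage-design question the webinar raises follows directly: each named tier above implies a different performance and cost profile, and the indexers spend cycles moving buckets between them.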
Any organization that takes a moment to study the data on its primary storage system will quickly realize that the majority (as much as 90 percent) of the data stored there has not been accessed in months, if not years. Moving this data to a secondary tier of storage could free up a massive amount of capacity, eliminating the need for a storage upgrade for years. Performing this analysis regularly is called data management, and proper data management can not only reduce costs but also improve data protection, retention and preservation.
Storage Switzerland and HyperGrid recently held a webinar entitled "How and Why to Containerize Your Legacy Applications." It was one of our highest-attended webinars this year. The number one request from that webinar? "I want to see this thing work!" To answer that request, Storage Switzerland and HyperGrid are conducting another webinar where we will review how to containerize legacy applications, provide a couple of specific examples of how customers are using and benefiting from containerized legacy applications and, most importantly, run a live demo showing a legacy application's transformation into a modern, containerized application.
The key to ending NAS Sprawl is to fix the file system so it can offer cost effective, scalable, high performance storage. In this live webinar Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and the Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should be looking for in solutions to eliminate NAS Sprawl once and for all.
In this webinar learn:
1. What's Causing NAS Sprawl
2. How Vendors are Putting Band-Aids on the Problem
3. What to look for in a unifying file system
Google has recently announced an expansion of its cloud storage service. It offers service levels similar to Amazon S3 and Glacier, but with simplified pricing. How viable is its cloud storage product for the average customer? How do its service levels compare to Amazon's and Azure's? What about pricing? Is it really that different? And what are some example use cases?
Other questions center around applications that support these offerings. If an application supports Amazon, will it be easy for it to support Google? Is there anything about Google cloud storage that makes it easier or harder for service providers to work with?
In this webinar, Storage Switzerland and Caringo, providers of cloud and object storage, will discuss why preservation, distribution and delivery are so critical for M&E IT, and also why they are so challenging to deliver. More importantly, we will discuss practical solutions to these challenges so IT departments can lead their organizations to more monetization opportunities.
High Performance or Capacity - Making the Right Choice
The flash market started out monolithically. Flash was a single media type (high-performance, high-endurance SLC flash), and flash systems had a single purpose: accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high-performance flash and highly dense, medium-performance flash systems. At the same time, high-capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Before we had what we now call the cloud, services we now call cloud backup services had already been backing up other people's data over the Internet for several years. Technology has come a long way, and many more people are now considering backing up their data to the cloud, and they find themselves asking a number of questions.
Just how much data can you back up to the cloud? How does a cloud backup vendor handle large restores? Will it take weeks to restore my data? What about security? Does it make financial sense to back up to the cloud, or is it just en vogue to do so? Could you actually save money by backing up to the cloud?
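The restore-time question, at least, is simple arithmetic: data size divided by effective bandwidth. A quick sketch with hypothetical figures; in practice your link speed, protocol overhead and any vendor-side throttling will dominate, which is why the efficiency factor below is only a guess:

```python
def restore_days(data_tb: float, link_mbps: float,
                 efficiency: float = 0.7) -> float:
    """Estimate days to restore `data_tb` terabytes over a `link_mbps` link.

    `efficiency` is a hypothetical fudge factor for protocol overhead
    and vendor-side throttling.
    """
    bits = data_tb * 1e12 * 8                    # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 86400


# 10 TB over a 100 Mbps line at 70% efficiency:
print(round(restore_days(10, 100), 1))  # -> 13.2 days, roughly two weeks
```

Numbers like these are why many cloud backup vendors offer seeded restores on shipped media for large data sets.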
Join Storage Switzerland, Micron and Nexenta for our live webinar "Modern Storage Infrastructures for Modernized Data Centers". We will discuss:
* How organizations are leveraging OpenStack, Docker, Splunk and Hadoop to make data centers more agile and competitive
* How default storage architectures limit the potential of modern applications
* How storage needs to change to deliver performance, scalability and flexibility
* How a modern storage architecture can propel modern applications into new use cases
Everyone understands that disk has become the primary target for backups over the last several years. It's also safe to say that the main type of disk storage used as a backup target is a purpose-built backup appliance that presents itself to the backup application as an NFS or SMB server and then deduplicates any backups stored on it.
But what about object storage? Object storage vendors tout that their systems are less expensive to buy and less expensive to operate than traditional disk arrays and NAS appliances. So, does it make sense to use them for backups? How much is deduplication a factor and is deduplication even available with object storage? What else can object storage bring to the table that traditional disk backup appliances can’t?
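Part of the operating-cost argument comes from how object stores protect data: many use erasure coding rather than full replication, which cuts the raw capacity consumed per usable byte. A minimal sketch of that arithmetic (the 10+4 layout below is a hypothetical example, not any specific vendor's scheme):

```python
def overhead(data_shards: int, parity_shards: int) -> float:
    """Raw-to-usable capacity ratio for an erasure-coded layout."""
    return (data_shards + parity_shards) / data_shards


# Three-way replication modeled as 1 data shard + 2 copies,
# versus a hypothetical 10+4 erasure-coded layout:
print(overhead(1, 2))   # replication: 3.0x raw capacity per usable byte
print(overhead(10, 4))  # erasure coding: 1.4x, tolerating 4 lost shards
```

The gap between 3.0x and 1.4x is one concrete reason object storage vendors can claim lower acquisition and operating costs, independent of whether deduplication is in the picture.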