Computer users aren’t the top data producers anymore. Machines are. Raw data from sensors, labs, forensics, and exploration is surging into data centers and overwhelming traditional storage. There is a solution: high-performance, massively scale-out NAS with data-aware intelligence. Join us as Jeff Cobb, VP of Product Management at Qumulo, and Taneja Group Senior Analyst Jeff Kato explain Qumulo’s data-aware scale-out NAS and the seismic shift it represents in storing and processing machine data. We will review how customers are using Qumulo Core, and Nick Rathke of the University of Utah’s Scientific Computing and Imaging (SCI) Institute will join us to share how SCI uses Qumulo to cut raw image processing from months to days.
Jeff Kato, Senior Analyst & Consultant, Taneja Group
Jeff Cobb, VP of Product Management, Qumulo
Nick Rathke, Assistant Director for IT, The Scientific Computing and Imaging Institute (SCI)
The holy grail for any storage solution managing big data, analytics, or media streaming is performance, agility, and breakthrough economics. As applications grow and workloads change, companies need a highly scalable, high-performance system that can deliver flash at costs comparable to legacy systems.
Your wish has been granted. Learn how SanDisk and IBM have collaborated to deliver a next-generation, software-defined, all-flash unified storage solution that delivers petabyte scalability and high performance at breakthrough economics for both file and object storage. Discover how the InfiniFlash™ System from SanDisk, with IBM Spectrum Scale, lets you break new ground in:
· Financial Services
· Cloud Services
· Oil & Gas/Energy
· Life Sciences
Workflows in life sciences and bioinformatics are characterized by massive volumes of machine-generated file data that is pipelined into downstream processes for analysis. With today’s sequencer technology, most experts agree that about 100 GB of data is generated for each human genome that is sequenced.
With the Earth’s population predicted to eclipse 8 billion people by 2025, some researchers believe that Life Sciences and Genomics in particular will soon become the single largest producer of new data across all media types — outpacing today’s leaders like YouTube, Twitter, and astronomical research.
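The scale implied by those figures is easy to check with back-of-the-envelope arithmetic. The sketch below assumes only the roughly 100 GB per genome cited above; the function name and cohort sizes are illustrative, not from any particular study:

```python
# Back-of-the-envelope sizing for genomic file data, assuming the
# commonly cited figure of roughly 100 GB of raw data per sequenced genome.
GB_PER_GENOME = 100  # assumption taken from the text above

def storage_needed_pb(genomes: int, gb_per_genome: int = GB_PER_GENOME) -> float:
    """Return the storage required in petabytes (1 PB = 1,000,000 GB)."""
    return genomes * gb_per_genome / 1_000_000

# A single 10,000-genome population study already lands in petabyte territory:
print(storage_needed_pb(10_000))     # 1.0 (PB)
print(storage_needed_pb(1_000_000))  # 100.0 (PB) for a million genomes
```

At a million genomes, raw sequencer output alone reaches 100 PB, which is why petabyte-scale economics dominate the storage conversation in this field.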
Legacy file storage fails to provide researchers with acceptable performance and cost effectiveness at petabyte scale, especially with the wide mix of file sizes that characterizes modern research workflows.
But it’s not all bad news.
Balancing researcher, IT, and executive team concerns, watch this video case study about the Department of Embryology at the Carnegie Institution for Science, and see why they turned to Qumulo’s modern scale-out storage to deliver the performance, scalability, and simplicity needed to keep pace with evolving research data requirements.
Scalability is a must-have for any IT environment. This is especially the case where storage is concerned. Between all the files, photos, and videos, the average firm has more unstructured data than it knows what to do with. Clearly, the need to scale to meet increasing storage demands can’t be debated. What can be, and has been, argued at great length is how to go about it. Should we scale up, or scale out?
Should you stick to the tried and true method of just adding disks to your arrays ("scale up") or look at software-based systems that can cluster multiple storage servers together over the network ("scale out")? Who's actually using these solutions and when does it make sense to go with one over the other?
Watch this video and get clear on the pros and cons of these two distributed storage solutions.
Commercial High Performance Computing (HPC) workloads are an ideal fit for the advanced technology behind Qumulo’s Scale-Out NAS systems. From Media & Entertainment to Life Sciences to Oil & Gas, Qumulo Core provides data-awareness at incredible scale, helping CIOs and storage administrators store, manage and curate enormous numbers of digital assets.
Does your storage system scale as you grow?
Should you go with scale-up or scale-out architecture?
Which hybrid scale-out NAS is right for you?
NetApp, EMC Isilon, Panasas, Lustre - which one is the best fit for you?
In this on-demand webinar, Panasas's Andre Franklin, Sr. Product Marketing Manager, presents on:
- Scale and Scalability
- Scale-up vs. Scale-out
- Measuring Scalability
Gain competitive advantage with scalability.
Learn how the right parallel data access protocol delivers higher performance than can be achieved with industry-standard protocols such as NFS and SMB. Avoid the load-balancing and congestion side effects imposed by NFS and SMB and get ready for the exceptional performance of scale-out NAS with parallel data access.
Aggregate workloads with multiple compute clients must be able to access data directly from where it resides on shared-access storage, instead of having data access managed by an intermediary filer head. This was accomplished with first-generation scale-out clustered storage. Making the next cost-effective performance leap requires multiple compute clients to access data and metadata across the entire cluster in parallel, rather than restricting client I/O to a single node connection of a scale-out storage cluster. By accessing all nodes of a scale-out cluster, client applications dramatically increase throughput, boosting the productivity of all users while eliminating hot spots on the clustered storage.
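The contrast between funneling I/O through one node and fetching from every node at once can be sketched conceptually. Nothing here is a real storage API; the node table and function names are hypothetical, purely to illustrate the two access patterns:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model: a file is striped across cluster nodes, and each
# node can serve its own stripes directly to the client.
NODES = {0: [b"stripe-0", b"stripe-3"],
         1: [b"stripe-1", b"stripe-4"],
         2: [b"stripe-2", b"stripe-5"]}

def read_via_single_node(nodes):
    """First-generation pattern: all I/O funnels through one connection,
    fetching each node's stripes one after another."""
    data = []
    for node_id in sorted(nodes):
        data.extend(nodes[node_id])
    return data

def read_in_parallel(nodes):
    """Parallel pattern: the client fetches from all nodes concurrently,
    so aggregate throughput scales with the number of nodes."""
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        chunks = pool.map(lambda n: nodes[n], sorted(nodes))
    return [stripe for chunk in chunks for stripe in chunk]

# Both paths return the same bytes; only the throughput profile differs.
assert read_via_single_node(NODES) == read_in_parallel(NODES)
```

In the serial path, total time grows with the whole file; in the parallel path, each node serves only its own stripes, which is the throughput leap the paragraph above describes.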
Get ready to unleash the storage performance your business has been waiting for.
Join us for a webcast to learn why scale-out is critical to growing businesses, get a technical overview of the Isilon Express Scale-out NAS platform, and discuss top use cases.
Get all benefits of Isilon – at the size and price that works for you:
• Simple – Deploys and scales in minutes, with automated management
• Flexible – Multi-protocol, tiers to public clouds, works with hundreds of ISV integrations
• Easy to Manage – Single file system, single volume, global namespace
• Start Small – Entry-level configurations starting at just 33TB
• Expand Easily – Grow up to 68PB in a single cluster without ever overprovisioning
They seem to solve the same problem – meeting the constantly growing performance and capacity demands of the enterprise. But Scale-out NAS and Distributed Storage are different. Join our next LIVE podcast to learn what Scale-out NAS and Distributed Storage are, how they differ, and which one is right for you. Attendees of the live podcast will be able to ask questions and get answers in real time. NO REGISTRATION REQUIRED!
When your applications need to scale to new heights, don’t let the wrong storage hold you down. There are different models for “scale-out storage,” and depending on your use case, one scale-out model does not fit all. Join Howard Marks from DeepStorage.net and Greybeards of Storage as he reviews the landscape of scale-out storage from traditional models to new emerging models. We will also discuss how loosely coupled federated solutions like Tintri’s VM Scale-out can address problems of scale in a virtualized and cloud datacenter.
Every cloud or cluster needs to be administered, and xCAT is the open source solution to help you manage HPC clusters, RenderFarms, Grids, WebFarms, online gaming infrastructure, clouds, and data centers. Join Jarrod Johnson, a high-performance and scale-out computing architect, to learn what makes xCAT different from other open source management projects and how you can use it in the enterprise.
Yesterday’s legacy storage solutions are riddled with fatal flaws that make them inherently unsuitable for today’s business demands.
In 2015, when Qumulo Core was being designed, we interviewed more than 600 storage experts (architects, engineers, and administrators) to hear about their current challenges storing files at scale. What they really wanted was essentially what they had always asked for: reliable, scalable, easy-to-use storage that was cost-effective and built for the modern era. Their demands became our product roadmap.
Watch this webinar and learn how Qumulo Core:
> Was created by the best file system engineering team in the world
> Is software-defined, built for on-premises and the cloud
> Is proven in hundreds of mission-critical environments, some with tens of billions of files and petabytes of data
> Has significant market traction—from top global film animation studios, to Fortune 500 telcos, to early-stage software startups.
Join this session to discuss the Scale-Out Storage Master Usage Model and usage scenarios. This session will also cover best practices for adopting the ODCA requirements outlined in the usage model document.
Fast data applications are growing rapidly – driven by the adoption of IoT, M2M, and SaaS platforms. While there’s general recognition that fast streaming data applications can produce significant yet fundamentally different value than Big Data applications, it’s not yet clear which technologies and approaches should be used to derive value from these fast data streams. Building applications on real-time streaming data has unique requirements, and legacy databases get overwhelmed. A common solution is stitching together a collection of open source projects; however, this approach has a steep learning curve, adds complexity, and compromises both performance and latency.
So how do you combine real-time analytics with real-time decisions in an architecture that's reliable, scalable, and simple? The answer is a scale-out, in-memory database that’s fast enough (< 1 millisecond) to support per-event transactions. In addition, a platform that supports streaming aggregation combined with per-event ACID processing and SQL simplifies development and enhances performance and capability. VoltDB uses materialized views to provide real-time aggregation and summaries, and to support combining real-time analytics with per-event decisions. The basis is SQL, which is used to query and re-aggregate views; up-to-date views can be queried inside transactions for per-event, real-time decisions.
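The materialized-view idea can be sketched in a few lines of plain Python (this is a toy analogue, not VoltDB code; the names and the 100-unit limit are made up for illustration): keep a per-key aggregate up to date as each event arrives, so every per-event decision consults the running summary instead of rescanning history.

```python
from collections import defaultdict

# Toy stand-in for a materialized view: per-key aggregates that are
# updated incrementally on every incoming event.
view = defaultdict(lambda: {"count": 0, "total": 0.0})

def process_event(key, amount, limit=100.0):
    """Update the running aggregate, then make a per-event decision
    against the already-aggregated state (no rescan of past events)."""
    row = view[key]
    row["count"] += 1
    row["total"] += amount
    return "flag" if row["total"] > limit else "ok"

print(process_event("user-1", 60.0))  # ok
print(process_event("user-1", 50.0))  # flag (running total 110 > 100)
```

The point of the pattern is that the decision step is O(1) per event, which is what makes sub-millisecond per-event transactions plausible in the first place.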
In this webinar, Ryan Betts, CTO at VoltDB, will explain:
• Why streaming aggregation is a key to streaming analytics
• How SQL can be used in combination with streaming aggregation
• The benefits of up-to-date analytics for per-event transactions and insights
Webcast: Speed up SAP with Scale-out All-Flash Storage
Join us for this informative Kaminario webinar where we will demonstrate how running SAP applications on flash can deliver significant performance improvements. We will review the different types of flash solutions available and address pressing industry questions that relate directly to your SAP application needs and usage.
More companies have started to use SSD flash to improve the performance and reliability of key applications. We will discuss the following topics:
• Which SAP applications are good candidates for flash
• How to identify whether your SAP application will run faster on SSD
• How to identify the optimal flash solution for SAP applications
• How the Kaminario K2 solution resolves typical SAP performance issues
• The ROI of running SAP applications on Kaminario K2, and more.
The Scientific Computing and Imaging Institute (SCI) at the University of Utah is on a mission to help us better see the world. Their visualizations range in scope from as large as our solar system to as small as a brain cell. Running such intensive research projects requires a powerful storage solution.
Watch this short customer video to see how SCI is using Qumulo to power their visualizations, image analysis, and scientific computing.
Learn how to:
- Reduce SMB Storage Costs
- Eliminate the complexity and risk of storage sprawl
- Incrementally grow capacity to any size required using simple, low-cost storage blocks
- Deploy enterprise-class storage without the cost or complexity
The first server that a small to medium-sized business (SMB) purchases is often one that allows for file sharing and collaboration, called network attached storage (NAS). As the business grows into a small to medium-sized enterprise (SME), the first point of storage trouble is often that standalone NAS: it runs out of capacity, can't provide enough performance, or both. As a result, the SME is forced to add standalone NAS after standalone NAS to keep up with these demands, which leads to storage sprawl. Storage sprawl places pressure on every aspect of the SME as the storage line item becomes a larger and larger part of the IT budget, while the ability to provide 24/7 uptime decreases.
Join Storage Switzerland's Lead Analyst, George Crump, and Kelly Murphy, CEO of Gridstore, for a discussion of the issues with storage sprawl and how you can stop it by taking a technology created for the large enterprise, Scale Out NAS, and applying it to the mid-sized market.