University of Utah Powers Scientific Computing with Qumulo Scale-Out NAS
The Scientific Computing and Imaging Institute (SCI) at the University of Utah is on a mission to help us better see the world. Their visualizations range in scope from as large as our solar system to as small as a brain cell. Running such intensive research projects requires a powerful storage solution.
Watch this short customer video to see how SCI is using Qumulo to power their visualizations, image analysis, and scientific computing.
Recorded Jan 12, 2017 · 4 mins
Ben Gitenstein - Vice President of Product, Qumulo
Many storage vendors focus on what’s easiest to characterize in a system when they give you a quote, which is typically raw storage capacity. But raw capacity, as quoted by most storage vendors, does not tell you how much space you’ll actually have for your users’ files.
Join us on April 5th, as Ben Gitenstein, Vice President of Product Management at Qumulo, gives you four questions that will get you the best possible quote for your next storage array. We will discuss:
- Raw vs usable capacity
- The costs of power and cooling
- Time spent managing storage
- Potential storage downtime
Stashka Lepera, Sr Engineer at Qumulo | Brian Lahoue, VP of Technology & Sales at Strategic Integrators
With analysts predicting that 80% of primary workflows will be in the cloud by 2020, now is the time to start making plans for a cloud strategy that supports your data-intensive, file-based workloads. To that end, Strategic Integrators and Qumulo will be introducing the Qumulo File Fabric (QF2), a modern, scalable, high-performance file-storage system that spans the data center and the public cloud.
Data growth is happening across all industries, and is driving demand to run file-based workloads in the cloud. However, cloud-only file systems don’t connect with other data footprints and lack important enterprise features.
So what can companies looking to expand their most demanding workloads to the cloud do?
Learn how Qumulo File Fabric (QF2) on AWS provides companies a way to scale storage capacity and performance in a single file system that spans their data center and public cloud.
- Why you should be considering the cloud for file storage
- How QF2 on AWS is designed to handle the most data-intensive workloads no matter where the datasets are located
- Real-world examples of how QF2 on AWS is expanding the most demanding workflows to the cloud
Nick Rathke, Assistant Director of IT - Scientific Computing and Imaging Institute
Storage I/O bottlenecks were dramatically slowing imaging projects for the University of Utah’s SCI Institute, while lack of insight into data usage hampered effective capacity management.
Listen to Nick Rathke, Assistant Director of IT for the Scientific Computing and Imaging Institute (SCI), explain his organization's work in image-based modeling. He also describes the tools and software used by researchers, and the challenges he faces in managing their storage.
All-flash is already gaining prevalence in the data center, but many vendors lock customers in with proprietary hardware. Find out why now is the best time for Qumulo to offer the Qumulo File Fabric (QF2) on standard all-flash hardware, and how we use software to drive continual performance improvements.
Taneja Group Senior Analysts Jeff Kato & Jeff Byrne with Qumulo Senior Product Manager Justin Mahood
In over 400 interviews with IT decision makers, the Taneja Group learned that an increasing number of organizations want not just to store files in the cloud, but to move those files easily between the cloud and their on-premises environments.
In response, the Taneja Group has identified an emerging set of storage products they call “multi-cloud primary storage” that can span multiple environments. One of the vendors at the forefront of this trend is Qumulo, whose “universal-scale storage” product, Qumulo File Fabric (QF2), allows cloud instances and computing nodes on standard hardware to work together to form a single, unified file fabric.
Join Taneja Group Senior Analysts Jeff Kato and Jeff Byrne as they discuss the research behind the Taneja Group’s Multi-Cloud Primary Storage report. They will be joined by Qumulo Senior Product Manager Justin Mahood, who will describe why the “universal scale” of QF2 is the best way to store files in the cloud and on-premises.
Eric Scollard, Vice President of Worldwide Sales at Qumulo, introduces Qumulo File Fabric, a modern, highly scalable file storage system that runs in the data center and the public cloud, on IBC TV at the IBC Show 2017 in Amsterdam.
Qumulo offers a free tier for using QF2 on AWS up to 5TB, giving businesses the freedom to store, manage and share file-based data across on-premises data centers and the cloud. Try it online: https://qumulo.com/evaluate/download/
Artists, architects, administrators, and end users working in the media and entertainment industry today may experience pain points when managing media storage. Workloads are expanding as postproduction businesses try to squeeze more from less. Clients demand ever higher project resolution and frame rates with faster turnaround times.
As technology evolves, postproduction must minimize the challenges presented by the storage environment. This webcast will discuss how to handle media storage for challenging workloads in a fast, efficient, and scalable manner.
This webcast was originally presented as a SMPTE Education Webcast on 8 June 2017. For more information about SMPTE please visit www.smpte.org.
File-based data is the engine of innovation for the modern business. Increasingly, data-intensive enterprises with file-based workflows want to take advantage of the elastic compute resources, geographic reach and advanced services that the public cloud offers.
Customers have a clear need to use the public cloud, but today they are blocked from using it for their file-based workloads. What customers need is a modern file storage system that gives them scalable performance, scalable capacity, and access from any location.
Watch this webinar and learn:
- What’s driving customers to the cloud
- How the cloud has stranded users of large-scale file systems
We introduce and demonstrate a solution to this problem: Qumulo File Fabric, the world’s first universal-scale file storage system.
Modern workloads are putting new strains on enterprise storage, regardless of the industry you're in. The many competing priorities you and your team face every day can lead to difficult decisions on where to focus your resources.
Join Qumulo Principal Systems Engineer Mike Bott as he helps you pinpoint the severity of your storage issues and gives suggestions on how to resolve:
- Capacity pain
- Performance pain
- Budget pain
- Scaling pain
- Legacy software pain
- Data blindness
- Availability pain
- Data loss pain
Paul Merrifield, Business Unit Storage CTO, North America, HPE & Joel Groen, Product Manager, Qumulo
As data footprints continue to grow, so do the demands on enterprise storage. Scale-out NAS could be seen as a perfect solution. However, the demands of today's data-intensive, file-based workloads have exposed the limitations of legacy scale-out architectures in scale, performance, visibility, and control.
To help their customers meet this challenge, Hewlett Packard Enterprise (HPE), one of the world’s premier enterprise technology companies, has partnered with Qumulo. The modern design of the Qumulo Core file system, matched with best-of-breed HPE Apollo servers, provides customers with a truly modern scale-out storage solution.
Join HPE and Qumulo as we discuss the attributes of our joint solution and what it means for the modern enterprise technology consumer.
Joel Groen is a seasoned Product Manager at Qumulo with over 15 years of experience building enterprise, cloud, and mobile technology products. At Qumulo, he is focused on driving technical alignments within the storage industry to help companies grow into petabyte scale infrastructures.
Paul Merrifield is the Business Unit Storage CTO for North America at HPE. He is responsible for studying and understanding the broad industry changes impacting information technology, the business implications associated with an industry in transition, and the translation of those challenges into HPE’s technology strategy and point-of-view.
Across industries, enterprises with data-intensive workloads are being challenged by the explosion of file-based data and the cost of storing and effectively accessing that data at petabyte scale.
Yet when you look at the storage technologies that enterprise organizations rely on today for file-based storage, most were built 20 years ago. NetApp, still a market leader, started in 1994. File systems such as GPFS, Lustre, and Sun's were all created before the year 2000. In fact, the most modern scale-out file system in the world right now is Isilon OneFS, and that product was built in 2002. Qumulo Core was introduced in 2015, and we built it specifically to address the fatal flaws of legacy file storage products.
Watch and learn about the benefits that enterprise customers like DreamWorks Animation are experiencing running Qumulo Core software on Hewlett Packard Enterprise (HPE) Apollo servers — they’re managing their massive and rapidly growing unstructured data footprint with maximum storage efficiency using the low cost, enterprise-grade solution provided by Qumulo and HPE.
Mike Matchett, Taneja Group; Dave Shuman, Cloudera; Joel Groen, Qumulo; Ishu Verma, Red Hat
This exciting panel explores the kinds of storage that IoT solutions demand. We talk about what’s different about data storage for IoT compared to existing enterprise applications, what capabilities are required to support massive, distributed IoT networks, and how and why existing storage solutions may or may not be the best IoT application storage. Plan on getting into unique IoT data protection concerns, real-time data pipelines, machine learning, data accessibility, distributed processing, and of course, what’s actually practical for the IoT already emerging in today’s data center.
Cloudera: Dave Shuman, Industry Lead for IoT & Manufacturing
Qumulo: Joel Groen, Senior Product Manager
Red Hat: Ishu Verma, IoT Technical Evangelist
Enterprises with data-intensive workloads are being challenged by the explosion of file-based data and the cost of storing and effectively accessing that data at petabyte scale.
Take a modern, scale-out storage solution built from the ground up for the multi-petabyte era. Combine it with HPE’s legendary availability, efficiency, scaling, and provisioning, and you take the data center to an entirely new level of performance.
This partnership gives customers a modern solution for storing and managing file-based workloads, achieving high efficiency and extreme performance, while gaining real-time visibility into usage, activity and throughput at any level of the unified directory structure, no matter how many files are in the file system.
Watch this product demo of Qumulo Core software-defined storage on an HPE Apollo 4200 Gen9 Server, and see how the built-in real-time analytics bring visibility and control at scale — up to tens of billions of files using petabytes of storage capacity.
Workflows in life sciences and bioinformatics are characterized by massive volumes of machine-generated file data that is pipelined into downstream processes for analysis. With today’s sequencer technology, most experts agree that about 100 GB of data is generated for each human genome that is sequenced.
With the Earth’s population predicted to eclipse 8 billion people by 2025, some researchers believe that life sciences, and genomics in particular, will soon become the single largest producer of new data across all media types, outpacing today’s leaders like YouTube, Twitter, and astronomical research.
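To see why researchers make that prediction, a rough back-of-the-envelope calculation helps. The sketch below uses only the figures cited above (roughly 100 GB per sequenced genome, a population of 8 billion); the fraction of people sequenced is a purely hypothetical assumption for illustration, not a figure from the source.

```python
# Back-of-the-envelope estimate of genomic data volume,
# based on the ~100 GB-per-genome figure cited above.

GB_PER_GENOME = 100                 # approx. data generated per sequenced human genome
POPULATION = 8_000_000_000          # projected world population by 2025
SEQUENCED_FRACTION = 0.01           # hypothetical assumption: 1% of people sequenced

total_gb = GB_PER_GENOME * POPULATION * SEQUENCED_FRACTION
total_eb = total_gb / 1_000_000_000  # 1 exabyte (EB) = 1 billion GB

print(f"~{total_eb:.0f} EB of raw sequence data")  # ~8 EB
```

Even under this conservative assumption, sequencing just 1% of the population would yield on the order of 8 exabytes of file data, before any downstream analysis products are counted.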
Legacy file storage fails to provide researchers with acceptable performance and cost effectiveness at petabyte scale, especially with the wide mix of file sizes that characterizes modern research workflows.
But it’s not all bad news.
Balancing researcher, IT, and executive team concerns, watch this video case study about the Department of Embryology at the Carnegie Institution for Science, and see why they turned to QF2's universal scale file storage to deliver the performance, scalability, and simplicity needed to keep pace with evolving research data requirements.
The massive opening shot to the hit movie La La Land presented a unique and highly complicated challenge. At more than five minutes long and over 8,000 frames, the opening sequence dwarfs the film industry’s average of two to five seconds and a hundred frames between cuts.
Powerhouse visual effects studio Crafty Apes, known for films like Marvel's Doctor Strange and Disney's Pete's Dragon, turned to Qumulo to deliver cost-effective performance, scalability, and support for this Academy Award-winning film.
In this webinar, Anh Quach, Qumulo Director of Customer Success and 12-year media & entertainment tech veteran, tells the true Hollywood story of creative challenge and technical breakthrough that contributed to 7 Golden Globes and 6 Academy Awards.
David Bailey, Qumulo’s Director of Systems Engineering
Welcome to the Era of Machine Data. Today, an estimated 1 trillion sensors are embedded in a nearly limitless landscape of networked sources, from health monitoring devices to municipal water supplies, and everything in between.
The massive amounts of data being generated hold the promise of ever-greater insight, but only for those who successfully ingest, process and harness the flood of information.
Join Dave Bailey, Qumulo’s Director of Systems Engineering, for a breakdown of the changes and challenges our data storage customers are seeing with increasing volume, variety, and velocity of data.
David Bailey, Qumulo Director of Systems Engineering
What’s in a number? Well, when that number is 10 billion, I’d say it can mean quite a lot.
For some, 10 billion represents:
- The number of photos posted on Facebook (by 2008)
- The number of song downloads from Apple iTunes (by 2010)
- The number of Tweets issued by Twitter (by 2010)
- The number of times Shazam was used to tag songs (by 2013)
- The number of monthly emails being sent by MailChimp (by 2014)
- The number of monthly music video views on VEVO (by 2015)
- The number of quarterly streaming hours of content on Netflix (by 2015)
At Qumulo, we like to demonstrate how and why our customers create and manage more than 10 billion files (in 1.2 million directories) in a single scale-out file system.
Join David Bailey, Qumulo’s Director of Systems Engineering, for this demo, and see why customers call Qumulo Core scale-out storage “the smartest storage product ever built.”
Qumulo is the leader in universal-scale file storage
Qumulo is the leader in universal-scale file storage. Qumulo File Fabric (QF2) gives data-intensive businesses the freedom to store, manage, and access file-based data in the data center and in the cloud, at petabyte and global scale. Founded in 2012 by the inventors of scale-out NAS, Qumulo serves the modern file storage and management needs of Global 2000 customers. For more information, visit www.qumulo.com.