Paul Bruton discusses the move to a holistic approach to next gen data management. Looking at digital transformation strategies, he explains how Hitachi Vantara's object storage can address common challenges - from cloud complexity to data governance and compliance - with its advanced custom metadata architecture to make data more intelligent.
Cloud and flash storage are still driving significant changes in today's data storage industry. Given the growth in data volumes and device counts that organizations are experiencing, implementing a strategy that places "cheap and deep" storage behind high-performance flash is a must for 2018.
Join Kieran Maloney, Product Marketing Manager at Quantum as he discusses how today’s archive solutions complement flash storage by providing low cost, long-term data preservation and protection while maintaining data visibility and access.
You will learn:
- How companies deploy storage tiers to optimize performance, data preservation and cost
- A partner use case with Pure Storage that delivers a comprehensive tiered storage solution for large unstructured data sets
- Trends and predictions for the flash storage market in 2018
Many storage vendors focus on what's easiest to characterize in a system when they give you a quote, which is typically raw storage capacity. But raw capacity, as quoted by most storage vendors, does not tell you how much space you'll actually have for your users' files.
Join us on April 5th, as Ben Gitenstein, Vice President of Product Management at Qumulo, gives you four questions that will get you the best possible quote for your next storage array. We will discuss:
- Raw vs usable capacity
- The costs of power and cooling
- Time spent managing storage
- Potential storage downtime
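The gap between raw and usable capacity can be sketched with a toy calculation. The 8+2 erasure-coded layout and 10% filesystem overhead used here are illustrative assumptions only, not any vendor's actual figures:

```python
def usable_capacity(raw_tb: float, data_disks: int, parity_disks: int,
                    fs_overhead: float = 0.10) -> float:
    """Estimate usable capacity from raw capacity.

    Illustrative only: real systems also lose space to spares,
    metadata, snapshots, and replication, and ratios vary by vendor.
    """
    # Fraction of raw space left after data protection (e.g. erasure coding)
    protection_efficiency = data_disks / (data_disks + parity_disks)
    return raw_tb * protection_efficiency * (1 - fs_overhead)

# 100 TB raw in an 8+2 layout with ~10% filesystem overhead
print(round(usable_capacity(100, 8, 2), 1))  # 72.0 TB usable
```

In this sketch, more than a quarter of the quoted raw capacity never reaches users' files, which is exactly why the raw-vs-usable question belongs in every quote.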
The concept of the container as a technology is not new, but in recent years it has seen remarkable attention from every industry. The adoption of containers is increasing beyond stateless workloads such as load balancers and web application servers. For many adopters of container technology, persistent storage and data management are the top pain points.
The way storage is consumed has indeed changed. This talk takes you through the journey of data storage evolution. You'll understand the challenges created in the data storage world by the new way containers consume storage. Specific use cases and storage solutions for container environments such as Docker Swarm and Kubernetes will be discussed.
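In Kubernetes, for example, a stateful container asks for persistent storage declaratively through a PersistentVolumeClaim rather than binding to a specific disk. A minimal sketch of such a claim, built here as a Python dict (the claim name and storage class are hypothetical placeholders):

```python
import json

# Hypothetical names: "app-data" and "fast-ssd" would be replaced by
# your cluster's own claim name and provisioned StorageClass.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],   # single-node read/write mount
        "storageClassName": "fast-ssd",
        "resources": {"requests": {"storage": "10Gi"}},
    },
}
print(json.dumps(pvc, indent=2))
```

The point of the abstraction is that the pod declares *what* it needs (10Gi, read-write-once) and the storage layer decides *where* that lands, which is what makes persistent data portable across container reschedules.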
Kumar Nachiketa is a data storage consultant in IBM Systems Lab Services, ASEAN, based in Singapore. Over his 11-year career, he has helped customers across industries solve data storage challenges through deployment, consulting, and modernization. He is currently focusing on IBM Software Defined Storage and cloud technologies. He has co-authored IBM Redbooks on IBM storage cloud and OpenStack integration with IBM Spectrum Scale.
The shelf life of data is shrinking. A streaming shift is taking place, and use cases such as IoT-connected cars, real-time fraud detection and predictive maintenance using streaming analytics are becoming commonplace. You too can switch to the fast data lane with Informatica, leveraging Kafka and other big data technologies. So shift gears and change lanes with us while we take you on a journey into the world of streaming data.
The shift to the cloud is modernizing government IT, but are agencies' storage models keeping up with that transition? When it comes to big data, the proper system is necessary to avoid major data bottlenecks and accessibility challenges, allowing agencies to get the right information to the right people at the right time. Flash storage is the latest technology that improves scale, speed, and efficiency of data storage. Join us for a panel discussion on the challenge of scale, increased demand for user-focused data management tools, and security and risk reduction with sensitive data.
- Paul Krein, Chief Technology Officer, Red River
- Joe Paiva, Chief Information Officer, International Trade Administration, U.S. Department of Commerce
- Linda Powell, Chief Data Officer, Consumer Financial Protection Bureau
- Ashok Sankar, Director, Solutions Strategy, Public Sector and Education, Splunk
- Nick Psaki, Principal, Office of the CTO, Pure Storage
As digitalization and the Internet of Things (IoT) become commonplace, big data has the potential to transform business processes and reshape entire industries. But antiquated and expensive data storage solutions stand in the way.
A new generation of cloud storage has arrived, bringing breakthrough pricing, performance and simplicity. Cloud Storage 2.0 delivers storage as an inexpensive and plentiful utility, so you no longer have to make difficult decisions about which data to collect, where to store it and how long to retain it. This talk takes a look into how you can cost-effectively store any type of data, for any purpose, for any length of time. Join us to learn about the next great global utility, Cloud Storage 2.0.
-The next biggest cloud storage trends and technologies that are shaping the industry
-How to embrace the era of digital transformation and IoT without breaking the bank
-Best practices for storing, analyzing and utilizing big data
Did you know that your existing investments in Informatica PowerCenter can fast track you to big data and data lake technologies? We will demonstrate why our customers are moving from data warehouses to data lakes, leveraging big data and cloud ecosystems, and how to do this rapidly while leveraging your existing investments in Informatica technology.
For research data to be truly useful, it must be easy to access, share and manage without requiring expensive, custom infrastructure. What organizations need is turnkey storage that won't break the bank, with a unified interface for fast, reliable data transfer and sharing.
This webinar introduces Globus for ActiveScale, a cost-effective solution for on-premise object storage that’s simple to deploy and use. With Globus for ActiveScale, researchers have access to advanced capabilities for managing data across a broad range of systems, while administrators gain a cost-effective, scalable, and durable solution they can deploy quickly to help their researchers innovate faster.
In this webinar, attendees will:
- Learn how to deploy and use Globus for ActiveScale
- See a product demonstration
- Engage in a live Q&A session with the Globus Chief Customer Officer
The data contained in the data lake is too valuable to restrict its use to just data scientists. The investment in a data lake would be more worthwhile if the target audience could be enlarged without hindering the original users. However, this is not the case today: most data lakes are single-purpose. The physical nature of data lakes also has potential disadvantages and limitations that weaken the benefits and can even kill a data lake project entirely.
A multi-purpose data lake allows broader and greater use of the data lake investment without diminishing its value for data science or making it a less flexible environment. Multi-purpose data lakes are data delivery environments architected to support a broad range of users, from traditional self-service BI users to sophisticated data scientists.
Attend this session to learn:
* The challenges of a physical data lake
* How to create an architecture that makes a physical data lake more flexible
* How to drive the adoption of the data lake by a larger audience
Business is experiencing an increased dependence on unstructured data, such as complex business documents, work products and large media files. This presents a new and growing set of challenges for IT when it comes to preserving, protecting, monitoring and managing unstructured data at scale, and it's a problem that affects enterprises and service providers alike.

We believe that object storage's massive scalability and rich metadata capabilities support the collection of extended information about the data itself - in the form of metadata - to provide customizable and specific reference points for identifying the contents and automating the management of unstructured data at the storage level throughout its entire lifecycle.

To document these challenges, 451 Research - in cooperation with Western Digital's HGST division - has developed and fielded an annual survey that reaches out to a vetted group of 100 enterprise and 100 service provider customers to evaluate the current nature of the unstructured data problem. This program looks to better establish market perceptions about modern object storage, explore current use cases, and outline customer expectations based on responses to our 2017 poll.
With new technologies such as Hive LLAP or Spark SQL, do you still need a data warehouse or can you just put everything in a data lake and report off of that? No! In the presentation, James will discuss why you still need a relational data warehouse and how to use a data lake and an RDBMS data warehouse to get the best of both worlds.
James will go into detail on the characteristics of a data lake and its benefits and why you still need data governance tasks in a data lake. He'll also discuss using Hadoop as the data lake, data virtualization, and the need for OLAP in a big data solution, and he will put it all together by showing common big data architectures.
Data is collected in IoT solutions for a purpose - it is transformed into information which is subsequently used to produce actionable insights.
The three primary types of IoT data, in order of volume, are:
- Time based (time series, time interval), e.g. power, voltage, current, temperature and humidity
- Geospatial, e.g. person/device location
- Asset specific data
These types of data have special characteristics that need to be catered for. Join this webinar with Cloud Technology Partners' Joey Jablonski, VP of Big Data & Analytics, and Ken Carroll, VP of IoT, as they discuss some important aspects of how such data can be ingested, modeled, stored and used in IoT solutions.
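The three data types above can be sketched as simple record shapes. The field names and example values here are illustrative assumptions; real IoT platforms standardize their own schemas:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class TimeSeriesReading:
    """Time-based data: highest volume, one row per sample."""
    device_id: str
    ts: datetime
    metric: str       # e.g. "voltage", "temperature", "humidity"
    value: float

@dataclass
class GeoPoint:
    """Geospatial data: person or device location over time."""
    device_id: str
    ts: datetime
    lat: float
    lon: float

@dataclass
class Asset:
    """Asset-specific data: mostly static attributes of the device."""
    device_id: str
    model: str
    install_date: datetime

reading = TimeSeriesReading(
    "meter-42", datetime(2018, 1, 1, tzinfo=timezone.utc), "voltage", 229.7
)
print(reading.metric, reading.value)
```

Note how the three shapes differ in write pattern: time-series records arrive continuously at high volume, location points at moderate rates, and asset records rarely change, which is why each is usually stored and indexed differently.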
CFOs rejoice! CEOs take to the streets in celebration! Ok, maybe it's not quite that exciting, but did you know that you can get the best of both worlds in storage? One of the biggest challenges in storage has been paying for it. Because it's hard to plan for exactly how much storage you need right now versus how much you'll need in the future, people often just overbuy in the expensive hope that they'll grow into it.
You actually have a whole lot of financing options at your disposal to pay for storage, from buying to leasing to simply paying for what you use, just like the cloud. Why pay for storage that you’re never going to actually use?
And, what happens when your storage gets too old? You buy new. What if you didn’t have to? What if you could pay a bit more in maintenance on your current system in exchange for an upgrade when the time comes?
Join Rob Commins, Sr. Director of Product Marketing for Tegile Systems, as he takes a deep dive into:
- Best practices for storing your data in the cloud
- How to keep cloud storage costs to a minimum
- How to scale data growth and storage capacity
Rob Commins has been instrumental in the success of some of the storage industry's most interesting companies over the past twenty years, including HP/3PAR, Pillar Data Systems, and StorageWay. At Western Digital, he leads the Data Center Systems business unit's product marketing team.
NVMe adoption has taken the data center by storm. And while the technology has proven itself to outperform all other competing SSD implementations, it is still quite limited and restricted to the local server it is attached to. This is where NVMe targets come into the picture. In this presentation, we will explore how NVMe devices can be exported across a network and attached to remote server nodes.
Most industry analysts agree that big data warehouses, built on a relational database, will continue to be the primary analytic database for storing much of the company’s core transactional data. These data warehouses will be augmented by big data systems. Even though this new information architecture consists of multiple physical data repositories and formats, the logical architecture is a single integrated data platform, spanning the relational data warehouse and the data lake.
Join the discussion to find out more about how high performance, high density all-flash storage can help.
See first hand how Cohesity delivers a web-scale platform that consolidates all secondary storage and data services onto one unified, efficient solution. Cohesity simplifies data protection, converges NAS and object storage, provides instant access to test/dev copies, and performs in-place searches and analytics - all on a software-defined platform that spans from the edge to the cloud.
In 30 minutes, you’ll learn everything you need to know in order to evaluate what Cohesity Data Platform can do for your organization, and how we can help you take back control of your data, from edge to cloud.
This webinar is part of BrightTALK's Ask the Expert Series.
Join Christopher Brown, CTO and Mark Harris, SVP Marketing at Uptime Institute, as they take a technical deep dive into data center infrastructure management in 2018.
Chris will answer questions related to:
- Data center design and strategy
- Colocation and management
- Infrastructure hardware and software
- Software-defined Data Centers
- Data center tools, technologies and teams of the future
Audience members are encouraged to send questions to the experts, which will be answered during the live session.
As data analytics becomes more embedded within organizations as an enterprise business practice, the methods and principles of agile processes must also be employed.
Agile includes DataOps, which refers to the tight coupling of data science model-building and model deployment. Agile can also refer to the rapid integration of new data sets into your big data environment for "zero-day" discovery, insights, and actionable intelligence.
The Data Lake is an advantageous approach to implementing an agile data environment, primarily because of its focus on "schema-on-read," which skips the laborious, time-consuming, and fragile process of database modeling, refactoring, and re-indexing every time a new data set is ingested.
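Schema-on-read can be illustrated in a few lines: raw records land in the lake as-is, and the consuming code applies a schema only when it reads them. The field names below are made up for the example:

```python
import json

# Two events ingested as-is, with different shapes - no upfront table
# schema was required to land them in the lake.
raw_events = [
    '{"device": "a1", "temp_c": 21.5}',
    '{"device": "b2", "temp_f": 70.7, "site": "hq"}',
]

def read_temperature_c(line: str) -> float:
    """Apply the schema at read time: normalize whichever fields exist."""
    rec = json.loads(line)
    if "temp_c" in rec:
        return rec["temp_c"]
    return (rec["temp_f"] - 32) * 5 / 9  # convert Fahrenheit on the fly

temps = [round(read_temperature_c(e), 1) for e in raw_events]
print(temps)  # [21.5, 21.5] - both records usable despite differing schemas
```

Contrast this with schema-on-write, where the second event would have forced a table migration before it could be stored at all.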
Another huge advantage of the data lake approach is the ability to annotate data sets and data granules with intelligent, searchable, reusable, flexible, user-generated, semantic, and contextual metatags. This tag layer makes your data "smart" -- and that makes your agile big data environment smart also!
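A minimal sketch of such a tag layer: data sets annotated with searchable key/value metatags and a query over them. The tag vocabulary and paths here are hypothetical; real deployments standardize their own:

```python
# Hypothetical catalog: each data set carries user-generated semantic tags.
catalog = [
    {"path": "s3://lake/sensor/2018-01.parquet",
     "tags": {"domain": "iot", "pii": "no", "quality": "raw"}},
    {"path": "s3://lake/crm/customers.parquet",
     "tags": {"domain": "sales", "pii": "yes", "quality": "curated"}},
]

def find(catalog, **wanted):
    """Return paths of data sets whose tags match every requested key/value."""
    return [d["path"] for d in catalog
            if all(d["tags"].get(k) == v for k, v in wanted.items())]

# Only PII-free data sets are safe for broad self-service BI use
print(find(catalog, pii="no"))
```

This is what makes the lake "smart": discovery and governance decisions (who may see what, which data is analysis-ready) become tag queries instead of tribal knowledge.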