Data lakes are centralized data repositories. Data needed by data scientists is physically copied to the data lake, which serves as a single storage environment. This way, data scientists can access all the data from one entry point – a one-stop shop for getting the right data. However, such an approach is not always feasible for all the data, and it limits the lake's use to data scientists alone, making it a single-purpose system.
So, what’s the solution?
A multi-purpose data lake allows broader and deeper use of the data lake without diminishing the potential value for data science and without making it an inflexible environment.
Attend this session to learn:
• The disadvantages and limitations that weaken or even kill the potential benefits of a data lake.
• Why a multi-purpose data lake is essential in building a universal data delivery system.
• How to build a logical multi-purpose data lake using data virtualization.
Do not miss this opportunity to make your data lake project successful and beneficial.
DeepStorage Labs is known in the storage industry for pushing equipment to its limits, and for reporting what really happens at the edge of a system’s performance. Tegile’s IntelliFlash T4000, unlike a few previous occupants of the DeepStorage Labs ThunderDome, stood up to our testing and delivered high IOPS at a maximum of 1 ms latency.
DeepStorage subjected the IntelliFlash T4000 to workloads ranging from the usual 4KB “hero number” random read to simulations of OLTP and OLAP database servers, a file server and an Exchange server. We measured the system’s performance under each workload individually and in combination, finally determining its ability to support this kind of mixed workload environment.
In this webinar we will:
- Introduce the IntelliFlash array
- Describe the testing process
- Present the results
- Review the test environment
- Provide links to the test workload VDbench configurations (a sample appears below)
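For a flavor of what those configurations look like, here is a minimal sketch of a vdbench parameter file for the 4KB “hero number” random-read test; the device path, thread count and run length are our own assumptions, not the lab’s actual settings:
    * Storage definition: one raw test device (placeholder path)
    sd=sd1,lun=/dev/sdb,threads=32
    * Workload definition: 4KB transfers, 100% reads, 100% random
    wd=wd1,sd=sd1,xfersize=4k,rdpct=100,seekpct=100
    * Run definition: drive I/O as fast as possible for five minutes
    rd=run1,wd=wd1,iorate=max,elapsed=300,interval=5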
Paul Bruton discusses the move to a holistic approach to next-gen data management. Looking at digital transformation strategies, he explains how Hitachi Vantara’s object storage can address common challenges - from cloud complexity to data governance and compliance - with its advanced custom metadata architecture to make data more intelligent.
The shelf life of data is shrinking. A streaming shift is taking place, and use cases such as IoT-connected cars, real-time fraud detection and predictive maintenance using streaming analytics are becoming commonplace. You too can switch to the fast data lane with Informatica, leveraging Kafka and other big data technologies. So shift gears and change lanes with us while we take you on a journey into the world of streaming data.
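To give a flavor of the streaming pattern (a generic sketch, not Informatica’s tooling), a minimal Kafka consumer in Python might look like the following; the broker address and the "transactions" topic are illustrative assumptions:
    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    # Subscribe to a stream of events; topic and broker are placeholders.
    consumer = KafkaConsumer(
        "transactions",
        bootstrap_servers=["localhost:9092"],
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )
    for message in consumer:
        event = message.value
        # Act while the data is still fresh, e.g. real-time fraud detection.
        if event.get("fraud_score", 0.0) > 0.9:
            print("ALERT: suspicious transaction", event)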
Cloud and flash storage are still leading significant changes in today's data storage industry. With the volumes of data and numbers of devices that organizations are now managing, implementing a strategy that employs “cheap and deep” storage behind high-performance flash is a must for 2018.
Join Kieran Maloney, Product Marketing Manager at Quantum as he discusses how today’s archive solutions complement flash storage by providing low cost, long-term data preservation and protection while maintaining data visibility and access.
You will learn:
- How companies deploy storage tiers to optimize performance, data preservation and cost
- A partner use case with Pure Storage that delivers a comprehensive tiered storage solution for large unstructured data sets
- Trends and predictions for the flash storage market in 2018
More and more, companies are unlocking the value of data by treating it as a business asset. Enterprises looking to increase their competitiveness and gain valuable insights face challenges in managing, scaling, storing, and analyzing these massive amounts of data. Learn how the next generation of ActiveScale delivers new features and capabilities to help you realize the potential of your unstructured data.
Join this live webinar on June 28, 2018 at 9am Pacific Time with Phil Bullinger and Stefaan Vervaet of Western Digital and Mark Peters of ESG to learn:
- How ActiveScale can help you build a data-forever architecture
- How your company can realize the potential of your unstructured data
- How to get the scale and economics you need to take control of your data
Join Esther Spanjer, Director of Business Development EMEIA at Western Digital, and Janusz Bak, CTO of Open-E, for this webinar.
They will discuss the challenges that Aviation Accounting Center LLC, an engineering company specializing in geospatial data processing, was facing when planning the expansion of its IT infrastructure. Its existing standalone servers did not provide the capacity, availability and performance needed for storing and accessing its geospatial data. Esther and Janusz will walk you through the proposed solution and how it met the customer’s scalability, capacity, throughput, connectivity and high-availability requirements.
The concept of the container as a technology is not new, but in recent years it has seen remarkable attention from every industry. The adoption of containers is increasing beyond stateless workloads such as load balancers or web application servers. For many adopters of container technology, persistent storage and data management are the top pain points.
The way storage is consumed has indeed changed. This talk takes you through the evolution of data storage, helping you understand the challenges that arise as containers demand a new way of consuming storage. Specific use cases and storage solutions for container environments such as Docker Swarm and Kubernetes will be discussed.
Kumar Nachiketa is a data storage consultant with IBM Systems Lab Services, ASEAN, based in Singapore. Over his 11-year career, he has helped customers across industries solve typical data storage challenges through deployment, consulting and architecture evolution. He is currently focusing on IBM Software Defined Storage and cloud technologies. He has co-authored IBM Redbooks on IBM storage cloud and OpenStack integration with IBM Spectrum Scale.
Getting your company ready for GDPR isn’t about putting a few new processes in place — it’s about rethinking your entire approach to personal data, including how to get value from it. For decades, companies have collected and stored all kinds of personal information “just in case” they ever needed it.
GDPR requires a different approach. You need to be proactive in thinking about how to get value from your data, and you need to understand exactly what your company is doing with personal data and why.
Join Jill Reber and Kevin Moos of Primitive Logic to learn:
- How to work with third parties who process personal data on your behalf
- How preparing for GDPR helps you understand your data on a whole new level (and why that’s a good thing)
For research data to be truly useful, it must be easy to access, share and manage without requiring expensive, custom infrastructure. What organizations need is turnkey storage that won't break the bank, with a unified interface for fast, reliable data transfer and sharing.
This webinar introduces Globus for ActiveScale, a cost-effective solution for on-premise object storage that’s simple to deploy and use. With Globus for ActiveScale, researchers have access to advanced capabilities for managing data across a broad range of systems, while administrators gain a cost-effective, scalable, and durable solution they can deploy quickly to help their researchers innovate faster.
In this webinar, attendees will:
- Learn how to deploy and use Globus for ActiveScale
- See a product demonstration
- Engage in a live Q&A session with the Globus Chief Customer Officer
Did you know that your existing investments in Informatica PowerCenter can fast-track you to big data and data lake technologies? We will demonstrate why our customers are moving from data warehouses to data lakes, leveraging big data and cloud ecosystems, and how to do this rapidly using your existing investments in Informatica technology.
The data contained in the data lake is too valuable to restrict its use to just data scientists. The investment in a data lake would be more worthwhile if the target audience could be enlarged without hindering the original users. However, this is not the case today: most data lakes are single-purpose. Also, the physical nature of data lakes has potential disadvantages and limitations that weaken the benefits and can even kill a data lake project entirely.
A multi-purpose data lake allows broader and greater use of the data lake investment without diminishing the potential value for data science or making it a less flexible environment. Multi-purpose data lakes are data delivery environments architected to support a broad range of users, from traditional self-service BI users to sophisticated data scientists.
Attend this session to learn:
* The challenges of a physical data lake
* How to create an architecture that makes a physical data lake more flexible
* How to drive the adoption of the data lake by a larger audience
If your information infrastructure is not robust and flexible, then business agility is reduced, managing applications is time consuming, and total cost of ownership is higher than necessary. Learn how Dell EMC’s storage and data protection solutions help organizations like yours by leveraging an information infrastructure that is designed for change and delivers both high performance and high availability.
The shift to the cloud is modernizing government IT, but are agencies' storage models keeping up with that transition? When it comes to big data, the proper system is necessary to avoid major data bottlenecks and accessibility challenges, allowing agencies to get the right information to the right people at the right time. Flash storage is the latest technology that improves scale, speed, and efficiency of data storage. Join us for a panel discussion on the challenge of scale, increased demand for user-focused data management tools, and security and risk reduction with sensitive data.
- Paul Krein, Chief Technology Officer, Red River
- Joe Paiva, Chief Information Officer, International Trade Administration, U.S. Department of Commerce
- Linda Powell, Chief Data Officer, Consumer Financial Protection Bureau
- Ashok Sankar, Director, Solutions Strategy, Public Sector and Education, Splunk
- Nick Psaki, Principal, Office of the CTO, Pure Storage
As digitalization and the Internet of Things (IoT) become commonplace, big data has the potential to transform business processes and reshape entire industries. But antiquated and expensive data storage solutions stand in the way.
A new generation of cloud storage has arrived, bringing breakthrough pricing, performance and simplicity. Cloud Storage 2.0 delivers storage as an inexpensive and plentiful utility, so you no longer have to make difficult decisions about which data to collect, where to store it and how long to retain it. This talk looks at how you can cost-effectively store any type of data, for any purpose, for any length of time. Join us to learn about the next great global utility, Cloud Storage 2.0, including:
- The next biggest cloud storage trends and technologies that are shaping the industry
- How to embrace the era of digital transformation and IoT without breaking the bank
- Best practices for storing, analyzing and utilizing big data
Data is collected in IoT solutions for a purpose: it is transformed into information, which is subsequently used to produce actionable insights.
The three primary types of IoT data, in order of volume, are:
- Time based (time series, time interval), e.g. power, voltage, current, temperature and humidity
- Geospatial, e.g. person/device location
- Asset specific data
These types of data have special characteristics that need to be catered to. Join this webinar with Cloud Technology Partners' Joey Jablonski, VP of Big Data & Analytics, and Ken Carroll, VP of IoT, as they discuss some important aspects of how such data can be ingested, modeled, stored and used in IoT solutions.
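As a concrete illustration (our own sketch, not the presenters' reference design), the three data types above might be modeled in Python for ingestion as follows; all field names are assumptions:
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class DeviceReading:
        device_id: str                    # asset-specific: which device
        timestamp: datetime               # time-based: when it was sampled
        metric: str                       # e.g. "voltage" or "temperature"
        value: float
        latitude: Optional[float] = None  # geospatial, for mobile devices
        longitude: Optional[float] = None

    reading = DeviceReading(device_id="pump-0042",
                            timestamp=datetime.now(timezone.utc),
                            metric="temperature", value=71.3,
                            latitude=1.3521, longitude=103.8198)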
Organizations looking to deploy database applications like Cassandra, Postgres and Couchbase need persistent storage so that data survives if an instance fails. Many storage vendors are jumping on the “containers” bandwagon, but their level of support and understanding of the true need varies.
Join experts from Datera, Portworx, StorageOS, Storage Switzerland, Virtuozzo and WekaIO for a roundtable discussion on “The State of Persistent Storage for Containers.” In this webinar we will discuss why persistent storage for containers is still needed, what today’s use cases for it are, and what the future holds.
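For background on what persistent storage for containers looks like in practice, here is a minimal sketch that requests a volume for a database pod via the official Kubernetes Python client; the claim name, storage class and size are assumptions:
    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    # Request a 100Gi volume that outlives any single container instance.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="cassandra-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-ssd",  # depends on your provisioner
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc)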
Register now to join us for the live roundtable event on April 17th at 1 p.m. ET / 10 a.m. PT
NVMe adoption has taken the data center by storm. And while the technology has proven to outperform all competing SSD implementations, it is still restricted to the local server it is attached to. This is where NVMe targets come into the picture. In this presentation, we will explore how NVMe devices can be exported across a network and attached to remote server nodes.
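As a taste of what exporting NVMe across a network looks like from the client side on Linux, here is a hedged sketch using nvme-cli; the transport, address, port and NQN are placeholders:
    # Discover NVMe subsystems offered by a remote target (RDMA transport).
    nvme discover -t rdma -a 192.168.1.100 -s 4420
    # Attach one subsystem; it then appears as a local block device,
    # e.g. /dev/nvme1n1.
    nvme connect -t rdma -n nqn.2018-01.com.example:target1 -a 192.168.1.100 -s 4420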
CFOs rejoice! CEOs take to the streets in celebration! OK, maybe it’s not quite that exciting, but did you know that you can get the best of both worlds in storage? One of the biggest challenges in storage has been paying for it. Because planning for exactly how much storage you need right now versus how much you will need in the future is so difficult, people often just overbuy in the expensive hope that they’ll grow into it.
You actually have a whole lot of financing options at your disposal to pay for storage, from buying to leasing to simply paying for what you use, just like the cloud. Why pay for storage that you’re never going to actually use?
And, what happens when your storage gets too old? You buy new. What if you didn’t have to? What if you could pay a bit more in maintenance on your current system in exchange for an upgrade when the time comes?
Join Rob Commins, Sr. Director of Product Marketing for Tegile Systems, as he takes a deep dive into:
- Best practices for storing your data in the cloud
- How to keep cloud storage costs to a minimum
- How to scale data growth and storage capacity
Rob Commins has been instrumental in the success of some of the storage industry's most interesting companies over the past twenty years, including HP/3PAR, Pillar Data Systems, and StorageWay. At Western Digital, he leads the Data Center Systems business unit's product marketing team.
Many storage vendors focus on what’s easiest to characterize in a system when they give you a quote, which is typically raw storage capacity. But raw capacity, as quoted by most storage vendors, does not tell you how much space you’ll actually have for your users’ files (the sketch after the list below shows why).
Join us on April 5th, as Ben Gitenstein, Vice President of Product Management at Qumulo, gives you four questions that will get you the best possible quote for your next storage array. We will discuss:
- Raw vs usable capacity
- The costs of power and cooling
- Time spent managing storage
- Potential storage downtime
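To make the raw-versus-usable point concrete, here is a back-of-the-envelope sketch in Python; the overhead figures are illustrative assumptions, not Qumulo's numbers:
    # Rough illustration of why raw capacity overstates usable space.
    def usable_tib(raw_tb: float,
                   protection_overhead: float = 0.33,  # e.g. erasure coding
                   filesystem_reserve: float = 0.05,   # metadata/reserve
                   tb_to_tib: float = 0.909) -> float: # decimal TB -> binary TiB
        after_protection = raw_tb * (1 - protection_overhead)
        after_reserve = after_protection * (1 - filesystem_reserve)
        return after_reserve * tb_to_tib

    print(f"{usable_tib(100):.1f} TiB usable from 100 TB raw")  # ~57.9 TiB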
It's no secret that data quantities are increasing relentlessly, while expectations for fast performance are making flash memory more prevalent than ever. At the same time, organizations of all sizes are relying on the cloud to augment their IT infrastructure. In this challenging environment, data protection technologies have never been more important.
Experts from Pure Storage and Cohesity will discuss the industry’s first all-flash, scale-out file storage, purpose-built for high-performance and immediate access to mission critical file and object data.
Pure Storage FlashBlade provides unparalleled performance across a broad range of environments. Cohesity, the leader in hyperconverged secondary storage, makes your data work for you by consolidating secondary storage silos onto a web-scale data platform that spans both private and public clouds.
Join Aaron Delp, Director of Technical Solutions Marketing at Cohesity, for a report explaining the state of today’s storage and the need for its modernization, Cohesity’s unique architecture for data protection and disaster recovery, a brief summary of a customer success story, as well as the next steps for Cohesity.
This webinar is part of BrightTALK's Ask the Expert Series.
Join Christopher Brown, CTO of Uptime Institute and Kelly Harris, Senior Content Manager at BrightTALK, as they take a technical deep dive into data center infrastructure management in 2018.
Chris will answer questions related to trends from the field:
- What really makes a well-run data center?
- What changes are we seeing in the industry?
- What Tier level do I need for my data center(s)?
- What typical issues do we see every day?
- What are the challenges ahead for data centers?
Audience members are encouraged to send questions to the expert, which will be answered during the live session.
Today's enterprises need broader access to data for a wider array of use cases to derive more value from data and get to business insights faster. However, it is critical that companies also ensure the proper controls are in place to safeguard data privacy and comply with regulatory requirements.
What does this look like? What are best practices to create a modern, scalable data infrastructure that can support this business challenge?
Zaloni partnered with industry-leading insurance company AIG to implement a data lake to tackle this very problem successfully. During this webcast, AIG's VP of Global Data Platforms, Carlos Matos, and Zaloni CEO Ben Sharma will share insights from their real-world experience and discuss:
- Best practices for architecture, technology, data management and governance to enable centralized data services
- How to address lineage, data quality, privacy and security, and data lifecycle management
- Strategies for developing an enterprise-wide data lake service for advanced analytics that can bridge the gaps between different lines of business and financial systems, and drive shared data insights across the organization