Consumers are engaging with brands across multiple touchpoints, channels, and devices, generating massive amounts of valuable data. Organizations are quickly adopting a number of solutions to keep up with this explosion of customer data and better capture and correlate user behavior.
Two common solutions brands are leveraging to house and analyze all of this customer data are Enterprise Data Warehouses (EDW) and Data Lakes. Register now for this 30-minute webinar and learn:
- Key benefits of each and which is best for your brand
- Why pairing your enterprise data storage solution with customer data initiatives makes your tech stack even more powerful
- How an automated data supply chain fits in a modern EDW and data lake environment
- And more!
The webinar will conclude with a live Q&A Chat with questions from the audience on all things enterprise data storage.
Learn the origin of big data applications, how new data pipelines require a new infrastructure toolset and why both containers and shared storage are the fundamental infrastructure building blocks for future data pipelines.
We will first discuss the factors driving changes in the big-data ecosystem: ever-greater increases in the three Vs of data volume, velocity, and variety. The data lake concept was originally conceived as a single location for all data, but the reality is that multiple pipelines and storage systems quickly lead to complex data silos. We then contrast legacy Hadoop applications, which are built only for volume, with the next generation of applications, like Spark and Kafka, which solve for all three Vs. Finally, we end with how to build infrastructure to support this new generation of applications, as well as applications not yet in existence.
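To make the velocity point concrete, here is a minimal, self-contained Python sketch of the kind of windowed aggregation a streaming engine like Spark or Kafka Streams performs continuously over high-velocity event data. The event schema and window size are illustrative assumptions, not part of the webinar's material.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed non-overlapping windows and
    count occurrences per key -- the basic aggregation streaming engines
    run continuously over high-velocity data."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical events: (unix_timestamp, event_type)
events = [(100, "click"), (102, "view"), (107, "click"), (112, "click")]
print(tumbling_window_counts(events, 10))
# Two 10-second windows, starting at t=100 and t=110
```

A production pipeline does the same computation incrementally over an unbounded stream rather than a finished list, which is exactly the shift from volume-only batch processing to all three Vs.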
About the Speakers:
Ivan Jibaja, Tech Lead, Pure Storage
Ivan Jibaja is currently a tech lead for the Big Data Analytics team inside Pure Engineering. Prior to this, he was part of the core development team that built the FlashBlade from the ground up. Ivan graduated with a PhD in Computer Science from the University of Texas at Austin, with a focus on systems and compilers.
Joshua Robinson, Founding Engineer, FlashBlade, Pure Storage
Joshua builds Pure's expertise in big data, advanced analytics, and AI. His focus is on organizing a cross-functional team, technical validation, performance benchmarking, solution architectures, collecting customer feedback, customer consultations, and company-wide trainings. Joshua specializes in several data analytics tools, including Hadoop, Spark, Elasticsearch, Kafka, and TensorFlow.
This webinar is part of BrightTALK's Ask the Expert Series.
Many organizations are drowning in unstructured data, which can break traditional storage infrastructure. We'll take a look at different ways to handle unstructured data with particular emphasis on object storage.
Join this live Q&A with Erik Ottem, Director of Product Marketing at Western Digital, to:
- Understand the definition of unstructured data
- Review storage infrastructure block/file/object strengths and weaknesses
- Discuss data integrity and system availability
- Learn about data management at scale
In this live webinar join experts from Storage Switzerland and StorOne as we explain how IT can create a storage infrastructure that is more nimble, performs better and is less expensive than cloud storage.
Attend this webinar to learn:
- The Storage Architectures Behind Cloud Storage Tiers
- How Cloud Providers Fake Frictionless Storage Infrastructure
- The Intrinsic Advantages of On-Premises Storage
- How to Enable On-Premises Storage to Beat the Cloud with a True Frictionless Infrastructure
Data lakes are centralized data repositories. Data needed by data scientists is physically copied to a data lake, which serves as a single storage environment. This way, data scientists can access all the data from one entry point – a one-stop shop for getting the right data. However, such an approach is not always feasible for all the data, and it limits the data lake's use to data scientists alone, making it a single-purpose system.
So, what’s the solution?
A multi-purpose data lake allows a broader and deeper use of the data lake without minimizing the potential value for data science and without making it an inflexible environment.
Attend this session to learn:
• Disadvantages and limitations that are weakening or even killing the potential benefits of a data lake.
• Why a multi-purpose data lake is essential in building a universal data delivery system.
• How to build a logical multi-purpose data lake using data virtualization.
Do not miss this opportunity to make your data lake project successful and beneficial.
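As a rough illustration of the data virtualization idea the session describes, the sketch below shows a logical table that federates queries across several physical sources at access time, instead of copying everything into one central store. The class, source names, and row schema are all hypothetical, chosen only to make the concept concrete.

```python
class VirtualTable:
    """A logical view over multiple physical sources. Queries are delegated
    to each source when accessed, so no data is copied into a central
    repository -- the core idea behind a logical, multi-purpose data lake."""

    def __init__(self, sources):
        self.sources = sources  # callables returning iterables of row dicts

    def select(self, predicate=lambda row: True):
        for source in self.sources:
            for row in source():          # fetch from the source on demand
                if predicate(row):
                    yield row

# Two simulated "physical" stores: a warehouse extract and a lake file
warehouse = lambda: [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
lake = lambda: [{"id": 3, "region": "EU"}]

customers = VirtualTable([warehouse, lake])
eu = list(customers.select(lambda r: r["region"] == "EU"))
```

Because consumers query the logical view rather than any one store, the same "table" can serve data scientists, BI tools, and applications alike, which is what makes the lake multi-purpose.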
Expectations of IT’s ability to return mission-critical applications to production are higher than ever. The data protection process shouldn’t always be playing catch up to the recovery of production data. We’ll be outlining three fundamental steps to ensuring that IT has all the elements needed to evolve the data protection process. Hear from thought leaders in the industry and learn how you can develop a future-proof protection strategy that suits your business needs.
Presenter: George Crump, Storage Switzerland
More and more, companies are unlocking the value of data by treating it as a business asset. Enterprises looking to increase their competitiveness and gain valuable insights face challenges in managing, scaling, storing, and analyzing these massive amounts of data. Learn how the next generation of ActiveScale delivers new features and capabilities to help you realize the potential of your unstructured data.
Join this webinar with Phil Bullinger and Stefaan Vervaet of Western Digital and Mark Peters of ESG to learn:
- How ActiveScale can help you build a data-forever architecture
- How your company can realize the potential of your unstructured data
- How to get the scale and economics you need to take control of your data
DeepStorage Labs is known in the storage industry for pushing equipment to its limits, and for reporting what really happens at the edge of a system’s performance. Western Digital’s IntelliFlash T4000, unlike a few previous occupants of the DeepStorage Labs ThunderDome, stood up to our testing and delivered high IOPS at a maximum of 1 ms latency.
DeepStorage subjected the IntelliFlash T4000 to workloads ranging from the usual 4KB “hero number” random read to workloads that simulate OLTP and OLAP database servers, a file server, and an Exchange server. We determined the system’s performance under each workload individually and in combination, finally determining the system’s ability to support a mixed-workload environment.
In this webinar we will:
- Introduce the IntelliFlash array
- Describe the testing process
- Present the results
- Review the test environment
- Provide links to the test workload VDbench configurations
Paul Bruton discusses the move to a holistic approach to next-gen data management. Looking at digital transformation strategies, he explains how Hitachi Vantara’s object storage can address common challenges - from cloud complexity to data governance and compliance - with its advanced custom metadata architecture to make data more intelligent.
An organization’s data is constantly under attack. Whether through ransomware attacks, other cyber-threats, or employee error, all of these expose organizational data and put data protection and privacy at risk.
Encryption and access control are the keys to securing data and cyber resiliency, but most storage systems throughout the infrastructure (primary, secondary and protection storage) treat security as an afterthought, reducing flexibility and increasing complexity.
In this live webinar we will discuss the three reasons why storage security is failing and how to fix it.
The shelf life of data is shrinking. A streaming shift is taking place, and use cases such as IoT-connected cars, real-time fraud detection, and predictive maintenance using streaming analytics are becoming commonplace. You too can switch to the fast data lane with Informatica, leveraging Kafka and other big data technologies. So shift gears and change lanes with us while we take you on a journey into the world of streaming data.
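The real-time fraud detection use case mentioned above can be sketched with a simple sliding-window rule, shown below in plain Python. The rule itself (too many transactions on one card within a short window) and all the thresholds are illustrative assumptions; a streaming platform would apply the same logic continuously over a Kafka topic rather than a list.

```python
from collections import deque

def fraud_alerts(transactions, window_seconds, max_txn):
    """Flag a card when more than max_txn transactions arrive within any
    window_seconds interval -- a toy streaming fraud-detection rule.
    `transactions` is assumed sorted by timestamp."""
    recent = {}   # card -> deque of timestamps still inside the window
    alerts = []
    for ts, card in transactions:
        q = recent.setdefault(card, deque())
        q.append(ts)
        while q and q[0] <= ts - window_seconds:  # evict expired timestamps
            q.popleft()
        if len(q) > max_txn:
            alerts.append((ts, card))
    return alerts

# Hypothetical stream of (timestamp, card_id) transactions
txns = [(0, "A"), (1, "A"), (2, "A"), (50, "B")]
print(fraud_alerts(txns, window_seconds=10, max_txn=2))
```

Because the deque holds only in-window events, memory stays bounded no matter how long the stream runs, which is what makes the pattern viable at streaming scale.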
Cloud and flash storage are still leading significant changes in today's data storage industry. With the growth in data volumes and device counts that organizations are experiencing, implementing a strategy that employs “cheap and deep” storage behind high-performance flash is a must for 2018.
Join Kieran Maloney, Product Marketing Manager at Quantum as he discusses how today’s archive solutions complement flash storage by providing low cost, long-term data preservation and protection while maintaining data visibility and access.
You will learn:
- How companies deploy storage tiers to optimize performance, data preservation and cost
- A partner use case with Pure Storage that delivers a comprehensive tiered storage solution for large unstructured data sets
- Trends and predictions for the flash storage market in 2018
The first generation of all-flash arrays has come and gone. For many organizations, the time is now to consider the "next" all-flash array.
In our next live webinar, Storage Switzerland and X-IO Storage discuss five things to look for in your next all-flash array. Topics include:
- Lessons learned from the first generation
- How to make sure those lessons apply to next-generation all-flash arrays
Even if you’re considering your first all-flash array, this webinar provides the information you need to make the right decision for your enterprise.
Join Esther Spanjer, Director of Business Development EMEIA at Western Digital and Janusz Bak, CTO of Open-E for this webinar.
They will discuss the challenges that Aviation Accounting Center LLC, an engineering company specializing in geospatial data processing, was facing when planning the expansion of its IT infrastructure. Its existing standalone servers did not provide the capacity, availability, and performance needed for storing and accessing its geospatial data. Esther and Janusz will walk you through the proposed solution and how it met the customer’s scalability, capacity, throughput, connectivity, and high-availability requirements.
Join Esther Spanjer, Director of Business Development EMEIA, for this webinar where she will introduce you to Western Digital’s newly released Ultrastar Serv60+8 hybrid storage server platform. With up to 780TB in a 4U enclosure, the Ultrastar Serv60+8 hybrid server is designed for capacity-hungry environments and balances high density, compute, and performance with low cost in Software-Defined Storage (SDS) environments. With the ability to mix both HDDs and SSDs in the same compute platform, and a choice of CPU and memory configurations, the Ultrastar® Serv60+8 hybrid storage server is targeted at backup, archive, private cloud, and surveillance environments. We will share with you the benefits of this type of high-density compute platform in various use case scenarios.
A majority of the data collected by organizations today is wasted, whether through poor analytics, a lack of resources, or simply having too much of it. So how can organizations turn this around and actually start utilizing their data for powerful results?
More and more companies are taking their customer, product, patient, or other data and providing a 360-degree view using a governed and actionable data lake. By breaking down the silos associated with traditional data located in disjointed systems and databases, companies are finding new ways to improve loyalty programs, product development, marketing campaigns, and even find a new source of revenue from their data.
Join Jatin Hansoty, Director of Solutions Architecture at Zaloni, as he dives into real-world use cases from several of the world’s top companies. Learn from their architecture and the results they achieved.
Topics covered include:
- Best practices
- Common pitfalls to avoid
- Real-world use cases
- Future-proof architecture
The concept of the container as a technology is not new; in recent years it has seen remarkable attention from every industry. The adoption of containers is extending beyond stateless workloads such as load balancers and web application servers. For many adopters of container technology, persistent storage and data management are the top pain points.
The way storage is consumed has indeed changed, and this talk takes you through the journey of data storage evolution. You'll understand the challenges created in the data storage world by the container's new way of consuming storage. Specific use cases and storage solutions for container environments such as Docker Swarm and Kubernetes will be discussed.
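For readers unfamiliar with how a container requests persistent storage in Kubernetes, the sketch below builds a standard PersistentVolumeClaim manifest in Python. The claim name, size, and storage class are hypothetical placeholders; the manifest structure itself follows the core Kubernetes API.

```python
import json

def make_pvc(name, size_gi, storage_class):
    """Build a Kubernetes PersistentVolumeClaim manifest -- the standard way
    a stateful container requests persistent storage from the cluster,
    independently of which storage backend fulfils the claim."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],       # single-node read/write
            "storageClassName": storage_class,      # maps to a provisioner
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

# Hypothetical claim for a database volume; apply with kubectl apply -f <file>
pvc = make_pvc("postgres-data", 100, "fast-ssd")
print(json.dumps(pvc, indent=2))
```

The point of the abstraction is exactly the pain the talk addresses: the pod declares *what* it needs, and the cluster's storage layer decides *how* to provide it.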
Kumar Nachiketa is a data storage consultant in IBM Systems Lab Services, ASEAN, based in Singapore. In his 11-year career, he has helped customers from various industries solve typical data storage challenges in several ways: deployment, consulting, and finding ways to evolve. He is currently focusing on IBM Software Defined Storage and cloud technologies. He has co-authored IBM Redbooks on IBM storage cloud and OpenStack integration with IBM Spectrum Scale.
Join us to learn how we've made an award-winning file system even better with Quantum's StorNext 6.2.
-Cloud-based monitoring tools maximize your uptime
-Automation applied to best practices
-New comprehensive SAN and NAS access
-Improved integration with cloud-enabled workflows
Join this webinar with experts from SolarWinds and Storage Switzerland as they discuss how IT can have the best of both worlds: a mixed storage system environment that targets the specific requirements of their various workloads, while also expertly managing that environment using its resources efficiently and optimally.
Key Webinar Takeaways:
* The Pros and Cons of a Mixed Storage Vendor Data Center
* The Value of a Global Storage Dashboard (detect performance and capacity problems across storage systems, deliver an end-to-end, application-to-storage view, free up wasted disk capacity, reduce time to resolution with visibility into the source of an IT issue and what it impacts, and properly place workloads for their required performance)
* The Requirements of a Global Storage Dashboard (works across multiple storage vendors, provides end-to-end views, and complements rather than replaces vendor tools)
It's no secret that data quantities are increasing relentlessly, while expectations for fast performance are making flash memory more prevalent than ever. At the same time, organizations of all sizes are relying on the cloud to augment their IT infrastructure. In this challenging environment, data protection technologies have never been more important.
Experts from Pure Storage and Cohesity will discuss the industry’s first all-flash, scale-out file storage, purpose-built for high-performance and immediate access to mission critical file and object data.
Pure Storage FlashBlade provides unparalleled performance across a broad range of environments. Cohesity, the leader in hyperconverged secondary storage, makes your data work for you by consolidating secondary storage silos onto a web-scale data platform that spans both private and public clouds.
IT already faces plenty of challenges, but the explosive growth of data and the need to ensure its protection have made managing and scaling the associated data storage more challenging than ever. Security, compliance, and privacy needs, as well as public cloud integration, add further complexity.
How do you address these challenges cost-effectively while also ensuring you meet service level, application and other requirements?
Join this webinar with experts from Nutanix and Cloudian. Learn how to:
• Deploy a single, petabyte-scale object storage architecture across one or more locations
• Create simple, bucket-level policy-based storage management
• Obtain a unified view of your data across locations
• Ensure compliance, privacy and security
• Take advantage of flexible data protection options
• Leverage multiple compatible data backup and recovery options
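To illustrate the bucket-level, policy-based management mentioned above, the sketch below builds an S3-style bucket policy granting one principal read-only access, the kind of JSON document S3-compatible object stores evaluate per bucket. The bucket name and principal ARN are hypothetical examples, not anything from the webinar.

```python
import json

def read_only_policy(bucket, principal_arn):
    """An S3-style bucket policy granting one principal read-only access.
    Policies like this are attached at the bucket level, so each bucket
    can carry its own access rules."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": principal_arn},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",      # ListBucket targets the bucket
                f"arn:aws:s3:::{bucket}/*",    # GetObject targets its objects
            ],
        }],
    }

# Hypothetical bucket and principal
policy = read_only_policy("audit-logs", "arn:aws:iam::123456789012:user/analyst")
print(json.dumps(policy, indent=2))
```

Because the policy lives on the bucket rather than in application code, the same document works across any location where the bucket's namespace is replicated.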
Getting your company ready for GDPR isn’t about putting a few new processes in place — it’s about rethinking your entire approach to personal data, including how to get value from it. For decades, companies have collected and stored all kinds of personal information “just in case” they ever needed it.
GDPR requires a different approach. You need to be proactive in thinking about how to get value from your data, and you need to understand exactly what your company is doing with personal data and why.
Join Jill Reber and Kevin Moos of Primitive Logic to learn:
- How to work with third parties who process personal data on your behalf
- How preparing for GDPR helps you understand your data on a whole new level (and why that’s a good thing)
Did you know that your existing investments in Informatica PowerCenter can fast-track you to big data and data lake technologies? We will demonstrate why our customers are moving from data warehouses to data lakes, leveraging big data and cloud ecosystems, and how to do this rapidly by leveraging your existing investments in Informatica technology.