The storage community on BrightTALK is made up of thousands of storage and IT professionals. Find relevant webinars and videos on storage architecture, cloud storage, storage virtualization and more presented by recognized thought leaders. Join the conversation by participating in live webinars and round table discussions.
Backing up VMware successfully has always been a challenge. The introduction of the cloud and the ever-increasing scale of VMware infrastructure continue to give backups fits and make them even harder to get right.
Please join George Crump, Lead Analyst at Storage Switzerland and our guest speaker W. Curtis Preston, Chief Technical Architect at Druva, for a discussion on the new challenges with VMware backups, and how to address them successfully.
The Top 5 Reasons VMware Backups Still Break
* Lack of Auto-Discovery
* Lack of Full Cloud Support
* Too Much Backup Infrastructure
* Expensive and Complicated Disaster Recovery
* No VMware on AWS Support
The way in which we work is changing. Technology, social norms and economic influences are impacting the working world every day.
Did you know the right to work is a human right? It allows men and women of all ages and backgrounds to become self-reliant in dignity and free from discrimination. At a time when technology is taking over the workplace and a younger demographic is entering most workforces, it's more imperative than ever to understand the future of work and how multi-generational teams can make a meaningful impact on your business.
Tune in with Monique Morrow, Dedicated Futurist, Technologist and Advisor, as she discusses these ever-important topics, opportunities and calls to action.
Legacy storage systems, like NAS, were architected when spinning disk and slower networking technologies were the industry standard. In this webinar, we’ll present five reasons why NAS can’t keep pace with the I/O demands of new deep learning workloads. To support these workloads, the data processing layer has to have immediate access to, and a constant supply of, data. Here NAS falls short, because data gets bottlenecked between compute and storage. WekaIO Matrix™ is a next-generation shared, distributed file system that virtualizes SSDs into one logical pool of fast storage, presenting a global namespace to host applications. Matrix was written from scratch to leverage the benefits of standard Intel x86 architecture combined with NVMe. The result is an easy-to-deploy, easy-to-manage storage architecture that is a radical departure from legacy NAS systems. Optimized for flash, Matrix is ideal for deep learning and high-performance computing workloads.
When your applications slow down, you’ve hit the app-data gap. This session will go beyond what’s new with HPE 3PAR StoreServ flash arrays to highlight how HPE 3PAR can close that gap. With predictive analytics and cloud-ready flash, HPE 3PAR delivers fast and reliable access to data both on-premises and off. Hear from an HPE 3PAR user on how all-flash improves business operations and addresses requirements such as risk mitigation and operational simplicity, making the all-flash data center a reality.
Organizations will spend more than $55 billion to store and manage an average of 13 copies of object data that they create in 2020, according to IDC researchers. This does not include the cost of data governance risks associated with uncontrolled copies of data, especially in a time of heightened data privacy regulations. In this webinar, you will learn:
- Industry trends in copy data management: recovery, agile, and governance copy data services
- A new holistic approach to automating the creation, refreshing, access controls and expiration of copy data
- How to gain more value from backup data
Join storage industry veteran and co-founder of Pure Storage Europe, Lee Angel, as he explains why backup appliances are artefacts of an outdated infrastructure.
What if backup appliances were no longer an entire industry, but simply an application running on a single, powerful platform? Whether it’s fast backup and recovery or rapid restore for test/dev, a modern data platform from Pure Storage can consolidate and accelerate these workloads.
Customers need the ability to optimize the placement of applications and data workloads to ensure both performance and availability. These challenges are further complicated when applications, data, and people are spread across multiple storage platforms, data centers, and remote office locations.
Peer and Nutanix Acropolis File Services (AFS), a file server built for the cloud era, work in concert to conquer the toughest challenges. Featuring a globally-distributed Active-Active file services fabric, the combined solution seamlessly weaves AFS clusters and existing storage systems together with real-time synchronization and distributed file locking. Data can now be made local to users and applications, ensuring optimal performance, integrity, and high availability.
Key use cases include:
- Active-Active Global File Sharing and Collaboration
- Active-Active Continuous Availability and Load Balancing for VDI Implementations
- Extending the Reach of Nutanix Across Multiple Storage Platforms
Containers are increasingly becoming an important technology across a wide variety of industries and use cases. But oftentimes, provisioning of persistent storage is an afterthought, which leaves application developers stuck in a manual world waiting for provisioning requests, and IT operations teams scrambling to meet those requests. Project Trident is a dynamic storage provisioner that eliminates these issues and delivers the persistence applications require. Compatible with Kubernetes, Docker, Rancher, and OpenShift, Trident eliminates the old manual provisioning model and delivers a truly automated and dynamic persistent storage provisioning infrastructure.
In this session, you’ll learn more about the challenge posed by persistent storage provisioning, how Trident solves those challenges, and view a demo of Trident in action.
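To make the "dynamic" part concrete, here is a minimal sketch of what a developer submits when a provisioner like Trident is in place: a PersistentVolumeClaim against a provisioner-backed StorageClass. The StorageClass name (`trident-gold`) and the sizes are hypothetical examples, not names from the session; the manifest shape follows the standard Kubernetes PVC schema.

```python
def make_pvc(name, storage_class, size_gi):
    """Build a PersistentVolumeClaim manifest as a plain dict.

    With a dynamic provisioner bound to the StorageClass, submitting
    this claim is all a developer does: the backing volume is created
    automatically, with no manual request to the storage team.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
        },
    }

pvc = make_pvc("app-data", "trident-gold", 100)
print(pvc["spec"]["resources"]["requests"]["storage"])  # prints "100Gi"
```

In practice this manifest would be serialized to YAML and applied with `kubectl`; the point is that the claim, not a ticket to the storage team, drives volume creation.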
As part of the online SUSE Summer Academy 2018, you will get to know the latest open source solutions. We present, step by step, the most interesting features of SUSE solutions and show how to start using them.
In this Academy session we present the latest version of SUSE Enterprise Storage 5, an intelligent software-defined storage (SDS) solution built on Ceph technology. It helps organizations adapt to changing business needs and growing storage demands by transforming their existing storage infrastructure into cost-effective, highly scalable, and flexible storage built on commodity servers with standard hard drives.
As organizations continue their digital transformation to capitalize on the benefits of big data, many have already discovered that the key to success lies in selecting the right infrastructure solution that also enables economic transformation. INFINIDAT’s InfiniBox, based on a software-defined storage architecture, delivers the performance, availability, and continuous data protection at an exceptional value necessary to support both goals.
In April 2018, INFINIDAT commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study to illustrate the economic benefits of implementing the InfiniBox for high performance, high capacity use cases including virtualization, analytics, cloud services and backup. Forrester uncovered some remarkable results and shares them with you in this comprehensive presentation.
The study revealed the following business benefits:
• Payback in less than six months
• A 125% Return on Investment (ROI)
• An investment with a very positive Net Present Value (NPV) of $10.2 million
• A total benefit of $18.4 million over three years
• Downtime cost savings of over $1.1 million
We’re increasingly in a multi-cloud environment, with potentially multiple private, public and hybrid cloud implementations in support of a single enterprise. Organizations want to leverage the agility of public cloud resources to run existing workloads without having to re-plumb or re-architect them and their processes. In many cases, applications and data have been moved individually to the public cloud. Over time, some applications and data might need to be moved back on premises, or moved partially or entirely from one cloud to another.
That means simplifying the movement of data from cloud to cloud. Data movement and data liberation – the seamless transfer of data from one cloud to another – has become a major requirement.
In this webcast, we’re going to explore some of these data movement and mobility issues with real-world examples from the University of Michigan. Register now for discussions on:
• How do we secure data both at-rest and in-transit?
• Why is data so hard to move? What cloud processes and interfaces should we use to make data movement easier?
• How should we organize our data to simplify its mobility? Should we use block, file or object technologies?
• Should the application of the data influence how (and even if) we move the data?
• How can data in the cloud be leveraged for multiple use cases?
Digital transformation (DX) enables firms to expand their competitive differentiation by embracing data-driven decision-making processes, increasing customer satisfaction and retention, and/or getting better intelligence on the market. Intel, IBM, and Lenovo have partnered to deliver a unified, single platform approach to address an organization’s analytics needs – including Big Data, Enterprise Data Warehouse, and Data Science/AI. Join this webinar to learn the technical differentiation and business value the Lenovo Converged Analytics Platform provides. This data modernization platform enables Enterprises to rise to the challenge of DX through the convergence of data analytics workloads onto a single platform.
Transform your workplace with virtual desktops and applications. Learn how Dell EMC VDI Complete Solutions enables you to plug in, power up, and provision virtual desktops in less than an hour and reduce the time needed to plan, design, and scale your virtual desktop and application environment.
Why a chameleon? The rich enterprise features, robustness, and adaptability of HPE Nimble Storage arrays are sometimes unknown or misunderstood. This overview summarizes product value while positioning HPE Nimble Storage as the bulletproof and flexible enterprise product that it is. Yes, InfoSight is key, but it is only half of the benefit provided by the next-gen HPE Nimble Storage platform.
Summary bullet points:
• Predictive, cloud ready, and timeless benefits
• What’s new in the next-gen platform?
• Unspoken benefits of the predictive flash platform
• Data protection for the data loss paranoid IT manager
• App-centric and scale-to-fit benefits
With Pure Service Orchestrator, customers can now extend their shared infrastructure beyond existing scale-up and virtualised applications to support containerised, persistent applications – all on Shared Accelerated Storage infrastructure.
Join Principal Systems Engineer, Jon Owings for this overview and demonstration.
Ransomware is the universal threat. No matter an organization's data center location or its size, it can be devastated by a ransomware attack. While most organizations focus on defending the perimeter, they also need to be prepared for a breach, something ransomware is particularly adept at exploiting. In case of a breach, an advanced backup and disaster recovery solution can ensure safe and timely recovery of data without paying the ransom.
In this live webinar join experts from Storage Switzerland and Micro Focus as they discuss the impact of ransomware and the core features of a backup solution that can mitigate the associated risks.
This webinar will cover the present and the future of storage networking including the status of NVMe and NVMe over Fabrics within the SAN and across the ecosystem, 32Gb/s adoption, a Connectrix update and the latest on preventing SAN congestion. You will also learn how to leverage Connectrix and PowerPath for the best all-flash experience.
With massively growing data sets and expectations of 100% uptime with long term retention, companies are struggling to meet data protection and disaster recovery demands. These problems are prevalent, no matter the size of the organization. To compound that, IT organizations simply don’t have time to research and compare the latest software and hardware solutions available for enterprise data protection.
In this webinar you'll learn:
-How Cohesity provides enterprise-grade data protection
-How we're solving legacy data protection challenges
-Customer use cases & success
This webinar is part of BrightTALK's Founders Spotlight Series, where each month we feature inspiring founders and entrepreneurs from across industries.
In this episode, Gleb Budman, CEO and Co-Founder of Backblaze, will share his behind-the-scenes insight into what it's really like to found a tech startup.
Backblaze provides cloud storage and backup solutions for developers and IT teams. The company stores over 150 petabytes of data, is profitable, and won a spot on Deloitte's Fast 500 for 917% five-year revenue growth.
Gleb is the founder of three companies, has led two startups from pre-launch through acquisition, and is a seasoned executive. He is on a mission to make storing data astonishingly easy and low-cost.
Meet Cohesity CloudSpin, a new feature on our DataPlatform that accelerates your test/dev initiatives in the cloud!
In this on-demand webinar you will learn how Cohesity CloudSpin is used in VMware infrastructures to:
-Make on-premises backup data easily reusable in the public cloud
-Deliver on the promise of application mobility for test/dev
-Ensure that format conversion need no longer be a cumbersome process
Out of a variety of projects, NetApp has developed a five-stage model to solve the challenge of data management in the Internet of Things domain. In this session you will learn how SAP helps customers get ready for the digital journey using SAP Business Objects for Analytics and SAP HANA. We will show, based on a real-world scenario, how IoT data are processed for predictive analytics and how a hybrid model with SAP Leonardo can be deployed.
- Understand how SAP, Hadoop, and NetApp work well together
- Learn how predictive analytics could be applied to IoT data
- See the integration of NetApp into several SAP IoT related products
The new VxBlock 1000 breaks the physical boundaries of traditional converged infrastructure to give businesses unprecedented simplicity and flexibility to meet all their application data services needs. Learn how this next-generation engineered system provides unmatched choices of market-leading storage, data protection, and compute for all workloads to maximize system performance and utilization.
Join this session to receive an overview of Swordfish, including the new functionality added in version 1.0.6, released in March 2018.
What new security requirements apply to Persistent Memory (PM)? While many existing security practices such as access control, encryption, multi-tenancy and key management apply to persistent memory, new security threats may result from the differences between PM and storage technologies. The SNIA PM security threat model provides a starting place for exposing system behavior, protocol and implementation security gaps that are specific to PM. This in turn motivates industry groups such as TCG and JEDEC to standardize methods of completing the PM security solution space.
You've heard about the many benefits of object storage, but do you know which one is the right solution for your use case and organization? Object storage experts John Bell, Sr. Consultant, and Ben Canter, VP Global Sales, will discuss the criteria that should be used when evaluating object storage solutions for various use cases and share the knowledge gleaned from the hundreds of successful installations Caringo has managed over the past dozen years. This interactive presentation will be followed by a live Q&A so you can ask questions that address your specific areas of interest.
Network-intensive applications, like networked storage or clustered computing, require a network infrastructure with high bandwidth and low latency. Remote Direct Memory Access (RDMA) supports zero-copy data transfers by enabling movement of data directly to or from application memory. This results in high bandwidth, low latency networking with little involvement from the CPU.
In the next webcast in the SNIA ESF “Great Storage Debates” series, we’ll examine two commonly known RDMA protocols that run over Ethernet: RDMA over Converged Ethernet (RoCE) and the IETF-standard iWARP. Both are Ethernet-based RDMA technologies that reduce the amount of CPU overhead in transferring data among servers and storage systems.
The goal of this presentation is to provide a solid foundation on both RDMA technologies in a vendor-neutral setting that discusses the capabilities and use cases for each so that attendees can become more informed and make educated decisions.
Join to hear the following questions addressed:
• Both RoCE and iWARP support RDMA over Ethernet, but what are the differences?
• What are the use cases for RoCE and iWARP, and what differentiates them?
• UDP/IP and TCP/IP: which protocol uses which, and what are the advantages and disadvantages?
• What are the software and hardware requirements for each?
• What are the performance/latency differences of each?
Join our SNIA experts as they answer all these questions and more in this next Great Storage Debate.
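As a preview of the transport question above, the headline distinction can be summarized informally in a vendor-neutral way. The port number and RFC references below come from the public specifications (RoCEv2 encapsulates RDMA in UDP on destination port 4791; iWARP is defined in IETF RFCs 5040-5044 and runs over TCP); the "lossless fabric" notes reflect common deployment practice, not a hard requirement of every configuration.

```python
# Informal summary of the two Ethernet RDMA transports being debated.
RDMA_TRANSPORTS = {
    "RoCEv2": {
        "transport": "UDP/IP",
        "routable": True,  # IP encapsulation makes it L3-routable
        "lossless_fabric": "commonly expected (PFC/DCB deployed)",
        "udp_port": 4791,
    },
    "iWARP": {
        "transport": "TCP/IP",
        "routable": True,  # plain TCP, runs over ordinary IP networks
        "lossless_fabric": "not required (TCP handles loss/retransmit)",
        "spec": "IETF RFC 5040-5044",
    },
}

def compare(prop):
    """Answer 'which protocol uses which?' for a given property."""
    return {name: t.get(prop) for name, t in RDMA_TRANSPORTS.items()}

print(compare("transport"))  # {'RoCEv2': 'UDP/IP', 'iWARP': 'TCP/IP'}
```

The webcast goes well beyond this table, covering performance, hardware requirements, and use cases for each.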
Many studies indicate that organizations have a shallow confidence level in their ability to recover from a disaster. The disaster recovery plan is often an ad-hoc plan that requires IT to scramble when disaster strikes. If the recovery effort succeeds, it is often well behind schedule and well over budget.
The first step toward a great disaster recovery plan is, at the very least, avoiding a bad one. In this live webinar join Storage Switzerland, Veeam, and KeepItSafe as we discuss how to go from a bad disaster recovery situation to a great disaster recovery plan for better business continuity.
Tape is as relevant as ever, but it may not be for the reasons that you think. In fact, as part of delivering hyper-availability, one best practice is to use tape as an “offline” storage copy to protect against ransomware and malware, which is a key part of a comprehensive data protection strategy.
In this webinar, join experts from Veeam and Quantum to learn about tape’s role in tomorrow’s infrastructures, best practices for tape in a Veeam environment, and Quantum’s new converged tape appliance for Veeam, which makes it easier to create tape copies to protect against ransomware.
Business continuity, especially across data centers in nearby locations, often depends on complicated scripts, manual intervention and numerous checklists. Those error-prone processes are exponentially more difficult when the data storage equipment differs between sites.
Such difficulties force many organizations to settle for partial disaster recovery measures, conceding data loss and hours of downtime during occasional facility outages.
In this webcast and live demo, you’ll learn about:
• Software-defined storage services capable of continuously mirroring data in real-time between unlike storage devices.
• Non-disruptive failover between stretched clusters requiring zero touch.
• Rapid restoration of normal conditions when the facilities come back up.
Attend live for a chance to win a $200 Amazon gift card!*
*Gift card will be sent within 10 days of the webinar date. Void where prohibited.
Building a data lake is easy. Architecting a successful data lake that is flexible enough to accept multiple data sources, volumes, and types all while being able to scale with your business is harder.
Do it wrong and you've created a data swamp. Do it right and you turn data into the most valuable asset in your business.
Join us and learn from Rajesh Nadipalli, Zaloni’s Director of Product Support and Professional Services, how to:
- Set your data lake up for success with the right architecture
- Build guard rails to ensure the accuracy of data in your lake with proper data governance
- Provide visibility into your lake with a robust data catalog (or tie in with your favorite BI tools)
While everyone agrees data is the new currency, it can quickly turn into an unforeseen liability if not managed well. The complexity of data silos, intense pressure from digital disruptors, business reliance on reaping digital dividends, and increasingly rigid regulatory environments all mandate a well-governed data strategy.
Today’s data governance frameworks leave a lot to be desired. Intelligent Data Governance ensures that data management is automated according to appropriate organizational policies, business standards, and global regulations, while allowing the business to extract in-time insights for agile decision making.
Join us to learn how you can tame increasing data diversity, meet emerging data privacy regulations and compliance standards, automate key governance policies, and modernize data protection.
Don’t miss the highlights of IBC2018 with this inside preview of the big themes and must-see tech on this year’s show floor.
Our panel of broadcasters and exhibitors guide us through their predictions of the hot topics from IP to AI, the rise-and-rise of OTT, and the impact of 5G, blockchain and ATSC 3.0.
Plus the new focus on building an efficient supply chain to create, manage, store and distribute content.
Join us to help make your IBC experience the best yet.
Jeremy Dujardin, CTO Global Media & Entertainment, Tata Communications
Tim Felstead, Director of Strategic and Operation Marketing Broadcast & Media, Rohde & Schwarz
Kathy Bienz, Director North America, IABM
Jurgita Rhodes, Partner, Marquis Media Partners
Machine and Deep Learning could not be hotter concepts today, driving powerful innovations across industries. But like so many hot concepts surrounding digital transformation, they seem to be done well by a select few organizations who have cracked the code of designing and deploying these platforms to support their critical transformation agendas. While many of the tools in this ecosystem are open-source, deploying them and running them well is certainly not. During this session we will show how organizations can enable their development and data science team with rapid deployments of the most popular tools for ML and AI based on proven best practices and the highly engineered Ready Solutions from Dell EMC.
94% of the Fortune Global 100 use SAS analytics. Through innovative software and services, SAS empowers and inspires customers around the world to transform data into intelligence. IDC predicts by 2019, 40% of Data Transformation initiatives will use AI services; by 2021, 75% of commercial enterprise apps will use AI, over 90% of consumers will interact with customer support bots, and over 50% of new industrial robots will leverage AI.
Data-driven applications and Machine Learning (ML) workloads using SAS analytics are increasing in volume and complexity as organizations look to reduce training and operational timelines for artificial intelligence (AI) use cases. To enable predictive and cognitive analytics, you need to accelerate training and operations by delivering ultra-low latency with massive ingest bandwidth when faced with heavy mixed random and sequential read/write workloads.
Vexata is teaming up with Destiny Corporation, a business and technology firm that is a SAS Gold partner and reseller. This webinar is targeted at SAS line-of-business and IT owners who are challenged with:
- new use cases like IoT and machine and deep learning across the FSI, insurance, healthcare and life sciences verticals
- handling growing data sets and deriving actionable intelligence from them
- optimizing their existing long-running jobs and IT infrastructure without a rip-and-replace policy
Learn how to identify your SAS analytics IO bottlenecks and leverage Vexata VX-100 with its transformative VX-OS purpose built to overcome these challenges.
Interoperability is a primary basis for the predictable behavior of a Fibre Channel (FC) SAN. FC interoperability implies standards conformance by definition. Interoperability also implies exchanges between a range of products, or similar products from one or more different suppliers, or even between past and future revisions of the same products. Interoperability may be developed as a special measure between two products, while excluding the rest, and still be standards conformant. When a supplier is forced to adapt its system to a system that is not based on standards, it is not interoperability but rather only compatibility.
Every FC hardware and software supplier publishes an interoperability matrix and per product conformance based on having validated conformance, compatibility, and interoperability. There are many dimensions to interoperability, from the physical layer, optics, and cables; to port type and protocol; to server, storage, and switch fabric operating systems versions; standards and feature implementation compatibility; and to use case topologies based on the connectivity protocol (F-port, N-Port, NP-port, E-port, TE-port, D-port).
In this session we will delve into the many dimensions of FC interoperability, discussing:
•Standards and conformance
•Validation of conformance and interoperability
•FC-NVMe conformance and interoperability
•Use case examples of interoperability
Sales is a scientific art; it takes someone who can translate their experience as a sales manager into an artful set of best practices. The science part requires the application of advanced data models to standard CRM data to understand their pipeline at greater depth. Blending the quantitative expertise of the data team with the experience, intuition and knowledge of the sales leadership ensures companies are able to use that pipeline analysis to correctly predict revenue. More importantly, that process helps sales managers manage and invest in their team and have data-driven conversations that will lead to exceeding revenue goals.
Join Sam Schuster, analytics team lead at Periscope Data, and Ben Loeffler-Little, head of sales at Periscope Data, as they discuss how analytics and sales have partnered to forecast new business revenue and manage pipeline. Together, they have built a tool that models new business revenue and identifies key pipeline trends to ensure sales leadership will hit their goals.
In this webinar, they will discuss:
- Why visualizing sales pipeline matters
- How to bridge the gap between technical tools and non-technical users
- How to model Salesforce data
- Business impact from using these visualizations
All attendees will also get access to the Salesforce SQL data models.
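The "science part" described above can be sketched in miniature as a stage-weighted pipeline forecast over standard CRM opportunity records: expected revenue is the sum of each deal's amount times the historical win rate for its stage. The stage names, win rates, and opportunity records below are illustrative assumptions, not Periscope Data's actual model or Salesforce schema.

```python
# Assumed historical win rates per pipeline stage (illustrative only).
STAGE_WIN_RATES = {
    "prospecting": 0.10,
    "demo": 0.30,
    "negotiation": 0.60,
    "contract_sent": 0.85,
}

def forecast_revenue(opportunities):
    """Expected new-business revenue: sum of amount x stage win rate."""
    return sum(
        opp["amount"] * STAGE_WIN_RATES[opp["stage"]]
        for opp in opportunities
    )

# A toy pipeline standing in for CRM opportunity data.
pipeline = [
    {"name": "Acme", "amount": 50_000, "stage": "demo"},
    {"name": "Globex", "amount": 120_000, "stage": "negotiation"},
    {"name": "Initech", "amount": 20_000, "stage": "contract_sent"},
]

# 50000*0.30 + 120000*0.60 + 20000*0.85 = 104000.0
print(forecast_revenue(pipeline))
```

Real models refine this baseline with deal age, rep history, and trend data, which is where blending the data team's models with sales leadership's intuition pays off.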
By leveraging the cloud, Disaster Recovery as a Service (DRaaS) eliminates many of the costs associated with a DR site, changing it from a capital cost to an operational cost. Does DRaaS sound too good to be true? Can organizations trust the cloud to provide a function as critical as DR?
In our live webinar, join Storage Switzerland and Infrascale as we recap how DRaaS works and what to look for in a DRaaS provider. You’ll also see an interactive demo of Infrascale DRaaS in action.
The integration between Nutanix and Zenoss enables enterprises to ensure the highest levels of performance and availability for applications and services that power their businesses. Zenoss and Nutanix subject matter experts will discuss and showcase how Nutanix uses Zenoss to monitor their HCI and other critical environments at Nutanix data centers. Zenoss enables IT organizations to eliminate blind spots across hybrid IT environments, predict impacts to critical services, resolve issues faster, and operate at any scale the business requires.
The session will include a demonstration of the Zenoss and Nutanix environments working together via the Nutanix ZenPack. Both teams will answer questions about Zenoss, the Nutanix ZenPack integration and future roadmap.
In this live webinar join experts from Storage Switzerland and StorOne as we explain how IT can create a storage infrastructure that is more nimble, performs better and is less expensive than cloud storage.
Attend this webinar to learn:
- The Storage Architectures Behind Cloud Storage Tiers
- How Cloud Providers Fake Frictionless Storage Infrastructure
- The Intrinsic Advantages of On-Premises Storage
- How to Enable On-Premises Storage to Beat the Cloud with a True Frictionless Infrastructure