The storage community on BrightTALK is made up of thousands of storage and IT professionals. Find relevant webinars and videos on storage architecture, cloud storage, storage virtualization and more, presented by recognized thought leaders. Join the conversation by participating in live webinars and round table discussions.
Enterprise IT is facing huge volumes of new data that are overwhelming legacy SAN solutions. Rather than stay trapped by the limitations of an aging legacy system, learn how you can leave slower data access, increased costs, and the inflexibility of a legacy infrastructure behind. Join the webcast “Demanding Workloads Demand NVMe” and find out how moving to an end-to-end NVMe infrastructure can clear your path toward data center modernization.
You will hear directly from three leading experts: Andrew Grimes from NetApp, Eric Burgener from IDC, and Naem Saafein from Brocade.
They will discuss how you can:
• Speed time to innovation by maximizing your network resources with the latest all-flash storage technologies
• Become a key part of your company’s digital transformation and future success
• Easily move from a siloed solution to a cloud-connected all-flash infrastructure
According to IDC, data creation will reach a total of 163 zettabytes by the year 2025. Traditional data centers are no longer able to address the need for scale, high availability and cost efficiency.
Watch this recorded webinar to learn more about best practices for enabling maximum availability and cost efficiency of your data center. During the session we’ll discuss:
• Typical challenges many service providers are facing
• Key criteria to consider as you evaluate a software-defined storage solution
• A variety of use cases for software-defined storage
• A live product demo of Virtuozzo Storage
Pure's Data-Centric Architecture delivers every storage service you need, whether block, VM, file, or object storage. You can consolidate everything. Support your databases, virtual machines, containers, analytics, and web applications with the effortless enterprise-class performance and availability that shared accelerated all-flash storage provides.
Join Caringo CEO Tony Barbagallo and VP Marketing Adrian “AJ” Herrera as they talk about what is new in Swarm 10, a landmark release that enhances every part of the Caringo product suite with unrivaled performance and cost-savings enabled by our unique pure-object approach. Learn how Caringo has set a new precedent in on-premises object storage with blazingly fast S3 throughput and sustained petabyte-scale NFS to object read and write—all on standard hard drives, server and networking infrastructure.
Existing “Do-It-Yourself AI” solutions require enterprises to procure, integrate, test and continuously maintain hardware and (open source) software all by themselves. In the process, they lose valuable months getting their AI initiatives off the ground and underutilize their resources during this crucial phase.
In this session, you will learn best practices for the design and deployment of AI infrastructure, and how to build an AI platform that delivers faster time to insight and enables your data scientists to be more productive.
Ramnath is Senior Manager, Product Marketing for AI and Deep Learning at Pure. Previously, he worked as a Marketing Manager at Mellanox Technologies, leading the market development activities for "ABC" - AI, BigData & Cloud. Before that, he was the RDMA Solutions Evangelist and led Cloud & Big Data strategy at Emulex. Prior to joining Emulex, he worked in two of the most prestigious research labs in Europe - the Brain Mind Institute at EPFL, Switzerland, and the Barcelona Supercomputing Center in Spain. He has 15+ publications in leading conferences and journals.
With the whirlwind pace of artificial intelligence (AI) and deep learning technology, many enterprises are challenged with how to advance new AI projects from proof of concept to production. Join the webcast “Drive Disruptive Innovation at Scale with AI” and find out how you can enable a secure and smooth flow of data for your AI workflows, from edge to core to cloud.
You will hear directly from three leading experts: Monty Barlow from Cambridge Consultants, Ritu Jyoti from IDC, and Santosh Rao from NetApp.
They will discuss how you can:
• Build a successful deep learning pipeline, whether in the cloud or on-premises
• Discover lessons learned from industry case studies, including automotive, retail, and healthcare
• Scale infrastructure to keep pace with your data-hungry AI applications
With a future-proof design, the VxBlock 1000 now supports PowerMax, modern storage for applications of today and tomorrow. Learn what sets PowerMax apart from other storage options, how and why you should consider a VxBlock 1000 for your next IT infrastructure purchase, and hear what a leading analyst has to say about converged systems and the VxBlock 1000.
Protocol Analysis for High-Speed Fibre Channel Fabrics in the Data Center: Aka, Saving Your SAN (& Sanity)
The driving force behind adopting new tools and processes in test and measurement practices is the desire to understand, predict, and mitigate the impact of Sick but not Dead (SBND) conditions in datacenter fabrics. The growth and centralization of mission-critical datacenter SAN environments has exposed the fact that many small yet seemingly insignificant problems can become large-scale, impactful events unless properly contained or controlled.
Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that sidestep parts of the traditional network stack (e.g., FCoE, iWARP, NVMe over Fabrics) for expedited data delivery place additional analytical demands on the datacenter manager.
To be sure, all tools have limitations in their effectiveness and areas of coverage, so a well-constructed “collage” of best practices and effective and efficient analysis tools must be developed. To that end, recognizing and reducing the effect of those limitations is essential.
This webinar will introduce participants to Protocol Analysis tools and how they may be incorporated into the “best practices” application of SAN problem solving. We will review:
• The protocol of the Phy
• Use of “in-line” capture tools
• Benefits of purposeful error injection for developing and supporting today’s high-speed Fibre Channel storage fabrics
Join us on October 10, 2018. FCIA experts will be on hand to answer your questions.
The Long Term Retention Technical Working Group and the Data Protection Committee will review the results of the 2017 100-year archive survey. In addition to the survey results, the presentation will cover the following topics:
· How the use of storage for archiving has evolved in ten years
· What type of information is now being retained and for how long
· Changes in corporate practices
· Impact of technology changes such as Cloud
You’ve heard about the benefits of hybrid cloud and how it allows businesses to take advantage of the best features of public and private clouds to deliver new revenue, better customer experiences, and lower costs. Now learn how including a carefully considered data strategy in your hybrid cloud architecture removes barriers and enables data mobility, ultimately allowing you to generate insights and create better business outcomes.
In this webcast you will learn:
• How to design a hybrid cloud that secures your data without creating silos that limit business value
• How to make your data accessible anywhere and at any time, while maintaining security
• How to maximize ROI on your data and critical apps
• How a partnership with Hitachi and VMware accelerates your cloud adoption while delivering business flexibility and control
This webinar, brought to you by Rohde & Schwarz, explores the future of uplink amplifiers, including solid-state designs, linearity, and signal quality, to help you select the right amplifier for uplink scenarios.
Christian Baier, Product Manager Satellite Amplifiers, Rohde & Schwarz
Dr Florian Ohnimus, Director R&D RF Power Components, Rohde & Schwarz
SUSE Enterprise Storage 5.5 is the latest release of the award-winning SUSE software-defined storage solution, powered by Ceph technology. Based on the Ceph Luminous release and built on SUSE Linux Enterprise Server 12 SP3, it broadens the scope and use cases for the SUSE software-defined storage solution, enabling IT organizations to reduce operational expenses while delivering enterprise-grade storage with our intelligent software-defined storage management solution.
As more organizations turn to AWS to run their mission-critical workloads like SAP, Oracle, MSSQL, MongoDB, MySQL and NoSQL, learn how Dell EMC Data Protection along with AWS is positioned to:
• Provide a seamless data protection experience on AWS for enterprise business applications
• Optimize for performance by leveraging Dell EMC Application Direct technology
• Significantly lower costs by leveraging industry-leading Data Domain deduplication on AWS S3
All-flash arrays are a model of inefficiency, and as flash media increases in density and performance, the cost of this inefficiency becomes more obvious. An enterprise solid-state drive (SSD) can deliver 70,000 IOPS on its own, yet most AFAs need 24 drives or more to achieve 70,000 IOPS. Those same systems also need high-end processors to move IO through them at those speeds. These systems cost hundreds of thousands of dollars when in reality they should cost less than $100,000.
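To make that inefficiency concrete, here is a quick back-of-the-envelope calculation using the figures above (the drive count and IOPS numbers come from this paragraph; the script itself is purely illustrative):

```python
# Back-of-the-envelope AFA efficiency check using the figures quoted above.
per_drive_iops = 70_000   # rated IOPS of a single enterprise SSD
drive_count = 24          # typical minimum drive count in an all-flash array
delivered_iops = 70_000   # IOPS the whole array actually delivers

raw_iops = per_drive_iops * drive_count   # 1,680,000 IOPS of raw media capability
utilization = delivered_iops / raw_iops   # fraction of raw IOPS the array exposes

print(f"Raw IOPS across {drive_count} drives: {raw_iops:,}")
print(f"Delivered IOPS: {delivered_iops:,}")
print(f"Media utilization: {utilization:.1%}")  # ~4.2% of the flash's potential
```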
In this webinar we will discuss how storage software bottlenecks current all-flash arrays. We will explain how inefficient storage software forces vendors to use more powerful and more expensive CPUs, as well as a greater quantity of flash drives. We'll also explain why technology advancements like NVMe and increasing flash density will only make the situation worse.
Registrants for the webinar will receive a copy of Storage Switzerland's eBook "Why Does Storage Cost So Much - How to Dramatically Lower the Cost of Storage".
- What is Edge computing and why it will be key to broadcasting, media and entertainment
- How edge computing is rapidly evolving to provide content anywhere and everywhere through the IoT and soon across 5G networks
- What mobile edge computing means for production, processing and video distribution
- How new consumer experiences, VR, AR, UHD streaming will be driven by the availability of edge infrastructure
- Protecting physical and digital assets: how to secure the critical IT infrastructure that will house valuable content at the edge of the network
Jefferson Wang, Managing Director, Accenture Strategy, Communications, Media and Technology
Steven Carlini, Senior Director Data Centre Global Solutions, Schneider Electric
Damon Neale, Chief Technology Officer, BASE Media Cloud
This talk will provide a general overview of the HPE StoreEver Tape Library family, including specifics on the MSL3040. The speakers will cover HPE’s tape library management platform, Command View for Tape Libraries software, what it can do for you, and how it can help you run your business more efficiently.
Achieving actionable insights from data is the goal of any organization. To help in this regard, data catalogs are being deployed to build an inventory of data assets that provides both business and IT users a way to discover, organize and describe enterprise data assets. This is a good first step that helps all types of users easily find relevant data to extract insights from.
Increasingly, end users want to take the next step of provisioning or procuring this data into a sandbox or analytics environment for further use. Attend this session to see how organizations are looking to build actionable data catalogs via a data marketplace that allows self-service access to data without sacrificing data governance and security policies.
Learn how to provide governed access and visibility to the data lake while still staying on track and within budget. Join Scott Gidley, Zaloni’s Vice President of Product, as he discusses:
- Architecting your data lake to support next-gen data catalogs
- Rightsizing governance for self-service data
- Where a data catalog falls short and how to address those gaps
- Successful use cases
Attend this webinar to learn about the first Azure-based solution that combines limitless NAS, Archiving, Backup, and Disaster Recovery and that automatically reduces your costs as your files age.
See how Nasuni Cloud File Services and Azure object storage enable you to:
- Store active and inactive data in one global file system that has no capacity limits
- Cache actively used files locally to minimize egress charges and cloud latency
- Provide access to active files at local LAN speeds from any office location
- Avoid ever having to migrate or tier file data again
SISCIN from Waterford Technologies allows the creation of policies based on data profile for retention, deduplication or archiving, enabling full control in managing your file data. Flexible storage controls let you archive directly to the cloud or locally, giving organisations the performance and scalability of the cloud on their existing server infrastructure.
For the bulk of enterprise data, the answer for cost-effective data storage is not always high-performance primary storage, meant for real-time applications, but rather the more-than-adequate performance afforded by object-based storage.
If you're involved with enterprise storage, even on the periphery, you've probably heard someone talk about Object Storage. If you feel you don't actually know what object storage is, what the buzz is all about and if, and where, it would be appropriate to deploy in your organization - you're not alone.
Join us to learn why object storage matters, the limitations it solves within the data center, why it's become so prevalent in the age of Big Data, and how customers are using object storage today.
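If you are new to the model, the following minimal sketch shows what object storage access looks like in practice: objects are written and read by key over an HTTP API such as S3, with user metadata traveling alongside the data. The endpoint, bucket, and keys below are hypothetical placeholders, shown with the boto3 client (credentials are assumed to be configured in the environment):

```python
# Minimal object storage interaction via the S3 API.
# Endpoint, bucket, and object keys are hypothetical placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

# Objects are written and read by key -- no volumes, partitions, or directory trees.
s3.put_object(Bucket="demo-bucket", Key="reports/2018/q3.pdf", Body=b"...report bytes...")

obj = s3.get_object(Bucket="demo-bucket", Key="reports/2018/q3.pdf")
print(obj["Body"].read())

# Rich metadata travels with the object itself, a key difference from file storage.
s3.put_object(
    Bucket="demo-bucket",
    Key="scans/img-0001.tiff",
    Body=b"...image bytes...",
    Metadata={"department": "radiology", "retention": "7y"},
)
```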
Dell EMC recently launched PowerEdge MX, the industry’s newest high performance, modular infrastructure, designed to support a wide variety of traditional and emerging data center workloads. PowerEdge MX offers the first modular infrastructure architecture designed to easily adapt to future technologies and server disaggregation. With its unique kinetic infrastructure, customers can break free from the bounds of technology silos and time-consuming, routine operational management while also dynamically assigning IT to optimally match different applications and needs.
Join us for a webcast and hear from Dell EMC experts how PowerEdge MX can reduce operating expenses, improve IT productivity and drive new business models that accelerate growth.
Even with a cloud-first strategy, enterprise IT is increasingly concluding that there will always be an on-premises component, making hybrid cloud the only realistic long-term end state. This presentation focuses on the data aspect of hybrid cloud. Storage is foundational to computing, and that is just as true for hybrid cloud computing. The presentation looks briefly at the data aspect by identifying use cases (DR, data protection, archive, etc.) and then examines how users are implementing and managing hybrid cloud storage.
As more organizations adopt a cloud-first strategy, the task of migrating high-volume transactional workloads presents a unique set of challenges, particularly in handling the large amounts of data involved. Join Primitive Logic and Actifio as we discuss the most pressing challenges around transactional data migrations … and the solutions that can help address them.
You will learn:
The unique challenges in migrating transactional data to the cloud
How to handle data for applications with both on-prem and cloud components
How to approach transactional data as part of a multi-cloud strategy
How data virtualization helps resolve issues of security, governance, multi-cloud coordination, and more
The General Data Protection Regulation (GDPR) makes specific demands on organizations based in and doing business in the European Union (EU). Now several US states are considering similar legislation and California has already passed a GDPR-like law. Clearly this is not solely an EU problem.
From a data management standpoint, GDPR presents IT with two challenges. First, IT has to ensure the ongoing protection of data, which, given the growth of unstructured data, is increasingly difficult. The size and number of files is an ongoing data management problem, but meeting the specific demands for retention of discrete files within the data set is a bigger one.
An even bigger challenge comes from the right-to-be-forgotten aspect of these regulations, under which a user can request the removal of all their data from a backup set. Vendors are working on several potential solutions, like delete on restore and isolated recovery zones, but each of these creates its own challenges.
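As a rough illustration of the “delete on restore” approach mentioned above, consider the conceptual sketch below; it is plain Python, not any vendor's implementation, and the record layout and forgotten-user list are hypothetical:

```python
# Conceptual "delete on restore" sketch: redact forgotten users' records while
# restoring from an immutable backup set. Purely illustrative, not a vendor API.
forgotten_user_ids = {"user-1042", "user-2217"}  # hypothetical erasure requests

def restore(backup_records, destination):
    """Copy records out of a backup, skipping any that belong to forgotten users.

    The backup itself stays immutable; erasure happens at restore time.
    """
    for record in backup_records:
        if record["owner_id"] in forgotten_user_ids:
            continue  # honor the erasure request without rewriting the backup
        destination.append(record)

backup = [
    {"owner_id": "user-1042", "data": "..."},
    {"owner_id": "user-3001", "data": "..."},
]
restored = []
restore(backup, restored)
print(restored)  # only user-3001's record survives the restore
```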
In this live webinar, join Storage Switzerland and Aparavi as we dive deep into the impact of GDPR and similar regulations on data management and the data protection process. The time to prepare for the ever-increasing demands on data retention and data privacy is now.
Dell EMC VxBlock 1000 incorporates all-in-one, high-performance, highly reliable data protection that is pre-tested, pre-validated and supported by Dell EMC. In this webinar, learn about the business value that Dell EMC Integrated Data Protection for CI customers have reported through interviews with Taneja Group. Jeff Kato (Senior Analyst and Consultant, Taneja Group) is joined by Shad Stark (Systems Architect, IT, Palmer College of Chiropractic) and Jason Kahn (Product Manager, Dell EMC Integrated Data Protection for CI).
IT managers are increasingly seeking out suppliers of high-performance, cost-effective and energy-efficient Green IT products, primarily to reduce skyrocketing data center operational costs, a large proportion of which are energy related. Supermicro’s Resource-Saving Architecture continues our tradition of leading the market with Green IT innovation that provides TCO savings for our customers and helps reduce TCE, the Total Cost to the Environment. We have introduced an overall architecture that optimizes datacenter power, cooling, shared resources and refresh cycles by enabling the modular refresh of subsystems and using optimized extended-life subsystems.
In this webinar we will provide an intro to the key elements of this architecture and the benefits it provides including:
• Disaggregated Server Architecture
• Multi-node Power & Cooling
• Resource Pooling
• Rack Scale Management
Join us to learn how Resource-Saving Architecture can benefit your datacenter deployment today.
In order to deliver immediate value back to the business, it’s critical to ensure that an organization’s financial systems are running at full strength, but in most cases, I/O bottlenecks throttle performance and delay analytic outcomes. Vexata and Levyx have collaborated on a joint solution that achieves increased performance with less infrastructure, resulting in a 300% improvement in the price/performance ratio over the industry's next best alternative solution. In this webinar, you’ll learn:
• How to utilize the Levyx low-latency software and Vexata’s NVMe-based storage systems
• Best practices to eliminate bottlenecks for tick-analytics, strategic back-testing, algorithmic modeling, etc.
• Real-world results from customer trials and the recent STAC A3 test benchmarks
• Matt Meinel, Senior Vice President of Solutions Architecture, Levyx Inc.
• Rick Walsworth, VP of Product & Solution Marketing at Vexata
Do you have a dynamically growing storage environment? Are you currently facing major challenges in backup, disaster recovery and archiving? Are you looking for a future-proof backup solution that scales with your requirements? Then you should join our webcast!
Together with Empalis, we will show you how SUSE Enterprise Storage helps you optimize your TSM / Spectrum Protect storage environment while sidestepping the limits of traditional systems (poor scalability, high costs, no cloud capability). The intelligent, software-defined storage solution based on Ceph technology enables you to transform your company's storage infrastructure, cutting costs and delivering unlimited scalability.
To borrow a phrase from a popular R.E.M. song, “It’s the end of the LUN as we know it, and I feel fine.” VMware VVols changes everything we know about storage for vSphere, in a good way: with VVols, LUN management is a thing of the past. VVols represents the future of external storage for vSphere, and that future is here right now. It also represents many years of engineering work by both VMware and its storage partners. The result of that work is a new storage architecture for vSphere that solves many of the hidden complexities inherent in VMFS and levels the playing field between file and block protocols. Learn from experts at HPE and VMware how VVols transforms external storage in vSphere, eliminates complexities and provides very real benefits to customers.
We know keeping end users productive is top of mind for IT, and that’s why good systems management is so important. But it can be difficult to stay on top of deployment, configuration, compliance and access.
Dell can simplify it all for you. We’ve developed products and services that will keep your systems running at peak efficiency so you can spend time and labor on the more important and strategic parts of your business.
Join us for this webinar to learn how Dell simplifies systems management, allowing our customers to save time, money and resources across their entire PC environment.
• Streamlining and automating with Dell Client Command Suite
• How Intel vPro Out of Band integrates into the solution
• Powerful new provisioning capabilities
Massive scalability is one thing, but what if you want to run Object-Based Storage for a smaller shop? Can you start with a small hardware investment and run object-based storage on just one server? John Bell, Sr. Consultant, and Jamshid Afshar, Caringo Engineer, will explain how you can store, manage, search and deliver data with just one server, while maintaining the ability to scale out by simply plugging in additional servers as your data storage needs grow. They will explain how this “pay-as-you-grow” model can benefit organizations as they start to outgrow traditional SAN, NAS and Tape storage solutions.
The leading companies of tomorrow will be technology enabled. Whether in personalized medicine, mobility, entertainment, financial services, retail, or any other experience, their chance to "win their market" will be largely determined by an ability to leverage technology as a weapon and deploy and operate that technology-based arsenal across the globe.
Join Esther Spanjer, Director of Business Development EMEIA, for this webinar where she will explain the latest developments in Western Digital’s enterprise SSD/HDD product offering. First, she will explain the transition from WD Gold™ to Western Digital Ultrastar® HDDs and how you can best migrate to the new offering. She will also introduce two newly released enterprise SSDs that bring the latest flash technology to your datacenter. The new Ultrastar DC SS530 SAS SSD offers best-in-class performance among current dual-port 12Gb/s SAS SSD offerings on the market, and can help drive faster data analytics, higher productivity, and better business decision-making. The Ultrastar SN620 NVMe™ SSD completes Western Digital’s NVMe portfolio by delivering an essential NVMe drive that performs at 3x the bandwidth of enterprise SATA SSDs, providing a path to server consolidation.
As organizations continue to migrate to Office 365 for their email, productivity and collaboration tools, they’re quickly realizing that Office 365’s native capabilities do not provide the essential data protection capabilities they need.
Join us for a technical webinar on Tuesday, August 21st at 3PM SGT/ 5PM AEST and learn best practices for safeguarding your Office 365 data, including:
- Gaps within OneDrive, Exchange Online and SharePoint Online that lead to increased risk of data loss
- How a third party backup solution can automate data protection and ensure data recoverability from user error, malicious behavior or malware
- How to build a data management strategy for the future that leverages the cloud and improves alignment with organizational policies and SLAs
Copy Data Management promises not only to improve the data protection process but also to provide value to the organization even without a looming disaster. It can reduce storage costs by presenting virtual copies of data to test/dev, analytics and reporting, and it can make sure those copies are refreshed so those use cases are always dealing with the latest copy of data.
Join Storage Switzerland and Hitachi Vantara for our live panel discussion where we will show how copy data management better protects the organization and makes protection more than just an insurance policy. We’ll show how Copy Data Management can reduce the cost of storage throughout the data center while at the same time improving processes like test/dev, analytics and reporting. Finally, we’ll also explain why copy data management solutions may actually be better at protection, since they store data in its native form, enabling it to be recovered more quickly.
The webinar will also include a live demo of Hitachi’s Data Instance Director. See Copy Data Management in action!
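For readers who want a feel for the underlying mechanism, here is a minimal copy-on-write sketch of the virtual-copy idea behind copy data management. It is a conceptual model only, not Hitachi's implementation, and all names in it are hypothetical:

```python
# Conceptual copy-on-write "virtual copy" sketch: many logical copies served
# from one physical protection copy. Illustrative only, not a vendor product.
class VirtualCopy:
    def __init__(self, golden):
        self.golden = golden     # shared, read-only protection copy
        self.overlay = {}        # private writes for this consumer only

    def read(self, key):
        return self.overlay.get(key, self.golden.get(key))

    def write(self, key, value):
        self.overlay[key] = value  # copy-on-write: the golden copy stays pristine

golden_backup = {"orders.db": "v42"}        # one physical protection copy
test_dev = VirtualCopy(golden_backup)       # test/dev sees a full "copy"
analytics = VirtualCopy(golden_backup)      # analytics sees another, at no extra storage

test_dev.write("orders.db", "v42-scrubbed") # test/dev changes are isolated
print(analytics.read("orders.db"))          # analytics still reads v42
```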
The recent data explosion is a huge challenge for storage and IT system designers. How do you crunch all that data at a reasonable cost? Fortunately, your familiar SAS comes to the rescue with its new 24G speed. Its flexible connection scheme already allows designers to scale huge external storage systems with low latency. Now the new high operating speed offers the throughput you need to bring big data to its knobby knees! Our panel of storage experts will present practical solutions to today’s petabyte problems and beyond.
NVMe and all-flash systems can solve any performance, floor space and energy problem. At least, that is the marketing message many vendors and analysts spread today – but it sounds too good to be true, right?
As always in real life, there is no clear black or white, but there are some circumstances you should be aware of – especially if you intend to leverage these technologies.
You may ask yourself: Do I need to rip and replace my existing storage? What is the best way to integrate both? What benefits do I receive?
Well, just join our brief webinar, which also includes a live demo and audience Q&A so you can get the most out of these technologies, make your storage great again and discover:
• How to integrate Flash over NVMe in real life
• How all of your applications can benefit from some Flash/NVMe
Enterprise preparation for AI has centered almost exclusively on data prep and data science talent. While without data there would be no AI, enterprises that fail to ready the broader organization (chiefly people, process, and principles) don’t just stunt their capacity for good AI; they risk sunk investment, jeopardized employee trust, brand backlash, or worse.
Ensuring sustainable deployment starts with assessing enterprise data strategy, aligning myriad stakeholders, evaluating technological feasibility, and coordinating an approach to ethics.
Join VentureBeat and Jessica Groopman, industry analyst and founding partner of Kaleido Insights, for a discussion of the five fundamentals of AI readiness at our upcoming VB Live event!
Attend this webinar and learn:
* What you need to do to prepare for AI, beyond the data science team
* Real-world examples and research findings
* Top 5 best practices for strategic AI implementation
* Nathan Decker, Director of eCommerce, evo
* Ken Natori, President, Natori Company
* Jessica Groopman, Industry analyst and founding partner of Kaleido Insights
* Rachael Brownell, Moderator, VentureBeat
Learn the origin of big data applications, how new data pipelines require a new infrastructure toolset and why both containers and shared storage are the fundamental infrastructure building blocks for future data pipelines.
We will first discuss the factors driving changes in the big-data ecosystem: ever-greater increases in the three Vs of data volume, velocity, and variety. The data lake concept was originally conceived as a single location for all data, but in reality multiple pipelines and storage systems quickly lead to complex data silos. We then contrast legacy Hadoop applications, which are built only for volume, with the next generation of applications, like Spark and Kafka, which solve for all three Vs. Finally, we end with how to build infrastructure to support this new generation of applications, as well as applications not yet in existence.
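As a concrete example of the kind of next-generation pipeline described above, the sketch below consumes a Kafka stream with Spark Structured Streaming and lands it on shared storage. The broker address, topic, and paths are hypothetical placeholders, and the job assumes the Spark-Kafka connector package is available on the cluster:

```python
# Minimal Spark Structured Streaming job consuming from Kafka.
# Broker address, topic, and checkpoint/output paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Velocity + variety: a continuous stream of loosely structured events.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Volume: land the raw stream on shared storage for downstream analytics.
query = (
    events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    .writeStream
    .format("parquet")
    .option("path", "/datahub/events")
    .option("checkpointLocation", "/datahub/checkpoints/events")
    .start()
)
query.awaitTermination()
```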
About the Speakers:
Ivan Jibaja, Tech Lead, Pure Storage
Ivan Jibaja is currently a tech lead for the Big Data Analytics team inside Pure Engineering. Prior to this, he was part of the core development team that built FlashBlade from the ground up. Ivan graduated with a PhD in Computer Science from the University of Texas at Austin, with a focus on systems and compilers.
Joshua Robinson, Founding Engineer, FlashBlade, Pure Storage
Joshua builds Pure’s expertise in big data, advanced analytics, and AI. His focus is on organizing a cross-functional team, technical validation, performance benchmarking, solution architectures, collecting customer feedback, customer consultations, and company-wide trainings. Joshua specializes in several data analytics tools, including Hadoop, Spark, ElasticSearch, Kafka, and TensorFlow.
For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.
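To make the persistence gap concrete, the following simulation (plain Python; none of these calls are real RDMA verbs, and the buffer model is deliberately simplified) shows why a completed RDMA Write alone is not enough and what a flush-style extension adds:

```python
# Conceptual simulation of why PMoF needs a flush/commit extension.
# This models protocol behavior only; these are not real RDMA verbs.
class RemotePMTarget:
    def __init__(self):
        self.volatile_buffer = {}    # data landed at the target but not yet durable
        self.persistent_memory = {}  # data guaranteed to survive power loss

    def rdma_write(self, addr, data):
        # An RDMA Write completion at the *sender* only means the payload was
        # delivered -- it may still sit in volatile buffers at the target.
        self.volatile_buffer[addr] = data
        return "write-completion"    # no persistence guarantee implied

    def rdma_flush(self, addrs):
        # The proposed extension: force the listed writes to persistence and
        # only then acknowledge, giving the sender a durability/ordering point.
        for addr in addrs:
            self.persistent_memory[addr] = self.volatile_buffer[addr]
        return "flush-completion"    # now the data is known to be durable

target = RemotePMTarget()
target.rdma_write(0x1000, b"log record")
target.rdma_flush([0x1000])          # without this step, a crash could lose the write
assert target.persistent_memory[0x1000] == b"log record"
```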
The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.