The storage community on BrightTALK is made up of thousands of storage and IT professionals. Find relevant webinars and videos on storage architecture, cloud storage, storage virtualization and more presented by recognized thought leaders. Join the conversation by participating in live webinars and round table discussions.
Storage Switzerland, KeepItSafe, Veeam | Recorded: Feb 15, 2019 | 17 mins
Most data centers still use a legacy strategy of replicating, or even physically transporting, backups to a dedicated disaster recovery (DR) site or a secondary site owned by the organization. Disaster Recovery as a Service (DRaaS) delivers a compelling alternative to traditional DR, with a strong return on investment (ROI): it eliminates the costs associated with a dedicated DR site, such as paying for and equipping the site. Organizations, though, are hesitant to transition to DRaaS, following the "if it ain't broke, don't fix it" philosophy.
In this live 15-minute webinar, join Storage Switzerland, Veeam and KeepItSafe to learn how to transition from a legacy DR strategy to DRaaS without risking data protection downtime.
In 15 minutes, we cover the areas critical to success:
1. DRaaS Software selection
2. Cloud Provider selection
3. Creating and executing a transition plan
UK public sector organisations' IT requirements are evolving. Techniques such as containerisation, continuous integration and infrastructure-as-code have significantly increased the pace and agility of modern development teams.
Correct deployment of these techniques can enable deployment of new features many times a day, so that business value is achieved in the shortest time possible.
As a result, UKCloud recognises the need for a fully optimised Platform as a Service offering that enables developers to push applications straight to the cloud, without having to concern themselves with the management, support and configuration of the underlying infrastructure.
Richard McCormack, Head of Product Solutions at Fujitsu, joins Bill Borsari, Systems Architect Director at Datera | Recorded: Feb 14, 2019 | 56 mins
Discover the architecture behind high-performance storage with Datera data automation on Fujitsu PRIMERGY servers.
Datera is the only true software-driven data services platform that:
•Powers modern application environments at global scale
•Provides game-changing data orchestration and automation with enterprise-class performance
•Combines with Fujitsu PRIMERGY servers in a joint solution that takes full advantage of their performance, security and reliability
•Delivers up to 70% lower total cost of ownership and operation compared to traditional solutions
Richard McCormack, Head of Product Solutions at Fujitsu, joins Bill Borsari, Systems Architect Director at Datera, to share how Datera and Fujitsu PRIMERGY systems are leading the software-defined revolution by providing the most powerful and flexible data center innovations across a vast ecosystem of solutions to turn your IT into a business advantage.
Sam Fawaz, Cloud Solutions Architect; Amit Rawlani, Dir. of Tech Alliance, Cloudian; William Bell, EVP of Product, PhoenixNAP | Recorded: Feb 14, 2019 | 41 mins
Storage admins today manage ever-increasing data volumes in a 24x7, zero-downtime environment. Protecting these critical assets – while maintaining service levels, compliance, and security across all locations – has never been more costly and complex. Excessive backup times and unmet RPO/RTO SLAs potentially lead to data loss and downtime.
Join us for a live webcast to learn how PhoenixNAP streamlined their backup and storage services with Veeam and Cloudian to provide:
• Flexible data protection options for all of your workloads – including for VMware and Hyper-V
• Improved operational efficiencies, including RPO/RTO service levels
• Optimized data storage for scale and performance
• Simple and unified control and visibility of data
• Protection from ransomware
Live Webcast: Thursday, February 14th at 10am PST / 1pm EST / 6pm GMT
Aashish Majethia, Senior Solutions Engineer | Recorded: Feb 13, 2019 | 38 mins
Analysts need timely access to enterprise data in order to stay competitive in today's rapidly changing environment. Typically, business users need to request access through the IT department, which can be a waiting game because of technological roadblocks, governance restrictions, or both. This adds more work, more process, and more frustration on both sides. Having the ability to find data sets, and to examine, update, and provision the data themselves, allows business users to move quickly and frees IT to work on higher-priority items.
A modern data platform should provide a self-service data marketplace that gives right-sized, governed access to data. Security permissions let IT define who needs access to which data at the appropriate stage of the data pipeline; this becomes quite complicated in regulated environments. Users should be able to search for data they have access to, explore and potentially update the associated metadata, and provision the data into a sandbox when ready.
Join us as Aashish Majethia, a Senior Solutions Engineer, dives into the self-service data marketplace and what is required to make it successful. He will cover topics including:
- Self-service data preparation
- Governance considerations and how they can enable a more agile data-driven enterprise
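The governance model sketched in this description (IT defines who may see which data at which pipeline stage; users search and provision into a sandbox themselves) can be illustrated in a few lines of plain Python. This is a toy sketch, not any vendor's API; all dataset names, roles, and functions here are hypothetical.

```python
# Hypothetical governed self-service provisioning: IT defines which roles
# may access which dataset at which pipeline stage; users then search and
# provision into a sandbox on their own, with governance enforced in code.

CATALOG = {
    "sales_raw":     {"stage": "raw",     "allowed_roles": {"data_engineer"}},
    "sales_curated": {"stage": "curated", "allowed_roles": {"data_engineer", "analyst"}},
}

def search(role):
    """Return only the datasets this role is allowed to see."""
    return [name for name, meta in CATALOG.items() if role in meta["allowed_roles"]]

def provision(role, dataset, sandbox):
    """Copy a dataset reference into the user's sandbox if governance allows it."""
    meta = CATALOG.get(dataset)
    if meta is None or role not in meta["allowed_roles"]:
        raise PermissionError(f"{role} may not provision {dataset}")
    sandbox[dataset] = meta["stage"]
    return sandbox

sandbox = {}
print(search("analyst"))                     # analysts see only curated data
provision("analyst", "sales_curated", sandbox)
```

The point of the sketch: self-service and governance are not in tension when access rules live in the catalog itself rather than in an IT ticket queue.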
George Crump, Storage Switzerland and W. Curtis Preston, Druva | Recorded: Feb 13, 2019 | 60 mins
Managing and protecting critical data across servers and applications in multiple locations around the globe is challenging. And the more decentralized and complex your infrastructure, the more difficult it is to manage your data. The potential bad news? Data loss, site outages, revenue loss, and non-compliance with regulations.
But here’s the good news: centralizing data protection in the cloud can make all the difference. That’s why you should join our webinar and hear from storage expert, George Crump, from Storage Switzerland and Druva’s W. Curtis Preston, Chief Technologist, as they discuss:
• Why protecting a distributed data center is challenging with traditional methods
• How a cloud-centralized backup strategy can be a game changer for your organization
• How Druva can help you drastically improve data protection quality, reduce costs, and simplify global management and configuration
Jacque Istok, Head of Data, Pivotal and Kelly Carrigan, Principal Consultant, EON Collective | Recorded: Feb 13, 2019 | 59 mins
This webinar is for IT professionals who have devoted considerable time and effort growing their careers in and around the Netezza platform.
We’ll explore the architectural similarities and technical specifics of what makes the open source Greenplum Database a logical next step for those IT professionals wishing to leverage their MPP experience with a PostgreSQL-based database.
As the Netezza DBMS faces a significant end-of-support milestone, leveraging an open source, infrastructure-agnostic replacement that has a similar architecture will help avoid a costly migration to either a different architecture or another proprietary alternative.
Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp | Recorded: Feb 13, 2019 | 60 mins
Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:
•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon – in 2 years? 5 years?
•What it all means
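The discover / compose / flex cycle described above can be reduced to a toy model: resources of different kinds sit in a shared pool, get claimed into a virtual system on demand, and are released back when needs change. This is an illustrative sketch only; the resource names and functions are made up, not part of any orchestration product.

```python
# Toy model of composable infrastructure: discover free heterogeneous
# resources, compose them into a virtual system for a workload, then
# release them back to the shared pool so they can be re-composed.

inventory = [
    {"id": "cpu-1", "kind": "compute"}, {"id": "cpu-2", "kind": "compute"},
    {"id": "nic-1", "kind": "network"},
    {"id": "ssd-1", "kind": "storage"}, {"id": "ssd-2", "kind": "storage"},
]

def compose(needs, pool):
    """Claim one free resource of each required kind; error if unavailable."""
    claimed = []
    for kind in needs:
        free = next((r for r in pool if r["kind"] == kind), None)
        if free is None:
            raise RuntimeError(f"no free {kind} resource")
        pool.remove(free)
        claimed.append(free)
    return claimed

def decompose(claimed, pool):
    """Return resources to the shared pool for re-provisioning."""
    pool.extend(claimed)

system = compose(["compute", "network", "storage"], inventory)
print([r["id"] for r in system])
decompose(system, inventory)
```

The "finer-grained" benefit shows up here as the pool tracking individual devices rather than whole pre-built servers: each kind scales independently.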
Jacob Smith, Co-founder and CMO at Packet & Ihab Tarazi, Chief Technology Officer at Packet | Recorded: Feb 13, 2019 | 30 mins
Data centers have been around for decades, but easy access and simple data center management (for individual companies) have remained elusive. Yet 80% of the world's IT is still on-premises.
With the rise of software-defined data centers (SDDC), Data Centers as a Service providers such as Packet have made it easy for companies to leverage the resources that a Data Center has to offer (vs. the public cloud) without the headache.
Learn how the evolution of SDDC is changing the infrastructure market and how you can make your company's infrastructure strategy your competitive advantage.
Adam Sharp, CIO, International, TBWA | Andy Hardy, VP, EMEA, Nasuni | Recorded: Feb 13, 2019 | 36 mins
Learn from the International CIO of one of the top 10 ad agencies in the world how consolidating infrastructure has become key to streamlining costs, enhancing productivity, and increasing cash flow.
With operations spanning 295 offices and 11,000 employees, TBWA’s IT staff faced increasing demands to bring storage and data protection costs under control without reducing flexibility or security. Join Adam Sharp, CIO International for TBWA, The Disruption® Company, and Adweek’s 2018 Global Agency of the Year, as he discusses why his team opted for a “disruptive” cloud strategy of its own and what the results to date have been:
- How the business strategy drove TBWA to the cloud
- What key decisions enabled them to reduce costs and meet capacity requirements at the same time
- How they went about a worldwide roll-out of their cloud storage strategy
- Why multi-cloud capability is important
Johna Till Johnson, CEO & Founder, Nemertes Research | Recorded: Feb 12, 2019 | 49 mins
Selecting a vendor partner (or partners) is one of the most critical decisions enterprises will make on their IoT journeys. The right partner makes all the difference: enterprises with top-ranked partners report greater success in generating revenue, cutting costs, and optimizing business processes via IoT.
• Who are the right providers?
• What are the critical factors to consider in selecting one?
This webinar reviews the provider landscape and highlights critical selection factors for companies of all sizes and industries.
Parker Sinclair, Sales Engineer, Druva Inc. | Recorded: Feb 12, 2019 | 20 mins
Did you know that you can simplify data protection for enterprise workloads with a single cloud solution for backup, archival, and disaster recovery?
Attend a 30-minute live demo of Druva Phoenix and our product expert will answer all of your questions!
During this demo you'll learn how to:
- Provide global protection for enterprise server and virtual workloads
- Reduce total cost of ownership (TCO) by up to 60%
- Isolate and quickly restore data during infrastructure attacks
- Get started within minutes — Druva Phoenix is offered as-a-service and can be provisioned on-demand
Mike Harding, Product Manager - Microsoft Storage Solutions, HPE | Recorded: Feb 7, 2019 | 52 mins
Learn how HPE hardware brings out the best in Microsoft Exchange Server 2019. This newest version of the leading email product relies on specific hardware features as never before. This session highlights the key features and benefits of Microsoft Exchange on HPE Apollo Gen 10 storage, and what you can expect for improved performance, security, and administration for your email system.
Joshua Robinson, Founding Engineer, FlashBlade | Recorded: Feb 7, 2019 | 39 mins
Learn how Pure Storage engineering manages streaming 190B log events per day and makes use of that deluge of data in our continuous integration (CI) pipeline. Our test infrastructure runs over 70,000 tests per day creating a large triage problem that would require at least 20 triage engineers. Instead, Spark’s flexible computing platform allows us to write a single application for both streaming and batch jobs to understand the state of our CI pipeline for our team of 3 triage engineers. Using encoded patterns, Spark indexes log data for real-time reporting (Streaming), uses Machine Learning for performance modeling and prediction (Batch job), and finds previous matches for newly encoded patterns (Batch job).
Resource allocation in this mixed environment can be challenging; a containerized Spark cluster deployment, and disaggregated compute and storage layers allow us to programmatically shift compute resources between the streaming and batch applications.
This talk will go over design decisions to meet SLAs of streaming and batching in hardware, data layout, access patterns, and containers strategy. We will also go over the challenges, lessons learned, and best practices for this kind of setup.
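The core of the triage application described above is encoded patterns that classify log events into known failure buckets, so only unmatched events need human eyes. The idea can be sketched in plain Python (the actual system uses Spark; the pattern names and log lines below are invented for illustration):

```python
# Sketch of pattern-based log triage, in plain Python rather than Spark:
# encoded regex patterns bucket a stream of CI log events so a small triage
# team only inspects pre-classified failures and new, unmatched events.

import re
from collections import Counter

PATTERNS = {                      # hypothetical encoded patterns
    "oom":     re.compile(r"OutOfMemory"),
    "timeout": re.compile(r"timed out after \d+s"),
}

def triage(log_lines):
    """Bucket each log line by the first pattern it matches."""
    buckets = Counter()
    for line in log_lines:
        for name, pat in PATTERNS.items():
            if pat.search(line):
                buckets[name] += 1
                break
        else:
            buckets["unmatched"] += 1   # candidates for a new encoded pattern
    return buckets

logs = ["worker OutOfMemory error", "request timed out after 30s", "disk full"]
print(triage(logs))
```

In the real pipeline the same classification logic can serve both modes the talk mentions: applied to live events (streaming) and replayed over history when a new pattern is added (batch).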
Leighton James, CTO at UKCloud, Jan Mietle, Partner Technology Strategist at Microsoft | Recorded: Feb 7, 2019 | 45 mins
This webinar will explore the views of public sector organisations facing the need to modernise their data centres, as reported in the 2018 iGOV Survey.
Over the course of the webinar, our CTO Leighton James will take a deep dive into the biggest concerns of public sector organisations around data centre modernisation, the implications of how cloud changes IT operations, and the relative security of cloud services.
Joining Leighton will be special guest Jan Mietle, Partner Technology Strategist, Microsoft
Jan joined Microsoft in 2014, where he works with the top service providers in the UK, using technology enablement to help shape their business models and drive cloud-based services to their customers. He has 21 years of IT industry experience, with an extensive background in designing and delivering value-based services.
Topics covered will include:
•Cloud adoption trends in public sector
•The implications & perception of Data Centre Modernisation on operational costs
•How cloud affects the IT culture & service to end users
•Data loss & security when moving away from in-house IT
A Q&A session will conclude the webinar.
Priya Gill, Product Marketing, Box | Rena Mashintchian, Product Management, Box | Recorded: Feb 6, 2019 | 34 mins
Organizations still rely on network file shares and complicated VPN setups to safeguard corporate content.
But your teams need what network file shares can't deliver: greater mobility, easier collaboration and better security.
Watch this webinar to learn:
-Why businesses like yours are replacing their legacy technology with Cloud Content Management
-How Box helps teams be more productive while reducing costs and simplifying IT management
-How Box Drive, our new desktop app, smooths the transition when migrating your users to the cloud
Christine Nagy, Implementation Consultant, Box | Matthew Wu, Implementation Consultant, Box | Recorded: Feb 6, 2019 | 29 mins
Two-thirds of business executives believe they must speed up the pace of digitization to remain competitive. Yet only 42% have a digital-first strategy.
That’s why we think you should know about Box Transform, a white-glove program designed to ensure your success, delivering the team and experience to quickly bring your organization into the digital age.
Andrew Grimes @ NetApp; Eric Burgener @ IDC | Recorded: Feb 6, 2019 | 37 mins
Join IDC’s Eric Burgener as he shares his expertise on why enterprises should plan for new persistent memory (PMEM) technologies. You’ll learn about:
- The evolution and adoption of PMEM
- Challenges and opportunities for application owners
- How to evaluate today’s PMEM solutions
You’ll also find out how NetApp® MAX Data software leverages persistent memory in the server and fuels ultra-low latency, support for huge datasets, and enterprise data services for in-memory applications.
Christine McMonigal, Intel; J Metz, Cisco; Alex McDonald, NetApp | Recorded: Feb 5, 2019 | 61 mins
“Why can’t I add a 33rd node?”
One of the great advantages of hyperconverged infrastructure (HCI) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI is the “Happy Meal” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.
In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.
In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:
•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
Sarah Beaudoin, Product Marketing Manager, Druva | Recorded: Feb 5, 2019 | 44 mins
According to Cisco, ransomware is growing at a yearly rate of 350% and is estimated to have cost organizations $5 billion in 2017. As attacks become more sophisticated and prevalent, IT organizations need to ensure they have a strategy to mitigate the risks of ransomware and other malware attacks.
Join us and discover how 82% of organisations recovered from a ransomware attack by restoring data from cloud backup.
- Proactive strategies to protect data before a malicious attack occurs
- Factors and issues that can complicate your organisation’s risks
- Measures to gain immediate access to data during and after an attack
Sarah Beaudoin, Product Marketing Manager, Druva | Recorded: Feb 5, 2019 | 62 mins
According to Cisco, ransomware is growing at a yearly rate of 350% and is estimated to have cost organizations $5 billion in 2017. As attacks become more sophisticated and prevalent, IT organizations need to ensure they have a strategy to mitigate the risks of ransomware and other malware attacks.
Join us and discover how 82% of organisations recovered from a ransomware attack by restoring data from cloud backup.
- Proactive strategies to protect data before a malicious attack occurs
- Factors and issues that can complicate your organisation’s risks
- Measures to gain immediate access to data during and after an attack
Alex McDonald, Vice-Chair SNIA Europe and NetApp | Recorded: Feb 5, 2019 | 66 mins
When it comes to storage, a byte is a byte is a byte, isn’t it? One of the enduring truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?
In this webcast, we will compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own set of use cases, and advantages and disadvantages. Each provides simple to sophisticated management of the data, and each makes different demands on storage devices and programming technologies.
Perhaps you’re comfortable with block and file, but are interested in investigating the more recent class of object storage and access. Perhaps you’re happy with your understanding of objects, but would really like to understand files a bit better, and what advantages or disadvantages they have compared to each other. Or perhaps you want to understand how file, block and object are implemented on the underlying storage systems – and how one can be made to look like the other, depending on how the storage is accessed. Join us as we discuss and debate:
•How different types of storage drive different management & access solutions
•Where everything is in fixed-size chunks
•SCSI and SCSI-based protocols, and how FC and iSCSI fit in
•When everything is a stream of bytes
•NFS and SMB
•When everything is a blob
•HTTP, key value and RESTful interfaces
•When files, blocks and objects collide
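The three access models compared in the webcast can be contrasted in a few lines of code. This is a deliberately toy illustration, not any storage system's real interface: block access addresses fixed-size chunks by number, while object access maps a key to a whole blob plus metadata (file access, a named byte stream, sits between the two and is omitted here for brevity).

```python
# Toy contrast of two of the access models discussed: block storage reads
# and writes fixed-size chunks by block number; object storage puts and
# gets whole blobs by key, with user metadata, HTTP-style.

BLOCK_SIZE = 4

class BlockDevice:
    """Block access: fixed-size chunks addressed by block number."""
    def __init__(self, nblocks):
        self.blocks = [b"\x00" * BLOCK_SIZE for _ in range(nblocks)]
    def write(self, n, data):
        self.blocks[n] = data[:BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
    def read(self, n):
        return self.blocks[n]

class ObjectStore:
    """Object access: whole blobs by key, carrying their own metadata."""
    def __init__(self):
        self.bucket = {}
    def put(self, key, blob, **metadata):
        self.bucket[key] = (blob, metadata)
    def get(self, key):
        return self.bucket[key]

dev = BlockDevice(8)
dev.write(0, b"abcd")            # a filesystem would sit on top of this
obj = ObjectStore()
obj.put("report.pdf", b"%PDF...", owner="alex")
print(dev.read(0), obj.get("report.pdf"))
```

The sketch also hints at how "one can be made to look like the other": a filesystem is a layer over block devices, and an object gateway can expose files as keyed blobs.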
Xavier Stern, Territory Director Southern Europe, HYCU, Inc. | Feb 19, 2019, 9:00 am UTC | 90 mins
We are organising a workshop on HYCU, our backup solution dedicated to Nutanix.
HYCU is a backup solution dedicated to Nutanix (AHV, ESX), designed to complement the data protection delivered natively by Nutanix.
It is certified for AHV (HYCU develops on Acropolis first) and ESX, as well as Nutanix Files, Volume Groups, CALM, and more.
HYCU is a strategic technology partner of Nutanix.
Nutanix itself uses the HYCU solution for its backups.
Sarah Beaudoin, Product Marketing Manager, Druva | Feb 19, 2019, 4:00 pm UTC | 44 mins
As organizations continue to migrate to Office 365 for their email, productivity and collaboration tools, they’re quickly realizing that Office 365’s native capabilities do not provide the essential data protection capabilities they need.
Join us for a technical webinar to learn best practices for safeguarding your Office 365 data, including:
- Gaps within OneDrive, Exchange Online and SharePoint Online that lead to increased risk of data loss
- How a third party backup solution can automate data protection and ensure data recoverability from user error, malicious behavior or malware
- How to build a data management strategy for the future that leverages the cloud and improves alignment with organizational policies and SLAs
Xavier Stern, Territory Director Southern Europe, HYCU, Inc. | Feb 20, 2019, 9:00 am UTC | 90 mins
We would like to invite you to our workshop on HYCU, the only backup and recovery solution dedicated to Nutanix.
HYCU is the only backup and security solution dedicated to Nutanix (AHV, ESX), complementing the data protection provided natively by Nutanix.
It is certified for AHV (HYCU develops on Acropolis first), ESX, Nutanix Files, Volume Groups, and CALM.
HYCU is a strategic technology partner of Nutanix.
Nutanix itself uses the HYCU solution for its backups.
Dr. Ann McNamara – Associate Professor, Texas A&M and Jennifer Sigmund – Sr. Higher Education Strategist, Dell EMC | Feb 20, 2019, 5:00 pm UTC | 60 mins
Today’s employers are demanding graduates who are well versed in research, problem-solving and collaboration. Hear from Dr. McNamara from Texas A&M on how augmented and virtual reality innovations are used in the classroom to prepare students for today’s rapidly changing workplace.
Dr. Ann McNamara – Associate Professor, Associate Dept Head, Graduate Program Coordinator, Dept of Visualization, Texas A&M
Jennifer Sigmund – Sr. Higher Education Strategist, Dell EMC
Tony Palmer, Enterprise Strategy Group; Marcus Thordal, Brocade; Eduardo Freitas & Tony Huynh, Hitachi Vantara | Feb 20, 2019, 5:00 pm UTC | 60 mins
Join Tony Palmer, senior analyst from Enterprise Strategy Group, and technical experts from Hitachi Vantara and Broadcom as we discuss how customers can achieve strict zero RTO/RPO objectives for mission-critical Oracle database deployments.
•Learn best practices to maximize Oracle performance on Hitachi Virtual Storage Platform (VSP) and Hitachi Unified Compute Platform (UCP).
•Ask your toughest Oracle questions and, if you can stump the experts, win a prize!
Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technologies | Feb 20, 2019, 6:00 pm UTC | 75 mins
FICON (Fibre Connection) is an upper-level protocol, supported by mainframe servers and attached enterprise-class storage controllers, that utilizes Fibre Channel as the underlying transport.
The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.
Join this live FCIA webcast where you’ll learn:
- How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
- The evolution of the FICON protocol optimizations
- How FICON adapts to new technologies
Join this webinar on Thursday 21 February to learn how 5G will transform the way content is created, produced and distributed.
More than just a faster mobile data connection, 5G reinvents connectivity. The technology enables new types of remote productions, and coverage of more live events, news and sports in higher 4K/HDR quality, and will revolutionise the way consumers receive content, combining broadcast, OTT and data to create a seamless experience regardless of network or device.
That's the theory - but how is 5G being deployed in practice? What are the early adopters doing, and what results are they achieving?
This webinar will explore a series of exciting use cases for 5G with hands-on case studies, including:
- Enriching production and storytelling
- Revitalising newsgathering and live event coverage
- Blending broadcast and live data for mobile audiences in the European 5G-Xcast project
- Dr Jordi Gimenez, 5G research engineer & project manager, IRT Germany
- Matt Stagg, Director of mobile strategy, BT Sport
- Marios Nicolaou, 5G and digital transformation senior strategy advisor
Chris Tinker, Chief Technologist, HPE and Hal Woods, CTO, Datera | Feb 21, 2019, 5:00 pm UTC | 105 mins
Chris Tinker, Chief Technologist at Hewlett Packard Enterprise joins Hal Woods, CTO at Datera to discuss some of the leading technologies in the Software Defined Revolution, and how HPE and Datera help cultivate the vast ecosystem of solutions.
Data is at the center of the Software Defined Data Center, where Datera reimagined enterprise storage and partnered with HPE to deliver a complete solution.
What you will learn from this webinar:
• Architecting your data center for high-performance
• Ease of deployment and management
• Advanced automation and future-ready choice to orchestrate data for VMs
George Crump, Storage Switzerland and W. Curtis Preston, Druva | Feb 22, 2019, 6:00 pm UTC | 15 mins
If you think the cloud provides enough protection for your critical data, you’re putting that data at risk. You can’t assume data is protected simply because it’s “in the cloud”; you need to ensure that all of the data in your critical applications, including Office 365 and Salesforce.com, gets the protection it deserves.
Join George Crump, Founder, and Lead Analyst at Storage Switzerland, and W. Curtis Preston (a.k.a. Mr. Backup), Chief Technologist at Druva, where they will discuss:
- What level of protection do cloud services provide?
- Is the provided level of protection enough for the enterprise?
- What does the enterprise need to add to achieve complete protection?
Register Now and get Storage Switzerland’s latest eBook “Protecting the Organization From Its Endpoints.”
Keith Hudgins, Docker; Alex McDonald, NetApp | Feb 26, 2019, 6:00 pm UTC | 75 mins
Containers are a big trend in application deployment. The landscape of containers is moving fast and constantly changing, with new standards emerging every few months. Learn what’s new, what to pay attention to, and how to make sense of the ever-shifting container landscape.
This live webcast will cover:
•Container storage types and Container Frameworks
•An overview of the various storage APIs for the container landscape
•How to identify the most important projects to follow in the container world
•The Container Storage Interface spec and Kubernetes 1.13
•How to get involved in the container community
Nathan Swetye, Sr. Manager of Platform Engineering, Cox Automotive | Feb 26, 2019, 6:00 pm UTC | 62 mins
Cox Automotive comprises more than 25 companies dealing with different aspects of the car ownership lifecycle, with data as the common language they all share. The challenge for Cox Automotive was to create an efficient engine for timely and trustworthy ingestion of an unknown but large number of data assets from practically any source. Working with StreamSets, they are populating a data lake to democratize data, allowing analysts easy access to data from other companies and producing new data assets unique to the industry.
In this webinar, Nathan Swetye from Cox Automotive will discuss how they:
-Took on the challenge of ingesting data at enterprise scale and the initial efficiency and data consistency struggles they faced.
-Created a self-service data exchange for their companies based on an architecture that decoupled data acquisition from ingestion.
-Reduced the time to data availability from weeks to hours, and cut developer time by 90%.
Ryan Meek, Solution Architect | Feb 26, 2019, 7:00 pm UTC | 60 mins
Metadata is data about the data, but how does it work with object storage? What benefits can you reap by using metadata, and do all object storage solutions use metadata the same way? In this Tech Tuesday webinar, Ryan Meek will take a deep dive on metadata and explain how it can be used to unlock the intelligence potential that resides in large data repositories.
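One concrete answer to "what benefits can you reap" is searchability: because each object carries its own key/value metadata, you can locate relevant objects in a huge repository without reading the data itself. Here is a minimal, product-neutral sketch of that idea (the object keys and metadata fields are invented for illustration):

```python
# Sketch of metadata-driven search over an object repository: each object
# carries key/value metadata, so we can find objects by what they *are*
# without ever reading the (potentially huge) data blobs themselves.

store = {
    "scan-001.dcm": {"modality": "MRI", "patient": "a17", "size_mb": 120},
    "scan-002.dcm": {"modality": "CT",  "patient": "a17", "size_mb": 80},
    "scan-003.dcm": {"modality": "MRI", "patient": "b02", "size_mb": 95},
}

def find(metadata_index, **criteria):
    """Return object keys whose metadata matches every given key/value pair."""
    return sorted(
        key for key, meta in metadata_index.items()
        if all(meta.get(k) == v for k, v in criteria.items())
    )

print(find(store, modality="MRI"))
print(find(store, modality="MRI", patient="a17"))
```

Real object stores differ in how much custom metadata they allow and whether it is indexed for queries like this, which is exactly the kind of difference the webinar promises to unpack.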
Storage Switzerland, Virtual Instruments, SANBlaze | Feb 27, 2019, 6:00 pm UTC | 60 mins
NVMe storage systems and NVMe networks promise to reduce latency further and increase performance beyond what SAS-based flash systems and current networking technology can deliver. To realize that performance gain, however, the data center must have workloads that can exploit the latency reduction and performance improvements NVMe offers. Vendors emphatically state that NVMe is the next must-have technology, yet many continue to ship SAS-based arrays over traditional networks.
How, then, do IT planners know that investing in NVMe will truly deliver its benefits for their demanding applications and produce a measurable return on investment? Just creating a test environment to perform an NVMe evaluation can break the IT budget!
Register now to join Storage Switzerland, Virtual Instruments, and SANBlaze as we look at the state of the data center and provide IT planners with the information they need to decide if NVMe is an investment they should make now or if they should wait a year or more. The key is determining which applications can benefit from NVMe-based approaches.
In this live event, IT professionals will learn:
- About NVMe, NVMe Storage Systems and NVMe over Fabric Networking
- The Performance Potential of NVMe Storage and Networks
- What attributes are needed for a workload to take advantage of NVMe
- Why NVMe creates problems for current IT testing strategies
- Why a Workload Simulation approach is the only practical way to test NVMe
- How to build a storage performance validation practice
Don Deel, NetApp, SNIA; Moderated by Richelle Ahlvers, Broadcom, SNIA | Feb 27, 2019, 6:00 pm UTC | 45 mins
Tools for speeding your implementation of the next-generation storage management standard
The SNIA Swordfish™ specification for the management of storage systems and data services is an extension of the DMTF Redfish® specification. Together, these specifications provide a unified approach for the management of servers and storage in converged, hyper-converged, hyperscale and cloud infrastructure environments.
To help speed your Swordfish development efforts, SNIA has produced open source storage management tools available now on GitHub for your use. Join this session for an overview of these open source tools, which include a Swordfish API Emulator, a Swordfish Basic Web Client, an example Swordfish plugin for the Microsoft Power BI business analytics service, and an example Swordfish plugin for the Datadog monitoring service.
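To get a feel for the data model, the following is a minimal sketch of walking a Swordfish collection resource. Swordfish inherits the Redfish pattern of JSON collections whose `Members` entries link to resources via `@odata.id`; the endpoint path and member IDs in the sample payload below are illustrative, not taken from a real system or from the tools covered in the session.

```python
# Hypothetical sketch: traversing a Swordfish (Redfish-extension) collection.
# The JSON mimics the shape of a Redfish/Swordfish collection resource; in a
# live service you would fetch it over HTTPS instead of hardcoding it.
import json

sample = json.loads("""
{
  "@odata.id": "/redfish/v1/StorageServices",
  "Name": "Storage Services Collection",
  "Members@odata.count": 2,
  "Members": [
    {"@odata.id": "/redfish/v1/StorageServices/1"},
    {"@odata.id": "/redfish/v1/StorageServices/2"}
  ]
}
""")

def member_paths(collection: dict) -> list:
    """Return the resource paths of each member in a Redfish-style collection."""
    return [m["@odata.id"] for m in collection.get("Members", [])]

print(member_paths(sample))
```

A client would GET each returned path to drill down into individual storage services, which is essentially what the Swordfish API Emulator and Basic Web Client mentioned above let you experiment with.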
Scott Gidley, Vice President of Product | Feb 27 2019, 7:00 pm UTC | 40 mins
Achieving actionable insights from data is the goal of any organization. To help in this regard, data catalogs are being deployed to build an inventory of data assets that provides both business and IT users a way to discover, organize and describe enterprise data assets. This is a good first step that helps all types of users easily find relevant data to extract insights from.
Increasingly, end users want to take the next step of provisioning or procuring this data into a sandbox or analytics environment for further use. Attend this session to see how organizations are building actionable data catalogs via a data marketplace that allows self-service access to data without sacrificing data governance and security policies.
Learn how to provide governed access and visibility to the data lake while still staying on track and within budget. Join Scott Gidley, Zaloni’s Vice President of Product, as he discusses:
- Architecting your data lake to support next-gen data catalogs
- Rightsizing governance for self-service data
- Where a data catalog falls short and how to address the gaps
- Successful use cases
Murali Selvaraj, CIO, Perkins + Will | Henry Axelrod, Solutions Architect, AWS | Feb 28 2019, 4:00 pm UTC | 60 mins
Learn from Murali Selvaraj, CIO and Gregory Fait, Director of IT Infrastructure at Perkins + Will, one of the world’s top architecture and design firms, as they discuss how they built a high-performance infrastructure to support the firm’s global growth. Using a solution from AWS and Nasuni, the firm has been able to scale its ability to store and protect unstructured data across 2,500 employees and 28 locations.
Attend this webinar and hear:
- How the firm scaled its storage and protection strategy across multiple locations using Amazon S3 and Nasuni
- How the combined platform increased the firm’s ability to pursue a global growth strategy
- What capabilities enabled them to increase productivity across distributed design teams
Storage Switzerland and StorONE | Feb 28 2019, 5:00 pm UTC | 60 mins
Most storage consolidation strategies fail because they attempt to consolidate to a single piece of storage hardware. To successfully consolidate storage, IT professionals need to look at consolidation strategies that worked. Server consolidation was VMware’s first use case. It was successful because instead of consolidating hardware, VMware consolidated the environment under a single hypervisor (ESXi) and console (vCenter) but still provided organizations with hardware flexibility. A successful storage consolidation strategy needs to follow a similar formula by providing a single software solution that controls a variety of storage hardware, but that software also has to extract maximum performance and value from each hardware platform on which it sits.
Join Storage Switzerland and StorONE as we discuss how to design a storage consolidation strategy for today, for the future and for the cloud.
In this webinar, learn:
- The problems with a fragmented approach to storage
- Why storage fragmentation promises to get worse because of AI, ML, and the Cloud
- Why consolidating to a single storage system won’t work
- Why hyperconverged architectures fall short
- Why Software Defined Storage falls short
- Why the organization needs a Storage Hypervisor
Russell Ruben, Director, Automotive Marketing | Feb 28 2019, 6:00 pm UTC | 45 mins
NAND flash storage is moving beyond infotainment into many key applications as the automotive industry drives toward autonomous vehicles. As the applications change, so does the usage model, from single, dedicated storage systems to domain-based, shared storage. These additional use cases bring additional challenges and requirements for NAND flash.
Arvind Prabhakar, Co-Founder and CTO, StreamSets | Feb 28 2019, 6:00 pm UTC | 62 mins
Modern data infrastructures are fed by vast volumes of data, streamed from an ever-changing variety of sources. Standard practice has been to store the data as ingested and force data cleaning onto each consuming application. This approach saddles data scientists and analysts with substantial work, delays insights and makes real-time or near-real-time analysis practically impossible.
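The alternative is to normalize records once, at ingest time, so every consumer sees clean data. A minimal sketch of that idea follows; the field names and cleaning rules are hypothetical illustrations, not StreamSets APIs.

```python
# Hypothetical sketch of cleaning records once at ingest time, so downstream
# consumers need not repeat the work. Field names and rules are illustrative.
from datetime import datetime, timezone

def clean(record):
    """Normalize one raw record; return None to drop malformed input."""
    if not record.get("user_id"):
        return None                       # drop records missing a key field
    try:
        parsed = datetime.fromisoformat(record.get("ts", "").strip())
    except ValueError:
        return None                       # drop records with unparseable timestamps
    return {
        "user_id": str(record["user_id"]).strip().lower(),
        "ts": parsed.astimezone(timezone.utc).isoformat(),
        "amount": round(float(record.get("amount", 0)), 2),
    }

raw = [
    {"user_id": " Alice ", "ts": "2019-02-28T18:00:00+00:00", "amount": "12.5"},
    {"user_id": "", "ts": "2019-02-28T18:05:00+00:00"},   # dropped: no user_id
]
cleaned = [r for r in (clean(x) for x in raw) if r]
print(cleaned)
```

Running this transformation in the ingest pipeline rather than in each consuming application is the design choice the paragraph above argues for.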
Zhiqi Tao, Intel; John Kim, Mellanox | Feb 28 2019, 6:00 pm UTC | 75 mins
This webcast will present an overview of scale-out file system architectures. To meet the increasingly higher demand on both capacity and performance in large cluster computing environments, the storage subsystem has evolved toward a modular and scalable design. The scale-out file system is one implementation of the trend, in addition to scale-out object and block storage solutions. This presentation will provide an introduction to scale-out-file systems and cover:
•General principles when architecting a scale-out file system storage solution
•Hardware and software design considerations for different workloads
•Storage challenges when serving a large number of compute nodes, e.g., namespace consistency, distributed locking and data replication
•Use cases for scale-out file systems
•Common benchmark and performance analysis approaches
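One concrete design question the architecture principles above raise is data placement: how a scale-out file system decides which node holds which file while staying stable as nodes join or leave. A common technique is consistent hashing; the sketch below is an illustration of that general idea (node names and vnode count are invented), not a description of any specific file system covered in the webcast.

```python
# Illustrative sketch: consistent hashing for file placement in a scale-out
# storage cluster. Removing a node only remaps the files that lived on it.
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Each node gets `vnodes` points on the ring to smooth the distribution.
        self._ring = sorted((_hash("%s#%d" % (n, i)), n)
                            for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, path):
        """Map a file path to the storage node responsible for it."""
        idx = bisect.bisect(self._keys, _hash(path)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
placement = {p: ring.node_for(p) for p in ["/data/f1", "/data/f2", "/logs/x"]}
print(placement)
```

The payoff is visible when a node is removed: files that were placed on the surviving nodes keep their placement, so only a fraction of the namespace must be re-replicated.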