Storage

  • FCoE vs. iSCSI vs. iSER
    J Metz, Cisco; Saqib Jang, Chelsio; Rob Davis, Mellanox; Tim Lustig, Mellanox Recorded: Jun 21 2018 62 mins
    The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

    One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

    Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

    That leads to several questions about FCoE, iSCSI and iSER:

    • If we can run various network storage protocols over Ethernet, what differentiates them?
    • What are the advantages and disadvantages of FCoE, iSCSI and iSER?
    • How are they structured?
    • What software and hardware do they require?
    • How are they implemented, configured and managed?
    • Do they perform differently?
    • What do you need to do to take advantage of them in the data center?
    • What are the best use cases for each?

    Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.
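    Because iSER is a thin extension of iSCSI, the difference at configuration time is small. As a minimal open-iscsi sketch (the portal address and target IQN below are hypothetical), discovery and login look the same for both, with one extra step to switch the transport to iSER:

```shell
# Discover targets advertised by a storage portal (hypothetical address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Plain iSCSI over TCP: log in to a discovered target (hypothetical IQN)
iscsiadm -m node -T iqn.2018-06.com.example:storage.lun1 \
    -p 192.168.1.50:3260 --login

# iSER: switch the node's transport to RDMA before logging in
# (requires RDMA-capable NICs on both initiator and target)
iscsiadm -m node -T iqn.2018-06.com.example:storage.lun1 \
    -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2018-06.com.example:storage.lun1 \
    -p 192.168.1.50:3260 --login
```

    The transport switch is the only initiator-side change; the SCSI command set and target configuration are otherwise shared, which is why the debate centers on performance and hardware requirements rather than semantics.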
  • Slash Storage TCO for Rapidly Scaling Data Sets
    Glen Olsen, Caringo Product Manager & Krishna Subramanian, Komprise COO Recorded: Jun 21 2018 57 mins
    The nature of enterprise data is rapidly changing and existing storage infrastructures can’t keep up. Network Attached Storage (NAS) devices were designed for performance and single-site collaboration but file creation and access is different now. Many turn to the cloud to offload data, however, for rapidly scaling data sets, daily transfer rates and bandwidth constraints are an issue. In addition, some sensitive information can’t leave your data center. Komprise and Caringo have partnered to solve these issues by pairing intelligent data management technology with hassle-free, limitless storage.

    Attend this webinar to learn how you can slash TCO for rapidly scaling data sets by identifying data to move from NAS, then securely transferring it, based on value, to Caringo Swarm scale-out object storage, where it is protected without backups and is instantly and securely available internally or externally.
  • A New Era in Data Storage for Machine Learning and AI Ecosystems
    Ashish Gupta Recorded: Jun 21 2018 45 mins
    As enterprises transition their Business Intelligence and Analytics environments to Machine Learning and Artificial Intelligence driven ecosystems, their core data infrastructure has to scale. Focusing only on the compute layers creates a highly inefficient infrastructure. Vexata, with its VX-OS version 3.5 release, brings to market transformative economics and breakthrough performance to power these next-generation workloads at scale.

    You will learn about:
    • How to scale core data infrastructures for the transition to Machine Learning and Artificial Intelligence workloads
    • What are the key considerations before creating an AI/ML-centric storage infrastructure
    • How Vexata's new VX-OS version 3.5 release addresses these challenges
  • Moving the Enterprise Backup to the Cloud - A Step-By-Step Guide
    Storage Switzerland, Veeam, KeepItSafe Recorded: Jun 21 2018 60 mins
    Making sure everything in the data center is properly protected is a struggle that all organizations face. The cloud, specifically cloud backup, seems like an answer to those struggles. But how exactly does IT make the conversion from on-premises backup to cloud backup? Join experts from Storage Switzerland, Veeam and KeepItSafe to learn a method for determining whether cloud backup is right for your organization and, if it is, how to create a plan to begin the transfer to cloud-based data protection operations.
  • VDI with XtremIO X2
    Chhandomay Mandal Recorded: Jun 21 2018 53 mins
    Hundreds of customers are running millions of virtual desktops on XtremIO today. The new XtremIO X2 platform offers opportunities to start with even smaller configurations and scale more granularly. In this session, we will present a holistic overview of an XtremIO X2-enabled VMware VDI environment. You will also learn about XtremIO X2 sizing and best practices for VMware VDI deployments.
  • Is Your Storage Ready for Commercial HPC? - Three Steps to Take
    Storage Switzerland, Panasas Recorded: Jun 20 2018 61 mins
    In this webinar, join Storage Switzerland and Panasas to learn:

    - Why HPC workloads are on the rise in the enterprise
    - Why common enterprise storage can’t keep up with HPC demands
    - Why traditional HPC storage is a poor fit for the enterprise
    - A three-step process to designing an enterprise-class HPC storage architecture
  • [Ep.17] Ask the Expert: Data Migration and Third Party Maintenance
    Chris Crotteau, Director of Strategic Product Development & Glenn Fassett, GM - EMEA, Curvature Recorded: Jun 20 2018 61 mins
    How easy is it to move data?

    Between traditional, on-premises data centers, hosted/colocated data centers, and infrastructure as a service (IaaS) through the large cloud providers, where to store your data and run your business-critical applications is now a source of significant complexity in the IT world. Understanding your application set and business needs, then determining where best to run those applications and store their data, presents a host of challenges both old and new.

    So where does Third Party Maintenance fit into all of this? Or better yet, what IS Third Party Maintenance?

    Join us for an engaging discussion with Curvature’s IT infrastructure experts to learn:

    • What Third Party Maintenance (TPM) is and how it has become a vital market segment focused first and foremost on the client’s needs
    • How TPM can cost-effectively extend the life of IT assets and align the lifetime of those assets to your application and software lifecycles
    • How TPM can enable you to manage major IT infrastructure transitions in a much more cost-effective manner

    About the experts:

    Chris primarily works in the Services department as a Solutions Architect developing technological solutions and strategies for clients. As a technology expert, Chris leads a team of technical engineers to develop new tactics and processes for clients from development to design. Chris was named Employee of the Year in 2005 and 2009.

    Glenn is responsible for the strategic growth and expansion throughout Europe. Previously, Glenn managed Curvature’s international operations and led the company’s entry into Europe in 2002, successfully launching Curvature’s Asia-Pacific division in 2007. Prior to leading this international expansion, Glenn managed enterprise accounts as a distinguished member of Curvature’s U.S. sales organization beginning in 1996.
  • Unlock the Power of Data Capital to Accelerate Digital Transformation
    Ritu Jyoti, Research Director, IDC and Varun Chhabra, Sr. Director - Product Marketing, Dell EMC Recorded: Jun 20 2018 47 mins
    IDC estimates that by 2021 at least 50% of global GDP will be digitized. Data Capital is about helping customers maximize the value of the insights and information in their data centers today to reach new audiences and revenue potential.

    In this webinar, we’ll be discussing the role data plays and why an emphasis on unlocking the power of your Data Capital will drive success for organizations. We’ll also cover real world examples that show the value of digital transformation, and how you can start your journey.

    In this webcast you will learn:

    • Why Data Capital is so important to your organization
    • How to unlock your Data Capital and overcome common challenges
  • Gain Control of Copy Data To Reduce Costs and Mitigate Risks
    Marketing Manager, and Phil Goodwin, Research Director Recorded: Jun 20 2018 47 mins
    Organizations will spend more than $55 billion to store and manage the average 13 copies of each data object they create in 2020, according to IDC researchers. This does not include the costs of data governance risks associated with uncontrolled copies of data, especially in a time of heightened data privacy regulations. In this webcast, you will learn:
    • The industry trends on copy data management.
    • A new holistic approach to automating the creation, refresh, access controls and expiration of data copies.
    • How to gain more value from backup data
  • Dell EMC PowerMax – Tier-0 Storage on NVMe
    Igor Vinogradov, Senior Systems Engineer, Dell EMC Recorded: Jun 20 2018 60 mins
    In this webinar we will look at Dell EMC’s new storage array, PowerMax, announced at Dell Technologies World. We will focus on the architectural features of PowerMax and the benefits they deliver, the main differences from previous storage models, and current trends in data storage. The webinar is held with the support of Intel®.
  • When to Choose Object Storage over NAS for Digital Video Workflows
    John Bell, Senior Consultant & Jose Juan Gonzalez, Engineer Recorded: Jun 19 2018 44 mins
    To keep pace with today’s media and digital asset management workflows, you need a cost-effective secondary tier of storage (active archive) that provides instant accessibility and unrelenting data protection, while scaling to store petabytes of unstructured data and billions of files. Caringo Senior Consultant John Bell and Engineer Jose Juan Gonzalez will explain how object storage (with NoSQL-style search via Elasticsearch and advanced metadata and content management capabilities) can be used to build this active archive, and will illustrate it with a live demo of how Caringo Swarm integrates with leading industry tools such as the CatDV media asset management (MAM) system.
  • DeepStorage Test Drive: Tegile IntelliFlash T4000AFA
    Howard Marks, Founder and Chief Scientist, DeepStorage; Gokul Sathiacama, Director, Product Marketing, Data Center Systems Recorded: Jun 19 2018 46 mins
    DeepStorage Labs is known in the storage industry for pushing equipment to its limits, and for reporting what really happens at the edge of a system’s performance. Tegile’s IntelliFlash T4000, unlike a few previous occupants of the DeepStorage Labs ThunderDome, stood up to our testing and delivered high IOPS at a maximum of 1 ms latency.

    DeepStorage subjected the IntelliFlash T4000 to workloads ranging from the usual 4KB “hero number” random read to workloads that simulate OLTP and OLAP database servers, a file server and an Exchange server. We determined the system’s performance on each workload individually and in combination, finally determining the system’s ability to support that kind of mixed-workload environment.

    In this webinar we will:
    - Introduce the IntelliFlash array
    - Describe the testing process
    - Present the results
    - Review the test environment
    - Provide links to the test workload VDbench configurations
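    For readers who want to reproduce this style of testing, VDbench parameter files follow the shape sketched below. This is a generic illustration, not the configurations used in the webinar; the device path, block sizes and run times are assumptions:

```
* Storage definition: the raw device under test (hypothetical path)
sd=sd1,lun=/dev/sdb,openflags=o_direct

* Workload definitions: an OLTP-like 8KB 70%-read random mix,
* plus the 4KB "hero number" pure random read
wd=oltp,sd=sd1,xfersize=8k,rdpct=70,seekpct=100
wd=hero,sd=sd1,xfersize=4k,rdpct=100,seekpct=100

* Run definitions: drive each workload at maximum rate for 5 minutes,
* reporting at 5-second intervals
rd=run_oltp,wd=oltp,iorate=max,elapsed=300,interval=5
rd=run_hero,wd=hero,iorate=max,elapsed=300,interval=5
```

    Running multiple workload definitions in a single run definition is how a mixed-workload environment of the kind described above can be simulated against one array.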
  • FICON 101
    Patty Driever, IBM; Howard Johnson, Broadcom; J Metz, Cisco Recorded: Jun 19 2018 62 mins
    FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilize Fibre Channel as the underlying transport. Mainframes are built to provide a robust and resilient IT infrastructure, and FICON is a key element of their ability to meet the increasing demands placed on reliable and efficient access to data. What are some of the key objectives and benefits of the FICON protocol? And what are the characteristics that make FICON relevant in today’s data centers for mission-critical workloads?

    Join us in this live FCIA webcast where you’ll learn:

    • Basic mainframe I/O terminology
    • The characteristics of mainframe I/O and FICON architecture
    • Key features and benefits of FICON
  • How to manage a Datastore
    Reduxio Systems Recorded: Jun 19 2018 2 mins
    A short guide on how to manage a datastore using the Reduxio StorApp for VMware vSphere V2.0.
  • Protecting personal data with the Secure Content Management Suite
    Simon Dugard, Presales, Micro Focus Recorded: Jun 19 2018 50 mins
    The Equifax data breach, Cambridge Analytica and GDPR are all recent examples of the risks which today’s organisations face around the personal information they store. Join us on a journey as we explore how Micro Focus can help you discover, secure, pseudonymize and control personally identifiable information within your organisation using the SCM suite. Learn how Structured Data Manager can target structured data, ControlPoint can target unstructured data, and Content Manager can secure both.
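    Pseudonymization, as described here, can be illustrated with a generic sketch (this is not Micro Focus’s implementation): an identifier is replaced by a deterministic, keyed token, so records can still be joined and analysed, but the original value cannot be recovered without the secret key. The record fields and key below are hypothetical.

```python
import hmac
import hashlib

def pseudonymize(value: str, key: bytes) -> str:
    """Replace an identifier with a deterministic, keyed token.

    The same (value, key) pair always yields the same token, so joins
    across records still work, but the original value cannot be
    recovered without the secret key.
    """
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

# Hypothetical example record
key = b"secret-key-held-by-data-protection-officer"
record = {"name": "Alice Example", "email": "alice@example.com"}
masked = {field: pseudonymize(v, key) for field, v in record.items()}

# Deterministic: re-masking the same record yields identical tokens
assert masked == {field: pseudonymize(v, key) for field, v in record.items()}
```

    Unlike full anonymization, this keeps data usable for analytics while the key is held separately, which is the property GDPR-driven pseudonymization workflows rely on.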
  • HPE 3PAR for unified storage with 3PAR File Persona
    Vivek Anand Pamadi - HPE Product Management Recorded: Jun 18 2018 43 mins
    Today’s data centers are expected to deploy, manage, and report on different tiers of business applications, databases, virtual workloads, home directories, and file sharing simultaneously.
    HPE 3PAR StoreServ is highly efficient, flash-optimized storage engineered for the true convergence of block, file, and object access to help consolidate diverse workloads efficiently.
    This session provides an overview of HPE 3PAR File Persona and core file data services.
  • Store More, Spend Less, and Grow Your Business Faster
    Brian Carmody, CTO, INFINIDAT Recorded: Jun 14 2018 36 mins
    Your company is embracing big data, analytics and IoT, and your infrastructure can’t keep up. How can recent breakthroughs in data storage help you gain a competitive advantage?  

    In this session, you’ll discover a new storage architecture that makes it possible to store more and spend less while accelerating your path to innovation. Learn how this novel approach solves the age-old problems of performance, availability, scalability AND affordability, in a simply better way.

    About the speaker:

    Brian Carmody is Chief Technology Officer at INFINIDAT, where he leads the research and emerging tech group. Prior to joining INFINIDAT, he worked on the XIV storage system at IBM. A 15-year tech veteran, his experience also includes system engineering roles at MTV Networks and Novus Consulting Group.
  • Automating Load Balancing in App Defined Multi-Cloud
    Jeevan Sharma - Sr Solution Architect at A10, Maryam Sanglaji - Principal Product Marketing Manager at Nutanix Recorded: Jun 14 2018 57 mins
    Given modern application API services and the dynamic nature of containers, traditional load balancers fail to handle rapid configuration change. In this webinar we will share real-world A10 and Ansible customer use cases on the Nutanix Acropolis platform. Come and learn how to deploy A10’s load balancers and deliver advanced app services and lifecycle management, including auto scaling, automation and per-app analytics, all through A10’s Harmony Controller within Nutanix Cloud. Simplify complex network and application delivery with A10’s recommended best practices for effective automation.
  • Running Data Platforms Like Products
    Dormain Drewitz, Pivotal & Mike Koleno, Solstice Recorded: Jun 14 2018 58 mins
    Applications need data, but the legacy approach of n-tiered application architecture doesn’t solve for today’s challenges. Developers aren’t empowered to build and iterate their code quickly without lengthy review processes from other teams. New data sources cannot be quickly adopted into application development cycles, and developers are not able to control their own requirements when it comes to data platforms.

    Part of the challenge here is the existing relationship between two groups: developers and DBAs. Developers are trying to go faster, automating build/test/release cycles with CI/CD, and thrive on the autonomy provided by microservices architectures. DBAs are stewards of data protection, governance, and security. Both of these groups are critically important to running data platforms, but many organizations deal with high friction between these teams. As a result, applications get to market more slowly, and it takes longer for customers to see value.

    What if we changed the orientation between developers and DBAs? What if developers consumed data products from data teams? In this session, Pivotal’s Dormain Drewitz and Solstice’s Mike Koleno will speak about:

    - Product mindset and how balanced teams can reduce internal friction
    - Creating data as a product to align with cloud-native application architectures, like microservices and serverless
    - Getting started bringing lean principles into your data organization
    - Balancing data usability with data protection, governance, and security
  • Three Reasons Storage Security is Failing and How to Fix It
    Storage Switzerland, RackTop Systems Recorded: Jun 14 2018 60 mins
    An organization’s data is constantly under attack. Ransomware attacks, cyber-threats and employee missteps all expose organizational data and put it at risk. Encryption and access control are the keys to securing data and cyber resiliency, but most storage systems throughout the infrastructure (primary, secondary and protection storage) treat security as an afterthought, reducing flexibility and increasing complexity. In this live webinar we will discuss the three reasons why storage security is failing.
  • Cloud-based disaster recovery: discover StorageCraft Cloud Services’ benefits
    Jaap Van Kleef, EMEA Technical Manager & Florian Malecki, International Product Marketing Director Recorded: Jun 14 2018 49 mins
    Businesses of all sizes can’t be without their critical data for long. Yet, a large-scale disaster can readily disrupt systems and make doing business impossible. Building out your own data center for disaster recovery can be very costly.

    Join StorageCraft for an exclusive webcast and learn how to best protect your on-premises business systems and data in a cloud purpose-built for total business continuity.