Network Infrastructure

  • Introduction to Incast, Head of Line Blocking, and Congestion Management
    Sathish Gnanasekaran, Broadcom; John Kim, Mellanox; J Metz, Cisco; Tim Lustig, Mellanox Recorded: Jun 18 2019 61 mins
    For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.

    The three main storage network transports (Fibre Channel, Ethernet, and InfiniBand) all have mechanisms to handle increased traffic, but they are not all affected, or implemented, in the same way. For instance, running a protocol such as NVMe over Fabrics can mean very different things on one network transport than on another.

    Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput. Watch this webinar to learn:

    • Typical storage traffic patterns
    • What Incast, head-of-line blocking, congestion, and slow drain are, and when they become problems on a network
    • How Ethernet, Fibre Channel, and InfiniBand handle these effects
    • The proper role of buffers in handling storage network traffic
    • Potential new ways to handle increasing storage traffic loads on the network
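
    The Incast scenario described above is easy to reason about with a little arithmetic: when many senders converge on one switch egress port, the aggregate arrival rate exceeds the port's drain rate, and the shared buffer absorbs the difference until it overflows. Below is a minimal back-of-the-envelope sketch of that effect; all link speeds, buffer and response sizes are hypothetical illustration values, not figures from the webinar.

```python
# Toy incast model: N senders each burst a response toward one receiver
# behind a single switch egress port with a shared buffer.
LINK_GBPS = 25        # hypothetical line rate of every link, in Gb/s
N_SENDERS = 32        # servers answering the same request at once
RESPONSE_MB = 1       # hypothetical response size per sender, in MB
BUFFER_MB = 16        # hypothetical shared egress buffer, in MB

link_bytes_per_s = LINK_GBPS * 1e9 / 8              # one link, in bytes per second
arrival_rate = N_SENDERS * link_bytes_per_s         # aggregate arrival rate at the egress port
drain_rate = link_bytes_per_s                       # the single egress port drains at line rate
fill_rate = arrival_rate - drain_rate               # net rate at which the buffer fills

burst_bytes = N_SENDERS * RESPONSE_MB * 1024 * 1024
buffer_bytes = BUFFER_MB * 1024 * 1024

if burst_bytes <= buffer_bytes:
    print("Burst fits in the buffer; expect extra queuing delay but no loss.")
else:
    # Time until the buffer overflows: frames are dropped (lossy fabrics) or
    # back-pressure propagates upstream (lossless fabrics), stalling other flows.
    overflow_s = buffer_bytes / fill_rate
    print(f"Egress buffer overflows {overflow_s * 1e6:.0f} us into the burst.")

# Queuing delay added for whatever the buffer does hold.
queue_delay_s = min(burst_bytes, buffer_bytes) / drain_rate
print(f"Worst-case added queuing delay: {queue_delay_s * 1e3:.1f} ms")
```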
  • 5G and Automated Driving: A Match Made for Driverless Cars
    Mahbubul Alam, CTO & CMO, Movimento, an APTIV company Recorded: Jun 4 2019 63 mins
    With high-speed connectivity at the heart of connected vehicles, 5G will play a significant role as the industry undergoes a major transformation toward fully autonomous vehicles. These vehicles will be required to cooperate with each other and with the infrastructure in a secure and reliable manner, with higher sustainable throughput, greater outdoor positioning accuracy, guaranteed jitter and delivery at significantly reduced latency, and improved vehicle safety even for out-of-coverage communications.

    Key Takeaways:
    1. Real-time teleoperation, putting a human in the loop for autonomous vehicles.
    2. Legal and lawful intervention in autonomous vehicles.
    3. 5G V2X for improved safety of automated driving.
  • How ATOS Uses PlateSpin Migrate
    Stephan Riebroek, Jo de Baer Recorded: May 22 2019 52 mins
    This webinar is brought to you by the Vivit Automation and Cloud Builders Special Interest Group (SIG).

    In this webinar, you will learn all about Micro Focus PlateSpin: what it is and how this tool can be used to migrate workloads from one place to another. The speakers will also explain how migration to different platforms is set up. After attending this webinar, you will know:

    • The core principles of “lift-and-shift” server migration
    • How ATOS is using PlateSpin to successfully migrate customer applications to the cloud and other platforms
    • What major features were recently added to PlateSpin and what the future road map looks like
  • New Landscape of Network Speeds
    David Chalupsky, Intel; Craig Carlson, Marvell; Peter Onufryck, Microchip; John Kim, Mellanox Recorded: May 21 2019 66 mins
    In the short period from 2014 to 2018, Ethernet equipment vendors announced big increases in line speeds, shipping 25, 50, and 100 Gigabit-per-second (Gb/s) products and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors have launched 32GFC, 64GFC, and 128GFC technology, while InfiniBand has reached 200 Gb/s (called HDR) speeds.

    But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards keep up? And do the faster speeds come with different distance limitations?

    Watch this SNIA Networking Storage Forum (NSF) webcast to learn how these new speeds are achieved, where they are likely to be deployed for storage, and what infrastructure changes are needed to support them.
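
    One way to see why these faster network speeds ripple through the rest of the platform is to compare line rates against PCIe lane bandwidth. The sketch below uses rounded, approximate usable per-lane figures (about 8 Gb/s for PCIe 3.0 and 16 Gb/s for PCIe 4.0 after encoding overhead); these are illustration values chosen by the editor, not numbers quoted in the webcast.

```python
# Rough estimate of the PCIe slot width a network port needs so that the host
# interface is not the bottleneck. Per-lane figures are approximate usable
# throughput after encoding overhead.
import math

PCIE_GBPS_PER_LANE = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0}   # approx. usable Gb/s per lane
STANDARD_WIDTHS = (1, 2, 4, 8, 16)                          # common slot widths

def slot_width(link_gbps: float, pcie_gen: str) -> str:
    """Smallest standard slot width whose bandwidth covers the link rate."""
    lanes = math.ceil(link_gbps / PCIE_GBPS_PER_LANE[pcie_gen])
    for width in STANDARD_WIDTHS:
        if width >= lanes:
            return f"x{width}"
    return "more than a single x16 slot"

for speed in (25, 50, 100, 200, 400):       # Ethernet line rates in Gb/s
    for gen in ("PCIe 3.0", "PCIe 4.0"):
        print(f"{speed:3d} Gb/s Ethernet needs {slot_width(speed, gen)} at {gen}")
```

    With these rough figures, 200 Gb/s and above exceed even a full x16 slot of PCIe 3.0, which is one reason faster network speeds and newer PCIe generations tend to arrive together.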
  • Everything You Wanted to Know...But Were Too Proud to Ask - The Memory Pod
    Alan Bumgarner, Intel; Alex McDonald, NetApp; John Kim, Mellanox Recorded: May 16 2019 62 mins
    Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.

    We build effective systems in a cost-optimal way by using appropriate quantities of expensive, fast memory (DRAM, for instance) to cache our cheaper, slower storage. But currently, fast memory has no persistence at all; it is only storage that gives the application the guarantee that storing, modifying, or deleting data does exactly that.

    Memory and storage differ in other ways. For example, we load from memory into registers on the CPU, perform operations there, and then store the results back to memory using byte addresses. This load/store model is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface).

    New memory technologies are challenging these assumptions. They look like storage in that they are persistent, though much faster than traditional disks or even Flash-based SSDs, but we address them in bytes, as we do memory such as DRAM, though more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth, and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:

    • Traditional uses of storage and memory as a cache
    • How can we build and use systems based on PM?
    • What would a system with storage, persistent memory and DRAM look like?
    • Do we need a new programming model to take advantage of PM?
    • Interesting use cases for systems equipped with PM
    • How we might take better advantage of this new technology
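
    As a concrete illustration of the load/store versus block-API distinction described above, the sketch below contrasts block-style read()/write() calls with byte-addressable access through a memory mapping. It uses an ordinary temporary file rather than real persistent memory (a PM-aware stack would typically map a DAX-enabled file), so treat it as a minimal sketch of the two access models, not as persistent-memory programming guidance.

```python
# Contrast block-style I/O (explicit read/write of whole buffers through an
# API) with load/store-style access (byte-addressable via a memory mapping).
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.bin")

# --- Block-style access: move data in large chunks through read()/write() ---
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # write a whole 4 KiB block

with open(path, "r+b") as f:
    block = bytearray(f.read(4096))  # read the whole block back
    block[100] = 0x42                # modify one byte in the in-memory copy
    f.seek(0)
    f.write(block)                   # write the whole block again

# --- Load/store-style access: address individual bytes in place -------------
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        m[200] = 0x99                # a single-byte store, no explicit write()
        value = m[100]               # a single-byte load
        m.flush()                    # ask the OS to persist the mapped pages

print(hex(value))                    # 0x42, written earlier through the block API
```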
  • How We Approached Evaluation of ALM Octane and Now Run Agile Testing
    Gerd Fladrich, Risang Sidik Recorded: May 16 2019 60 mins
    This webinar is brought to you by the Vivit Testing Quality ALM Special Interest Group (SIG).

    Join this webinar to see how BNP Paribas approached and performed its evaluation of ALM Octane and how it now uses the tool to run complete agile testing lifecycles. The testing lifecycle will be presented as a live demo that follows the Behavior-Driven Development (BDD) methodology and uses Gherkin notation to define test suites.

    The testing lifecycle starts with the definition of requirements and user stories, builds heavily on test automation and advanced reporting, manages and synchronizes defects across tools, and feeds learnings from testing back into the requirements. It uses an integrated development and testing infrastructure that includes products such as Confluence, Jira, Micro Focus ALM, Micro Focus ALM Octane, Jenkins, Git, Cucumber, IntelliJ, TestCafe, and others.

    BNP Paribas’s journey with agile test automation using ALM Octane started in 2017. The webinar presents, in a nutshell, how the evaluation of ALM Octane was approached and conducted, and explains how new methods were introduced with the objective of leveraging the tool’s full functionality. The initiative soon helped to increase testing efficiency and created important business value.

    Webinar participants will learn:

    • How BNP Paribas conducted its evaluation of ALM Octane
    • Why ALM Octane is much more than just a successor to ALM/Quality Center
    • How ALM Octane eases flexible management of its entire surrounding tool chain
    • Why a methodology like BDD should precede tools, and not vice versa
    • How Gherkin establishes a common language for the entire agile development lifecycle that helps integrate business, development, and testing
    • How to use ALM Octane for pipeline management, reporting, defect synchronization, import of performance testing and BPT results, integration of in-sprint testing, etc.
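
    For readers unfamiliar with Gherkin, the sketch below shows the general shape of a Gherkin scenario and the Python step bindings that automate it, using the open-source behave runner as a stand-in for the Cucumber-style tooling mentioned above. The feature text, step names, and values are invented for illustration and are not taken from BNP Paribas’s test suite.

```python
# features/transfer.feature (Gherkin notation; illustrative scenario only):
#
#   Feature: Account transfers
#     Scenario: Successful transfer between own accounts
#       Given a customer with a current account balance of 100 EUR
#       When the customer transfers 40 EUR to their savings account
#       Then the current account balance is 60 EUR

# features/steps/transfer_steps.py -- bindings that make the scenario executable.
from behave import given, when, then

@given("a customer with a current account balance of {amount:d} EUR")
def given_balance(context, amount):
    context.current = amount      # seed the test state on the shared context
    context.savings = 0

@when("the customer transfers {amount:d} EUR to their savings account")
def when_transfer(context, amount):
    context.current -= amount     # a real suite would call the system under test here
    context.savings += amount

@then("the current account balance is {amount:d} EUR")
def then_balance(context, amount):
    assert context.current == amount
```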
  • Kubernetes in the Cloud
    Matt Baldwin, NetApp and Former Founder StackPoint Cloud; Ingo Fuchs, NetApp; Mike Jochimsen, Kaminario Recorded: May 2 2019 61 mins
    Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization.

    In this presentation, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on the storage aspects and best practices.

    • What is Kubernetes? Why would you want to use it?
    • How does Kubernetes help in a multi-cloud/private cloud environment?
    • How does Kubernetes orchestrate & manage storage? Can Kubernetes use Docker?
    • How do we provide persistence and data protection?
    • Example use cases
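
    As a concrete taste of the storage side of Kubernetes discussed above, the sketch below requests persistent storage by creating a PersistentVolumeClaim with the official Kubernetes Python client. It assumes a reachable cluster and a valid kubeconfig; the namespace, claim name, size, and storage class are placeholder values for illustration.

```python
# Minimal sketch: ask Kubernetes for persistent storage by creating a
# PersistentVolumeClaim with the official Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()        # or config.load_incluster_config() when running in a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),           # placeholder claim name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],                        # single-node read/write
        storage_class_name="standard",                         # placeholder storage class
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}                       # requested capacity
        ),
    ),
)

# The claim is namespaced; Kubernetes binds it to a matching PersistentVolume,
# or provisions one dynamically through the storage class.
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

for claim in core.list_namespaced_persistent_volume_claim("default").items:
    print(claim.metadata.name, claim.status.phase)
```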
  • Critical Lessons Learned While Adopting DevOps Lifecycle for SecOps using Agile
    Matt Snyder Recorded: Apr 24 2019 60 mins
    This webinar is brought to you by the Vivit DevOps Special Interest Group (SIG).

    SecOps teams are always looking to strike a balance between staying ahead of new threats and not getting burned by older ones, all while having to do a hundred other things at once. The SecOps team at VMware will share their experience adopting a DevOps process for continuous improvement, while working to reduce the extreme utilization level of the Security Operations Center (SOC) team and to improve the overall security posture. The session will offer practical lessons learned from a real 24/7 global SOC, without all the fancy buzzwords and feel-good tips that don’t actually work in the real world.

    Join this webinar to learn:

    • How shifting the method of working has improved Security Monitoring, without having to spend more money
    • What to be aware of when adopting new methods or techniques
    • How to plan out goals and achieve them in environments where priorities are constantly shifting
  • Protocol Analysis 201 for High-Speed Fibre Channel Fabrics
    Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology Recorded: Apr 11 2019 63 mins
    In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics,” experts covered the basics of protocol analysis tools and how to incorporate them into a “best practices” approach to SAN problem solving.

    Our experts return for this 201 course, which provides a deeper dive into how to interpret the output and results from protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions in order to formulate real-time solutions.

    Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass parts of the traditional network stack (e.g., FCoE, iWARP, NVMe over Fabrics) complicate analysis, so a well-constructed “collage” of best practices and effective, efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

    Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
    • Inter-switch links (ISLs): how to measure and minimize fabric congestion
    • Post-capture analysis: graphing, trace reading, performance metrics
    • Benefits of purposeful error injection
    • More Layer 2-3 and translation-layer debugging
    • Link Services and Extended Link Services: LRR (Link Reset Response)

    You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
  • Trends in Worldwide Media and Entertainment Storage
    Alex McDonald, SNIA SSSI Co-Chair (Moderator); Tom Coughlin, Coughlin Associates; Motti Beck, Mellanox Technologies Recorded: Mar 27 2019 56 mins
    Join SNIA Solid State Storage Initiative (SSSI) Education Chair and leading analyst Tom Coughlin and SSSI member Motti Beck of Mellanox Technologies for a journey into the requirements and trends in worldwide data storage for entertainment content acquisition, editing, archiving, and digital preservation. This webcast will cover capacity and performance trends and media projections for direct attached storage, cloud, and near-line network storage. It will also include results from a long-running digital storage survey of media and entertainment professionals. Learn what is needed for digital cinema, broadcast, cable, and internet applications and more.
