
Data Center Management

  • Protocol Analysis 201 for High-Speed Fibre Channel Fabrics
    Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology Recorded: Apr 11 2019 63 mins
    In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics,” experts covered the basics of protocol analysis tools and how to incorporate them into a “best practices” approach to SAN problem solving.
    Our experts return for this 201 course, which provides a deeper dive into how to interpret the output and results from protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions and formulate real-time solutions.

    Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass the traditional network stack (e.g. FCoE, iWARP, NVMe over Fabrics) complicate analysis. A well-constructed set of best practices and effective, efficient analysis tools must therefore be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

    Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
    • Inter-switch links (ISLs) – How to measure and minimize fabric congestion
    • Post-capture analysis – Graphing, trace reading, performance metrics
    • Benefits of purposeful error injection
    • More Layer 2-3 and translation-layer debugging
    • Link Services and Extended Link Services – LRR (Link Reset Response)

    You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
  • Transactional Models and their Storage Requirements
    Alex McDonald, Vice-Chair SNIA Europe, and Office of the CTO, NetApp; Paul Talbut, SNIA Europe General Manager Recorded: Apr 9 2019 58 mins
    We’re all accustomed to transferring money from one bank account to another; a debit to the payer becomes a credit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. We’re also aware of how today we can order goods online, or reserve an airline seat over the Internet. Or even simpler, we can update a photograph on Facebook. Can these applications use the same models, or are new techniques required?

    One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances.

    And yes, we’ll explain all the acronyms and nomenclature too.

    You will learn:

    • A brief history of transactional systems from banking to Facebook
    • How the Internet and distributed systems have changed how we view transactions
    • An explanation of the terminology, from ACID to CAP and beyond
    • How applications, networks & particularly storage have changed to meet these demands
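    The bank-transfer model described above can be sketched in a few lines of code. The following minimal example (illustrative only; the account names and amounts are hypothetical, not from the webcast) uses SQLite transactions to show the atomicity property at the heart of ACID: either both the debit and the credit happen, or neither does.

```python
import sqlite3

# In-memory database with two hypothetical accounts (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('payer', 100), ('payee', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Atomically move `amount` from src to dst; roll back on any failure."""
    try:
        with conn:  # commits on success, rolls back if an exception is raised
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            row = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if row[0] < 0:
                raise ValueError("insufficient funds")  # forces the rollback
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
        return True
    except ValueError:
        return False

transfer(conn, "payer", "payee", 30)   # succeeds: balances become 70 / 30
transfer(conn, "payer", "payee", 100)  # fails and rolls back: still 70 / 30
```

    A single-node database makes this cheap; the webcast's point is that once the system is distributed and unreliable, the same guarantee runs into the CAP trade-offs.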
  • FICON 201
    Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technology Recorded: Feb 20 2019 54 mins
    FICON (Fibre Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilizes Fibre Channel as the underlying transport.

    The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.

    Join this live FCIA webcast where you’ll learn:

    - How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
    - The evolution of the FICON protocol optimizations
    - How FICON adapts to new technologies
  • Why Composable Infrastructure?
    Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp Recorded: Feb 13 2019 60 mins
    Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:

    • What prompted the development of composable infrastructure?
    • What are the solutions?
    • What is composable infrastructure?
    • Enabling technologies (not just what’s here, but what’s needed…)
    • Status of composable infrastructure standards/products
    • What’s on the horizon – 2 years? 5 years?
    • What it all means

    After you watch the webcast, check out the Q&A blog: bit.ly/2EOcAy8
  • Data Centre Design in the Era of Multi-Cloud: IT Transformation Drivers
    Simon Ratcliffe, Principal Consultant, Ensono Recorded: Jan 24 2019 38 mins
    IT Transformation projects are usually driven by the need to reduce complexity, improve agility, simplify systems, contain costs, manage ever-growing data and provide more efficient operational management. Arguably, for seasoned IT professionals, there is nothing new about the drivers for transformational change; it’s the velocity and scale of transformation today that’s the big challenge.

    Today, to effectively accelerate business innovation, successful IT leaders are building infrastructure that focuses on automation and flexibility, supporting agile application development and helping deliver world-class customer experience. Of course, IT teams are still under pressure to deliver legacy, mission-critical applications, but they also need to support a seemingly constant flow of emerging business opportunities. They’re also tasked with lowering costs and reducing capex while helping to drive revenue growth. That’s a lot of drivers, and this complex juggling act often requires modernising infrastructure. An almost inevitable result is that the mix of platforms they adopt will include public cloud.

    So, does that signal the end of the corporate data centre as we know it? Well, as is so often the answer – yes and no. ‘Yes’ because there is no doubt that the complexity and cost of building and managing on-premise infrastructures is becoming increasingly unsustainable for many businesses. And ‘no’ because business continuity and stability of legacy applications are still, quite rightly, primary drivers today.
  • How To Maintain Control Of Multi-Data Center and Hybrid Environments
    David Cuthbertson, CEO, Square Mile Systems Recorded: Jan 23 2019 56 mins
    Management and control of any distributed IT infrastructure is increasing in difficulty with the variety of options available for hosting computing resources.

    The benefits of on-premise, co-location, cloud and managed services continue to evolve, though they still all have to deliver reliable and secure computing services. Governance and control requirements continue to increase with the processes and systems that IT teams use coming under increasing scrutiny.

    C-level executives don’t want to keep hearing that their organizations (or outsource partners) struggle to know how many servers they have, what those servers do, and the risks they currently live with in the new reality of data breaches, insider attacks and increasing systems complexity.
  • Edge Computing: Five Use Cases for the Here and Now
    Jim Davis, CEO and Principal Analyst, Edge Research Group Recorded: Jan 23 2019 46 mins
    Edge computing has the potential to be a huge area of growth for datacenter, cloud and other vendors. There are many flashy scenarios for the use of edge computing, including autonomous transportation and smart cities, but there are nearer-term opportunities with a better payoff. Successful services in the market will need to address these opportunities as part of an ecosystem solving the needs of application developers.

    Attendees will gain insight into:

    - Use cases for edge computing based on what application developers need – now
    - The geography of the edge computing opportunity
    - Challenges for adoption of edge computing services
    - How the competitive landscape is evolving, and how an ecosystem approach to market development is key to deriving value from edge computing services
  • Building a Case for Software-Defined Data Centers: Challenges and Solutions
    Jeanne Morain, Scott Goessling, Dave Montgomery Recorded: Jan 22 2019 63 mins
    When it comes to your SDDC, there are many moving parts, new technologies, and vendors to take into consideration. From software-defined networks and storage to compute, colocation, data center infrastructure, on-prem and cloud, the data center landscape has changed forever.

    Tune into this live panel discussion with IT experts as they discuss what the future holds for compute, storage and network services in a software-defined data center, and what that means for vendors, data center managers, and colocation providers alike.

    Moderator: Jeanne Morain, iSpeak Cloud
    Panelists: Scott Goessling, COO/CTO, Burstorm and Dave Montgomery, Marketing Director - Platforms Business Unit, Western Digital
  • What NVMe™/TCP Means for Networked Storage
    Sagi Grimberg, Lightbits; J Metz, Cisco; Tom Reu, Chelsio Recorded: Jan 22 2019 63 mins
    In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – either vendor-specific or vendor-neutral – and you’ll see NVMe as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).

    TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?

    In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
    • What is NVMe/TCP?
    • How NVMe/TCP works
    • What are the trade-offs?
    • What should network administrators know?
    • What kind of expectations are realistic?
    • What technologies can make NVMe/TCP work better?
    • And more…

    After the webcast, check out the Q&A blog http://sniaesfblog.org/author-of-nvme-tcp-spec-answers-your-questions/
  • Q4 2018 Community Update: Data Privacy & Information Management in 2019
    Jill Reber, CEO, Primitive Logic and Kelly Harris, Senior Content Manager, BrightTALK Recorded: Dec 18 2018 47 mins
    Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

    Tune in with Jill Reber, CEO of Primitive Logic and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them and what to look out for in Q1 2019 and beyond.

    - Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
    - Key events in the community
    - Content that data privacy and information management professionals care about
    - What's coming up in Q1 2019

    Audience members are encouraged to ask questions during the Live Q&A.
