
Why Composable Infrastructure?

Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure, which relies on heterogeneous resources whose capabilities can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is a finer-grained set of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:

•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon – 2 years? 5 years?
•What it all means

After you watch the webcast, check out the Q&A blog at bit.ly/2EOcAy8
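
To make the composition idea concrete, here is a toy Python sketch of pools of disaggregated resources from which an orchestrator assembles a virtual system on demand and later returns the parts. It is purely illustrative – the pool names, sizes, and the compose/decompose helpers are all invented for this example; real composable systems do this through the discovery and management APIs discussed in the webcast.

```python
# Toy sketch only: pools of disaggregated resources, composed on demand.
from dataclasses import dataclass, field

@dataclass
class Pool:
    kind: str
    free: int  # units available (cores, GiB, ports, ...)

    def take(self, units: int) -> int:
        if units > self.free:
            raise RuntimeError(f"pool '{self.kind}' exhausted")
        self.free -= units
        return units

    def give(self, units: int) -> None:
        self.free += units

@dataclass
class ComposedSystem:
    allocations: dict = field(default_factory=dict)

pools = {
    "compute": Pool("compute", 64),
    "network": Pool("network", 16),
    "storage": Pool("storage", 1024),
}

def compose(**needs: int) -> ComposedSystem:
    """Assemble a virtual system from independently scalable pools."""
    system = ComposedSystem()
    for kind, units in needs.items():
        system.allocations[kind] = pools[kind].take(units)
    return system

def decompose(system: ComposedSystem) -> None:
    """Return a virtual system's resources to their pools."""
    for kind, units in system.allocations.items():
        pools[kind].give(units)
    system.allocations.clear()

vm = compose(compute=8, network=2, storage=256)  # flex up on demand...
decompose(vm)                                    # ...and release when done
```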
Recorded Feb 13 2019 60 mins
Presented by
Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp

  • CCS[Ep1]: Next-Generation Cybersecurity - Success Metrics, Best Practices & More Sep 5 2019 3:00 pm UTC 60 mins
    Johna Till Johnson, CEO & Founder, Nemertes Research
    Cloud & Cybersecurity Series [Ep.1]: Success Metrics, Best Practices & More

    What does it take for enterprise cybersecurity teams to "up their games" to the next level of cybersecurity? What does it mean to be a "successful" cybersecurity organization, and what technologies and practices does it take to become one?

    This webinar presents the highlights of Nemertes' in-depth research study of 335 organizations in 11 countries across a range of vertical industries.

    We separated the best from the rest and took an in-depth look at what made the most successful organizations that way. Participants will come away with best practices, tools, technologies, and organizational structures that contribute to success. Most importantly, they'll learn how to measure cybersecurity success – and their progress towards it.
  • Kubernetes in the Cloud (Part 3): Stateful Workloads Recorded: Aug 20 2019 58 mins
    Ingo Fuchs, NetApp; Paul Burt, NetApp, Mike Jochimsen, Kaminario
    Kubernetes is great for running stateless workloads, like web servers. It’ll run health checks, restart containers when they crash, and do all sorts of other wonderful things. So, what about stateful workloads?

    This webcast will take a look at when it’s appropriate to run a stateful workload in the cluster, and when to run it outside. We’ll discuss the best options for running a workload like a database in the cloud or in the cluster, and what’s needed to set that up.

    We’ll cover:
    •Secrets management
    •Running a database on a VM and connecting it to Kubernetes as a service
    •Running a database in Kubernetes using a `stateful set` (see the sketch after this entry)
    •Running a database in Kubernetes using an Operator
    •Running a database on a cloud managed service

    After you watch the webcast, check out our Kubernetes Links & Resources blog at http://bit.ly/KubeLinks
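
    As a rough illustration of the `stateful set` option above, the following minimal sketch uses the official `kubernetes` Python client to create a single-replica PostgreSQL StatefulSet. It assumes a reachable cluster and a local kubeconfig; the names (demo-db, the postgres:11 image, the 1Gi claim) are hypothetical choices for this example, not recommendations from the webcast.

```python
# Minimal sketch: a database in Kubernetes via a StatefulSet.
# Assumes a reachable cluster and the `kubernetes` client installed.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

sts = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="demo-db"),
    spec=client.V1StatefulSetSpec(
        # Names a headless Service (assumed to exist) that gives each
        # replica a stable DNS identity.
        service_name="demo-db",
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "demo-db"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo-db"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="postgres",
                    image="postgres:11",
                    volume_mounts=[client.V1VolumeMount(
                        name="data", mount_path="/var/lib/postgresql/data")],
                )
            ]),
        ),
        # Each replica gets its own PersistentVolumeClaim, so state
        # survives pod restarts and rescheduling.
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="data"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                resources=client.V1ResourceRequirements(
                    requests={"storage": "1Gi"}),
            ),
        )],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=sts)
```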
  • Previewing the Storage Developer Conference EMEA in 2020 Recorded: Aug 13 2019 29 mins
    Alex McDonald, Vice-Chair SNIA EMEA, and Office of the CTO, NetApp; Paul Talbut, General Manager, SNIA EMEA
    The SNIA EMEA Storage Developer Conference (SDC) will return to Tel Aviv in early February 2020.

    SDC EMEA is organised by SNIA, the non-profit industry association responsible for data storage standards and education, and the conference is designed to provide an open and independent platform for technical education and knowledge sharing amongst the local storage development community.

    SDC is built by developers – for developers.

    This session will offer a preview of what is planned for the 2020 agenda ahead of the call for presentations and will also give potential sponsors the information they need to be able to budget for their participation in the event. If you have attended previously as a delegate, this is a great opportunity to learn more about how you can raise your profile as a speaker or get your company involved as a sponsor. There will be time allocated during the webcast to ask questions about the options available. Companies who have significant storage development teams will learn why this conference is valuable to the local technical community and why they should be directly engaged.
  • How to Be a Part of the Real-World Workload Revolution Recorded: Jul 9 2019 65 mins
    Eden Kim, CEO, Calypso Systems; Jim Fister, SNIA Solid State Storage Initiative
    Real-world digital workloads often behave very differently from what might be expected. The equipment used in a computing system may function differently than anticipated. Unknown quirks in complicated software and operations running alongside the workload may be doing more or less than the user initially supposed. To truly understand what is happening, the right approach is to test and monitor the systems’ behaviors as real code is executed. By using measured data, designers, vendors and service personnel can pinpoint the actual limits and bottlenecks that a particular workload is experiencing. Join the SNIA Solid State Storage Special Interest Group to learn how to be a part of the real-world workload revolution.
  • Introduction to SNIA Swordfish™ Features and Profiles Recorded: Jun 27 2019 55 mins
    Richelle Ahlvers, Broadcom
    Swordfish School: Introduction to SNIA Swordfish™ Features and Profiles
    Ready to ride the wave to what’s next in storage management? As part of an ongoing series of educational materials to help speed your SNIA Swordfish™ implementation, in this Swordfish School webcast storage standards expert Richelle Ahlvers (Broadcom Inc.) will provide an introduction to the Features and Profiles concepts, describe how they work together, and talk about how to use both Features and Profiles when implementing Swordfish.
    Features are used by implementations to advertise to clients what functionality they are able to support. Profiles describe, down to the individual property level, what functionality is required for an implementation to advertise a given Feature. Profiles are used for in-depth analysis during development, making it easy for clients to determine which Features to require for different configurations; they are also used to determine certification and conformance requirements.
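
    Because Swordfish is a RESTful, JSON-based API, a client can walk the service to see what an implementation advertises. The sketch below is hedged: only the /redfish/v1/ service root path is standard, while the host, credentials and the exact location of the storage collections vary by implementation and schema version.

```python
# Hedged sketch: walk a Swordfish/Redfish service and list what it
# advertises. The host and credentials are placeholders.
import requests

BASE = "https://storage.example.com"   # hypothetical endpoint
session = requests.Session()
session.auth = ("admin", "password")   # hypothetical credentials
session.verify = False                 # lab-only; verify certificates in production

# Every Redfish/Swordfish service exposes a JSON service root here.
root = session.get(f"{BASE}/redfish/v1/").json()
print(root.get("Name"), root.get("RedfishVersion"))

# Early Swordfish versions link storage from the root as "StorageServices";
# adjust to match the schema version your implementation reports.
link = root.get("StorageServices", {}).get("@odata.id")
if link:
    collection = session.get(f"{BASE}{link}").json()
    for member in collection.get("Members", []):
        print(member.get("@odata.id"))
```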

    About SNIA Swordfish™
    Designed with IT administrators and DevOps engineers in mind to provide simplified and scalable storage management for data center environments, SNIA Swordfish™ is a standard that defines the management of data storage and services as an extension to the Distributed Management Task Force’s (DMTF) Redfish application programming interface specification. Unlike proprietary interfaces, Swordfish is open and easy-to-adopt with broad industry support.
    Your one stop shop for everything SNIA Swordfish is https://www.snia.org/swordfish.
  • Ask the Data Management Expert: Security and Compliance in the Cloud Recorded: Jun 6 2019 10 mins
    Nicolas Groh, Field CTO EMEA, Rubrik
    Join this interactive 1-2-1 discussion where Field Chief Technology Officer Nicolas Groh will share:

    - Challenges businesses are facing today with regards to security and compliance in the cloud
    - Improvements that can be made today to ransomware prevention, detection, and recovery
    - Long-term security and compliance strategies
    - Quantifiable outcomes businesses can expect to see with a unified system of records

    Moderated by Paige Bidgood, EMEA Community Lead - IT Security & GRC, BrightTALK
  • Ask the Data Protection Expert: Next-Gen Policy Free Data Loss Protection Recorded: Jun 5 2019 9 mins
    Paige Bidgood & Richard Agnew, VP EMEA, Code42
    It's time to rethink data loss prevention. Today's progressive, employee-focused, idea-rich organizations are looking for new, less restrictive ways to protect their data.

    Watch this interactive 1-2-1 discussion where Richard Agnew, VP EMEA, will share insights from the field, including:

    - How Code42 differs from legacy DLP vendors and why it is beneficial for Code42 customers
    - How Code42 is addressing insider threats in cybersecurity
    - Why organisations should consider adding Code42 to their security technology stack
    - Why visibility is key in addressing the new threats organisations are facing in 2019

    Code42 Next-Gen DLP collects, indexes and analyzes all files and file activity, giving our customers full visibility to everywhere their data lives and moves — from endpoints to the cloud. With that kind of oversight, security teams can quickly and easily monitor, investigate, preserve and recover data without the complex classification rules and policies that ultimately block employee collaboration and productivity. Native to the cloud, Code42 Next-Gen DLP works without expensive hardware requirements and deploys in a matter of days. Today, more than 50,000 organizations worldwide rely on Code42 to protect their data from loss.
  • Everything You Wanted to Know...But Were Too Proud to Ask - The Memory Pod Recorded: May 16 2019 62 mins
    Alan Bumgarner, Intel; Alex McDonald, NetApp; John Kim, Mellanox
    Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.

    We build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But currently fast memory has no persistence at all; it’s only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.

    Memory and storage differ in other ways. For example, we load from memory to registers on the CPU, perform operations there, and then store the results back to memory by using byte addresses. This load/store technology is different from storage, where we tend to move data back and forth between memory and storage in large blocks, by using an API (application programming interface).
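
    The contrast can be made concrete with a few lines of Python, using an ordinary file as a stand-in for the medium. Real persistent memory would be mapped the same way (for example from a DAX filesystem); this sketch only demonstrates the byte-addressed load/store model versus the block-based API.

```python
# Illustration: block-based storage access vs. byte-addressed load/store.
import mmap
import os

path = "demo.bin"
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

# Storage-style access: move whole blocks through an explicit API.
with open(path, "r+b") as f:
    block = f.read(512)          # read a block
    f.seek(0)
    f.write(b"A" * 512)          # write a block back

# Memory-style access: map the file, then load/store individual bytes.
with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 4096)
    mm[0] = ord("B")             # a single-byte store
    first = mm[0]                # a single-byte load
    mm.flush()                   # ask for the change to reach the medium
    mm.close()

os.remove(path)
```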

    New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, though much faster than traditional disks or even Flash-based SSDs, but we address them in bytes, as we do memory like DRAM, though more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:

    •Traditional uses of storage and memory as a cache
    •How can we build and use systems based on PM?
    •What would a system with storage, persistent memory and DRAM look like?
    •Do we need a new programming model to take advantage of PM?
    •How we might take better advantage of this new technology

    After you watch the webcast, check out the Q&A blog at http://bit.ly/32F2l98.
  • Protocol Analysis 201 for High-Speed Fibre Channel Fabrics Recorded: Apr 11 2019 63 mins
    Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology
    In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics,” experts covered the basics of protocol analysis tools and how to incorporate them into the “best practices” application of SAN problem solving.
    Our experts return for this 201 course, which will provide a deeper dive into how to interpret the output and results from protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions and formulate real-time solutions.

    Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that bypass the traditional network stack (e.g. FCoE, iWARP, NVMe over Fabrics) complicate analysis, so a well-constructed “collage” of best practices and effective and efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

    Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
    •Inter switch links (ISL) – How to measure and minimize fabric congestion
    •Post-capture analysis – Graphing, Trace reading, Performance metrics
    •Benefits of purposeful error injection
    •More Layer 2-3 and translation layers debug
    •Link Services and Extended Link Services – LRR (Link Reset Response)

    You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
  • Transactional Models and their Storage Requirements Recorded: Apr 9 2019 58 mins
    Alex McDonald, Vice-Chair SNIA Europe, and Office of the CTO, NetApp; Paul Talbut, SNIA Europe General Manager
    We’re all accustomed to transferring money from one bank account to another; a debit to the payer becomes a credit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. We’re also aware of how today we can order goods online, or reserve an airline seat over the Internet. Or even simpler, we can update a photograph on Facebook. Can these applications use the same models, or are new techniques required?
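
    The bank-transfer model can be made concrete with a few lines of Python and SQLite from the standard library: the debit and the credit are wrapped in a single transaction, so either both happen or neither does. (The account names and amounts are invented for this illustration.)

```python
# Minimal sketch of an atomic bank transfer using SQLite.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure the rollback leaves both balances untouched

print(dict(conn.execute("SELECT name, balance FROM accounts")))
```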

    One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances.

    And yes, we’ll explain all the acronyms and nomenclature too.

    You will learn:

    • A brief history of transactional systems from banking to Facebook
    • How the Internet and distributed systems have changed how we view transactions
    • An explanation of the terminology, from ACID to CAP and beyond
    • How applications, networks & particularly storage have changed to meet these demands
  • FICON 201 Recorded: Feb 20 2019 54 mins
    Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technologies
    FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilizes Fibre Channel as the underlying transport.

    The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.

    Join this live FCIA webcast where you’ll learn:

    - How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
    - The evolution of the FICON protocol optimizations
    - How FICON adapts to new technologies
  • Why Composable Infrastructure? Recorded: Feb 13 2019 60 mins
    Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
  • Data Centre Design in the Era of Multi-Cloud: IT Transformation Drivers Recorded: Jan 24 2019 38 mins
    Simon Ratcliffe, Principal Consultant, Ensono
    IT Transformation projects are usually driven by the need to reduce complexity, improve agility, simplify systems, contain costs, manage ever-growing data and provide more efficient operational management. Arguably, for seasoned IT professionals, there is nothing new about the drivers for transformational change; it’s the velocity and scale of transformation today that’s the big challenge.

    Today, to effectively accelerate business innovation, successful IT leaders are building infrastructure that focuses on automation and flexibility, supporting agile application development and helping deliver world-class customer experience. Of course, IT teams are still under pressure to deliver legacy, mission-critical applications, but they also need to support a seemingly constant flow of emerging business opportunities. They’re also tasked to lower costs and reduce Capex while helping to drive revenue growth. That’s a lot of drivers, and this complex juggling act often requires modernising infrastructure. An almost inevitable result of this is that the mix of platforms they adopt will include public cloud.

    So, does that signal the end of the corporate data centre as we know it? Well, as is so often the answer – yes and no. ‘Yes’ because there is no doubt that the complexity and cost of building and managing on-premise infrastructures is becoming increasingly unsustainable for many businesses. And ‘no’ because business continuity and stability of legacy applications are still, quite rightly, primary drivers today.
  • How To Maintain Control Of Multi-Data Center and Hybrid Environments Recorded: Jan 23 2019 56 mins
    David Cuthbertson, CEO, Square Mile Systems
    Management and control of distributed IT infrastructure is becoming more difficult as the variety of options for hosting computing resources grows.

    The benefits of on-premise, co-location, cloud and managed services continue to evolve, though they all still have to deliver reliable and secure computing services. Governance and control requirements continue to increase, with the processes and systems that IT teams use coming under growing scrutiny.

    C-level executives don’t want to keep hearing that their organizations (or outsource partners) struggle to know how many servers they have, what those servers do, and what risks they currently live with in the new reality of data breaches, insider attacks and increasing systems complexity.
  • Edge Computing: Five Use Cases for the Here and Now Recorded: Jan 23 2019 46 mins
    Jim Davis, CEO and Principal Analyst, Edge Research Group
    Edge computing has the potential to be a huge area of growth for datacenter, cloud and other vendors. There are many flashy scenarios for the use of edge computing, including autonomous transportation and smart cities, but there are near-term opportunities to target with a better payoff. Successful services in the market will need to address these opportunities as part of an ecosystem solving the needs of application developers.

    Attendees will gain insight into:

    - Use cases for edge computing based on what application developers need – now
    - The geography of the edge computing opportunity
    - Challenges for adoption of edge computing services
    - How the competitive landscape is evolving, and how an ecosystem approach to market development is key to deriving value from edge computing services
  • Building a Case for Software-Defined Data Centers: Challenges and Solutions Recorded: Jan 22 2019 63 mins
    Jeanne Morain, Scott Goessling, Dave Montgomery
    When it comes to your SDDC, there are many moving parts, new technologies, and vendors to take into consideration. From software-defined networks and storage to compute, colocation, data center infrastructure, on-prem and cloud, the data center landscape has changed forever.

    Tune into this live panel discussion with IT experts as they discuss what the future holds for compute, storage and network services in a software-defined data center, and what that means for vendors, data center managers, and colocation providers alike.

    Moderator: Jeanne Morain, iSpeak Cloud
    Panelists: Scott Goessling, COO/CTO, Burstorm and Dave Montgomery, Marketing Director - Platforms Business Unit, Western Digital
  • What NVMe™/TCP Means for Networked Storage Recorded: Jan 22 2019 63 mins
    Sagi Grimberg, Lightbits; J Metz, Cisco; Tom Reu, Chelsio
    In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – vendor-specific or vendor-neutral – and you’ll see NVMe as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).

    TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?

    In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
    •What is NVMe/TCP?
    •How NVMe/TCP works
    •What are the trade-offs?
    •What should network administrators know?
    •What kind of expectations are realistic?
    •What technologies can make NVMe/TCP work better?
    •And more…

    After the webcast, check out the Q&A blog http://sniaesfblog.org/author-of-nvme-tcp-spec-answers-your-questions/
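
    For a sense of what NVMe/TCP looks like in practice, here is a hedged sketch of bringing up a connection from a Linux host with nvme-cli; the target address and NQN are placeholders, while 8009 and 4420 are the standard discovery and NVMe-oF ports. Because the transport is ordinary TCP, no RDMA-capable NIC is required.

```python
# Hedged sketch: discover and connect to an NVMe/TCP target via nvme-cli.
import subprocess

TARGET_IP = "192.0.2.10"  # hypothetical target address

# Ask the discovery controller what subsystems the target exports.
subprocess.run(["nvme", "discover", "-t", "tcp", "-a", TARGET_IP, "-s", "8009"],
               check=True)

# Connect to one advertised subsystem; it then appears as /dev/nvmeXnY.
subprocess.run(["nvme", "connect", "-t", "tcp",
                "-n", "nqn.2019-01.org.example:subsystem1",  # placeholder NQN
                "-a", TARGET_IP, "-s", "4420"],
               check=True)

# List the NVMe namespaces now visible to the host.
subprocess.run(["nvme", "list"], check=True)
```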
  • Q4 2018 Community Update: Data Privacy & Information Management in 2019 Recorded: Dec 18 2018 47 mins
    Jill Reber, CEO, Primitive Logic and Kelly Harris, Senior Content Manager, BrightTALK
    Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

    Tune in with Jill Reber, CEO of Primitive Logic and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them and what to look out for in Q1 2019 and beyond.

    - Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
    - Key events in the community
    - Content that data privacy and information management professionals care about
    - What's coming up in Q1 2019

    Audience members are encouraged to ask questions during the Live Q&A.
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Dec 4 2018 39 mins
    Daniel Sazbon, SNIA Europe Chair, IBM; Alex McDonald, SNIA Europe Vice Chair, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification, using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish, including the new functionality added in version 1.0.6, released in March 2018.
  • Extending RDMA for Persistent Memory over Fabrics Recorded: Oct 25 2018 60 mins
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
Best practices for achieving an efficient data center
With today’s pressure to lower carbon footprints and the cost constraints within organizations, IT departments are increasingly on the front line, expected to formulate and enact an IT strategy that greatly improves the energy efficiency and overall performance of data centers.

This channel will cover the strategic issues of ‘going green’ as well as practical tips and techniques for busy IT professionals managing their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
- And much more

