Improving Server Efficiency with Intelligent Flash Arrays: 3 Case Studies

Server virtualization has become ubiquitous as a means to drive IT efficiency. But as you move more workloads into a virtualized environment, chances are your legacy storage system is struggling to keep up. Flash-based storage arrays can help solve the performance problem, but flash can be expensive, and the performance and functionality these arrays provide vary widely.

Join us and learn:
How flash can help neutralize the I/O blender effect that occurs in virtual environments
The pros and cons of both all-flash and hybrid systems
What to consider before you purchase a flash-based storage array
How real-life organizations used flash to overcome inefficiencies and significantly reduce IT costs
Recorded Feb 18 2015 48 mins
Presented by
Narayan Venkat, Chief Marketing Officer, Tegile Systems

  • The Scale-Out File System Architecture Overview Feb 28 2019 6:00 pm UTC 75 mins
    Zhiqi Tao, Intel; John Kim, Mellanox
    This webcast will present an overview of scale-out file system architectures. To meet the increasing demand for both capacity and performance in large cluster computing environments, the storage subsystem has evolved toward a modular and scalable design. The scale-out file system is one implementation of this trend, alongside scale-out object and block storage solutions. This presentation will provide an introduction to scale-out file systems and cover:

    •General principles when architecting a scale-out file system storage solution
    •Hardware and software design considerations for different workloads
    •Storage challenges when serving a large number of compute nodes, e.g., namespace consistency, distributed locking, data replication, etc.
    •Use cases for scale-out file systems
    •Common benchmark and performance analysis approaches
  • What’s New in Container Storage Feb 26 2019 6:00 pm UTC 75 mins
    Keith Hudgins, Docker; Alex McDonald, NetApp
    Containers are a big trend in application deployment. The landscape of containers is moving fast and constantly changing, with new standards emerging every few months. Learn what’s new, what to pay attention to, and how to make sense of the ever-shifting container landscape.

    This live webcast will cover:
    •Container storage types and container frameworks
    •An overview of the various storage APIs for the container landscape
    •How to identify the most important projects to follow in the container world
    •The Container Storage Interface spec and Kubernetes 1.13
    •How to get involved in the container community
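    For a concrete feel for the Container Storage Interface (CSI) in practice, here is a minimal sketch of requesting a volume from a CSI-backed StorageClass with the Kubernetes Python client; the StorageClass name and namespace are illustrative placeholders, not something described in the webcast.

```python
# Minimal sketch: request a volume from a hypothetical CSI-backed StorageClass
# using the official Kubernetes Python client (class names may vary by client version).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-example-sc",  # placeholder; provisioned by a CSI driver
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC created; the CSI external-provisioner should dynamically bind a volume")
```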
  • FICON 201 Feb 20 2019 6:00 pm UTC 75 mins
    Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technologies
    FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilizes Fibre Channel as the underlying transport.

    The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.

    Join this live FCIA webcast where you’ll learn:

    - How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
    - The evolution of the FICON protocol optimizations
    - How FICON adapts to new technologies
  • Networking Requirements for Hyperconvergence Feb 5 2019 6:00 pm UTC 75 mins
    Christine McMonigal, Intel; Saqib Jang, Chelsio; Alex McDonald, NetApp
    “Why can’t I add a 33rd node?”

    One of the great advantages of hyperconverged infrastructure (HCI) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI systems are the “Happy Meals” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.

    In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.

    In this webinar, we’re going to take a look behind the scenes, peek behind the GUI, so to speak. We’ll be talking about what goes on back there, and shine the light behind the bezels to see:

    •The impact of metadata on the network
    •What happens as we add additional nodes
    •How to right-size the network for growth
    •Tricks of the trade from the networking perspective to make your HCI work better
    •And more…

    Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
  • File vs. Block vs. Object Storage Feb 5 2019 10:00 am UTC 75 mins
    Daniel Sazbon, Chair SNIA Europe and IBM; Alex McDonald, Vice-Chair SNIA Europe and NetApp
    When it comes to storage, a byte is a byte is a byte, isn’t it? One of the enduring truths about simplicity is that scale makes everything hard, and with that comes complexity. And when we’re not processing the data, how do we store it and access it?

    In this webcast, we will compare three types of data access: file, block and object storage, and the access methods that support them. Each has its own set of use cases, and advantages and disadvantages. Each provides simple to sophisticated management of the data, and each makes different demands on storage devices and programming technologies.

    Perhaps you’re comfortable with block and file, but are interested in investigating the more recent class of object storage and access. Perhaps you’re happy with your understanding of objects, but would really like to understand files a bit better, and what advantages or disadvantages they have compared to each other. Or perhaps you want to understand how file, block and object are implemented on the underlying storage systems – and how one can be made to look like the other, depending on how the storage is accessed. Join us as we discuss and debate:

    Storage devices
    •How different types of storage drive different management & access solutions

    Block
    •Where everything is in fixed-size chunks
    •SCSI and SCSI-based protocols, and how FC and iSCSI fit in

    Files
    •When everything is a stream of bytes
    •NFS and SMB

    Objects
    •When everything is a blob
    •HTTP, key value and RESTful interfaces

    Altogether
    •When files, blocks and objects collide
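    As a rough illustration of the access-method differences discussed above, the following sketch contrasts file access (a stream of bytes through the OS) with object access (whole blobs via an HTTP/RESTful interface); the object endpoint URL is a hypothetical placeholder.

```python
# Sketch only: file vs. object access. The object endpoint below is hypothetical;
# real object stores (S3, Swift, etc.) add authentication and richer APIs.
import requests

# File access: open a path and read an arbitrary byte range (a stream of bytes).
with open("/mnt/share/report.bin", "rb") as f:
    f.seek(4096)              # byte-addressable within the file
    chunk = f.read(1024)
    f.seek(0)
    whole_file = f.read()

# Object access: PUT and GET whole blobs by key over HTTP; no in-place partial update.
endpoint = "https://objects.example.com/bucket/report.bin"  # hypothetical endpoint
requests.put(endpoint, data=whole_file)                     # store the object
blob = requests.get(endpoint).content                       # retrieve the entire object
```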
  • How to Prepare Your Data Center for the Big Data Explosion Jan 24 2019 10:00 pm UTC 37 mins
    Kevin L. Jackson, CEO, GovCloud Network, LLC
    Cloud computing innovation will power enterprise transformation in 2018. Cloud growth is also driving a rapid rise in the big data storage market, exacerbating the enterprise challenge around storage cost and complexity.

    Join this webinar with Kevin L. Jackson, CEO, GovCloud Network LLC and globally recognized cloud computing thought leader. He will show how Cloud Storage 2.0 can be used to address this proliferation of real-time data from the web, mobile devices, social media, sensors, log files, and transactional applications, and how all of these are affecting today's data centers.
  • Blockchain in 2019: The Impact on Enterprise Storage and Data Security Live 61 mins
    Ian Smith, CEO and Reuben Thompson, VP Technology, Gospel Technology
    Join this webcast with Ian Smith, CEO and Reuben Thompson, VP Technology at Gospel Technology, as they discuss:

    - Private enterprise blockchains vs public ecosystems (i.e. crypto)
    - Enabling data transactional trust without compromising speed
    - How blockchain can be used to store and protect data

    Gospel is an enterprise data platform built on blockchain, providing data storage for the distributed era, as well as enterprise data security and data breach avoidance.

    About the speakers:
    Ian is a serial entrepreneur and experienced enterprise technology executive, at one point holding a VP Product Management role for IBM Storage, and has been involved in solving some of the largest and most complex infrastructure and data problems in enterprise business.

    Reuben is responsible for all Gospel platform development and has extensive experience managing large-scale software projects, building scalable, distributed, service-oriented software architectures, and satisfying complex and divergent compliance requirements (FCA, PCI, etc.).
  • What NVMe™/TCP Means for Networked Storage Recorded: Jan 22 2019 63 mins
    Sagi Grimberg, Lightbits; J Metz, Cisco; Tom Reu, Chelsio
    In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – vendor-specific or vendor-neutral – and you’ll see NVMe presented as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF).

    TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?

    In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
    •What is NVMe/TCP
    •How NVMe/TCP works
    •What are the trade-offs?
    •What should network administrators know?
    •What kind of expectations are realistic?
    •What technologies can make NVMe/TCP work better?
    •And more…
  • NVMe in the Data Center: How to Expand Above and Beyond your Local Server Recorded: Jan 22 2019 32 mins
    Petros Koutoupis, Senior Platform Architect, IBM Cloud Object Storage
    NVMe adoption has taken the data center by storm. And while the technology has proven itself to outperform all other competing SSD implementations, it remains restricted to the local server it is attached to. This is where NVMe targets come into the picture. In this presentation, we will explore how NVMe devices can be exported across a network and attached to remote server nodes.
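    As a rough sketch of what exporting an NVMe device over a network can look like on Linux, the following walks the nvmet configfs layout to expose a local namespace over NVMe/TCP; the NQN, address and device path are placeholders, and the attribute names should be verified against your kernel's nvmet documentation.

```python
# Illustrative sketch of configuring a Linux NVMe-oF target (TCP transport) through
# the nvmet configfs tree. NQN, IP address and block device are placeholders; run
# as root and verify the attribute names against your kernel's documentation.
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2019-01.com.example:subsys1"            # placeholder subsystem NQN

subsys = NVMET / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1\n")

ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text("/dev/nvme0n1\n")  # local device being exported
(ns / "enable").write_text("1\n")

port = NVMET / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("tcp\n")
(port / "addr_adrfam").write_text("ipv4\n")
(port / "addr_traddr").write_text("192.0.2.10\n")  # placeholder target IP
(port / "addr_trsvcid").write_text("4420\n")
(port / "subsystems" / NQN).symlink_to(subsys)     # publish the subsystem on the port

# A remote host would then attach the exported namespace with nvme-cli, e.g.:
#   nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2019-01.com.example:subsys1
```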
  • Virtualization and Storage Networking Best Practices Recorded: Jan 17 2019 65 mins
    Cody Hosterman, Pure Storage; Jason Massae, VMware; J Metz, Cisco
    With all the different storage arrays and connectivity protocols available today, knowing the best practices can help improve operational efficiency and ensure resilient operations. VMware’s global storage support services have reported many of the common service calls they receive. In this webcast, we will share those insights and lessons learned by discussing:
    - Common mistakes when setting up storage arrays
    - Why iSCSI is the number one storage configuration problem
    - Configuring adapters for iSCSI or iSER
    - How to verify your PSP matches your array requirements
    - NFS best practices
    - How to maximize the value of your array and virtualization
    - Troubleshooting recommendations
  • Applications Take Advantage of Persistent Memory Recorded: Jan 15 2019 60 mins
    Raghu Kulkarni, SNIA PM & NVDIMM SIG member and Alex McDonald, SNIA SSSI Co-Chair
    Kick off the new year with a new SNIA Persistent Memory and NVDIMM Special Interest Group webcast on how applications can take advantage of Persistent Memory today with NVDIMM - the go-to Persistent Memory technology for boosting performance for next generation storage platforms. NVDIMM standards have paved the way to simple, plug-n-play solutions. If you're a developer or integrator who hasn't yet realized the benefits of NVDIMMs in your products, you will want to attend to learn about NVDIMM functionality, applications, and benefits. You'll come away with an understanding of how NVDIMMs fit into the persistent memory landscape.
  • Q4 2018 Community Update: Data Privacy & Information Management in 2019 Recorded: Dec 18 2018 47 mins
    Jill Reber, CEO, Primitive Logic and Kelly Harris, Senior Content Manager, BrightTALK
    Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

    Tune in with Jill Reber, CEO of Primitive Logic and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them and what to look out for in Q1 2019 and beyond.

    - Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
    - Key events in the community
    - Content that data privacy and information management professionals care about
    - What's coming up in Q1 2019

    Audience members are encouraged to ask questions during the Live Q&A.
  • Emerging Memory Poised to Explode Recorded: Dec 11 2018 58 mins
    Moderator: Alex McDonald, SNIA SSSI Co-Chair; Presenters: Tom Coughlin, Coughlin Associates & Jim Handy, Objective Analysis
    Join SSSI members and respected analysts Tom Coughlin and Jim Handy for a look into their new Emerging Memory and Storage Technologies Report. Tom and Jim will examine emerging memory technologies and their interaction with standard memories, how a new memory layer improves computer performance, and the technical advantages and economies of scale that contribute to the enthusiasm for emerging memories. They will provide an outlook on market projections and enabling and driving applications. The webcast is the perfect preparation for the 2019 SNIA Persistent Memory Summit on January 24, 2019.
  • Will You Still Love Me When I Turn 64GFC? Recorded: Dec 11 2018 50 mins
    Dean Wallace, Marvell; Barry Maskas, HPE
    Fibre Channel’s speed roadmap defines a well-understood technological trend: the need to double the bit rate in the channel without doubling the required bandwidth.

    In order to do this, PAM4 (pulse-amplitude modulation with four levels) enters the Fibre Channel physical layer picture. With four signal levels instead of two, and with each signal level corresponding to a two-bit symbol, the standards define 64GFC operation while maintaining backward compatibility with 32GFC and 16GFC.

    This advanced technical session will cover the T11 standards which define 64GFC serial Fibre Channel, backwards speed auto-negotiation compatibility, and compatible form factors:

    •New physical layer and specification challenges for PAM4, which includes eye openings, crosstalk sensitivity, and new test methodologies and parameters
    •Transceivers, their form factors, and how 64GFC maintains backward compatibility with multi-mode fibre cable deployments in the data center, including distance specifications
    •Discussion of protocol changes, and an overview of backward-compatible link speed and forward error correction (FEC) negotiation
    •The FCIA’s Fibre Channel speed roadmap and evolution, and new technologies under consideration

    After you watch the webcast, check out the FCIA Q&A blog: https://fibrechannel.org/64gfc-faq/
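    As a back-of-the-envelope illustration of why four-level signaling doubles the bit rate without doubling the symbol rate, consider this small sketch; the baud rate used is a round illustrative figure, not a value taken from the T11 specifications.

```python
# Rough illustration only: PAM4 carries 2 bits per symbol vs. 1 bit for two-level
# (NRZ/PAM2) signaling, so the bit rate doubles at the same symbol (baud) rate.
import math

def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

baud_rate_gbd = 28.0  # assumed symbol rate in GBd, illustrative only
for name, levels in [("two-level (32GFC-class)", 2), ("PAM4 (64GFC-class)", 4)]:
    line_rate = baud_rate_gbd * bits_per_symbol(levels)
    print(f"{name}: {bits_per_symbol(levels)} bit(s)/symbol -> ~{line_rate:.0f} Gb/s line rate")
```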
  • Take the Leap to SNIA’s Storage Management Initiative Specification 1.8 Recorded: Dec 5 2018 36 mins
    Mike Walker, former Chair SNIA SMI TWG and former IBM Engineer, Don Deel, SNIA SMI Board Chair, SMI TWG Chair, NetApp
    If you’re a storage equipment vendor, management software vendor or end-user of the ISO approved SNIA Storage Management Initiative Specification (SMI-S), you won’t want to miss this presentation. Enterprise storage industry expert Mike Walker will provide an overview of new indications, methods, properties and profiles of SMI-S 1.7 and the newly introduced version, SMI-S 1.8. If you haven’t yet made the jump to SMI-S 1.7, Walker will explain why it’s important to go directly to SMI-S 1.8.
  • What’s Next: Software-Defined Storage as a Service Without Legacy Limitations Recorded: Dec 4 2018 60 mins
    Tom Bendien, Gal Naor, Guy Loewenberg, Randall van Allen
    Storage is where your data lives and is needed to run your workloads on premises, in a colocation facility, in the cloud, or on the move. Many advances have been made in data management capabilities, such as virtualizing the storage software layer and utilizing cloud and hyper-converged hosting platforms. However, the software used to run storage systems continues to rely on traditional RAID and data management methods.

    This approach continues to impose limitations on storage performance and agility. Next generation storage software must be built from the ground up to provide better performance and flexibility.

    Madison Cloud has teamed with StorONE to deliver new software-defined storage capabilities with an entirely new software stack. Imagine using the same drive pool to simultaneously deliver block, file and object storage services. Forget the complexity of managing RAID groups and lengthy rebuilds. Access the full IOPS potential of your NVMe/SSD/HDD drives. Mix & match different drive types and sizes in a single system. Take as many snapshots as you like, without suffering from performance degradation and consuming valuable drive space. Use the largest available disk drives without RAID overhead and collapse your data center footprint in weeks, not years.

    Simplify storage procurement by flattening complex pricing structures into a single pricing tier that provides petabyte-scale block, file and object storage in a single platform, for as little as $0.01/GB/month.
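    As a quick, purely illustrative calculation of what that price point implies at petabyte scale (assuming decimal units; actual pricing and terms come from the vendor):

```python
# Purely illustrative arithmetic: what $0.01/GB/month implies at petabyte scale.
# Assumes decimal units (1 PB = 1,000,000 GB); real pricing and terms will vary.
price_per_gb_month = 0.01
capacity_gb = 1_000_000  # 1 PB
monthly_cost = price_per_gb_month * capacity_gb
print(f"1 PB at ${price_per_gb_month}/GB/month -> ~${monthly_cost:,.0f}/month, "
      f"~${monthly_cost * 12:,.0f}/year")
```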

    Your new simplified data management fabric now delivers what you need, when you need it, in a 100% utility model.

    Join us to learn how Madison Cloud and StorONE can deliver the data management platform you always wanted.
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Dec 4 2018 39 mins
    Daniel Sazbon, SNIA Europe Chair, IBM; Alex McDonald, SNIA Europe Vice Chair, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification, using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session for an overview of Swordfish, including the new functionality added in version 1.0.6, released in March 2018.
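    To give a feel for the RESTful methods and JSON formatting described above, here is a minimal sketch of querying a Swordfish/Redfish service from Python; the host, credentials and collection names are illustrative assumptions and should be checked against the published Swordfish schemas.

```python
# Minimal sketch: walk a Swordfish/Redfish service over HTTPS + JSON.
# Host, credentials and collection names are illustrative; consult the SNIA
# Swordfish and DMTF Redfish schemas for the authoritative resource model.
import requests

BASE = "https://storage-mgmt.example.com"  # placeholder management endpoint
AUTH = ("admin", "password")               # placeholder credentials

root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print("Service root keys:", sorted(root.keys()))

# Swordfish-style storage collections hang off the service root; the exact
# collection name (e.g. "StorageServices" in early Swordfish releases) may differ.
link = root.get("StorageServices", {}).get("@odata.id")
if link:
    services = requests.get(f"{BASE}{link}", auth=AUTH, verify=False).json()
    for member in services.get("Members", []):
        print("Storage service:", member.get("@odata.id"))
```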
  • Networking Requirements for Ethernet Scale-Out Storage Recorded: Nov 14 2018 44 mins
    John Kim, Mellanox; Saqib Jang, Chelsio; Fred Zhang, Intel
    Scale-out storage is increasingly popular for cloud, high-performance computing, machine learning, and certain enterprise applications. It offers the ability to grow both capacity and performance at the same time and to distribute I/O workloads across multiple machines.

    But unlike traditional local or scale-up storage, scale-out storage imposes different and more intense workloads on the network. Clients often access multiple storage servers simultaneously; data typically replicates or migrates from one storage node to another; and metadata or management servers must stay in sync with each other as well as communicate with clients. Due to these demands, traditional network architectures and speeds may not work well for scale-out storage, especially when it’s based on flash. Join this webinar to learn:

    •Scale-out storage solutions and what workloads they can address
    •How your network may need to evolve to support scale-out storage
    •Network considerations to ensure performance for demanding workloads
    •Key considerations for all flash

    After you watch the webcast, check out the Q&A blog: http://bit.ly/scale-out-q-a
  • Create a Smarter and More Economic Cloud Storage Architecture Recorded: Nov 7 2018 55 mins
    Michelle Tidwell, IBM; Eric Lakin, University of Michigan; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
    Building a cloud storage architecture requires storage vendors, cloud service providers and large enterprises alike to consider new technical and economic paradigms in order to enable a flexible and cost-efficient architecture.

    Economic:
    Cloud infrastructure is often procured by service providers and large enterprises in the traditional way – prepay for expected future storage needs and over provision for unexpected changes in demand. This requires large capital expenditures with slow cost recovery based on fluctuating customer adoption. Giving these cloud service providers flexibility in the procurement model for their storage allows them to more closely align the expenditure on infrastructure resources with the cost recovery from customers, optimizing the use of both CapEx and OpEx budgets.

    Technical:
    Clouds inherently require often unpredictable scalability – both up and down. Building a storage architecture with the ability to rapidly allocate resources for a specific customer need, and to reallocate resources as customer requirements change, allows the cloud service provider to optimize storage capacity and performance pools in the data center without compromising responsiveness to changing needs. Such an architecture should also align with the data-center-level orchestration system to allow for an even higher level of resource optimization and flexibility.

    In this webcast, you will learn:
    •How modern storage technology allows you to build this infrastructure
    •The role of software defined storage
    •Accounting principles
    •How to model cloud costs of new applications and/or of re-engineering existing applications
    •Performance considerations
  • Extending RDMA for Persistent Memory over Fabrics Recorded: Oct 25 2018 60 mins
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
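    The ordering problem described above can be sketched abstractly: a local RDMA Write completion only indicates the data has left the sender, so persistence at the target needs a separate, explicitly acknowledged step. The model below is an illustrative Python abstraction, not a real RDMA/verbs API.

```python
# Abstract model only (not a real RDMA/verbs API): a write completion at the
# initiator is not a persistence guarantee, so an explicit, acknowledged "flush"
# marks the point at which data is durable in the target's persistent memory.
class RemotePMTarget:
    def __init__(self):
        self.volatile_buffer = {}    # data landed in NIC/DRAM buffers, not yet durable
        self.persistent_memory = {}  # byte-addressable PM

    def rdma_write(self, addr, data):
        # The sender sees a successful write completion even though the data may
        # still be sitting in volatile buffers on the target.
        self.volatile_buffer[addr] = data

    def flush(self, addrs):
        # Extension semantics outlined in the webcast: acknowledge only after the
        # named regions are durable (and ordered ahead of any later writes).
        for addr in addrs:
            self.persistent_memory[addr] = self.volatile_buffer.pop(addr)
        return "flush-ack"

target = RemotePMTarget()
target.rdma_write(0x1000, b"journal record")
assert 0x1000 not in target.persistent_memory  # local completion != durability
target.flush([0x1000])                         # explicit commit point
assert 0x1000 in target.persistent_memory
```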
The hottest topics for storage and infrastructure professionals
The Enterprise Storage channel has the most up-to-date, relevant content for storage and infrastructure professionals. As data centers evolve with big data, cloud computing and virtualization, organizations are going to need to know how to make their storage more efficient. Join this channel to find out how you can use the most current technology to satisfy your business and storage needs.
