
Achieving Green Through Green IT

Environmentalism has traditionally been viewed primarily as a cultural and conservationist movement. Today, the Green IT movement is changing the face of environmentalism forever. Concepts such as energy conservation, reduced carbon emissions, PC and server virtualization, and alternative energy sources are all having a meaningful impact on businesses throughout the world. Most importantly, these Green IT practices are saving businesses money.
Recorded: Feb 24 2009 49 mins
Presented by
Charles Weaver, President of MSPAlliance

  • Channel profile
  • Everything You Wanted to Know...But Were Too Proud to Ask - The Memory Pod May 16 2019 5:00 pm UTC 75 mins
    Alan Bumgarner, Intel; Alex McDonald, NetApp; John Kim, Mellanox
    Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (the memory and CPU that crunch on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.

    We build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But currently fast memory has no persistence at all; it’s only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.

    Memory and storage differ in other ways. For example, we load from memory to registers on the CPU, perform operations there, and then store the results back to memory by using byte addresses. This load/store technology is different from storage, where we tend to move data back and forth between memory and storage in large blocks, by using an API (application programming interface).

    New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, though much faster than traditional disks or even Flash-based SSDs, but we address them in bytes, as we do memory like DRAM, though more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. (A short illustrative sketch follows this entry.) In this webcast, SNIA experts will discuss:

    •Traditional uses of storage and memory as a cache
    •How can we build and use systems based on PM?
    •What would a system with storage, persistent memory and DRAM look like?
    •Do we need a new programming model to take advantage of PM?
    •Interesting use cases for systems equipped with PM
    •How we might take better advantage of this new technology
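    A minimal sketch (not part of the webcast abstract) of the block-I/O vs. load/store contrast described above, in Python. It uses an ordinary file, so it only emulates persistent memory; on a DAX-capable filesystem backed by real PM, the mmap'd bytes would map PM directly. The file name is hypothetical.

```python
# Block-style I/O vs. byte-addressable (load/store-style) access.
import mmap
import os

PATH = "demo.dat"   # hypothetical file standing in for a PM region
BLOCK = 4096

# Block-style: move a whole block through an API, modify it, write it back.
with open(PATH, "wb") as f:
    f.write(b"\x00" * BLOCK)
with open(PATH, "r+b") as f:
    buf = bytearray(f.read(BLOCK))   # read the whole block into memory
    buf[42] = 0xFF                   # change a single byte
    f.seek(0)
    f.write(buf)                     # write the whole block back
    os.fsync(f.fileno())             # ask the storage stack to make it durable

# Load/store-style: address individual bytes in place.
with open(PATH, "r+b") as f:
    with mmap.mmap(f.fileno(), BLOCK) as m:
        m[43] = 0x7F                 # a single-byte "store"
        m.flush()                    # msync here; with real PM a CPU cache
                                     # flush/fence would play this role
```

    The key difference is the unit of access: a whole block round-tripped through an API versus a single byte stored in place.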
  • Protocol Analysis 201 for High-Speed Fibre Channel Fabrics Recorded: Apr 11 2019 63 mins
    Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology
    In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics,” experts covered the basics of protocol analysis tools and how to incorporate them into a “best practices” approach to SAN problem solving.
    Our experts return for this 201 course, which provides a deeper dive into how to interpret the output and results from protocol analyzers. We will also share insight into signal jammers and how to use them to correlate error conditions and formulate real-time solutions.

    Root cause analysis requirements now encompass all layers of the fabric architecture, and new storage protocols that usurp the traditional network stack (i.e. FCoE, iWARP, NVMe over Fabrics, etc.) complicate analysis, so a well-constructed “collage” of best practices and effective and efficient analysis tools must be developed. In addition, in-depth knowledge of how to decipher the analytical results and then determine potential solutions is critical.

    Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
    •Inter-switch links (ISLs) – How to measure and minimize fabric congestion
    •Post-capture analysis – Graphing, Trace reading, Performance metrics
    •Benefits of purposeful error injection
    •More Layer 2-3 and translation layers debug
    •Link Services and Extended Link Services - LRR (Link Reset Responses)

    You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
  • Transactional Models and their Storage Requirements Recorded: Apr 9 2019 58 mins
    Alex McDonald, Vice-Chair SNIA Europe, and Office of the CTO, NetApp; Paul Talbut, SNIA Europe General Manager
    We’re all accustomed to transferring money from one bank account to another; a credit to the payer becomes a debit to the payee. But that model uses a specific set of sophisticated techniques to accomplish what appears to be a simple transaction. We’re also aware of how today we can order goods online, or reserve an airline seat over the Internet. Or even simpler, we can update a photograph on Facebook. Can these applications use the same models, or are new techniques required?

    One of the more important concepts in storage is the notion of transactions, which are used in databases, financials, and other mission critical workloads. However, in the age of cloud and distributed systems, we need to update our thinking about what constitutes a transaction. We need to understand how new theories and techniques allow us to undertake transactional work in the face of unreliable and physically dispersed systems. It’s a topic full of interesting concepts (and lots of acronyms!). In this webcast, we’ll provide a brief tour of traditional transactional systems and their use of storage, we’ll explain new application techniques and transaction models, and we’ll discuss what storage systems need to look like to support these new advances.

    And yes, we’ll explain all the acronyms and nomenclature too.

    You will learn:

    • A brief history of transactional systems from banking to Facebook
    • How the Internet and distributed systems have changed how we view transactions
    • An explanation of the terminology, from ACID to CAP and beyond
    • How applications, networks & particularly storage have changed to meet these demands
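    As a companion to the abstract above, here is a minimal sketch of the classic transfer transaction it mentions, using SQLite in Python. Table and account names are hypothetical; it only illustrates the ACID property that both updates commit together or not at all.

```python
# A classic ACID transfer: both updates commit together or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(db, payer, payee, amount):
    """Move funds atomically between two accounts."""
    try:
        with db:  # opens a transaction; commits on success, rolls back on error
            db.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                       (amount, payer))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                       (amount, payee))
            # A real system would also check business rules (e.g. overdrafts)
            # here and raise to force a rollback.
    except sqlite3.Error:
        pass  # the 'with' block has already rolled back both updates

transfer(conn, "alice", "bob", 25)
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# {'alice': 75, 'bob': 25}
```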
  • FICON 201 Recorded: Feb 20 2019 54 mins
    Patty Driever, IBM; Howard Johnson, Broadcom; Joe Kimpler, ATTO Technologies
    FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilize Fibre Channel as the underlying transport.

    The FCIA FICON 101 webcast (on-demand at http://bit.ly/FICON101) described some of the key characteristics of the mainframe and how FICON satisfies the demands placed on mainframes for reliable and efficient access to data. FCIA experts gave a brief introduction into the layers of architecture (system/device and link) that the FICON protocol bridges. Using the FICON 101 session as a springboard, our experts return for FICON 201 where they will delve deeper into the architectural flow of FICON and how it leverages Fibre Channel to be an optimal mainframe transport.

    Join this live FCIA webcast where you’ll learn:

    - How FICON (FC-SB-x) maps onto the Fibre Channel FC-2 layer
    - The evolution of the FICON protocol optimizations
    - How FICON adapts to new technologies
  • Why Composable Infrastructure? Recorded: Feb 13 2019 60 mins
    Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
    Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is not as predictable, nor as static, as it has been in traditional data centers. These cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources dynamically and automatically flex as needs change. To accomplish this, many in the industry espouse composable infrastructure capabilities, which rely on heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is finer-grained sets of resources that are independently scalable and can be brought together as required. (A rough composition-request sketch follows this entry.) In this webcast, SNIA experts will discuss:

    •What prompted the development of composable infrastructure?
    •What are the solutions?
    •What is composable infrastructure?
    •Enabling technologies (not just what’s here, but what’s needed…)
    •Status of composable infrastructure standards/products
    •What’s on the horizon – 2 years? 5 years?
    •What it all means

    After you watch the webcast, check out the Q&A blog bit.ly/2EOcAy8
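    The composition-request sketch referenced above, in Python. It shows the general shape of a Redfish-style compose operation (discover resource blocks, then request a new system that links to them); the host, credentials, endpoint paths and payload fields are assumptions for illustration and should be checked against the DMTF Redfish composability documents.

```python
# Shape of a Redfish-style composition flow: discover resource blocks,
# then ask the manager to compose a system from chosen blocks.
# Host, credentials, paths and payload fields are illustrative assumptions.
import requests

BASE = "https://pod-manager.example.com"   # hypothetical composition manager
AUTH = ("admin", "password")               # hypothetical credentials

# 1. Discover the pools of disaggregated resources the manager advertises.
blocks = requests.get(f"{BASE}/redfish/v1/CompositionService/ResourceBlocks",
                      auth=AUTH, verify=False).json()
print([m["@odata.id"] for m in blocks.get("Members", [])])

# 2. Request a new composed system built from specific compute/storage blocks.
payload = {
    "Name": "composed-node-01",
    "Links": {
        "ResourceBlocks": [
            {"@odata.id": "/redfish/v1/CompositionService/ResourceBlocks/Compute1"},
            {"@odata.id": "/redfish/v1/CompositionService/ResourceBlocks/Storage1"},
        ]
    },
}
resp = requests.post(f"{BASE}/redfish/v1/Systems", json=payload,
                     auth=AUTH, verify=False)
print(resp.status_code)
```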
  • Data Centre Design in the Era of Multi-Cloud: IT Transformation Drivers Recorded: Jan 24 2019 38 mins
    Simon Ratcliffe, Principal Consultant, Ensono
    IT Transformation projects are usually driven by the need to reduce complexity, improve agility, simplify systems, contain costs, manage ever-growing data and provide more efficient operational management. Arguably, for seasoned IT professionals, there is nothing new about the drivers for transformational change; it’s the velocity and scale of transformation today that’s the big challenge.

    Today, to effectively accelerate business innovation, successful IT leaders are building infrastructure that focuses on automation and flexibility, supporting agile application development and helping deliver world-class customer experience. Of course, IT teams are still under pressure to deliver legacy, mission-critical applications, but they also need to support a seemingly constant flow of emerging business opportunities. They’re also tasked with lowering costs and reducing Capex while helping to drive revenue growth. That’s a lot of drivers, and this complex juggling act often requires modernising infrastructure. An almost inevitable result of this is that the mix of platforms they adopt will include public cloud.

    So, does that signal the end of the corporate data centre as we know it? Well, as is so often the case, the answer is yes and no. ‘Yes’ because there is no doubt that the complexity and cost of building and managing on-premise infrastructures is becoming increasingly unsustainable for many businesses. And ‘no’ because business continuity and stability of legacy applications are still, quite rightly, primary drivers today.
  • How To Maintain Control Of Multi-Data Center and Hybrid Environments Recorded: Jan 23 2019 56 mins
    David Cuthbertson, CEO, Square Mile Systems
    Management and control of any distributed IT infrastructure is becoming more difficult as the options for hosting computing resources multiply.

    The benefits of on-premise, co-location, cloud and managed services continue to evolve, though they all still have to deliver reliable and secure computing services. Governance and control requirements continue to grow, with the processes and systems that IT teams use coming under ever-closer scrutiny.

    C-level executives don’t want to keep hearing that their organizations (or outsource partners) struggle to know how many servers they have, what they do and the risks they currently live with in the new reality of data breaches, insider attacks and increasing systems complexity.
  • Edge Computing: Five Use Cases for the Here and Now Recorded: Jan 23 2019 46 mins
    Jim Davis, CEO and Principal Analyst, Edge Research Group
    Edge computing has the potential to be a huge area of growth for datacenter, cloud and other vendors. There are many flashy scenarios for the use of edge computing, including autonomous transportation and smart cities. But there are opportunities to target today that have a better near-term payoff. Successful services in the market will need to address these opportunities as part of an ecosystem solving the needs of application developers.

    Attendees will gain insight into:

    - Use cases for edge computing based on what application developers need – now
    - The geography of the edge computing opportunity
    - Challenges for adoption of edge computing services
    - How the competitive landscape is evolving, and how an ecosystem approach to market development is key to deriving value from edge computing services
  • Building a Case for Software-Defined Data Centers: Challenges and Solutions Recorded: Jan 22 2019 63 mins
    Jeanne Morain, Scott Goessling, Dave Montgomery
    When it comes to your SDDC, there are many moving parts, new technologies, and vendors to take into consideration. From software-defined networks and storage to compute, colocation, data center infrastructure, on-prem and cloud, the data center landscape has changed forever.

    Tune into this live panel discussion with IT experts as they discuss what the future holds for compute, storage and network services in a software-defined data center, and what that means for vendors, data center managers, and colocation providers alike.

    Moderator: Jeanne Morain, iSpeak Cloud
    Panelists: Scott Goessling, COO/CTO, Burstorm and Dave Montgomery, Marketing Director - Platforms Business Unit, Western Digital
  • What NVMe™/TCP Means for Networked Storage Recorded: Jan 22 2019 63 mins
    Sagi Grimberg, Lightbits; J Metz, Cisco; Tom Reu, Chelsio
    In the storage world, NVMe™ is arguably the hottest thing going right now. Go to any storage conference – vendor-specific or vendor-neutral – and you’ll see NVMe as the latest and greatest innovation. It stands to reason, then, that when you want to run NVMe over a network, you need to understand NVMe over Fabrics (NVMe-oF). (A small byte-level framing sketch follows this entry.)

    TCP – the long-standing mainstay of networking – is the newest transport technology to be approved by the NVM Express organization. This can mean really good things for storage and storage networking – but what are the tradeoffs?

    In this webinar, the lead author of the NVMe/TCP specification, Sagi Grimberg, and J Metz, member of the SNIA and NVMe Boards of Directors, will discuss:
    •What is NVMe/TCP
    •How NVMe/TCP works
    •What are the trade-offs?
    •What should network administrators know?
    •What kind of expectations are realistic?
    •What technologies can make NVMe/TCP work better?
    •And more…

    After the webcast, check out the Q&A blog http://sniaesfblog.org/author-of-nvme-tcp-spec-answers-your-questions/
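    The framing sketch referenced above, not taken from the webcast: NVMe/TCP carries NVMe capsules as PDUs over an ordinary TCP byte stream, and each PDU begins with an 8-byte common header. The field layout below reflects a reading of the public NVMe/TCP specification and the example values are illustrative; verify both against the spec before relying on them.

```python
# Pack an NVMe/TCP-style 8-byte common PDU header (assumed layout).
import struct

def common_header(pdu_type, flags, hlen, pdo, plen):
    """Type, flags, header length, data offset, total PDU length (little-endian)."""
    return struct.pack("<BBBBI", pdu_type, flags, hlen, pdo, plen)

# e.g. a header sized for an Initialize Connection Request-style PDU
hdr = common_header(pdu_type=0x00, flags=0, hlen=128, pdo=0, plen=128)
print(hdr.hex())   # 0000800080000000
```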
  • Q4 2018 Community Update: Data Privacy & Information Management in 2019 Recorded: Dec 18 2018 47 mins
    Jill Reber, CEO, Primitive Logic and Kelly Harris, Senior Content Manager, BrightTALK
    Discover what's trending in the Enterprise Architecture community on BrightTALK and how you can leverage these insights to drive growth for your company. Learn which topics and technologies are currently top of mind for Data Privacy and Information Management professionals and decision makers.

    Tune in with Jill Reber, CEO of Primitive Logic and Kelly Harris, Senior Content Manager for EA at BrightTALK, to discover the latest trends in data privacy, the reasons behind them and what to look out for in Q1 2019 and beyond.

    - Top trending topics in Q4 2018 and why, including new GDPR and data privacy regulations
    - Key events in the community
    - Content that data privacy and information management professionals care about
    - What's coming up in Q1 2019

    Audience members are encouraged to ask questions during the Live Q&A.
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Dec 4 2018 39 mins
    Daniel Sazbon, SNIA Europe Chair, IBM; Alex McDonald, SNIA Europe Vice Chair, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish, including the new functionality added in version 1.0.6 released in March 2018.
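    A minimal sketch (not part of the session) of what "easy-to-use RESTful methods and lightweight JSON" means in practice: browsing a Swordfish service is plain HTTP GETs returning JSON. The host, credentials and collection path below are assumptions; consult the SNIA Swordfish specification and mockups for the paths a given implementation exposes.

```python
# Browsing a Swordfish service with plain REST/JSON (illustrative paths).
import requests

BASE = "https://swordfish.example.com"   # hypothetical management endpoint
AUTH = ("admin", "password")             # hypothetical credentials

root = requests.get(f"{BASE}/redfish/v1/", auth=AUTH, verify=False).json()
print(root.get("Name"))                  # service root, shared with Redfish

services = requests.get(f"{BASE}/redfish/v1/StorageServices",
                        auth=AUTH, verify=False).json()
for member in services.get("Members", []):
    print(member["@odata.id"])           # one entry per storage service
```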
  • Extending RDMA for Persistent Memory over Fabrics Recorded: Oct 25 2018 60 mins
    Tony Hurson, Intel; Rob Davis, Mellanox; John Kim, Mellanox
    For datacenter applications requiring low-latency access to persistent storage, byte-addressable persistent memory (PM) technologies like 3D XPoint and MRAM are attractive solutions. Network-based access to PM, labeled here PM over Fabrics (PMoF), is driven by data scalability and/or availability requirements. Remote Direct Memory Access (RDMA) network protocols are a good match for PMoF, allowing direct RDMA Write of data to remote PM. However, the completion of an RDMA Write at the sending node offers no guarantee that data has reached persistence at the target. This webcast will outline extensions to RDMA protocols that confirm such persistence and additionally can order successive writes to different memories within the target system.

    The primary target audience is developers of low-latency and/or high-availability datacenter storage applications. The presentation will also be of broader interest to datacenter developers, administrators and users.

    After you watch, check out our Q&A blog from the webcast: http://bit.ly/2DFE7SL
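    A purely conceptual sketch of the persistence-ordering problem described above, using hypothetical stand-in methods rather than a real RDMA verbs API: a plain RDMA Write completion does not prove the data is durable at the target, while a flush-style extension returns a completion that does.

```python
# Hypothetical stand-ins only (not a real RDMA API).
from dataclasses import dataclass, field

@dataclass
class FakeTarget:
    """Stands in for the remote node: volatile buffers in front of PM."""
    volatile: dict = field(default_factory=dict)
    persistent: dict = field(default_factory=dict)

    def rdma_write(self, addr, data):
        # Data has arrived, but may sit in a volatile buffer or cache.
        self.volatile[addr] = data

    def rdma_flush(self, addr):
        # The proposed extension: push the data to persistence, then complete.
        self.persistent[addr] = self.volatile[addr]
        return "flush-completion"

target = FakeTarget()
target.rdma_write(0x1000, b"record-A")
assert 0x1000 not in target.persistent        # write completion != persistence
ack = target.rdma_flush(0x1000)
assert ack == "flush-completion" and target.persistent[0x1000] == b"record-A"
```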
  • Centralized vs. Distributed Storage Recorded: Sep 11 2018 63 mins
    John Kim, Mellanox; Alex McDonald, NetApp; J Metz, Cisco
    In the history of enterprise storage there has been a trend to move from local storage to centralized, networked storage. Customers found that networked storage provided higher utilization, centralized and hence cheaper management, easier failover, and simplified data protection, which has driven the move to FC-SAN, iSCSI, NAS and object storage.

    Recently, distributed storage has become more popular, where storage lives in multiple locations but can still be shared. Advantages of distributed storage include the ability to scale up performance and capacity simultaneously and--in the hyperconverged use case--to use each node (server) for both compute and storage. Attend this webcast to learn about:
    •Pros and cons of centralized vs. distributed storage
    •Typical use cases for centralized and distributed storage
    •How distributed works for SAN, NAS, parallel file systems, and object storage
    •How hyperconverged has introduced a new way of consuming storage

    After the webcast, please check out our Q&A blog http://bit.ly/2xSajxJ
  • Fibre Channel Interoperability Recorded: Aug 23 2018 68 mins
    Barry Maskas, HPE; Tim Sheehan, University of New Hampshire Interoperability Lab; David Rodgers, Teledyne LeCroy
    Interoperability is a primary basis for the predictable behavior of a Fibre Channel (FC) SAN. FC interoperability implies standards conformance by definition. Interoperability also implies exchanges between a range of products, or similar products from one or more different suppliers, or even between past and future revisions of the same products. Interoperability may be developed as a special measure between two products, while excluding the rest, and still be standards conformant. When a supplier is forced to adapt its system to a system that is not based on standards, it is not interoperability but rather, only compatibility.

    Every FC hardware and software supplier publishes an interoperability matrix and per product conformance based on having validated conformance, compatibility, and interoperability. There are many dimensions to interoperability, from the physical layer, optics, and cables; to port type and protocol; to server, storage, and switch fabric operating systems versions; standards and feature implementation compatibility; and to use case topologies based on the connectivity protocol (F-port, N-Port, NP-port, E-port, TE-port, D-port).

    In this session we will delve into the many dimensions of FC interoperability, discussing:

    •Standards and conformance
    •Validation of conformance and interoperability
    •FC-NVMe conformance and interoperability
    •Interoperability matrices
    •Multi-generational interoperability
    •Use case examples of interoperability

    After you watch the webcast, check out the FC Interoperability Q&A blog https://fibrechannel.org/a-qa-on-fibre-channel-interoperability/
  • FCoE vs. iSCSI vs. iSER Recorded: Jun 21 2018 62 mins
    J Metz, Cisco; Saqib Jang, Chelsio; Rob Davis, Mellanox; Tim Lustig, Mellanox
    The “Great Storage Debates” webcast series continues, this time on FCoE vs. iSCSI vs. iSER. Like past “Great Storage Debates,” the goal of this presentation is not to have a winner emerge, but rather provide vendor-neutral education on the capabilities and use cases of these technologies so that attendees can become more informed and make educated decisions.

    One of the features of modern data centers is the ubiquitous use of Ethernet. Although many data centers run multiple separate networks (Ethernet and Fibre Channel (FC)), these parallel infrastructures require separate switches, network adapters, management utilities and staff, which may not be cost effective.

    Multiple options for Ethernet-based SANs enable network convergence, including FCoE (Fibre Channel over Ethernet) which allows FC protocols over Ethernet and Internet Small Computer System Interface (iSCSI) for transport of SCSI commands over TCP/IP-Ethernet networks. There are also new Ethernet technologies that reduce the amount of CPU overhead in transferring data from server to client by using Remote Direct Memory Access (RDMA), which is leveraged by iSER (iSCSI Extensions for RDMA) to avoid unnecessary data copying.

    That leads to several questions about FCoE, iSCSI and iSER:

    •If we can run various network storage protocols over Ethernet, what differentiates them?
    •What are the advantages and disadvantages of FCoE, iSCSI and iSER?
    •How are they structured?
    •What software and hardware do they require?
    •How are they implemented, configured and managed?
    •Do they perform differently?
    •What do you need to do to take advantage of them in the data center?
    •What are the best use cases for each?

    Join our SNIA experts as they answer all these questions and more on the next Great Storage Debate.

    After you watch the webcast, check out the Q&A blog from our presenters http://bit.ly/2NyJKUM
  • FICON 101 Recorded: Jun 19 2018 62 mins
    Patty Driever, IBM; Howard Johnson, Broadcom; J Metz, Cisco
    FICON (Fibre Channel Connection) is an upper-level protocol supported by mainframe servers and attached enterprise-class storage controllers that utilize Fibre Channel as the underlying transport. Mainframes are built to provide a robust and resilient IT infrastructure, and FICON is a key element of their ability to meet the increasing demands placed on reliable and efficient access to data. What are some of the key objectives and benefits of the FICON protocol? And what are the characteristics that make FICON relevant in today’s data centers for mission-critical workloads?

    Join us in this live FCIA webcast where you’ll learn:

    • Basic mainframe I/O terminology
    • The characteristics of mainframe I/O and FICON architecture
    • Key features and benefits of FICON

    After you watch the webcast, check out the Q&A blog: https://fibrechannel.org/ficon-webcast-qa/
  • Everything You Wanted To Know...But Were Too Proud To Ask - Storage Controllers Recorded: May 15 2018 48 mins
    Peter Onufryk, Microsemi, Craig Carlson, Cavium, Chad Hintz, Cisco, John Kim, Mellanox, J Metz, Cisco
    Are you a control freak? Have you ever wondered what the difference is between a storage controller, a RAID controller, a PCIe controller, or a metadata controller? What about an NVMe controller? Aren’t they all the same thing?

    In part Aqua of the “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we’re taking the unusual step of focusing on a term that is used constantly but often has different meanings. A controller that manages hardware has very different requirements from one that manages an entire system-wide control plane. From the outside looking in, it may be easy to get confused. You can even have controllers managing other controllers!
    Here we’ll be revisiting some of the pieces we talked about in Part Chartreuse [https://www.brighttalk.com/webcast/663/215131], but with a bit more focus on the variety we have to play with:
    •What do we mean when we say “controller?”
    •How are the systems being managed different?
    •How are controllers used in various storage entities: drives, SSDs, storage networks, software-defined storage
    •How do controller systems work, and what are the trade-offs?
    •How do storage controllers protect against Spectre and Meltdown?
    Join us to learn more about the workhorse behind your favorite storage systems.

    After you watch the webcast, check out the Q&A blog at http://bit.ly/2JgcHlM
  • Fibre Channel Cabling Recorded: Apr 19 2018 44 mins
    Zach Nason, Data Center Systems, Greg McSorley, Amphenol-Highspeed, Mark Jones, Broadcom
    Looking for more cost-effective ways to implement Fibre Channel cabling? Learn why proper cabling is important and how it fits into data center designs (a worked loss-budget example follows this entry). Join this webcast to hear FCIA experts discuss:
    - Cable and connector types, cassettes, patch panels and other cabling products
    - Variables in Fiber Optic and Copper Cables: Reflectance, Insertion Loss, Crosstalk, Speed/Length Limitations and more
    - Different variations of Structured Cabling in an environment with FC
    - Helpful tips when planning and implementing a cabling infrastructure within a SAN

    After you watch the webcast, check out the Q&A blog: http://bit.ly/2KdtEx0
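    The worked loss-budget example referenced above: a toy optical insertion-loss calculation of the kind a cabling plan has to close. All per-element losses and the power budget are assumed, illustrative numbers, not vendor or standards figures.

```python
# Toy insertion-loss budget for an optical Fibre Channel link (assumed values).
FIBER_LOSS_DB_PER_KM = 3.0    # assumed multimode attenuation
CONNECTOR_LOSS_DB = 0.5       # assumed loss per mated connector pair
LINK_BUDGET_DB = 4.0          # assumed transceiver power budget

def link_loss(length_km, connector_pairs):
    """Total estimated insertion loss for the link."""
    return length_km * FIBER_LOSS_DB_PER_KM + connector_pairs * CONNECTOR_LOSS_DB

# 100 m run with patch panels at both ends (4 mated pairs).
loss = link_loss(length_km=0.1, connector_pairs=4)
print(f"estimated loss {loss:.1f} dB, margin {LINK_BUDGET_DB - loss:.1f} dB")
# estimated loss 2.3 dB, margin 1.7 dB
```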
  • Introduction to SNIA Swordfish™ ─ Scalable Storage Management Recorded: Apr 19 2018 62 mins
    Richelle Ahlvers, Broadcom; Don Deel, NetApp
    The SNIA Swordfish™ specification helps to provide a unified approach for the management of storage and servers in hyperscale and cloud infrastructure environments, making it easier for IT administrators to integrate scalable solutions into their data centers. Swordfish builds on the Distributed Management Task Force’s (DMTF’s) Redfish® specification using the same easy-to-use RESTful methods and lightweight JavaScript Object Notation (JSON) formatting. Join this session to receive an overview of Swordfish, including the new functionality added in version 1.0.6 released in March 2018.
Best practices for achieving an efficient data center
With today’s pressure to lower carbon footprints and ever-present cost constraints within organizations, IT departments are increasingly on the front line, expected to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.

This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
- And much more
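A quick worked example of the kind of efficiency arithmetic this channel covers: Power Usage Effectiveness (PUE), the ratio of total facility power to IT equipment power, where values closer to 1.0 are better. The load figures are illustrative assumptions.

```python
# PUE = total facility power / IT equipment power (illustrative numbers).
it_load_kw = 800.0       # assumed IT equipment load
cooling_kw = 320.0       # assumed cooling / thermal management load
power_dist_kw = 80.0     # assumed power distribution and other overheads

pue = (it_load_kw + cooling_kw + power_dist_kw) / it_load_kw
print(f"PUE = {pue:.2f}")   # PUE = 1.50
```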

  • Title: Achieving Green Through Green IT
  • Live at: Feb 24 2009 6:00 pm
  • Presented by: Charles Weaver, President of MSPAlliance