This week on White Space, we look back at the news from the DCD Converged conference in London. We’ve also brought back a special guest - Cole Crawford, CEO of Vapor IO and purveyor of unusual rack arrangements.
We discuss various ways to reuse server heat, and discover that Coca-Cola is apparently using the Internet of Things to develop new flavors of its sugary drink.
Peter looks at the reasons behind the Telecity outage in the UK - but this outage has nothing on the recent data center fire in Azerbaijan, which left almost the entire country without access to the Internet.
Also mentioned: the news that CA Technologies is getting out of the DCIM business, the reinvention of liquid cooling company Iceotope, and the fact that the US government has just discovered another 2,000 data centers it didn’t know it had.
Wireless is now the expected medium of choice for network users. Delivering it successfully can be a challenge, especially with multiple different approaches and architectures available. What is right for your organisation? Cloud? Controller? How is it all secured?
This session will discuss the three main Wi-Fi architecture types and their respective advantages, the wired edge, and how to secure it all. Importantly, we will finish with what to consider when making the right choice for your needs.
IT organizations face rising challenges to protect more data and applications against growing security threats, as they deploy encryption on vastly larger scales and across cloud and hybrid environments. By moving past silo-constrained encryption and deploying encryption centrally as an IT service - uniformly and at scale across the enterprise - your organization can benefit from unmatched coverage, whether you are securing databases, applications, file servers, or storage in the traditional data center, in virtualized environments, or in the cloud, and as data moves between these environments. When complemented by centralized key management, your organization can apply data protection where, when, and how it needs it, according to the unique needs of your business. Join us on November 25th to learn how to unshare your data while sharing the IT services that keep it secure, efficiently and effectively, in the cloud and across your entire infrastructure.
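As a rough illustration of the pattern described above - encryption performed locally, with keys managed centrally - here is a minimal envelope-encryption sketch in Python using the open-source cryptography package. The KeyService class and its method names are hypothetical illustrations of ours, not any vendor's product API:

```python
# Minimal envelope-encryption sketch: a central service holds the
# master key and hands out per-use data keys, so individual silos
# never store long-term key material. KeyService is hypothetical.
from cryptography.fernet import Fernet

class KeyService:
    """Central authority: holds the master key, wraps/unwraps data keys."""
    def __init__(self):
        self._master = Fernet(Fernet.generate_key())

    def issue_data_key(self):
        data_key = Fernet.generate_key()
        # Return the plaintext key for immediate use and a wrapped
        # copy that is safe to store alongside the ciphertext.
        return data_key, self._master.encrypt(data_key)

    def unwrap(self, wrapped_key):
        return self._master.decrypt(wrapped_key)

service = KeyService()

# A database, file server, or cloud workload encrypts locally...
data_key, wrapped = service.issue_data_key()
ciphertext = Fernet(data_key).encrypt(b"customer record")

# ...and any authorized environment can decrypt later by asking the
# central service to unwrap the stored data key.
plaintext = Fernet(service.unwrap(wrapped)).decrypt(ciphertext)
assert plaintext == b"customer record"
```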
This tutorial covers technologies introduced in the influential papers on the Google File System, BigTable, Amazon Dynamo, and Apache Hadoop. In addition, parallel, scale-out, distributed, and P2P approaches are presented, covering Lustre, PVFS, and pNFS alongside several proprietary systems.
The tutorial also covers key features that become essential at large scale, to help you understand and differentiate industry vendors' offerings.
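For a concrete taste of one technique the Dynamo paper popularized, here is a minimal consistent-hashing sketch in Python. The node names and virtual-node count are illustrative choices, not taken from the tutorial:

```python
# Consistent hashing with virtual nodes, as popularized by Amazon's
# Dynamo paper: a key maps to the first node clockwise on a hash ring,
# so adding or removing a node only remaps a small share of keys.
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=64):
        # Each physical node gets `vnodes` points on the ring to
        # smooth out the key distribution.
        self._ring = sorted(
            (_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._points = [point for point, _ in self._ring]

    def node_for(self, key: str) -> str:
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))  # same key -> same node, every time
```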
Although we shall witness many strides in cybersecurity in 2016, there will still be a narrow margin between these and the threats we’re foreseeing. Advancements in existing technologies - both for crimeware and for everyday use - will bring forth new attack scenarios. It is best for the security industry, as well as the public, to be forewarned, to avoid future abuse or any monetary or even lethal consequences.
The virtualization wave is beginning to stall as companies confront application performance problems that can no longer be addressed effectively, even in the short term, by the expensive deployment of silicon storage, brute-force caching, or complex log-structuring schemes. Simply put, hypervisor-based computing has hit the performance wall established decades ago, when the industry shifted from multi-processor parallel computing to unicore/serial-bus server computing.
Join industry analyst Jon Toigo and DataCore in this presentation, where you will learn how your business can benefit from our Adaptive Parallel I/O software by:
- Harnessing the untapped power of today's multi-core processing systems and efficient CPU memory to create a new class of storage servers and hyper-converged systems
- Enabling order-of-magnitude improvements in I/O throughput
- Reducing the cost per I/O significantly
- Increasing the number of virtual machines that an individual server can host without application performance slowdowns
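DataCore's engine itself is proprietary, but the core idea the list above alludes to - keeping I/O requests in flight from many cores at once - can be sketched in a few lines of Python. The file name, chunk size, and worker count below are arbitrary illustrations, not DataCore's implementation:

```python
# Issue reads at independent offsets from a pool of worker threads,
# so multiple cores keep I/O requests in flight simultaneously
# (os.pread is POSIX-only; it reads at an offset with no shared seek).
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per request

def read_chunk(fd, offset):
    return os.pread(fd, CHUNK, offset)

def parallel_read(path, workers=8):
    fd = os.open(path, os.O_RDONLY)
    try:
        size = os.fstat(fd).st_size
        offsets = range(0, size, CHUNK)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # map() preserves order, so the chunks reassemble correctly.
            chunks = list(pool.map(lambda off: read_chunk(fd, off), offsets))
        return b"".join(chunks)
    finally:
        os.close(fd)
```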
As NVM Express becomes the de facto interface standard for Enterprise and Client PCIe-based storage, the NVMe specification is evolving to take on the challenge of maintaining low latency to storage media while scaling out to meet the needs of modern data centers and applications. This talk will explore the coming NVMe Over Fabrics specification and how it enables NVMe to be carried across fabrics - RDMA fabrics such as Ethernet or InfiniBand™ with RDMA, as well as Fibre Channel - to connect to remote NVMe storage devices. Who should attend: engineering and marketing people interested in learning how NVMe Over Fabrics works and the new types of system architectures this protocol enables.
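The authoritative details belong to the NVM Express specification itself, but the central idea - carrying NVMe commands in capsules over a fabric transport instead of over a local PCIe bus - can be modeled conceptually. The toy Python sketch below is our own illustration; its class and field names do not reflect the actual wire format:

```python
# Toy model of the NVMe over Fabrics idea: the host no longer rings a
# local PCIe doorbell; it sends command "capsules" across a fabric
# transport and receives completion capsules back.
from dataclasses import dataclass

@dataclass
class CommandCapsule:
    opcode: str        # e.g. "read"
    namespace_id: int
    lba: int           # starting logical block address
    num_blocks: int

@dataclass
class CompletionCapsule:
    status: int        # 0 = success
    payload: bytes

class RemoteNvmeTarget:
    """Stand-in for an NVMe subsystem reached over the fabric."""
    def __init__(self):
        self.blocks = {}   # lba -> 512-byte block

    def handle(self, c: CommandCapsule) -> CompletionCapsule:
        if c.opcode == "read":
            data = b"".join(self.blocks.get(c.lba + i, b"\x00" * 512)
                            for i in range(c.num_blocks))
            return CompletionCapsule(0, data)
        return CompletionCapsule(1, b"")  # unsupported opcode

class FabricTransport:
    """Stand-in for an RDMA or Fibre Channel transport."""
    def __init__(self, target):
        self.target = target

    def submit(self, capsule: CommandCapsule) -> CompletionCapsule:
        return self.target.handle(capsule)

transport = FabricTransport(RemoteNvmeTarget())
done = transport.submit(CommandCapsule("read", 1, lba=0, num_blocks=2))
assert done.status == 0 and len(done.payload) == 1024
```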
SD-WAN has captured the attention of analysts, press and enterprises worldwide. Promising unrivaled performance, flexibility, visibility and control, this market disruptor stands to transform traditional WANs. But is this attainable while also cutting infrastructure costs by up to 90%?
Hear from Ethan Banks, industry expert and co-founder of Packet Pushers, as he discusses why SD-WANs have moved beyond hype and are taking the industry by storm.
In this live webinar you will learn:
• What an SD-WAN is and the benefits it delivers
• Key feature requirements for SD-WANs
• How to adopt this technology without disrupting the network
• Ways an SD-WAN can reduce or eliminate your dependency on MPLS
• Other market observations from this leading industry expert
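For a flavor of the MPLS-reduction point above, consider how an SD-WAN controller might steer each application onto the cheapest path that still meets its service targets. The sketch below is a simplified illustration; the paths, metrics, and thresholds are invented:

```python
# Hypothetical per-application path selection: prefer inexpensive
# broadband when it meets the app's SLA, fall back to MPLS otherwise.
PATHS = {
    "broadband": {"latency_ms": 35, "loss_pct": 0.4, "cost": 1},
    "lte":       {"latency_ms": 60, "loss_pct": 1.2, "cost": 2},
    "mpls":      {"latency_ms": 20, "loss_pct": 0.1, "cost": 10},
}

SLAS = {
    "voip":   {"latency_ms": 30, "loss_pct": 0.5},
    "backup": {"latency_ms": 200, "loss_pct": 2.0},
}

def choose_path(app):
    sla = SLAS[app]
    eligible = [
        (m["cost"], name) for name, m in PATHS.items()
        if m["latency_ms"] <= sla["latency_ms"]
        and m["loss_pct"] <= sla["loss_pct"]
    ]
    # Cheapest path that satisfies the SLA; MPLS as last resort.
    return min(eligible)[1] if eligible else "mpls"

print(choose_path("voip"))    # -> "mpls" (broadband misses latency SLA)
print(choose_path("backup"))  # -> "broadband" (cheapest eligible path)
```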
When we talk about “Storage” in the context of data centers, it can mean different things to different people. Someone who is developing applications will have a very different perspective than, say, someone who is responsible for managing that data on some form of media. Moreover, someone who is responsible for transporting data from one place to another has their own view that is related to, and yet different from, the previous two.
Add in virtualization and layers of abstraction, from file systems to storage protocols, and things can get very confusing very quickly. Pretty soon people don’t even know the right questions to ask!
How do applications and workloads get their information? What happens when you need more of it? Or faster access to it? Or to move it far away? This webinar will take a step back and look at “storage” from a “big picture” perspective, examining the whole and attempting to fill in some of the blanks for you. We’ll be talking about:
- Applications and RAM
- Servers and Disks
- Networks and Storage Types
- Storage and Distances
- Tools of the Trade/Offs
The goal of the webinar is not to make specific recommendations, but to equip viewers with information that helps them ask the relevant questions, and to give them keener insight into the consequences of their storage choices.
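As a taste of the “Storage and Distances” theme, here is a rough comparison of commonly cited access latencies. The figures below are order-of-magnitude illustrations, not measurements from the webinar:

```python
# Rough, commonly cited order-of-magnitude access latencies. Exact
# numbers vary widely with hardware and network, so treat these as
# ballpark illustrations only.
LATENCY_US = {
    "RAM access":                   0.1,
    "NVMe SSD read":                100.0,
    "Same-site network round trip": 500.0,
    "Spinning disk seek":           10_000.0,
    "Cross-country round trip":     50_000.0,
}

for tier, us in sorted(LATENCY_US.items(), key=lambda kv: kv[1]):
    print(f"{tier:<30} ~{us:>12,.1f} us  ({us / 0.1:>9,.0f}x RAM)")
```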
This presentation will cover the need for increasing bandwidth in today's enterprise data centers and building networks. As 10Gb/s speeds become commonplace, 40 and 100Gb/s networks are appearing in high-speed data center backbones. While single-mode fiber has had the advantage in longer-distance links, multimode fiber has held an advantage at short distances thanks to low-cost Vertical Cavity Surface Emitting Laser (VCSEL) based technology. The presentation will discuss work in the fiber industry to develop a next-generation multimode fiber that will support multiple wavelengths while maintaining the low-cost advantage of VCSEL-based technology. It will discuss advances in the transceiver industry that can take advantage of a next-generation shortwave wavelength division multiplexing (SWDM) multimode fiber. Finally, it will cover the latest work in standards organizations to define this next-generation fiber.
Learn about Wide Band Multimode Fiber (WBMMF) - the application drivers, multiplexing technology, parallel fiber transmission, and Shortwave Wavelength Division Multiplexing (SWDM). This presentation will also review the cabling evolution roadmap and the WBMMF specification framework.
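The arithmetic behind SWDM's appeal is straightforward: four shortwave wavelengths, each carrying a VCSEL-generated signal, share one duplex fiber pair. A quick back-of-the-envelope check (the lane rate shown is the 100G-SWDM4 case):

```python
# SWDM back-of-the-envelope: lanes multiplexed as wavelengths on one
# duplex fiber pair versus lanes spread across parallel fibers.
wavelengths = 4          # e.g. SWDM4: 850/880/910/940 nm
rate_per_lane_gbps = 25  # per-wavelength VCSEL signaling rate

duplex_swdm = wavelengths * rate_per_lane_gbps
print(f"SWDM over 2 fibers: {duplex_swdm} Gb/s")   # 100 Gb/s

# The same 100 Gb/s with single-wavelength parallel optics (4 lanes
# in each direction) needs 8 fibers instead of 2.
parallel_fibers = 2 * wavelengths
print(f"Parallel optics needs {parallel_fibers} fibers")
```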
Avoid EMI while achieving longer distances and higher performance by utilizing fiber optics for EtherNet/IP networks across manufacturing zones and devices. In this session, you’ll hear about physical layer best practices and understand proper fiber media selection for each physical layer in the EtherNet/IP network. We will review design recommendations and considerations to help you successfully deploy a robust and secure plant-wide implementation of EtherNet/IP.
Michael German, Global Enterprise Technical Director, CommScope
Managing the physical connectivity layer in any data center is a challenge. In today’s fiber-rich, highly complex, ultra-high-density data center environments, it can seem virtually impossible. That is why more data centers are relying on automated infrastructure management (AIM) solutions to track and manage their growing networks.
Join intelligent connectivity expert Michael German of CommScope for a look at the strategies behind the deployment of AIM systems in data centers.
TIA-1179 specifies requirements for telecommunications infrastructure in healthcare facilities. In this webinar, learn which factors are driving this standard, which applications are affected by it, and how it differs from other current cabling infrastructure standards.
With today’s high-data-rate installations, loss budgets are very low and correct testing is paramount. Results from a global survey conducted by Fluke Networks show that more than 90% of contractors surveyed reported at least one problem, and more than 50% reported six problems, on links installed over a 30-day period! The time and cost of retesting installations is significant and can have a large impact on warranty coverage.
For new high-speed optical networks supporting 40Gbps and 100Gbps Ethernet over multimode fiber (MMF) that use MPO-style connector systems, it is critical to have accurate data indicating the performance of the permanent links deployed in the network. It is also important to ensure that the links deployed by end users can meet the manufacturer's warranty requirements.
In this presentation, various test methods and best practices, and their impact on measurement accuracy, repeatability, and reproducibility, are discussed. Use cases for testing MPO-based cable plant supporting higher-speed applications are developed with the support of new and recently introduced MPO test sets and reference MPO test cords.
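To see why modern loss budgets leave so little room for error, consider a simple channel-loss estimate built from commonly published TIA-568 maximum component allowances. The values below are typical published maxima; real designs should follow the applicable standard and vendor specifications:

```python
# Illustrative multimode channel loss budget at 850 nm, using
# commonly published maximum component allowances.
FIBER_DB_PER_KM = 3.5    # OM3/OM4 max attenuation at 850 nm
CONNECTOR_PAIR_DB = 0.75 # max loss per mated connector pair
SPLICE_DB = 0.3          # max loss per splice

def channel_loss(length_km, connector_pairs, splices):
    return (length_km * FIBER_DB_PER_KM
            + connector_pairs * CONNECTOR_PAIR_DB
            + splices * SPLICE_DB)

# A short 100 m link with two connector pairs and no splices:
loss = channel_loss(0.1, connector_pairs=2, splices=0)
print(f"Estimated loss: {loss:.2f} dB")  # ~1.85 dB

# 40GBASE-SR4 allows only about 1.5 dB of total channel loss on OM4,
# so even a second connector pair can consume the entire budget.
```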
As the transmission speeds needed to handle the data center's growing daily network traffic increase, the migration from existing systems to 40G/100G requires a new optical fiber paradigm. For the first time, parallel optics is required to deliver the necessary speeds in the network. To use the existing installed infrastructure efficiently and maintain the necessary fiber alignment, optical fiber cables constructed of fiber ribbons, together with their associated connectivity, provide the most efficient and cost-effective solution to meet the needs of this network protocol.
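To make the fiber-count implications concrete, here is a quick tally for the standard parallel-optics variants. The lane counts and rates reflect the relevant IEEE 802.3 specifications; the script itself is just an illustration:

```python
# Fibers required per link for standard parallel-optics variants:
# each lane needs one transmit and one receive fiber.
VARIANTS = {
    "40GBASE-SR4":   {"lanes": 4,  "gbps_per_lane": 10},
    "100GBASE-SR10": {"lanes": 10, "gbps_per_lane": 10},
    "100GBASE-SR4":  {"lanes": 4,  "gbps_per_lane": 25},
}

for name, v in VARIANTS.items():
    fibers = 2 * v["lanes"]  # Tx + Rx
    total = v["lanes"] * v["gbps_per_lane"]
    print(f"{name}: {total} Gb/s over {fibers} fibers via MPO connectors")
```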