With funding for higher education decreasing every year, universities are turning to their data centers to cut operating expenses. Find out how Utah State University redesigned its data center with energy efficiency in mind, both to lower the cost of running its infrastructure and to understand its role in the environment.
This webinar will cover topics including:
- Understanding the fundamental problems with a data center redesign
- Developing a solution for a hot aisle/cold aisle design
- The efficiency results from USU's redesign
- How the USU data center has impacted the educational environment
This week on White Space, we look back at the news from DCD Converged conference in London. We’ve also brought back a special guest - Cole Crawford, CEO of Vapor IO and purveyor of unusual rack arrangements.
We discuss various ways to reuse server heat and discover that Coca-Cola is apparently using the Internet of Things to develop new flavors of the sugary drink.
Peter looks at the reasons behind the Telecity outage in the UK - but this outage has nothing on the recent data center fire in Azerbaijan, which left almost the entire country without access to the Internet.
Also mentioned: the news that CA Technologies is getting out of the DCIM business, the reinvention of liquid cooling company Iceotope, and the fact that the US government has just discovered another 2,000 data centers it didn't know it had.
Wireless is now the expected medium of choice for network users. Delivering it successfully can be a challenge, especially with the multiple approaches and architectures available. What is right for your organisation? Cloud? Controller? And how is it all secured?
This session will discuss the three main Wi-Fi architecture types and their respective advantages, the wired edge, and how to secure it all. Importantly, we will finish with what to consider when making the right choice for your needs.
IT organizations face rising challenges: they must protect more data and applications against growing security threats while deploying encryption at vastly larger scale and across cloud and hybrid environments. By moving past silo-constrained encryption and deploying encryption as an IT service, centrally, uniformly, and at scale across the enterprise, your organization can benefit from unmatched coverage, whether you are securing databases, applications, file servers, or storage in the traditional data center, in virtualized environments, or in the cloud, and as data moves between these environments. When complemented by centralized key management, your organization can apply data protection where, when, and how it needs it, according to the unique needs of your business. Join us on November 25th to learn how to unshare your data while sharing the IT services that keep it secure, efficiently and effectively, in the cloud and across your entire infrastructure.
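The centralized key management idea above can be sketched in a few lines: a master key held only by the central key manager, with per-service data keys derived on demand. This is a toy illustration using HMAC-SHA256 as a key-derivation function; the labels and flow are assumptions for the example, not any vendor's API.

```python
import hmac
import hashlib

def derive_data_key(master_key: bytes, service_label: bytes) -> bytes:
    """Derive a per-service data-encryption key from the central master key.

    HMAC-SHA256 keyed with the master key acts as a simple KDF, so each
    silo (database, file server, cloud bucket) gets its own key without
    the master key ever leaving the central key manager.
    """
    return hmac.new(master_key, service_label, hashlib.sha256).digest()

# Placeholder master key; in practice this never leaves the key manager.
master = b"\x00" * 32

db_key = derive_data_key(master, b"database/orders")
fs_key = derive_data_key(master, b"fileserver/hr-share")

# Different services get different keys, deterministically.
assert db_key != fs_key
assert db_key == derive_data_key(master, b"database/orders")
```

Because derivation is deterministic, the key manager can re-issue any service's key on request instead of storing thousands of keys per silo.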
This tutorial covers technologies introduced in the well-known papers on the Google File System, BigTable, Amazon Dynamo, and Apache Hadoop. In addition, parallel, scale-out, distributed, and P2P approaches such as Lustre, PVFS, and pNFS, along with several proprietary ones, are presented as well.
The tutorial also introduces some key features essential at large scale, to help you understand and differentiate industry vendors' offerings.
Although we shall witness many strides in cybersecurity in 2016, there will still be only a narrow margin between them and the threats we foresee. Advancements in existing technologies, both for crimeware and for everyday use, will bring forth new attack scenarios. It is best for the security industry, as well as the public, to be forewarned, so as to avoid future abuse and monetary or even lethal consequences.
The virtualization wave is beginning to stall as companies confront application performance problems that can no longer be addressed effectively, even in the short term, by the expensive deployment of silicon storage, brute force caching, or complex log structuring schemes. Simply put, hypervisor-based computing has hit the performance wall established decades ago when the industry shifted from multi-processor parallel computing to unicore/serial bus server computing.
Join industry analyst Jon Toigo and DataCore in this presentation where you will learn how your business can benefit from our Adaptive Parallel I/O software by:
- Harnessing the untapped power of today's multi-core processing systems and efficient CPU memory to create a new class of storage servers and hyper-converged systems
- Enabling order of magnitude improvements in I/O throughput
- Reducing the cost per I/O significantly
- Increasing the number of virtual machines that an individual server can host without application performance slowdowns
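The core idea behind parallel I/O, keeping many requests in flight across cores instead of serializing them on a single queue, can be sketched with the standard library. This is not DataCore's implementation, just a minimal illustration of the concept using threads:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def parallel_read(path: str, n_workers: int = 4) -> bytes:
    """Read a file in equal-sized chunks on several threads and reassemble it.

    Each worker opens its own handle and reads a disjoint byte range,
    so multiple I/O requests are outstanding at once.
    """
    size = os.path.getsize(path)
    chunk = (size + n_workers - 1) // n_workers  # ceil(size / n_workers)

    def read_chunk(offset: int) -> bytes:
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(chunk)

    offsets = range(0, size, chunk)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves order, so joining reassembles the file.
        return b"".join(pool.map(read_chunk, offsets))

# Demo: round-trip a file through the parallel reader.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"x" * 1_000_003)
data = parallel_read(tmp.name)
assert data == b"x" * 1_000_003
os.unlink(tmp.name)
```

Whether this actually speeds anything up depends on the device and OS; the point is only to show the request-parallelism pattern the webinar discusses at the storage-stack level.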
As NVM Express becomes the de facto interface standard for Enterprise and Client PCIe-based storage, the NVMe specification is evolving to take on the challenge of maintaining low latency to storage media while scaling out to meet the needs of modern data centers and applications. This talk will explore the coming NVMe Over Fabrics specification, and how it enables NVMe to be used across RDMA fabrics (e.g., Ethernet or InfiniBand™ with RDMA, Fibre Channel, etc.) and connect to other NVMe storage devices. Who should attend: engineering and marketing people interested in learning about how NVMe Over Fabrics works and the new types of system architectures enabled by this protocol.
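For a concrete sense of what "NVMe across a fabric" looks like to a host, here is the discovery-and-connect flow as it appears with Linux's nvme-cli over an RDMA transport. The IP address, port, and NQN below are placeholders, and this is an illustrative ops fragment rather than a runnable script:

```shell
# Load the NVMe-oF RDMA initiator module on the host
modprobe nvme-rdma

# Ask the target's discovery controller what subsystems it advertises
# (192.0.2.10 and port 4420 are placeholder values)
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to an advertised subsystem by its NQN (NQN is illustrative)
nvme connect -t rdma -n nqn.2016-06.io.example:subsys1 -a 192.0.2.10 -s 4420

# The remote namespace now appears as a local NVMe block device
nvme list
```

The notable property is the last step: once connected, the fabric-attached namespace is indistinguishable to applications from a locally attached NVMe drive.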
SD-WAN has captured the attention of analysts, press and enterprises worldwide. Promising unrivaled performance, flexibility, visibility and control, this market disruptor will revolutionize traditional WANs. Is this attainable while saving up to 90% in infrastructure costs?
Hear from Ethan Banks, industry expert and co-founder of Packet Pushers, as he discusses why SD-WANs have moved beyond hype and are taking the industry by storm.
On this live webinar you will learn:
•What an SD-WAN is and its benefits
•Key feature requirements for SD-WANs
•How to adopt this technology without disturbing the network
•Ways an SD-WAN can reduce or eliminate your dependency on MPLS
•Other market observations from this leading industry expert
When we talk about “Storage” in the context of data centers, it can mean different things to different people. Someone who is developing applications will have a very different perspective than, say, someone who is responsible for managing that data on some form of media. Moreover, someone who is responsible for transporting data from one place to another has their own view that is related to, and yet different from, the previous two.
Add in virtualization and layers of abstraction, from file systems to storage protocols, and things can get very confusing very quickly. Pretty soon people don’t even know the right questions to ask!
How do applications and workloads get their information? What happens when you need more of it? Or faster access to it? Or to move it far away? This webinar will take a step back and look at “storage” with a “big picture” approach, examining the whole and attempting to fill in some of the blanks for you. We’ll be talking about:
- Applications and RAM
- Servers and Disks
- Networks and Storage Types
- Storage and Distances
- Tools of the Trade/Offs
The goal of the webinar is not to make specific recommendations, but equip the viewer with information that helps them ask the relevant questions, as well as get a keener insight to the consequences of storage choices.
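One way to anchor the "big picture" from RAM to disks to distance is to line up rough access latencies per tier. The figures below are commonly cited order-of-magnitude ballparks, assumed for illustration rather than taken from any vendor's measurements:

```python
# Rough order-of-magnitude access latencies in nanoseconds
# (illustrative ballpark figures, not benchmarks).
LATENCY_NS = {
    "CPU L1 cache": 1,
    "RAM": 100,
    "NVMe SSD read": 100_000,                        # ~100 microseconds
    "HDD seek": 10_000_000,                          # ~10 milliseconds
    "Cross-country network round trip": 60_000_000,  # ~60 milliseconds
}

# Print the tiers fastest to slowest.
for name, ns in sorted(LATENCY_NS.items(), key=lambda kv: kv[1]):
    print(f"{name:35s} ~{ns:>12,} ns")
```

The spread is the point: each step away from the CPU costs roughly two to three orders of magnitude, which is why "where the data lives" dominates so many of the storage trade-offs above.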
John Peluso, Senior Vice President of Product Strategy, AvePoint
With the rise of cloud computing and easy-to-use file sharing systems in the workplace, it has become increasingly difficult for IT departments to keep track of data and maintain a secure environment. When staff need to access or share data quickly, they no longer rely on IT to provide the tools to do so. Why would they go through the red tape of IT procurement, provisioning, testing, and security when they can find a solution themselves in a matter of seconds?
Join John Peluso, Senior Vice President of Product Strategy at AvePoint, as he presents how organisations have decided to call a truce and provide self-service provisioning and management. In this webcast, he will discuss:
- The dangers of having silos of information that IT and the business are unaware of – disconnected from the centralized servers and storage of the data center or even approved cloud services
- How all of this information may be absent from aggregated capacity, secured content, usage, and other reporting at higher levels, which can complicate business decisions
- What an organisation can do to proactively manage any rogue IT by providing self-service to end users
Gary Grider, High Performance Computing Division Leader, Los Alamos National Laboratory
It has been said that Objects are for applications and POSIX is for people. The HPC community as well as many other large scale IT organizations have legacy applications and users that know, use, and depend on a near POSIX environment with real folders, ease of renaming and reshaping trees, and other powerful concepts in POSIX.
Several POSIX namespaces sit on top of cloud-style, erasure-coded objects, but few, if any, provide a truly scalable solution. MarFS is designed to address this problem by providing a scalable near-POSIX namespace over standard object systems, with target scaling to trillions of POSIX files, hundreds of gigabytes per second of data bandwidth, and millions of POSIX metadata operations per second.
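The "ease of renaming and reshaping trees" point hinges on one design idea: keep the POSIX namespace in a metadata service and place object data by a stable file ID, never by path. The sketch below is a conceptual illustration of that separation, not MarFS's actual layout; all names are invented for the example:

```python
def object_location(file_id: int, bucket_count: int = 1024) -> tuple[str, str]:
    """Place a file's data by a stable ID, not by its POSIX path.

    Because the object key never encodes the path, renaming a file or
    reshaping a directory tree only updates the metadata service; no
    object data moves.
    """
    bucket = f"bucket-{file_id % bucket_count:04d}"
    key = f"f{file_id:016x}"
    return bucket, key

# Metadata service: POSIX path -> stable file ID (a plain dict here).
namespace = {"/projects/climate/run42/output.dat": 7}

bucket, key = object_location(namespace["/projects/climate/run42/output.dat"])
assert (bucket, key) == ("bucket-0007", "f0000000000000007")

# A rename touches only the namespace, never the object store:
namespace["/archive/output.dat"] = namespace.pop(
    "/projects/climate/run42/output.dat")
assert object_location(namespace["/archive/output.dat"]) == (bucket, key)
```

Spreading IDs across many buckets is also what lets such a namespace scale toward trillions of files, since no single object-store container has to hold them all.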
Business and IT leaders are understandably reluctant to retire considerable legacy investment in technology, people, and processes due to security, risk, and regulatory compliance obligations. This creates a hybrid IT deployment model: an on-premises landscape of existing or legacy systems and an off-premises cloud deployment of suitable IT capability.
The Open Data Center Alliance (ODCA) believes that integration of cloud deployments with enterprise landscapes should consider people, process, technology, and operating models. Doing so encourages faster cloud adoption, leverages existing enterprise investments in IT landscape and helps govern safe cloud adoption through effective risk and compliance management.
Join this free webinar and download a whitepaper published by ODCA and top member companies.
A growing number of enterprises are running applications in the cloud for production needs, while also running the bulk of their applications in physical data centers.
Managing network security in a hybrid IT environment brings many new challenges, such as:
- Lack of visibility
- Difficulty ensuring compliance across multiple vendors
- Maintaining network connectivity of business critical applications
Join our webinar to learn how to effectively manage Security Policy across hybrid cloud and physical networks. In this session we will share the key challenges that our customers experience when migrating workloads to the cloud, as well as methods to mitigate these challenges.
A recent IDC report indicates that only 25% of organizations have repeatable strategies for cloud adoption, while 32% lack any cloud strategy whatsoever. The ECMM is intended to address this market need with a repeatable, best-practice-based framework for planning cloud adoption that drives business transformation maturity.
By synthesizing a wide range of industry best-practice documents, the ECMM describes how the core building blocks of cloud services can be identified and assembled to support a strategic business-model expansion enabled by new technology innovation.
Richard Fichera, Vice President and Principal Analyst Serving Infrastructure & Operations Professionals at Forrester Research
Server virtualization has driven a radically more efficient computing paradigm, enabling IT organizations to deploy business-enabling applications like mobility, social media, the Internet of Things, big data and collaboration at lightning speed. In the midst of this proliferation of applications and explosion of data, storage costs, complexity, and risk have increased at an unprecedented rate – until now.
What if you could have a single storage architecture that enables you to:
Eliminate the cost and complexity of managing silos of storage
Facilitate different service levels for different workloads on the fly
Realize the full benefit of storage consolidation
Would we have your attention?
Join us for a live webinar with guest analyst Richard Fichera of Forrester Research on the future of storage consolidation, and learn how the Nimble Storage Adaptive Flash platform can fuel gains in business responsiveness and agility.
Jeff Klaus, GM Intel Data Center Solutions; Paul Vaccaro, Intel Data Center Operations and Planning Manager
A rapid rate of change complicates every facet of data center management, and server-centric compute models are too cumbersome for today’s highly variable workloads. Is it possible to optimize resources and operations in such dynamic environments? In this presentation, learn how to replace manual, hardware-defined application provisioning and management with a highly automated, software-defined resource model and orchestration layer that enables flexibility, simplified on-demand provisioning, capital efficiency, and lower TCO. Find out how to compose more agile pools of data center resources, and simultaneously drive up IT efficiency, optimize energy requirements, increase data center resilience, and strengthen disaster recovery plans.
The growth of data has put a strain on data center performance and efficiency. Solid-state-devices (SSD) are playing a significant role in increasing storage speeds and performance – but it’s not a simple plug and play solution. Join Adam Roberts, Chief Solutions Architect at SanDisk, to learn 5 tips to consider when looking to improve storage performance and data center efficiency with flash.
Chris Tsilipounidakis, Tegile Manager, Product Marketing
As IT managers roll out new applications or upgrade existing systems, it's important that they have a keen understanding of the latest advancements in storage technology so they can recommend the best approach.
In this session, you’ll learn about the latest storage architectures (flash caching, server-side PCIe flash, hybrid, and all-flash) and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can drive IT efficiencies and deliver high availability while meeting your performance SLAs.
David Beeler, Senior Product Strategist, Vision Solutions
How long can you afford to be without data?
45% of businesses surveyed said that they had experienced a data loss in 2014. Downtime can come from any direction, in any form, at any time. Astonishingly, three-quarters of IT professionals say that they have never calculated the hourly cost of downtime.
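The hourly cost calculation that most IT professionals skip is simple back-of-the-envelope arithmetic: lost revenue plus idled labor. The formula and every figure below are illustrative assumptions, not Vision Solutions' methodology:

```python
def hourly_downtime_cost(revenue_per_hour: float,
                         staff_count: int,
                         loaded_hourly_wage: float,
                         productivity_loss: float = 1.0) -> float:
    """Back-of-the-envelope hourly downtime cost.

    cost/hour = lost revenue + (idled staff * loaded wage * fraction blocked)
    """
    return revenue_per_hour + staff_count * loaded_hourly_wage * productivity_loss

# Example: $50,000/hour of revenue, 200 staff at $60/hour fully loaded,
# with 80% of their work blocked during an outage.
cost = hourly_downtime_cost(50_000, 200, 60, 0.8)
print(f"${cost:,.0f} per hour")  # prints "$59,600 per hour"
```

Even this crude estimate usually makes the case for recovery investment obvious, which is why never running the numbers is the astonishing part.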
Join David Beeler as he explores how you can reduce the impact of downtime on your business to near zero, whilst making it easier for you to manage your systems.
Jesse St. Laurent, VP of Product Strategy and Brian Knudtson, Technical Marketing Manager
Jesse St. Laurent, SimpliVity's VP of Product Strategy, presents the data problem and how the OmniCube solves it, allowing customers to achieve 3x total cost savings and increased performance. Then Brian Knudtson walks through an in-depth demo, showing how SimpliVity's technology works and how the user interacts with the interface.
Software-Defined Storage is changing the way storage is consumed. With the increase in cores per server and in available server-side storage slots, software is the key to virtualizing all those components and unlocking the performance and capacity of faster networks and processors.
While Veritas has been virtualizing storage for decades, its new generation of software enables cost-effective solutions in which a SAN is no longer needed for traditional workloads, avoiding the high investments previously required. Tune in to learn more.
The goal of the OSDDC Incubator is to consider SDDC use cases, architectures and requirements. Based on these inputs, the Incubator has developed a white paper that reviews industry standards for the SDDC. This presentation will cover the current output of this DMTF incubator.
Scott D. Lowe of ActualTech Media and Brian Knudtson of SimpliVity discuss the findings of the 2015 State of the Hyperconverged Infrastructure Market report, and talk about the implications for those considering hyperconverged infrastructure.
With the growing adoption of Software-Defined technologies as the foundation for enterprises’ data centers, private and hybrid clouds, and overall IT agility, security infrastructure transformation must take place to integrate efficiently with the Software-Defined ecosystem and become software-defined itself. This presentation will highlight the need for Software-Defined Security (SD-Security) and Fortinet’s framework to deliver optimized security for the Software-Defined IT.
Laurence James, NetApp Products, Alliances and Solutions Manager
Organizations need their IT teams to move ever more quickly to keep pace with the changing needs of the business. Traditional data centres, with infrastructure silos built around applications, limit responsiveness. Companies struggle with routine IT downtime, spiralling costs, performance challenges, and growing complexity as their operations scale.
Solving this problem requires an IT infrastructure built for agility, one capable of instantly delivering new services, projects, and capacity while keeping costs down. That’s the promise of the software-defined data centre (SDDC). Software-defined storage (SDS) is one of the four SDDC components, alongside software-defined compute, network, and security. Today there are so many different definitions of the term “software defined” that you can be forgiven for any confusion it has caused. In this session, Laurence James unravels some of the myths surrounding the software-defined craze.
Best practices for achieving an efficient data center
With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
And much more