With funding for higher education decreasing every year, universities are turning to their data centers to cut operating expenses. Find out how Utah State University redesigned its data center with energy efficiency in mind, both to lower the cost of running its infrastructure and to better understand its role in the environment.
This webinar will cover topics including:
- Understanding the fundamental problems with a data center redesign
- Developing a solution for hot aisle/cold aisle design
- The efficiency results from USU's redesign
- How the USU data center has impacted the educational environment
In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop they look forward to getting rid of the ETL infrastructure.
But is it that simple? Some companies are finding that in order to move data between clusters for backup or aggregation purposes, they are building systems that look an awful lot like ETL.
Join us for a live webinar featuring Dr. Jim Metzler, Distinguished Research Fellow at Ashton Metzler & Associates, as he discusses the future of the WAN and why now is the time to rethink the architecture to evolve and accelerate your business.
Why take the traditional approach to branch office expansion by relying on providers that can’t meet the urgency of your business needs? A new era has come where complexity, cost, flexibility and time-to-market are no longer hurdles. Welcome to the world of the Software-Defined WAN.
Join this webinar featuring Jim Metzler, industry specialist, to discover:
• Why it’s time to re-architect your WAN
• How today’s WAN infrastructure is failing IT
• New market dynamics that the traditional WAN cannot support
• A step-by-step approach to evolving your WAN with minimal disruption
For the Data Scientist, Data Science is complex; for the average business user, it is a mystical art form that promises a lot but often underdelivers against expectations. For many established companies, the result has been a lack of investment in an area that is, for others, quickly becoming a source of competitive advantage.
Helping the business understand the value of Big Data and Analytics, whilst also helping translate their business requirements and expectations, is a critical foundational step of the Data Analytics Lifecycle that can lead to greater investment from the business and greater profit for the organization. By way of customer examples, this presentation discusses the importance of engaging the business early and the importance of being able to tell an engaging story about the ‘Art of the Possible’.
New video! Manage your vSphere environment with a powerful solution that leverages the cloud, always-on analytics, and data science to optimize visibility, insight and control. Five minutes to activate, and you can increase utilization, boost performance, diagnose, troubleshoot and fix issues—even proactively—ten times faster. Eliminate capacity shortfalls and drill down to root causes, hazards and more.
Join us for a live Big Data Analytics customer case study webcast featuring Dana Gardner, a leading IT industry analyst at Interarbor Solutions, as he interviews Procera Networks executive, Cam Cullen.
Learn how Procera Networks dealt with massive data volume challenges to provide network performance benefits to its global users, powered by HPE Vertica. HPE Vertica is the industry’s first comprehensive, scalable, open, and secure platform for Big Data Analytics.
Financial advisors maintain that proper asset allocation during times of market volatility can help us sleep better. There is a parallel in the IT world. With volatility driven by technology advancement, virtual and cloud environments, and consumer demand for the newest applications and hardware, a good night’s sleep for an Asset Manager requires properly managed and optimally allocated hardware and software assets in a constantly changing environment.
This session explores this intimidating world, common pitfalls, prescriptive actions and what the latest technology can do to make sure your assets, licenses and infrastructure are optimally aligned to drive wealth in IT…and let the Asset Manager sleep well without the fear of negative audit findings and exorbitant fines.
A movement is underway. Businesses are awakening to a new era of the digital enterprise, which requires them to find new ways of delivering services built for the digital age. Success in this new era requires a digital industrialization strategy in which datacenters become the core asset of the business, enabling a transformation from an infrastructure that is tightly coupled with the business to a modern infrastructure that enables any business.
Digital industrialization is a continuous cycle that organizations can use to turn IT infrastructure from a cost into an asset by standardizing on one set of technologies and economics across facilities, hardware, software, and operations; consolidating datacenters; abstracting functionality; automating operations and governing it all to ensure security, integrity, and compliance.
This presentation will go through why digital industrialization is needed, what the benefits are and how the Ericsson Cloud portfolio facilitates it.
Would you like to cut complexity across all phases of app development and deployment?
Join us for this straightforward discussion on how CA Application Lifecycle Conductor reduces risk through a single source of truth. CA Application Lifecycle Conductor automates and manages the software development lifecycles that span mobile-to-mainframe environments — from the initial service desk ticket to the deployment of the application in production.
Join Rose Sakach, Sr. Principal Product Manager, and Vaughn Marshall, Director, Product Management as they outline CA Application Lifecycle Conductor’s many benefits. Discover how you can:
• Create one view and traceability for the application development lifecycle
• Identify the potential time savings for project managers, release managers and compliance managers
• Determine which customer segments would benefit the most from adopting CA ALC
Are you ready to simplify application lifecycle management—from mobile to mainframe?
With the average company experiencing unplanned downtime 13 times a year, the costs of continuing to invest in a legacy backup solution can be extensive. For this reason, more customers than ever are switching to Veeam® and Quantum. Update to a modern data center and achieve Availability for the Always-On Enterprise™ with Veeam coupled with Quantum’s tiered storage, which increases performance, reduces bandwidth requirements and executes best practices for data protection.
After a record-setting year in 2015, where will the tech M&A market go in 2016? Which of the trends that pushed M&A spending to its highest level since the Internet Bubble burst will continue to drive deals, and which will wind down? What other sectors are likely to see the most activity this year? And most importantly, what valuations will be handed out in deals over the coming year? Drawing on data and views from across 451 Research, the Tech M&A Outlook webinar maps major developments in the IT landscape (IoT, Big Data, cloud computing) to their influence on corporate acquisition strategies. Join us for a look ahead at what we expect for tech M&A in 2016.
Jose Ruiz, VP Engineering Operations, Compass Datacenters
As has often been reported, human error is one of the largest factors in data center outages. With estimates of the average cost of an outage now exceeding $740,000, the ability to reduce or eliminate human-caused outages can make a substantial impact on an organization’s bottom line. In this presentation, Jose Ruiz, VP of Engineering Operations for Compass Datacenters, will present a case study on how the introduction of wearable technology has substantially enhanced one customer’s operational performance.
Jabez Tan, Senior Analyst, Data Centres, Structure Research
What are the top data centre colocation trends for 2016? How have past predictions played out so far? Singapore and Hong Kong have stood out as the top two data centre markets in the Asia Pacific region. We take a quantitative deep dive into data centre supply and revenue generation for each market, and how much revenue is being generated from colocation services.
Eric Slack, Sr. Analyst, Evaluator Group; Alex McDonald, Chair, SNIA Cloud Storage; Glyn Bowden, SNIA Cloud Storage Board
A Software Defined Data Center (SDDC) is a compute facility in which all elements of the infrastructure - networking, storage, CPU and security - are virtualized and removed from proprietary hardware stacks. Deployment, provisioning and configuration as well as the operation, monitoring and automation of the entire environment is abstracted from hardware and implemented in software.
The results of this software-defined approach include maximizing agility and minimizing cost, benefits that appeal to IT organizations of all sizes. In fact, understanding SDDC concepts can help IT professionals in any organization better apply these software-defined concepts to storage, networking, compute and other infrastructure decisions.
If you’re interested in Software-Defined Data Centers, how such a thing might be implemented, and why this concept matters to IT professionals who aren’t involved with building data centers, then please join us on March 15th. Eric Slack, Sr. Analyst with Evaluator Group, will explain what “software-defined” really means and why it’s important to all IT organizations, then join a discussion with Alex McDonald, Chair of SNIA’s Cloud Storage Initiative, about how these concepts apply to the modern data center.
In this webinar we’ll be exploring:
• How an SDDC leverages these concepts to make the private cloud feasible
• How we can apply SDDC concepts to an existing data center
• How to develop your own software-defined data center environment
Ken Cantrell, Manager, Performance Engineering, NetApp; Mark Rogov, Advisory Systems Engineer, EMC; David Fair, Chair, SNIA-ESF
The third installment of our performance benchmarking Webcast series, “Storage Performance Benchmarking: Block Components,” continues our effort to bring anyone untrained in the storage performance arts up to a common baseline with the experts. In this Webcast, you will gain an understanding of the block components of modern storage arrays and learn block-storage terminology, including:
• How storage media affects block storage performance
• Integrity and performance trade-offs for data protection: RAID, erasure coding, etc.
• Terminology updates: seek time, rebuild time, garbage collection, queue depth and service time
Ryan Skipp, T-Systems; William Dupley, Hewlett Packard Enterprise; Brett Philp, Experis
The Cloud Maturity Model (CMM) is one of the most widely used tools published by the Open Data Center Alliance. Gain deeper knowledge of the CMM and the best practices that have shaped this visionary tool over the past five years.
The objective of the CMM is to help enterprises:
- Evaluate where their IT organization stands in its ability to adopt and integrate cloud services
- Benchmark their IT organization against other industry adopters of cloud
- Build a custom roadmap toward establishing more effective Hybrid IT, integrating cloud services to improve, not just change, their IT offering, aligned to their specific needs and objectives
Public- and private-sector organizations have used the ODCA’s CMM to guide wide-scale implementations including the selection of cloud solutions and services. Many forward-thinking vendors integrate ODCA best practices into product and service roadmaps to support open standards and interoperability.
Hear from the primary contributors to version 3.0 of the Cloud Maturity Model. These technology and business executives represent top global enterprise IT organizations on the leading edge of cloud adoption and organizational maturity.
How To Break “The Cycle” and Move To Hyperconvergence
In this webinar, Storage Switzerland's George Crump and SimpliVity's Adam Sekora compare and contrast the suitability of SANs vs. hyperconverged architectures; examine the benefits of consolidating and reducing the number of discrete IT devices in lieu of hyperconverged infrastructure; and discuss the merits of simplified IT and its impact on technology refresh initiatives.
Scott Villinski, National Director, State and Local Government Enterprise Mobility Sales
One of the biggest challenges you will face as you move to the cloud is keeping your users productive while protecting your agency data. Your users' identities will live in your datacenter as well as in the cloud, so how you protect them and maintain your security processes is vitally important. The way people access applications and resources is changing, which is why the user's identity is crucial to protecting your data and applications.
Our discussion of hybrid identity will cover:
1. Options for synchronizing identities to the cloud
2. Self-service capabilities for your users, including password management, group management and single sign-on
3. How to configure single sign-on to SaaS applications
4. Automating identity management across different repositories in your datacenter
People in analytical roles are demanding more and more compute and storage to get their jobs done. Instead of building out infrastructure for a few employees or a department, systems engineers and IT managers can find value in creating a compute stack in the cloud to meet the fluctuating demand of their clients.
In this 45-minute webinar, you’ll learn:
- How to identify the right analytical workloads
- How to create a scalable compute environment using the cloud for analysts in under 10 minutes
- How to best manage costs associated with the cloud compute stack
- How to create dedicated client stacks with their own scratch space as well as general access to reference data
Health systems departments, research & development departments, and business analyst groups all face silos of these challenging, compute-intensive use cases. By learning how to quickly build this flexible workflow that can be scaled up and down (or off) instantly, you can support business objectives while efficiently managing costs.
Gunnar Menzel, ODCA President, Chief Architect Capgemini Infra
DevOps addresses inefficiencies that result from keeping operations and development in separate silos. By connecting development and operations, enterprise IT departments can begin to break down the walls.
DevOps defines a set of roles and responsibilities focused on reducing risk in IT deployments and projects. The result is maximized automation, elimination of human error, increased consistency, and less time spent on outages and on the error detection and prevention that unstable environments demand.
In this webinar, ODCA president, Gunnar Menzel, will share perspectives on the DevOps concept, focusing on key challenges it can help resolve and the benefits it can provide.
Sara Hebert, Brennan Chapman, Aaron Wetherold, Jeff Kember
Moonbot Shoots for the Cloud to Meet Deadlines and Manage Costs
Threatened by deadlines for Academy Award submissions, Moonbot Studios faced a shortage of rendering capacity while working on Taking Flight, its newest animated short film, and other important projects. As a small studio with a budget to match, the team did what it does best: it got creative and solved the problem with what it first called “magic.”
In this webinar, the Moonbot team will tell its tale of sending its rendering capacity to Google Compute Engine and how they defied networking odds by caching data close to the animators with an Avere vFXT. Hear Moonbot’s pipeline supervisor tell how they turned cloud data center distance into a non-issue, met deadlines, and gained quantitative benefits that sparked energy in this small team of creative aviators.
In this session, you will learn:
• What drove Moonbot Studios to move to the cloud
• How they moved complex renders to Google Compute Engine, overcoming data access roadblocks
• Measurable results, including speed, economics, flexibility, and creative freedom
Moonbot Studios’ flight to the cloud will be supported by Google Cloud Platform and Avere Systems for a complete overview of how the technologies help bring new ideas to life.
After a long day, four of the brightest minds in the system gather at the local watering hole. While there, they discuss their experience using the fastest solution in the galaxy and how it compares to other, more sinister, options.
After the dust settled from the BATTLE OF VENDORS, our heroes stole away with the ultimate weapon, the onQ APPLIANCE, a fully replicated data center-in-a-box with enough power to run entire environments for as long as they need.
In the ever-treacherous struggle for DATA CENTER SUPREMACY, these four rebels now have the power to take back control of their data, improve their processes, and restore freedom to their weekends…
John Peluso, Senior Vice President of Product Strategy, AvePoint
With the increase in cloud computing and easy-to-use file-sharing systems in the workplace, it has become increasingly difficult for IT departments to keep track of data and maintain a secure environment. When staff need to access or share data quickly, they no longer need to rely on IT to provide the tools to do so. Why would they go through the red tape of IT procurement, provisioning, testing, and security when they can find a solution themselves in a matter of seconds?
Join John Peluso, Senior Vice President of Product Strategy at AvePoint, as he presents how organisations have decided to call a truce and provide self-service provisioning and management. In this webcast, he will discuss:
- The dangers of having silos of information that IT and the business are unaware of – disconnected from the centralized servers and storage of the data center or even approved cloud services
- How all of this information may be absent from aggregated capacity, secured content, usage, and other reporting at higher levels, which can complicate business decisions
- What an organisation can do to proactively manage any rogue IT by providing self-service to end users
Gary Grider, High Performance Computing Division Leader, Los Alamos National Laboratory
It has been said that objects are for applications and POSIX is for people. The HPC community, along with many other large-scale IT organizations, has legacy applications and users that know, use, and depend on a near-POSIX environment with real folders, easy renaming and reshaping of trees, and other powerful POSIX concepts.
There are several POSIX namespaces that sit on top of cloud-style, erasure-based object stores, but few, if any, provide a truly scalable solution. MarFS is designed to address this problem by providing a scalable near-POSIX namespace over standard object systems, with target scaling out to trillions of POSIX files, hundreds of gigabytes per second of data bandwidth, and millions of POSIX metadata operations per second.
Business and IT leaders are understandably reluctant to retire considerable legacy investment in technology, people, and processes due to security, risk, and regulatory compliance obligations. This creates a hybrid IT deployment model: an on-premises landscape of existing or legacy systems and off-premises cloud deployment of suitable IT capability.
The Open Data Center Alliance (ODCA) believes that integration of cloud deployments with enterprise landscapes should consider people, process, technology, and operating models. Doing so encourages faster cloud adoption, leverages existing enterprise investments in IT landscape and helps govern safe cloud adoption through effective risk and compliance management.
Join this free webinar and download a whitepaper published by ODCA and top member companies.
A growing number of enterprises are running applications in the cloud for production needs, while also running the bulk of their applications in physical data centers.
Managing network security in a hybrid IT environment brings many new challenges, such as:
- Lack of visibility
- Difficulty ensuring compliance across multiple vendors
- Maintaining network connectivity of business critical applications
Join our webinar to learn how to effectively manage security policy across hybrid cloud and physical networks. In this session we will share the key challenges our customers experience when migrating workloads to the cloud, as well as methods to mitigate those challenges.
A recent IDC report indicates that only 25% of organizations have repeatable strategies for cloud adoption, and 32% lack any cloud strategy whatsoever. The ECMM aims to address this market need with a repeatable, best-practice-based framework for planning cloud adoption that drives business transformation maturity.
By synthesizing a wide range of industry best-practice documents, the ECMM describes how the core building blocks of cloud services can be identified and assembled to support a strategic business-model expansion enabled by new technology innovation.
Richard Fichera, Vice President and Principal Analyst Serving Infrastructure & Operations Professionals at Forrester Research
Server virtualization has driven a radically more efficient computing paradigm, enabling IT organizations to deploy business-enabling applications like mobility, social media, the Internet of Things, big data and collaboration at lightning speed. In the midst of this proliferation of applications and explosion of data, storage costs, complexity, and risk have increased at an unprecedented rate – until now.
What if you could have a single storage architecture that enables you to:
- Eliminate the cost and complexity of managing silos of storage
- Facilitate different service levels for different workloads on the fly
- Realize the full benefit of storage consolidation
Would we have your attention?
Join us for a live webinar with guest analyst Richard Fichera of Forrester Research on the future of storage consolidation, and learn how the Nimble Storage Adaptive Flash platform can fuel gains in business responsiveness and agility.
Jeff Klaus, GM Intel Data Center Solutions; Paul Vaccaro, Intel Data Center Operations and Planning Manager
A rapid rate of change complicates every facet of data center management, and server-centric compute models are too cumbersome for today’s highly variable workloads. Is it possible to optimize resources and operations in such dynamic environments? In this presentation, learn how to replace manual, hardware-defined application provisioning and management with a highly automated, software-defined resource model and orchestration layer that enables flexibility, simplified on-demand capital efficiency, and lower TCO. Find out how to compose more agile pools of data center resources, and simultaneously drive up IT efficiency, optimize energy requirements, increase datacenter resilience, and strengthen disaster recovery plans.
Best practices for achieving an efficient data center
With today’s pressure to lower carbon footprints and cost constraints within organizations, IT departments are increasingly on the front line, expected to formulate and enact an IT strategy that greatly improves the energy efficiency and overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management
- Data center design, facilities management and convergence
- Cooling technologies and thermal management
- And much more