With funding for higher education decreasing every year, universities are turning to their data centers to cut operating expenses. Find out how Utah State University redesigned its data center with energy efficiency in mind, both to lower the cost of running its infrastructure and to better understand its environmental impact.
This webinar will cover topics including:
- Understanding the fundamental problems with a data center redesign
- Developing a solution for hot aisle/cold aisle design
- The efficiency results from USU's redesign
- How the USU data center has impacted the educational environment
Recorded May 3, 2012 | 38 mins
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco
Welcome to this first part of the webcast series, where we’re going to take an irreverent yet still informative look at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
• What an initiator is
• What a target is
• What a storage controller is
• What a RAID is, and what a RAID controller is
• What a Volume Manager is
• What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
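To make the relationships among these parts concrete, here is a minimal, purely illustrative Python sketch of a storage stack. All class and method names are our own inventions for teaching purposes, not any real storage API; real initiators and targets speak protocols such as iSCSI or Fibre Channel rather than method calls.

```python
class Target:
    """The storage side: a device or array that services I/O requests."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # logical block address -> data

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class VolumeManager:
    """Aggregates several targets into one logical device.

    The striping below is a toy stand-in for what RAID controllers and
    volume managers really do (parity, mirroring, rebuild, etc.).
    """
    def __init__(self, targets):
        self.targets = targets

    def write(self, lba, data):
        self.targets[lba % len(self.targets)].write(lba, data)

    def read(self, lba):
        return self.targets[lba % len(self.targets)].read(lba)


class Initiator:
    """The host side: the endpoint that issues I/O requests downward."""
    def __init__(self, device):
        self.device = device      # sees one logical device, not the disks

    def write(self, lba, data):
        self.device.write(lba, data)

    def read(self, lba):
        return self.device.read(lba)


# Wire the stack together: initiator -> volume manager -> targets.
vm = VolumeManager([Target("disk0"), Target("disk1")])
init = Initiator(vm)
init.write(7, b"hello")
print(init.read(7))  # b'hello'
```

The point of the sketch is the layering: the initiator never sees individual disks, only the logical volume the manager presents.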
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
Join Commvault experts to understand the best practice considerations of leveraging public cloud disaster recovery services when protecting your Software-Defined Data Centre (SDDC).
As awareness of the potential benefits of a Software-Defined Data Centre has begun to resonate with CxOs, the value and importance of disaster recovery provisioning has been challenged by the notion that clustering and high availability could be sufficient to accommodate recovery needs (i.e. that no DR provisioning is required).
In this webinar we'll look at the significant risks and pitfalls that a 'non-DR' strategy can pose and learn about the five-step programme to optimise your chances of recovery when working with public cloud providers.
In the era of data explosion in Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. Ever-increasing data rates and the demand to reduce time to insight will require an integrated approach to data ingest, processing, and storage that reduces end-to-end latency, delivers much higher throughput, achieves much better resource utilization, simplifies manageability, and consumes considerably less energy to handle highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service of scalable performance, intelligent manageability, high energy efficiency, and enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the ’90s, the storage architectures of SAN and NAS have been well understood and deployed with a focus on efficiency. With cloud-like applications, the massive scale of data and analytics, and the introduction of solid state and HPC-type applications hitting the data center, the architectures are changing rapidly. It is a time of incredible change and opportunity for business and for the IT staff that supports the change. Welcome to the new world of Enterprise Data Storage.
Jeff Kato, Taneja Group; Brian Biles, Datrium; Patrick Osborne, HPE; Kevin Fernandez, Nutanix
Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters, as the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates colocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server’s flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions, and then intelligently host components in different layers of the infrastructure. With the help of a lively panel of experts we will unpack this topic and explore how their innovative approach to intelligently distributing storage functions can bring about better customer business outcomes.
Jeff Kato, Senior Analyst & Consultant, Taneja Group
Brian Biles, Founder & CEO, Datrium
Patrick Osborne, Senior Director of Product Management and Marketing, HPE
Kevin Fernandez, Director of World Wide Technical Marketing, Nutanix
Sushant Rao, Sr. Director of Product Marketing, DataCore Software
Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of” happened. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.
Surveys show that 61% of companies have experienced slow applications after server virtualization with 77% pointing to I/O problems as the culprit.
Now, with hyper-converged infrastructure, companies have another opportunity to fulfill their vision of consolidating and reducing the complexity of their infrastructure. But this will only happen if their applications get the I/O performance they need.
Join us for this webinar where we will show you how to get industry leading I/O response times and the best price/performance so you can reduce and simplify your infrastructure.
As organizations move into the "3rd platform", they’ll need to discover new ways to support solutions like Private Cloud and Real-Time Analytics. The traditional disk-based storage they're depending on is not keeping up. Flash is transforming datacenter infrastructure performance, capacity and cost to address these challenges.
Join Chris Tsilipounidakis from Tegile as he examines the ways of transforming to a Flash datacenter to support various, mixed application workloads in an ever-changing ecosystem.
The first rule of data analytics for fast-growing companies? Measure all things. When putting in place a robust data analytics strategy to go from measurement to insight, you’ve got lots of options for tools -- from databases and data warehouse options to new “big data” tools such as Hadoop, Spark, and their related components. But tools are nothing if you don’t know how to put them to use.
We’re going to get some real talk from practitioners in the trenches and learn how people are bringing together new big data technologies in the cloud to deliver a truly world class data analytics solution. One such practitioner is Celtra, a fast-growing provider of creative technology for data-driven digital display advertising. We’re going to sit down with the Director of Engineering, Analytics at Celtra to learn how they built a high-performance data processing pipeline using Spark + a cloud data warehouse, enabling them to process over 2 billion analytics events per day in support of dashboards, applications, and ad hoc analytics.
In this webinar you’ll learn how to:
* Build a simpler, faster solution to support your data analytics
* Support diverse reporting and ad hoc analytics in one system
* Take advantage of the cloud for flexibility, scaling, and simplicity
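The pipeline described above boils down to a map-then-aggregate shape: raw events flow in, get rolled up, and the rollups feed dashboards and ad hoc queries. The following toy Python sketch only illustrates that shape; the event fields and metrics are invented for illustration and are not Celtra's actual schema (their production pipeline uses Spark and a cloud data warehouse at vastly larger scale).

```python
from collections import Counter

# Hypothetical ad-analytics events; field names are invented for illustration.
events = [
    {"ts": "2016-05-01T10:00:00", "type": "impression", "creative": "A"},
    {"ts": "2016-05-01T10:00:01", "type": "click",      "creative": "A"},
    {"ts": "2016-05-01T10:00:02", "type": "impression", "creative": "B"},
]

# Roll events up per (creative, event type) -- the kind of aggregation a
# Spark job would compute at scale before loading results into a warehouse.
rollup = Counter((e["creative"], e["type"]) for e in events)

# Derive a dashboard metric from the rollup: click-through rate per creative.
ctr = {
    c: rollup[(c, "click")] / rollup[(c, "impression")]
    for c in {e["creative"] for e in events}
}
print(ctr)
```

At 2 billion events per day the interesting engineering is in partitioning, incremental aggregation, and warehouse loading, but the logical transformation is this simple rollup.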
* Evan Schuman, Moderator, VentureBeat
* Grega Kešpret, Director of Engineering, Analytics, Celtra
* Jon Bock, VP of Marketing and Products, Snowflake
Register today and learn how the top SaaS strategies can streamline your business.
Mike Matchett, Sr. Analyst & Consultant, Taneja Group
Come join Senior Analyst Mike Matchett's lively discussion about the concerns and challenges coming when the Internet of Things crashes into our enterprise datacenters. We think big data today is big, but future IoT data streams promise to swamp everything from servers to storage. And today's new big data applications will still need to become more real-time, more agile, distributed and even more scalable. What's coming down the road, and how should we start planning for the future today?
This webcast will go for 30 minutes, followed by a 15 minute Q & A session, where the audience is welcome to ask questions.
Nick Serrecchia, Systems Engineer at Veeam and Terry Grulke, Sr. Technical Advisor at Quantum
With the average company experiencing unplanned downtime 13 times a year, the costs associated with continuing to invest in a legacy backup solution can be extensive. For this reason, more customers are switching to Veeam® and Quantum than ever before. Upgrade to a modern data center and achieve Availability for the Always-On Enterprise™ with Veeam coupled with Quantum’s tiered storage, which increases performance, reduces bandwidth requirements, and executes best practices for data protection.
The Internet of Things is becoming a reality, and as a result companies of all shapes and sizes are implementing digital transformation initiatives. This digital transformation begins with a modern data center, built on converged infrastructure, that provides a simple and cost-effective process to both deploy and run IT – supporting core business and next-generation applications.
Carrick Carpenter, Director of Delivery, Cloud Computing – Healthcare, and Ozan Talu, Director of Private Cloud Services
Cloud computing is a growing force in healthcare and, while many organizations understand the opportunity that the cloud offers, why and how to get there is widely debated. As providers evaluate the pros and cons of cloud based solutions, several adoption strategies are emerging. Taking the right approach is critical to determining future readiness as healthcare becomes more information-driven and connected, and moves towards collaborative care models and payment reform. This workshop will examine key applications of cloud computing in healthcare (including hosting, security/privacy and medical image archiving), highlight change management strategies from a technical/operational/process perspective, and identify the pros and cons of different cloud models including public vs. private. The workshop will be divided into vignettes that include didactic presentations and real-world case studies with interactive discussions.
Walfrido Zafarana, Product Application Manager, Emerson Network Power
The data centre is mission-critical to many businesses. An efficient chilled water system guaranteeing continuous cooling availability is fundamental in obtaining an overall low data centre PUE, so it is important to clearly understand the different technologies available for your data centre application.
Join the Emerson Network Power Critical Advantage Webcast for all of the answers.
The webcast will provide insight into:
• The advantages of Chilled Water systems
• The different solutions available according to your data centre's internal conditions: air-cooled, free cooling, adiabatic
• How to achieve utmost efficiency at the data centre system level with the iCOM™ Control
Patrick Grillo, Senior Director, Security Solutions, Fortinet
The Data Center is not an island. It is part of a complex ecosystem, working and evolving together for the overall benefit of the enterprise.
Data Center security can no longer be treated as an island. It must integrate and interact with the overall enterprise ecosystem and security infrastructure to provide a real-time, effective security posture.
This webinar will present a high level view as to the importance of deploying an integrated, end-to-end enterprise security platform for achieving data center security.
Because sometimes, the best data center security solution has nothing to do with the data center!
Jose Ruiz, VP Engineering Operations, Compass Datacenters
As has often been reported, human error is one of the largest factors in data center outages. Since estimates of the average cost of an outage now exceed $740,000, the ability to reduce or eliminate human-caused outages can make a substantial impact on the organization’s bottom line. In this presentation, Jose Ruiz, VP of Engineering Operations for Compass Datacenters, will present a case study on how the introduction of wearable technology has substantially enhanced one customer’s operational performance.
John Mao, Director of Business Development at Stratoscale
While public cloud adoption has been on the rise, most companies are still bound by strict requirements around security, privacy, and data sovereignty. For those companies, the idea of owning a private cloud is appealing. The reason is simple: private clouds provide benefits similar to public clouds, such as self-service, automation, orchestration, and "one throat to choke". But how do mainstream IT organizations reap these same benefits? Come join us as we explore how new software-defined paradigms help transform your dream of a private cloud into reality.
Jabez Tan, Senior Analyst, Data Centres, Structure Research
What are the top data centre colocation trends for 2016? How have past predictions played out so far? Singapore and Hong Kong have stood out as the top two data centre markets in the Asia Pacific region. We take a quantitative deep dive into the data centre supply and revenue generation for each market, and how much revenue is being generated from colocation services.
Ted Streck, Director, Data Center Practice, EMC Global Services
Organizations today are taking on data center consolidation, migration, and modernization initiatives as a means to improve efficiency, reduce overall costs, and deliver greater availability to meet business expectations in the era of hybrid cloud. However, these projects are risky and can cause dire consequences if not planned strategically and with best practices.
So how do you accelerate the success of these initiatives while eliminating risk? How do you rapidly determine interdependencies between applications, storage, and servers in order to optimally plan a data center consolidation or modernization?
Attend this session to learn best practices for planning and executing data center initiatives, including how automated tools accelerate interdependency discovery and eliminate nearly 98% of human error.
Office 365 even gives you the control to log and manage data access workflows directly with datacenter engineers. Security, compliance, and privacy are built into Office 365 to help ensure that your company data is protected. Office 365 meets leading global compliance standards, such as HIPAA, FISMA, and ISO 27001, and it delivers industry-leading best practices in data center design, data loss prevention and advanced threat protection.
Join us to discover why 4 out of 5 Fortune 500 companies have come to trust Microsoft’s Office 365 service.
Best practices for achieving an efficient data center
With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management;
- Data center design, facilities management and convergence;
- Cooling technologies and thermal management
And much more
Many studies have been done on the benefits of predictive analytics for customer engagement and changing customer behaviour. The less romanticized side, however, is the benefit to IT operations, as it is sometimes difficult to shift the focus from direct revenue-impacting gains to the more indirect revenue gains that can come from optimization and proactive issue resolution.
I will be speaking, from an application operations engineer’s perspective, on the benefits to the business of using predictive analytics to optimize applications.
I will summarize the stages of analytics maturity that lead an organization from traditional reporting (descriptive analytics: hindsight), through predictive analytics (foresight), and into prescriptive analytics (insight). The benefits of big data (especially high-variety data) will be demonstrated with simple examples that can be applied to significant use cases.
The goal of data science in this case is to discover predictive power and prescriptive power from your data collections, in order to achieve optimal decisions and outcomes.
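The three maturity stages can be illustrated on a single toy metric. In this purely hypothetical Python sketch, the latency series, the naive linear extrapolation, and the 145 ms threshold are all our own inventions; the point is only to show the progression from "what happened" to "what will happen" to "what should we do".

```python
# Hypothetical daily p95 response times (ms) for one application.
latencies_ms = [100, 110, 120, 130, 140]

# Descriptive analytics (hindsight): what happened?
avg = sum(latencies_ms) / len(latencies_ms)

# Predictive analytics (foresight): what will happen next?
# Naive linear extrapolation from the observed trend.
slope = (latencies_ms[-1] - latencies_ms[0]) / (len(latencies_ms) - 1)
predicted_next = latencies_ms[-1] + slope

# Prescriptive analytics: what should we do about it?
# The 145 ms threshold is an invented SLA limit for illustration.
action = "scale out now" if predicted_next > 145 else "no action needed"

print(avg, predicted_next, action)  # 120.0 150.0 scale out now
```

A real deployment would replace the extrapolation with a trained model over high-variety operational data, but the decision flow stays the same.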
Join this webinar to see how the CloudPhysics Public Cloud Planning Rightsizer identifies opportunities to lower your costs of running applications on the public cloud.
The Public Cloud Planning Rightsizer automatically identifies on-premises virtual machines (VMs) that are over-provisioned with more resources (such as CPU and memory) than they use. This lets you optimize instance matching to the ideal cloud instances. Rightsizing reveals the verifiable cost of running workloads in the cloud. Now you can answer the question, “will we save money by migrating applications to the cloud?”
This webinar shows how Public Cloud Planning Rightsizer collects resource utilization data from each VM on a fine-grained basis, and then analyzes those data across time to discover the VM’s actual resource needs. Imagine an on-premises VM configured with 8 vCPUs: if the Rightsizer shows that it has never used more than 2 vCPUs, you can Rightsize that VM to a smaller instance in the cloud, saving substantial funds.
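The rightsizing idea described above can be sketched in a few lines: compare configured capacity with observed peak usage and recommend a smaller size. The percentile choice, headroom factor, and function below are our own simplification for illustration and are not CloudPhysics' actual algorithm.

```python
import math

def rightsize(configured_vcpus, samples, percentile=0.95, headroom=1.25):
    """Suggest a vCPU count from utilization samples (busy vCPUs per sample).

    Takes a high percentile of observed usage rather than the raw maximum,
    adds headroom for bursts, and never suggests growing the VM.
    """
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    peak = ordered[idx]
    suggested = max(1, math.ceil(peak * headroom))
    return min(configured_vcpus, suggested)

# A VM configured with 8 vCPUs whose sampled usage never exceeds 2 busy vCPUs:
samples = [0.5, 1.0, 1.2, 1.8, 2.0, 1.1, 0.9, 1.5]
print(rightsize(8, samples))  # 3  (2.0 peak * 1.25 headroom, rounded up)
```

Mapping the suggested size to a concrete cloud instance type (and its price) is then a lookup against the provider's catalog.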
Many enterprise organizations are moving beyond antivirus software, adding new types of controls and monitoring tools to improve incident prevention, detection, and response on their endpoints. Unfortunately, some of these firms are doing so by adding tactical technologies that offer only incremental benefits.
So what’s needed?
A strategic approach that covers the entire ESG endpoint security continuum from threat prevention to incident response. A truly comprehensive solution will also include advanced endpoint security controls that reduce the attack surface and tight integration with network security, SIEM, and threat intelligence to improve threat detection and response processes.
Join ESG senior principal analyst Jon Oltsik, Intel Security, and Bufferzone on a webinar on July 21 at 10am PT/1pm ET to learn more about next-generation endpoint security requirements and strategies.
The ever changing Cloud Service Provider marketplace is filled with growing opportunities and increasing competition. Mike Slisinger, Cloud Solutions Architect at Nutanix, and Chris Feltham, Cloud Solution Sales Manager at Intel, will discuss how Nutanix and Intel collaborate on cloud technologies and solutions to help Cloud Service Providers solve infrastructure challenges and simplify operations. We will also discuss how current Nutanix and Intel powered Service Providers are building differentiated services that provide true business value to their customers.
Attending this webcast should provide Cloud Service Providers with a good understanding of how Intel and Nutanix can help reduce costs of offering cloud services while enabling and growing new revenue streams for business.
Hyperconverged infrastructures combine compute and storage components into a modular, scale-out platform that typically includes a hypervisor and some comprehensive management software. The technology is usually sold as self-contained appliance modules running on industry-standard server hardware with internal HDDs and SSDs. This capacity is abstracted and pooled into a shared resource for VMs running on each module or ‘node’ in the cluster. Hyperconverged infrastructures are sold as stand-alone appliances or as software that companies or integrators can use to build their own compute environments for private or hybrid clouds, special project infrastructures or departmental/remote office IT systems.
Understand what hyperconvergence is – and is not
Understand the capabilities this technology can bring
Discuss where this technology is going
Learn how and where it is being used in the enterprise
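The pooling described above, in which each node's internal drives are abstracted into one shared resource that grows as nodes are added, can be sketched as follows. This is a minimal illustration under our own invented names, not any vendor's API, and it ignores replication, tiering, and failure handling.

```python
class Node:
    """One appliance module contributing local HDD and SSD capacity."""
    def __init__(self, name, hdd_tb, ssd_tb):
        self.name = name
        self.hdd_tb = hdd_tb
        self.ssd_tb = ssd_tb


class Cluster:
    """Abstracts all node-local capacity into one shared pool for VMs."""
    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.allocated_tb = 0.0

    @property
    def pool_tb(self):
        # The pool is simply the sum of every node's local capacity.
        return sum(n.hdd_tb + n.ssd_tb for n in self.nodes)

    def provision_vm(self, size_tb):
        # A VM draws from the shared pool, not from any particular node.
        if self.allocated_tb + size_tb > self.pool_tb:
            raise RuntimeError("pool exhausted")
        self.allocated_tb += size_tb
        return {"size_tb": size_tb}

    def add_node(self, node):
        # Scale-out: capacity (and, in real systems, compute) grows per node.
        self.nodes.append(node)


cluster = Cluster([Node("n1", hdd_tb=8, ssd_tb=2), Node("n2", hdd_tb=8, ssd_tb=2)])
cluster.provision_vm(5)
print(cluster.pool_tb - cluster.allocated_tb)  # 15.0 TB free across the cluster
```

Adding a third node via `add_node` immediately enlarges the pool, which is the essential scale-out property of the model.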
DMTF’s Platform Management Components Intercommunications (PMCI) Working Group develops standards to address “inside the box” communication and functional interfaces between the components of the platform management subsystem such as management controllers, BIOS, and intelligent management devices. Presented by DMTF’s Senior VP of Technology, Hemal Shah, this webinar will provide an overview of PMCI standards including Management Component Transport Protocol (MCTP), Platform Level Data Model (PLDM) and Network Controller Sideband Interface (NC-SI).
Digital transformation is on the agenda of every company and creates a new focus on agile software development. Join us to learn how platform as a service for software developers and operations (DevOps) transforms the underlying infrastructure cloud. We will cover the IT requirements and the important role of scale-out infrastructure, infrastructure as code and containers for such clouds.
From May 2018, the EU rules on data protection are changing, and all companies with more than 250 employees will need to reassess their practices. What’s more, the penalties for non-compliance are changing too—so now’s the time to get prepared.
The days when every designer has a workstation under their desk are becoming less the norm. Many organizations, particularly in media and entertainment as well as architecture and engineering, are considering leveraging the cloud to provide workstations, solving common IT problems that result from big data sets, a dispersed and flexible workforce, and increasing concern for data security.
Alex Herrera, a senior analyst with Jon Peddie Research, author, and consultant to the world’s leading computer graphics and semiconductor companies, will provide guidance on how organizations can develop an IT strategy to deploy and support a secure cloud model, where pay-as-you-go is the norm.
This session will provide valuable insights including:
• Pros and cons of hosting workstations in the cloud
• How to effectively manage workflows
• Differences between private and public clouds
• Key considerations for cloud deployments
Teradici’s CTO will discuss how customers can effectively leverage Teradici PCoIP Workstation Access Software to securely deliver a seamless end user experience from the cloud.
Those who attend the webinar will receive a copy of the slide deck.
Q&A will follow at the end of the session.