Server Energy Usage, Costs: Implications for the Cloud and In-house Data Centers
This talk will describe the key driving forces affecting data center costs, develop and document detailed examples from available data, estimate costs corrected for inflation, and explain the implications of the results. It will also explore some ways to improve data center efficiency, the most important and most neglected of which relate to institutional changes that can help companies reduce the total costs of computing services.
Recorded Aug 16, 2012 · 34 mins
J Metz, Cisco, Alex McDonald, NetApp, John Kim, Mellanox, Chad Hintz, Cisco
Welcome to this first part of the webcast series, where we’re going to take an irreverent yet still informative look at the parts of a storage solution in Data Center architectures. We’re going to start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll learn:
• What an initiator is
• What a target is
• What a storage controller is
• What a RAID is, and what a RAID controller is
• What a Volume Manager is
• What a Storage Stack is
With these fundamental parts, we’ll be able to place them into a context so that you can understand how all these pieces fit together to form a Data Center storage environment.
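The relationships among these parts can be sketched as a toy model. Every class and method name below is invented for illustration only; real initiators and targets speak protocols such as SCSI or NVMe, not Python:

```python
# Illustrative sketch of the storage-stack parts named above.
# All names are invented for this example; none come from a real storage API.

class Target:
    """The device that receives I/O requests (the 'target' side)."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}          # block address -> data

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks.get(lba)


class RAIDController:
    """Mirrors every write across two targets (RAID 1, the simplest case)."""
    def __init__(self, targets):
        self.targets = targets

    def write(self, lba, data):
        for t in self.targets:    # duplicate the write to each mirror
            t.write(lba, data)

    def read(self, lba):
        return self.targets[0].read(lba)


class Initiator:
    """The host-side endpoint that issues I/O requests."""
    def __init__(self, controller):
        self.controller = controller

    def write(self, lba, data):
        self.controller.write(lba, data)

    def read(self, lba):
        return self.controller.read(lba)


# An initiator writes through a RAID-1 controller to two mirrored targets.
mirror = RAIDController([Target("disk0"), Target("disk1")])
host = Initiator(mirror)
host.write(0, b"hello")
assert host.read(0) == b"hello"
```

The point of the sketch is the layering: the initiator never talks to a disk directly, and the RAID controller hides the mirroring from everything above it.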
Oh, and why are the parts named after colors, instead of numbered? Because there is no order to these webcasts. Each is a standalone seminar on understanding some of the elements of storage systems that can help you learn about technology without admitting that you were faking it the whole time! If you are looking for a starting point – the absolute beginning place – start with this one. We’ll be using these terms in all the other presentations.
Join Commvault experts to understand the best practice considerations of leveraging public cloud disaster recovery services when protecting your Software-Defined Data Centre (SDDC).
As awareness of the potential benefits of a Software-Defined Data Centre has begun to resonate with CxOs, the value and importance of disaster recovery provisioning has been challenged by the notion that clustering and high availability could be sufficient to accommodate recovery needs (i.e. that no DR provisioning is required).
In this webinar we'll look at the significant risks and pitfalls that a 'no DR' strategy can pose, and learn about the five-step programme to optimise your chances of recovery when working with public cloud providers.
In the era of data explosion driven by Cloud-Mobile convergence and the Internet of Things, traditional architectures and storage systems will not be sufficient to support the transition of enterprises to cognitive analytics. Ever-increasing data rates and the demand to reduce time to insight will require an integrated approach to data ingest, processing and storage that reduces end-to-end latency and delivers much higher throughput, much better resource utilization, simplified manageability, and considerably lower energy usage for highly diversified analytics. Yet next-generation storage systems must also be smart about data content and application context in order to further improve application performance and user experience. A new software-defined storage system architecture offers the ability to tackle such challenges. It features seamless end-to-end data service with scalable performance, intelligent manageability, high energy efficiency, and an enhanced user experience.
Camberley Bates, Managing Director and Senior Analyst, The Evaluator Group
Since the 90’s the storage architectures of SAN and NAS have been well understood and deployed with the focus on efficiency. With cloud-like applications, the massive scale of data and analytics, the introduction of solid state and HPC type applications hitting the data center, the architectures are changing, rapidly. It is a time of incredible change and opportunity for business and the IT staff that supports the change. Welcome to the new world of Enterprise Data Storage.
Jeff Kato, Taneja Group; Brian Biles, Datrium; Patrick Osborne, HPE; Kevin Fernandez, Nutanix
Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: the disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters, as the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates collocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server’s flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions, and then intelligently host components in different layers of the infrastructure. With the help of a lively panel of experts we will unpack this topic and explore how their innovative approach to intelligently distributing storage functions can bring about better customer business outcomes.
Jeff Kato, Senior Analyst & Consultant, Taneja Group
Brian Biles, Founder & CEO, Datrium
Patrick Osborne, Senior Director of Product Management and Marketing, HPE
Kevin Fernandez, Director of World Wide Technical Marketing, Nutanix
Sushant Rao, Sr. Director of Product Marketing, DataCore Software
Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of” happened. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.
Surveys show that 61% of companies have experienced slow applications after server virtualization with 77% pointing to I/O problems as the culprit.
Now, with hyper-converged infrastructure, companies have another opportunity to fulfill their vision of consolidating and reducing the complexity of their infrastructure. But this will only happen if their applications get the I/O performance they need.
Join us for this webinar where we will show you how to get industry leading I/O response times and the best price/performance so you can reduce and simplify your infrastructure.
As organizations move into the "3rd platform", they’ll need to discover new ways to support solutions like Private Cloud and Real-Time Analytics. The traditional disk-based storage they're depending on is not keeping up. Flash is transforming datacenter infrastructure performance, capacity and cost to address these challenges.
Join Chris Tsilipounidakis from Tegile as he examines the ways of transforming to a Flash datacenter to support various, mixed application workloads in an ever-changing ecosystem.
The first rule of data analytics for fast-growing companies? Measure all things. When putting in place a robust data analytics strategy to go from measurement to insight, you’ve got lots of options for tools -- from databases and data warehouse options to new “big data” tools such as Hadoop, Spark, and their related components. But tools are nothing if you don’t know how to put them to use.
We’re going to get some real talk from practitioners in the trenches and learn how people are bringing together new big data technologies in the cloud to deliver a truly world class data analytics solution. One such practitioner is Celtra, a fast-growing provider of creative technology for data-driven digital display advertising. We’re going to sit down with the Director of Engineering, Analytics at Celtra to learn how they built a high-performance data processing pipeline using Spark + a cloud data warehouse, enabling them to process over 2 billion analytics events per day in support of dashboards, applications, and ad hoc analytics.
In this webinar you’ll:
* Build a simpler, faster solution to support your data analytics
* Support diverse reporting and ad hoc analytics in one system
* Take advantage of the cloud for flexibility, scaling, and simplicity
* Evan Schuman, Moderator, VentureBeat
* Grega Kešpret, Director of Engineering, Analytics, Celtra
* Jon Bock, VP of Marketing and Products, Snowflake
Register today and learn how the top SaaS strategies can streamline your business.
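As a minimal sketch of the aggregation step such a pipeline performs (the event fields and metric names here are invented; a production job at this scale would run on Spark rather than plain Python):

```python
from collections import defaultdict

# Hypothetical raw analytics events; field names are illustrative only.
events = [
    {"ad_id": "a1", "event": "impression"},
    {"ad_id": "a1", "event": "click"},
    {"ad_id": "a2", "event": "impression"},
    {"ad_id": "a1", "event": "impression"},
]

def rollup(events):
    """Aggregate raw events into per-ad counts -- the kind of rollup a
    Spark job computes before loading results into a cloud warehouse
    for dashboards and ad hoc queries."""
    counts = defaultdict(int)
    for e in events:
        counts[(e["ad_id"], e["event"])] += 1
    return dict(counts)

print(rollup(events))
# e.g. {('a1', 'impression'): 2, ('a1', 'click'): 1, ('a2', 'impression'): 1}
```

The shape is the same at 2 billion events per day; only the execution engine changes, which is why the talk pairs Spark for the heavy lifting with a warehouse for serving queries.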
Mike Matchett, Sr. Analyst & Consultant, Taneja Group
Come join Senior Analyst Mike Matchett's lively discussion about the concerns and challenges coming when the Internet of Things crashes into our enterprise datacenters. We think big data today is big, but future IoT data streams promise to swamp everything from servers to storage. And today's new big data applications will still need to become more real-time, more agile, distributed and even more scalable. What's coming down the road, and how should we start planning for the future today?
This webcast will go for 30 minutes, followed by a 15 minute Q & A session, where the audience is welcome to ask questions.
Nick Serrecchia, Systems Engineer at Veeam and Terry Grulke, Sr. Technical Advisor at Quantum
With the average company experiencing unplanned downtime 13 times a year, the costs associated with continuing to invest in a legacy backup solution can be extensive. For this reason, more customers are switching to Veeam® and Quantum than ever before. Upgrade to a modern data center and achieve Availability for the Always-On Enterprise™ with Veeam, coupled with Quantum’s tiered storage that increases performance, reduces bandwidth requirements and executes best practices for data protection.
The Internet of Things is becoming a reality, and as a result companies of all shapes and sizes are implementing digital transformation initiatives. This digital transformation begins with a modern data center, built on converged infrastructure, that provides a simple and cost-effective process to both deploy and run IT – supporting core business and next-generation applications.
Carrick Carpenter, Director of Delivery, Cloud Computing - Healthcare, and Ozan Talu, Director of Private Cloud Services
Cloud computing is a growing force in healthcare and, while many organizations understand the opportunity that the cloud offers, why and how to get there is widely debated. As providers evaluate the pros and cons of cloud based solutions, several adoption strategies are emerging. Taking the right approach is critical to determining future readiness as healthcare becomes more information-driven and connected, and moves towards collaborative care models and payment reform. This workshop will examine key applications of cloud computing in healthcare (including hosting, security/privacy and medical image archiving), highlight change management strategies from a technical/operational/process perspective, and identify the pros and cons of different cloud models including public vs. private. The workshop will be divided into vignettes that include didactic presentations and real-world case studies with interactive discussions.
Walfrido Zafarana, Product Application Manager, Emerson Network Power
The data centre is mission-critical to many businesses. An efficient chilled water system guaranteeing continuous cooling availability is fundamental to achieving a low overall data centre PUE, so it is important to clearly understand the different technologies available for your data centre application.
Join the Emerson Network Power Critical Advantage Webcast for all of the answers.
The webcast will provide insight into:
• The advantages of Chilled Water systems
• The different solutions available according to your data centre's internal conditions: air-cooled, free cooling, adiabatic
• How to achieve utmost efficiency at the data centre system level with the iCOM™ control
Patrick Grillo, Senior Director, Security Solutions, Fortinet
The Data Center is not an island. It is part of a complex ecosystem, working and evolving together for the overall benefit of the enterprise.
Data Center security can no longer be treated as an island. It must integrate and interact with the overall enterprise ecosystem and security infrastructure to provide a real-time, effective security posture.
This webinar will present a high level view as to the importance of deploying an integrated, end-to-end enterprise security platform for achieving data center security.
Because sometimes, the best data center security solution has nothing to do with the data center!
Jose Ruiz, VP Engineering Operations, Compass Datacenters
As has often been reported, human error is one of the largest factors in data center outages. Since estimates of the average cost of an outage now exceed $740,000, the ability to reduce or eliminate human-caused outages can make a substantial impact on the organization’s bottom line. In this presentation, Jose Ruiz, VP of Engineering Operations for Compass Datacenters, will present a case study on how the introduction of wearable technology has substantially enhanced one customer’s operational performance.
John Mao, Director of Business Development at Stratoscale
While public cloud adoption has been on the rise, most companies are still bound by strict requirements around security, privacy, and data sovereignty. For those companies, the idea of owning a private cloud is appealing. The reason is simple: private clouds provide benefits similar to public clouds -- such as self-service, automation, orchestration, and "one throat to choke". But how do mainstream IT organizations reap these same benefits? Come join us as we explore how new software-defined paradigms help transform your dream of a private cloud into a reality.
Jabez Tan, Senior Analyst, Data Centres, Structure Research
What are the top data centre colocation trends for 2016? How have past predictions played out so far? Singapore and Hong Kong have stood out as the top two data centre markets in the Asia Pacific region. We take a quantitative deep dive into data centre supply and revenue generation for each market, and how much revenue is being generated from colocation services.
Ted Streck, Director, Data Center Practice, EMC Global Services
Organizations today are taking on data center consolidation, migration, and modernization initiatives as a means to improve efficiency, reduce overall costs, and deliver greater availability to meet business expectations in the era of hybrid cloud. However, these projects are risky and can cause dire consequences if not planned strategically and with best practices.
So how do you accelerate the success of these initiatives while eliminating risk? How do you rapidly determine interdependencies between applications, storage, and servers in order to optimally plan a data center consolidation or modernization?
Attend this session to learn best practices for planning and executing data center initiatives, including how automated tools accelerate interdependency discovery and eliminate nearly 98% of human error.
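At its core, automated interdependency discovery builds a map of which systems talk to which, so that applications can be migrated together with everything they depend on. A minimal sketch, using invented hostnames and connection data:

```python
from collections import defaultdict

# Hypothetical observed connections, as a discovery tool might collect
# them from network flows or agent logs: (source, destination) pairs.
connections = [
    ("web-01", "app-01"),
    ("app-01", "db-01"),
    ("app-01", "nas-01"),
    ("batch-01", "db-01"),
]

def dependencies(connections):
    """Group destinations by source, yielding a dependency map: each
    system's downstream dependencies must move with it (or stay
    reachable) during a consolidation or migration."""
    graph = defaultdict(set)
    for src, dst in connections:
        graph[src].add(dst)
    return graph

graph = dependencies(connections)
assert graph["app-01"] == {"db-01", "nas-01"}
```

Real discovery tools add layers this sketch omits (protocol awareness, traffic volumes, change over time), but the output is the same kind of graph, and it is the graph that replaces error-prone manual inventories.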
Office 365 even gives you the control to log and manage data access workflows directly with datacenter engineers. Security, compliance, and privacy are built into Office 365 to help ensure that your company data is protected. Office 365 meets leading global compliance standards, such as HIPAA, FISMA, and ISO 27001, and it delivers industry-leading best practices in data center design, data loss prevention and advanced threat protection.
Join us to discover why 4 out of 5 Fortune 500 companies have come to trust Microsoft’s Office 365 service.
Best practices for achieving an efficient data center
With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management;
- Data center design, facilities management and convergence;
- Cooling technologies and thermal management;
- And much more.
To survive in the age of digital transformation, a clear service orientation is essential. The focus is on the service to be delivered and the value it creates. The so-called “digital natives of the 21st century” such as amazon, Tesla or airbnb use the mechanisms of the digital age not only to open up new markets, but also to lastingly change the rules of the game in traditional markets.
Those who want to keep up can no longer follow the tried-and-tested build-to-order approach. The companies that survive will be those that manage to produce services quickly, agilely and cost-effectively. Hybrid IT landscapes that have grown over many years, however, do not make this task any easier.
In this webcast, learn what lies behind “Service Design Thinking” and how these challenges can be successfully mastered.
SD-WAN can dramatically reduce costs and increase the ability to rapidly bring new services online, connecting users to all types of applications and speeding up time to market. But the idea of re-architecting the WAN can be daunting, and the decision to adopt an SD-WAN solution can be a difficult one.
Join renowned network expert Ethan Banks, Co-Founder of Packet Pushers, and Rolf Muralt, VP of Product Management, SD-WAN at Silver Peak, in a webinar that discusses the SD-WAN market, lessons learned, and which features to look out for as you make your decision. They will discuss issues around technology selection and deployment, including:
· How a zero-touch hybrid SD-WAN can leverage multiple forms of connectivity
· Ways to prioritize and route traffic across different connections
· Quality of Service (QoS), and how to maintain 100% uptime
· Best practices for transitioning with minimal impact on budget and resources
· Real customer examples that demonstrate different deployment stages and benefits
Traditional performance testing typically requires that all components of the application be “completed,” integrated, and deployed into an appropriate environment. As a result, testing is not done until late in the delivery cycle, or is sometimes skipped entirely, which can then lead to a less than optimal user experience, expensive rework, and potential loss of business.
Many organizations are adopting service virtualization to overcome the key challenges associated with performance testing. During this session see why and specifically how service virtualization:
• Enables you to do testing early in the dev cycle by simulating unavailable production systems and missing components
• Helps you control the inputs (like response times and 3rd party system responses) so you can do more negative and exploratory testing
• Provisions performance test environments “in a box” for on-demand testing
• Works with CA APM so that you can monitor an app during a load and performance test and see how the app reacts
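As a rough sketch of the idea (not CA's actual tooling; the names and behavior here are invented), a virtual service can be as simple as a stub that returns a canned response after a configurable delay, standing in for a dependency that is unavailable or too slow to exercise on demand:

```python
import time

class VirtualService:
    """A minimal stand-in for an unavailable dependency: it returns a
    canned response after a configurable delay, so performance and
    negative tests can run before the real system is integrated.
    Commercial service-virtualization tools offer far richer simulation;
    this class exists only to show the shape of the technique."""

    def __init__(self, canned_response, delay_s=0.0):
        self.canned_response = canned_response
        self.delay_s = delay_s          # simulated dependency latency

    def call(self, request):
        time.sleep(self.delay_s)        # stand in for network + processing time
        return self.canned_response


# Simulate a slow third-party credit-check API for an exploratory test:
# the app under test calls the stub instead of the real (absent) system.
slow_credit_api = VirtualService({"status": "approved"}, delay_s=0.2)
start = time.perf_counter()
response = slow_credit_api.call({"customer": "42"})
elapsed = time.perf_counter() - start
assert response["status"] == "approved"
assert elapsed >= 0.19   # small slack for timer granularity
```

Dialing `delay_s` up or down is what makes the controlled negative testing described above possible: the "third party" can be made arbitrarily slow without touching any real system.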
Applications - the lifeblood of modern business - can be in a sorry state of affairs given today's forced alignment with server, OS, and storage boundaries. This can not only cause deployment delays and complexity, but it also results in underutilized hardware and inflated operational costs. There is a drive to embrace new technologies and methodologies in the enterprise, but this presents significant challenges. Limited application-awareness at the infrastructure level makes it nearly impossible to deliver on the promised SLAs and the tight coupling of applications and underlying operating software (OS or hypervisors) compromises application portability as well as developer productivity.
A growing number of enterprises are turning to application containers to support more efficient and effective development and deployment in an application-centric IT paradigm. By abstracting applications from the underlying infrastructure, containers can simplify application deployment, and enable seamless portability across machines and clouds. Containers can also enable significant cost savings by consolidating multiple applications per machine without compromising performance or predictability. Join us to learn more about container adoption in the enterprise and how a container-based server and storage virtualization environment can help take your software-defined datacenter transformation to the next level of an application-defined datacenter.
Featuring speakers from F5, Illumio, Nutanix, Rubrik, and Workspot. Compare and evaluate 4 leading hyperconverged platform-optimized solutions that expand the capabilities of the Nutanix enterprise cloud platform: F5 application delivery, Illumio adaptive security, Rubrik data protection, and Workspot VDI.
• Workspot's cloud-native, infinitely and instantly scalable orchestration architecture (aka VDI 2.0) enables enterprise-class VDI deployment in hours, in which you can use all your existing infrastructure (apps, desktops and data).
• Rubrik eliminates backup pain with automation, instant recovery, unlimited replication, and data archival at infinite scale -- with zero complexity.
• Visualization 2.0 from Illumio shows you a live, interactive map of all of your application traffic across your data centers and clouds, and identifies applications for secure migration to the Nutanix platform.
• F5 delivers your mission critical applications on an enterprise cloud that uniquely delivers the agility, pay-as-you-grow consumption, and operational simplicity of the public cloud without sacrificing the predictability, security, and control of on-premises infrastructure.
The constant barrage of application connectivity and security policy change requests, not to mention the relentless battle against cyber-attacks, has made the traditional approach to managing security untenable. In order to keep your business both agile and secure across today’s highly complex and diverse enterprise networks, you must focus your security management efforts on what matters most: the applications that power your business.
Join Joe DiPietro, SE Director at AlgoSec on Tuesday, July 26 at 11am EDT for a technical webinar, where he will discuss an application-centric, lifecycle approach to security policy management – from automatically discovering application connectivity requirements, through ongoing change management and proactive risk analysis, to secure decommissioning – that will help you improve your security maturity and business agility. During the webinar, Joe will explain how to:
• Understand the security policy management lifecycle and its impact on application availability, security and compliance
• Auto-discover and map business applications and their connectivity flows – and why it’s important
• Securely migrate business application connectivity and security devices to a new data center
• Get a single pane of glass that aligns application connectivity with your security device estate
• Identify risk and vulnerabilities and prioritize them based on business criticality
Measurement is critical to high quality video viewing experiences – especially in an OTT world. OTT video introduced new challenges for measurement system architecture and deployment, which can now be addressed through virtualized tools. But NFV/SDN architectures offer something more – the ability to scale video delivery infrastructure dynamically in response to quality and viewer demands.
Join this webcast to hear how IneoQuest and Intel worked together to develop virtualized versions of iQ’s popular end-to-end video quality monitoring tools. You will also learn how these virtualized offerings can be leveraged to monitor video quality across the distribution infrastructure.
• In the head-end/origin for content quality assurance at the ingest, transcoding, packaging, and publishing points
• At the network/CDN ingest points
• Within the network/CDNs
• Beyond the CDN, across geographically distributed access networks
Mehmet Dağdevirentürk, Trend Micro Channel Manager for the Mediterranean Countries, shares how to protect yourself and your company against ransomware, and the latest developments around it. There is no magic formula for defending against ransomware, but knowing the most common attack methods lets you prioritize the steps you take to achieve the best protection. In this webinar, we at Trend Micro share our experience and offer a roadmap so you can prepare your plan now against the risks you may face.
Many studies have examined the benefits of predictive analytics for customer engagement and changing customer behaviour. The less romanticized side, however, is the benefit to IT operations, as it is sometimes difficult to shift the focus from direct revenue-impacting gains to the more indirect revenue gains that can come from optimization and proactive issue resolution.
I will be speaking, from an application operations engineer's perspective, on the benefits to the business of using predictive analytics to optimize applications.