DCIM: Managing the Facilities and IT of the Data Center - Panel Session
Data Center Infrastructure Management (DCIM) has been discussed in data center conferences and media. It is a set of tools and methods to make a data center as a whole perform optimally. Simply put, DCIM deals with mechanical and electrical systems of facilities, and power and environmental information of IT equipment. No standards have been defined yet but several functional areas have been mentioned, such as Inventory, Change, Capacity, Simulation, Monitoring and Efficiency Modeling. Many vendors have emerged with various solutions that focus on one or a few areas but not on a holistic scale. So the integration of multiple tools will be necessary to satisfy the overall needs and some vendors are working together to integrate their solutions.
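As an illustration of the Monitoring and Capacity functional areas mentioned above, the following minimal Python sketch shows how facility-side power data and IT-side environmental data might be combined into one record. The field names and numbers are hypothetical, not drawn from any particular DCIM product:

```python
from dataclasses import dataclass

@dataclass
class RackReading:
    """One monitoring sample for a rack: facility-side power plus IT-side environment."""
    rack_id: str
    power_kw: float       # measured draw at the rack PDU (facilities side)
    inlet_temp_c: float   # server inlet air temperature (IT side)
    capacity_kw: float    # provisioned power budget for the rack

    def headroom_kw(self) -> float:
        """Capacity-management view: remaining power budget for this rack."""
        return self.capacity_kw - self.power_kw

# Example: a rack drawing 6.2 kW against an 8 kW budget
r = RackReading("rack-a01", power_kw=6.2, inlet_temp_c=24.5, capacity_kw=8.0)
print(round(r.headroom_kw(), 1))  # 1.8
```

A real DCIM tool would collect such samples continuously and feed them into the change, simulation and efficiency-modeling functions discussed in the session.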
But equally important is how to manage IT equipment at a higher level, such as server health, and application and service status. This is the market segment known as system management. Although DCIM and system management have been developed independently of each other, it is increasingly necessary to integrate the two to manage a data center even more efficiently. The information obtained through system management functions will become crucial to controlling the infrastructure side of facilities and even IT equipment.
In this panel session, we will review DCIM solutions and how they can be integrated with system management for better design and operation of data centers. We will also discuss the impact of such integration on management and organizational structure. At the same time, we need to recognize that the ultimate goal of having a data center is to satisfy the business goals of an enterprise through IT infrastructure, including server health and application status. That is why it is of utmost importance to integrate both IT and Facilities under common management to plan, design, monitor and control. Join this panel as these thought leaders discuss how you can use DCIM to best manage the facilities and IT within your data center.
Recorded Aug 16, 2012 | 57 mins
As power density rapidly increases in today’s data center, provisioning the right amount of power to the rack without undersizing or overprovisioning the power chain has become a real design challenge.
Managing the current and future power needs of the data center requires capital expenditure on a flexible power infrastructure: one that safely handles peak power demands, balances critical loads and scales easily to meet growing power needs.
In this webinar you will learn:
> How to create long-term power flexibility and improved availability for your operation
> How to increase energy efficiency and improve SLAs through a comprehensive set of best practices
As automation lightens the load on IT operations personnel, and the DevOps cultural shift brings dev and ops together, some ops people are worried about their jobs.
Furthermore, as DevOps takes hold, other roles in the organization are set to transform. DevOps potentially reworks project management methodologies, so what about the project managers? DevOps flattens organizations, so should middle managers be worried?
In this webinar, industry analyst and president of Intellyx Jason Bloomberg will set the stage with some broad observations about the organizational impact DevOps is having across enterprises.
Next, Pauly Comtois, VP of DevOps at Hearst Business Media, will join Jason for a thought-provoking, in-depth discussion of how DevOps impacts individuals’ roles within the organization, with some first-person stories of Hearst Business Media’s DevOps transformation.
You will learn:
- Potential starting points to mapping your own DevOps transformation
- What non-technical groups and roles need to be part of any DevOps initiative – and methods for effective inclusion
- New organizational approaches to structure and collaboration that empower instead of threaten
- How to better manage business changes with DevOps strategies
The cloud is here to stay. However, that also means more and more companies are realizing that they’ve built two separate IT teams and are struggling to deliver quality services without a unified view. A true hybrid monitoring solution is the answer to this common problem.
Join Jay Lyman, Analyst at 451 Research, and Kent Erickson, Alliance Strategist at Zenoss, as they discuss what is shaping hybrid IT, where it's going, and what to do about it.
The pursuit of Data Center Infrastructure Management (DCIM) benefits too often leads to a procurement and implementation process that takes too long, software that costs too much, and tools that under-deliver. Often, organizations fail to define business requirements for DCIM and to evaluate options within a holistic, consistent framework. This webinar provides insight into why those outcomes occur and how to avoid them.
No matter how well designed and built your data center infrastructure may be, it is ultimately on day-to-day operations activities and management decisions that the success of your mission stands or falls. In fact, the leading cause of downtime is human error. To mitigate risks, achieve your business goals, and meet uptime requirements, it is critical to unify operating behaviors with the functionality of your infrastructure. These panelists will discuss how they have achieved M&O excellence across their global portfolios by leveraging the Uptime Institute M&O Stamp of Approval.
Enterprises and Service Providers are driving down cost, increasing automation, supporting cloud applications and delivering high quality of experience over ordinary broadband networks using a Software-Defined Wide Area Network (SD-WAN). But not all SD-WAN is created equal.
Join this webinar to learn more about the differences in SD-WAN architectures and how Cloud-Delivered SD-WAN with x86-based hardware drives down the cost of wide area networking, increases the automation of deployments and reliably supports cloud-based applications.
As organizations move into 2nd and 3rd generations of virtualization with VMware, they often experience high latency and unpredictable performance. The traditional disk-based storage they're depending on for their VMware environment is not keeping up. Flash is transforming datacenter cost, performance and capacity to address these challenges.
Join Chris Tsilipounidakis as he examines the benefits of transforming to an All-Flash virtualized datacenter to support virtualized workloads.
Join Commvault experts to understand the best practice considerations of leveraging public cloud disaster recovery services when protecting your Software-Defined Data Centre (SDDC).
As awareness of the potential benefits of a Software Defined Data Centre has begun to resonate with CxOs, the value and importance of disaster recovery provisioning has been challenged by the notion that clustering and high availability could be sufficient to accommodate recovery needs (i.e. no DR provisioning is required).
In this webinar we'll look at the significant risks and pitfalls that a 'no DR' strategy can pose and learn about the five-step programme to optimise your chances of recovery when working with public cloud providers.
Jeff Kato, Senior Analyst & Consultant, Taneja Group
Join us for a fast-paced and informative 60-minute roundtable as we discuss one of the newest trends in storage: disaggregation of traditional storage functions. A major trend within IT is to leverage server and server-side resources to the maximum extent possible. Hyper-scale architectures have led to the commoditization of servers, and flash technology is now ubiquitous and often most affordable as a server-side component. Underutilized compute resources exist in many datacenters because the growth in CPU power has outpaced other infrastructure elements. One current hot trend, software-defined storage, advocates collocating all storage functions on the server side, but it also relies on local, directly attached storage to create a shared pool of storage. That limits the server’s flexibility in terms of form factor and compute scalability.
Now some vendors are exploring a new, optimally balanced approach. New forms of storage are emerging that first smartly modularize storage functions, and then intelligently host components in different layers of the infrastructure. With the help of a lively panel of experts we will unpack this topic and explore how their innovative approach to intelligently distributing storage functions can bring about better customer business outcomes.
Sushant Rao, Sr. Director of Product Marketing, DataCore Software
Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of happened”. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.
Surveys show that 61% of companies have experienced slow applications after server virtualization with 77% pointing to I/O problems as the culprit.
Now, with hyper-converged infrastructure, companies have another opportunity to fulfill their vision of consolidating and reducing the complexity of their infrastructure. But this will only happen if their applications get the I/O performance they need.
Join us for this webinar where we will show you how to get industry leading I/O response times and the best price/performance so you can reduce and simplify your infrastructure.
As organizations move into the "3rd platform", they’ll need to discover new ways to support solutions like Private Cloud and Real-Time Analytics. The traditional disk-based storage they're depending on is not keeping up. Flash is transforming datacenter infrastructure performance, capacity and cost to address these challenges.
Join Chris Tsilipounidakis from Tegile as he examines the ways of transforming to a Flash datacenter to support various, mixed application workloads in an ever-changing ecosystem.
Mike Matchett, Sr. Analyst & Consultant, Taneja Group
Come join Senior Analyst Mike Matchett's lively discussion about the concerns and challenges coming when the Internet of Things crashes into our enterprise datacenters. We think big data today is big, but future IoT data streams promise to swamp everything from servers to storage. And today's new big data applications will still need to become more real-time, more agile, distributed and even more scalable. What's coming down the road, and how should we start planning for the future today?
This webcast will go for 30 minutes, followed by a 15 minute Q & A session, where the audience is welcome to ask questions.
Nick Serrecchia, Systems Engineer at Veeam and Terry Grulke, Sr. Technical Advisor at Quantum
With the average company experiencing unplanned downtime 13 times a year, the costs associated with continuing to invest in a legacy backup solution can be extensive. For this reason, more customers are switching to Veeam® and Quantum than ever before. Update to a modern data center and achieve Availability for the Always-On Enterprise™ with Veeam coupled with Quantum’s tiered storage, which increases performance, reduces bandwidth requirements and follows best practices for data protection.
The Internet of Things is becoming a reality, and as a result companies of all shapes and sizes are implementing digital transformation initiatives. This digital transformation begins with a modern data center, built on converged infrastructure, that provides a simple and cost-effective process to both deploy and run IT – supporting core business and next-generation applications.
Carrick Carpenter, Director of Delivery, Cloud Computing - Healthcare, and Ozan Talu, Director of Private Cloud Services
Cloud computing is a growing force in healthcare and, while many organizations understand the opportunity that the cloud offers, why and how to get there is widely debated. As providers evaluate the pros and cons of cloud based solutions, several adoption strategies are emerging. Taking the right approach is critical to determining future readiness as healthcare becomes more information-driven and connected, and moves towards collaborative care models and payment reform. This workshop will examine key applications of cloud computing in healthcare (including hosting, security/privacy and medical image archiving), highlight change management strategies from a technical/operational/process perspective, and identify the pros and cons of different cloud models including public vs. private. The workshop will be divided into vignettes that include didactic presentations and real-world case studies with interactive discussions.
Walfrido Zafarana, Product Application Manager, Emerson Network Power
The data centre is mission-critical to many businesses. An efficient chilled water system guaranteeing continuous cooling availability is fundamental to achieving a low overall data centre PUE, so it is important to clearly understand the different technologies available for your data centre application.
Join the Emerson Network Power Critical Advantage Webcast for all of the answers.
The webcast will provide insight into:
• The advantages of Chilled Water systems
• The different solutions available according to your data centre's internal conditions: air-cooled, free cooling, adiabatic
• How to achieve utmost efficiency at the data centre system level with the iCOM™ Control
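For reference, the PUE metric mentioned above (Power Usage Effectiveness) is simply total facility power divided by the power delivered to IT equipment; lower is better and 1.0 is the theoretical ideal. A minimal calculation sketch with illustrative numbers:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT power.
    1.0 is the ideal; efficient cooling lowers the ratio."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1500 kW total facility draw, 1000 kW of it reaching IT equipment
print(pue(1500, 1000))  # 1.5
```

Cooling is typically the largest non-IT contributor to the numerator, which is why chilled water and free cooling choices affect PUE so directly.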
Patrick Grillo, Senior Director, Security Solutions, Fortinet
The Data Center is not an island. It is part of a complex ecosystem, working and evolving together for the overall benefit of the enterprise.
Data Center security can no longer be treated as an island. It must integrate and interact with the overall enterprise ecosystem and security infrastructure to provide a real-time, effective security posture.
This webinar will present a high level view as to the importance of deploying an integrated, end-to-end enterprise security platform for achieving data center security.
Because sometimes, the best data center security solution has nothing to do with the data center!
Jose Ruiz, VP Engineering Operations, Compass Datacenters
As has often been reported, human error is one of the largest factors in data center outages. Since estimates of the average cost of an outage now exceed $740,000, the ability to reduce or eliminate human-caused outages can make a substantial impact on the organization’s bottom line. In this presentation, Jose Ruiz, VP of Engineering Operations for Compass Datacenters, will present a case study on how the introduction of wearable technology has substantially enhanced one customer’s operational performance.
John Mao, Director of Business Development at Stratoscale
While public cloud adoption has been on the rise, most companies are still bound by strict requirements around security, privacy, and data sovereignty. For those companies, the idea of owning a private cloud is appealing. The reason is simple: private clouds provide benefits similar to public clouds, such as self-service, automation, orchestration, and "one throat to choke". But how do mainstream IT organizations reap these same benefits? Come join us as we explore how new software-defined paradigms help transform your dreams of a private cloud into reality.
Jabez Tan, Senior Analyst, Data Centres, Structure Research
What are the top data centre colocation trends for 2016? How have past predictions played out so far? Singapore and Hong Kong have stood out as the top two data centre markets in the Asia Pacific region. We take a quantitative deep dive into data centre supply and revenue generation for each market, and how much revenue is being generated from colocation services.
Ted Streck, Director, Data Center Practice, EMC Global Services
Organizations today are taking on data center consolidation, migration, and modernization initiatives as a means to improve efficiency, reduce overall costs, and deliver greater availability to meet business expectations in the era of hybrid cloud. However, these projects are risky and can have dire consequences if not planned strategically and with best practices.
So how do you accelerate the success of these initiatives while eliminating risk? How do you rapidly determine interdependencies between applications, storage, and servers in order to optimally plan a data center consolidation or modernization?
Attend this session to learn best practices for planning and executing data center initiatives, including how automated tools accelerate interdependency discovery and eliminate nearly 98% of human error.
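To illustrate the kind of interdependency analysis described above (without reference to any specific vendor tool), this hedged Python sketch groups discovered dependency links into connected components, so that applications, servers, and storage that depend on one another can be planned into the same migration wave. Asset names are hypothetical:

```python
from collections import defaultdict

def move_groups(dependencies):
    """Group assets into connected components: everything linked by a
    dependency edge should be migrated in the same wave.
    `dependencies` is a list of (asset_a, asset_b) pairs, e.g. links
    discovered from network flows between apps, servers, and storage."""
    graph = defaultdict(set)
    for a, b in dependencies:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:            # depth-first walk of one component
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups

# Example: four discovered links yield two independent migration waves
deps = [("app1", "db1"), ("db1", "san-vol7"), ("app2", "db2"), ("db2", "san-vol9")]
print(sorted(len(g) for g in move_groups(deps)))  # [3, 3]
```

Real discovery tools build these edges automatically from traffic and configuration data; the grouping logic above is only a conceptual sketch of why that discovery step enables safer consolidation planning.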
Office 365 gives you the control to log and manage data access workflows directly with datacenter engineers. Security, compliance, and privacy are built into Office 365 to help ensure that your company data is protected. Office 365 meets leading global compliance standards, such as HIPAA, FISMA, and ISO 27001, and it delivers industry-leading best practices in data center design, data loss prevention and advanced threat protection.
Join us to discover why 4 out of 5 Fortune 500 companies have come to trust Microsoft’s Office 365 service.
Ian Stobirski, Advisory Solutions Consultant, Vision Solutions
Migrating data, applications or full servers is a fact of life if your business is to take advantage of the efficiencies of new platforms and manage escalating storage requirements. Yet many IT professionals put off migration due to concerns about downtime and the risk of failure.
If you are planning a migration, register for this webcast today! You will learn how to mitigate data loss, avoid aborted cutovers and minimise downtime, delays and budget overruns. And you will walk away with tips and strategies that can make your next migration project a success.
Topics for discussion will include:
•Why migrations fail
•Understanding proper planning methodology
•Strategies for success
•Reducing migration downtime to near zero
Eric Slack, Sr. Analyst, Evaluator Group, Alex McDonald, Chair, SNIA Cloud Storage, Glyn Bowden, SNIA Cloud Storage Board
A Software Defined Data Center (SDDC) is a compute facility in which all elements of the infrastructure - networking, storage, CPU and security - are virtualized and removed from proprietary hardware stacks. Deployment, provisioning and configuration, as well as the operation, monitoring and automation of the entire environment, are abstracted from hardware and implemented in software.
The results of this software-defined approach include maximizing agility and minimizing cost, benefits that appeal to IT organizations of all sizes. In fact, understanding SDDC concepts can help IT professionals in any organization better apply these software-defined concepts to storage, networking, compute and other infrastructure decisions.
If you’re interested in Software-Defined Data Centers, how such a thing might be implemented, and why this concept is important to IT professionals who aren’t involved with building data centers, then please join us on March 15th. Eric Slack, Sr. Analyst with Evaluator Group, will explain what “software-defined” really means and why it’s important to all IT organizations, and will join a discussion with Alex McDonald, Chair of SNIA’s Cloud Storage Initiative, about how these concepts apply to the modern data center.
In this webinar we’ll be exploring:
•How an SDDC leverages software-defined concepts to make the private cloud feasible
•How we can apply SDDC concepts to an existing data center
•How to develop your own software-defined data center environment
As organizations run more mission-critical applications within virtual environments, it's often a challenge to continue to meet performance and availability SLAs. Storage is usually the culprit. IT managers must have a keen understanding of the latest advancements in storage technology so they can recommend the best approach moving forward.
In this session, you’ll learn about the latest storage architectures (flash caching, server-side PCIe flash, hybrid, and all-flash) and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can help you meet your performance requirements, drive efficiencies, and deliver high availability for your VMware environment.
Ken Cantrell, Mngr, Performance Engineering, NetApp; Mark Rogov, Advisory Systems Engineer, EMC; David Fair - Chair, SNIA-ESF
The third installment of our performance benchmarking Webcast series, “Storage Performance Benchmarking: Block Components,” aims to help anyone untrained in the storage performance arts ascend to a common base with the experts. In this Webcast, you will gain an understanding of the block components of modern storage arrays and learn storage block-world terminology, including:
•How storage media affects block storage performance
•Integrity and performance trade-offs for data protection: RAID, erasure coding, etc.
•Terminology updates: seek time, rebuild time, garbage collection, queue depth and service time
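As a concrete example of how two of those terms relate, Little's Law ties sustained block throughput to queue depth (outstanding I/Os) and average service time. This is a standard queuing result, not material specific to the Webcast:

```python
def iops_from_littles_law(queue_depth: float, service_time_ms: float) -> float:
    """Little's Law applied to block storage: sustained IOPS equals the
    average number of outstanding I/Os divided by the average time to
    service one (concurrency / latency)."""
    return queue_depth / (service_time_ms / 1000.0)

# Example: 32 outstanding I/Os at 0.5 ms average service time
print(iops_from_littles_law(32, 0.5))  # 64000.0
```

The same relationship explains why deeper queues raise throughput only until service time itself starts to climb.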
After the Webcast, visit our Webcast Q&A blog at http://sniaesfblog.org/?p=521
Best practices for achieving an efficient data center
With today’s pressure to lower carbon footprints and cost constraints within organizations, IT departments are increasingly on the front line, formulating and enacting IT strategies that greatly improve the energy efficiency and overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management;
- Data center design, facilities management and convergence;
- Cooling technologies and thermal management;
And much more.