DCIM: Managing the Facilities and IT of the Data Center - Panel Session
Data Center Infrastructure Management (DCIM) has been widely discussed at data center conferences and in the media. It is a set of tools and methods for making a data center perform optimally as a whole. Simply put, DCIM deals with the mechanical and electrical systems of the facility, and with the power and environmental information of the IT equipment. No standards have been defined yet, but several functional areas have been identified, such as Inventory, Change, Capacity, Simulation, Monitoring and Efficiency Modeling. Many vendors have emerged with solutions that focus on one or a few of these areas rather than on the whole, so integrating multiple tools will be necessary to satisfy the overall need, and some vendors are working together to integrate their solutions.
But equally important is managing IT equipment at a higher level, such as server health and application and service status. This is the market segment known as system management. Although DCIM and system management have developed independently of each other, it is increasingly necessary to integrate the two to manage a data center even more efficiently. The information obtained through system management functions will be crucial for controlling the facilities infrastructure and even the IT equipment itself.
In this panel session, we will review DCIM solutions and how they can be integrated with system management for better design and operation of data centers. We will also discuss the impact of such integration on management and organizational structure. At the same time, we need to recognize that the ultimate goal of a data center is to satisfy the business goals of the enterprise through its IT infrastructure. That is why it is of the utmost importance to bring both IT and Facilities under common management for planning, design, monitoring and control. Join this panel as these thought leaders discuss how you can use DCIM to best manage the facilities and IT within your data center.
Migrating your Hadoop cluster between versions or distributions is difficult. It is a critical process that, if done incorrectly, can lead to data loss, system downtime, and disruption of business activities.
In this webinar, learn how you can mitigate migration risk by developing a comprehensive migration strategy and by leveraging tools such as those from WANdisco to simplify and automate your migration.
This webinar takes a top-to-bottom approach to deep-dive training on the Data Plane Development Kit (DPDK). Attendees will learn how the various DPDK components fit together to deliver final vertical applications such as a router, a security appliance, QoS and a load balancer.
Most customers and developers are familiar with the L2fwd and L3fwd sample applications. However, one frequently asked question concerns the availability of not only 1) run-to-completion applications but also 2) pipeline-mode applications or 3) a combination of the two. To clarify, the webinar will classify the key sample applications into these three categories. We will take two key applications, 1) IP Pipeline and 2) Packet Ordering, and drill down into how customers can map their own applications onto these architectural usage models. We will also do a brief code walkthrough of one of the applications to showcase its coding and commenting style.
While earlier releases focused on CapEx improvements, such as thread pinning and the poll-mode driver, the latest releases also include OpEx improvements. The audience will learn about the Release 2.1 and Release 2.2 features from the point of view of both CapEx and OpEx improvement.
DPDK is spelled “P-E-R-F-O-R-M-A-N-C-E”, so no deep-dive presentation is complete without key performance optimization techniques. Attendees will learn the top three optimization techniques they can apply when integrating their applications with DPDK.
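The difference between the run-to-completion and pipeline models mentioned above can be illustrated with a toy sketch. This is conceptual Python, not DPDK code: in a real DPDK application the stages would be built on poll-mode driver calls such as rte_eth_rx_burst()/rte_eth_tx_burst(), with rte_ring queues connecting pipeline cores, and the pipeline stages would run concurrently on separate cores.

```python
from collections import deque

# Hypothetical per-packet stages for illustration only.
def parse(pkt):
    return {"raw": pkt, "dst": pkt % 4}

def route(meta):
    meta["port"] = meta["dst"]
    return meta

def transmit(meta, out):
    out.append((meta["port"], meta["raw"]))

def run_to_completion(packets):
    """Run-to-completion: each packet passes through every stage
    in one loop iteration on a single core."""
    out = []
    for pkt in packets:
        transmit(route(parse(pkt)), out)
    return out

def pipeline(packets):
    """Pipeline mode: each stage is its own loop, handing work to the
    next stage through a queue (in DPDK, cores linked by rte_rings)."""
    q1, q2, q3, out = deque(packets), deque(), deque(), []
    while q1:                      # stage 1: parse
        q2.append(parse(q1.popleft()))
    while q2:                      # stage 2: route
        q3.append(route(q2.popleft()))
    while q3:                      # stage 3: transmit
        transmit(q3.popleft(), out)
    return out

# Both models produce the same forwarding result.
pkts = list(range(8))
assert run_to_completion(pkts) == pipeline(pkts)
```

The trade-off sketched here is the one the webinar classifies: run-to-completion minimises per-packet latency and inter-core traffic, while the pipeline model lets each core specialise in one stage.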
Learn how a combination of innovative data center cabinets and unique deployment methods significantly reduces CapEx and OpEx without sacrificing performance or scalability. You'll discover how a cabinet's shared Zero-U space supports vertical growth, offers overall cost savings on cabinets, PDUs and patching of at least 46%, and reduces stranded power outlets by 75%. You'll also learn how preconfigured data center cabinet solutions can reduce the labor and time required to go live by as much as 50% while easing scalability and optimizing performance.
Charl Joubert from the University of Pretoria in South Africa explains how HP Service Manager with Smart Analytics leverages Big Data to become a game changer for problem management at the service desk, and how impressive gains have been measured on daily tasks.
When adding new 10G infrastructure to your network, it is best practice to validate performance before making it operational. Yet this step is frequently skipped, increasing the likelihood of future problems such as:
• More time spent troubleshooting
• Sub-optimal application delivery
In this webinar, learn how the new OneTouch AT 10G Network Assistant facilitates 10G infrastructure acceptance testing with automated, 1-button standardised testing and reporting.
As if you didn’t know, today’s data center is undergoing a revolution, in large part due to two new advances: software-defined storage and hyper-convergence. Combined, the two advances offer simplicity of deployment and greater flexibility, as well as new operational efficiencies, reducing the need for IT staff specialization and creating time for bigger projects. Learn the hidden secret behind an HP hyper-converged data center, and discover how these two technologies make it possible for you to set up a virtualized server environment that can handle the hyper-growth that today’s business world imposes. And that’s just a small slice of what you’ll learn in this webinar.
COMLINE has updated and improved its datacenter to the next generation, based on HP Cloud Service Automation and HP Operation Orchestration. COMLINE has built a secure and highly automated hybrid cloud architecture to deliver IT services to SMEs throughout Germany from an advanced data centre. Return on investment has been achieved in less than one year along with 80% company productivity gain.
Richard Fichera, Vice President and Principal Analyst Serving Infrastructure & Operations Professionals at Forrester Research
Server virtualization has driven a radically more efficient computing paradigm, enabling IT organizations to deploy business-enabling applications like mobility, social media, the Internet of Things, big data and collaboration at lightning speed. In the midst of this proliferation of applications and explosion of data, storage costs, complexity, and risk have increased at an unprecedented rate – until now.
What if you could have a single storage architecture that enables you to:
• Eliminate the cost and complexity of managing silos of storage
• Facilitate different service levels for different workloads on the fly
• Realize the full benefit of storage consolidation
Do we have your attention?
Join us for a live webinar with guest analyst Richard Fichera of Forrester Research on the future of storage consolidation, and learn how the Nimble Storage Adaptive Flash platform can fuel gains in business responsiveness and agility.
The growth of data has put a strain on data center performance and efficiency. Solid-state devices (SSDs) are playing a significant role in increasing storage speeds and performance – but they are not a simple plug-and-play solution. Join Adam Roberts, Chief Solutions Architect at SanDisk, to learn five tips to consider when looking to improve storage performance and data center efficiency with flash.
Chris Tsilipounidakis, Manager, Product Marketing, Tegile
As IT managers roll out new applications or upgrade existing systems, it's important that they have a keen understanding of the latest advancements in storage technology so they can recommend the best approach.
In this session, you’ll learn about the latest storage architectures (flash caching, server-side PCIe flash, hybrid, and all-flash) and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can drive IT efficiencies and deliver high availability while meeting your performance SLAs.
David Beeler, Senior Product Strategist, Vision Solutions
How long can you afford to be without data?
45% of businesses surveyed said that they had experienced a data loss in 2014. Downtime can come from any direction, in any form, at any time. Astonishingly, three-quarters of IT professionals say that they have never calculated the hourly cost of downtime.
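The hourly cost of downtime that three-quarters of respondents have never calculated is straightforward to estimate. A minimal sketch, using entirely hypothetical figures that you would replace with your own:

```python
def hourly_downtime_cost(annual_revenue, revenue_share_online,
                         employees_idled, loaded_hourly_rate,
                         business_hours_per_year=2000):
    """Rough hourly cost of an outage: lost revenue plus idled labour.
    All inputs are assumptions; refine with your own figures."""
    lost_revenue = (annual_revenue * revenue_share_online) / business_hours_per_year
    idled_labour = employees_idled * loaded_hourly_rate
    return lost_revenue + idled_labour

# Hypothetical example: $50M revenue, 40% dependent on systems being up,
# 120 staff idled at a $60/hour loaded rate.
cost = hourly_downtime_cost(50_000_000, 0.40, 120, 60)
print(f"Estimated cost per hour of downtime: ${cost:,.0f}")  # → $17,200
```

Even a back-of-the-envelope figure like this makes the business case for availability tooling far easier to argue than no figure at all.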
Join David Beeler as he explores how you can reduce the impact of downtime on your business to near zero, whilst making it easier for you to manage your systems.
Jesse St. Laurent, VP of Product Strategy and Brian Knudtson, Technical Marketing Manager
Jesse St. Laurent, SimpliVity's VP of Product Strategy, presents on the data problem and how the OmniCube solves it, enabling customers to achieve a 3x total cost saving along with increased performance. Then, Brian Knudtson walks through an in-depth demo, showing how SimpliVity's technology works and how the user interacts with the interface.
The goal of the OSDDC Incubator is to consider SDDC use cases, architectures and requirements. Based on these inputs, the Incubator has developed a white paper that reviews industry standards for the SDDC. This presentation will cover the current output of this DMTF incubator.
Scott D. Lowe of ActualTech Media and Brian Knudtson of SimpliVity discuss the findings of the 2015 State of the Hyperconverged Infrastructure Market report, and talk about the implications for those considering hyperconverged infrastructure.
With the growing adoption of software-defined technologies as the foundation for enterprises’ data centers, private and hybrid clouds, and overall IT agility, the security infrastructure must be transformed to integrate efficiently with the software-defined ecosystem and become software-defined itself. This presentation will highlight the need for Software-Defined Security (SD-Security) and Fortinet’s framework for delivering optimized security for the software-defined IT.
Laurence James, NetApp Products, Alliances and Solutions Manager
Organizations need their IT teams to move ever more quickly to keep pace with the changing needs of the business. Traditional data centres, with infrastructure silos built around applications, limit responsiveness. Companies struggle with routine IT downtime, spiralling costs, performance challenges, and growing complexity as their operations scale.
Solving this problem requires an IT infrastructure built for agility, one capable of instantly delivering new services, projects, and capacity while keeping costs down. That’s the promise of the software-defined data centre (SDDC). Software-defined storage (SDS) is one of the four SDDC components, alongside software-defined compute, network, and security. Today there are so many different definitions of the term ‘software-defined’ that you can be excused for being confused. In this session, Laurence James unravels some of the myths surrounding the software-defined craze.
Jose Ruiz, Director of Engineering, Compass Datacenters
New tools have dramatically enhanced the ability of data center operators to base important data center decisions about capacity planning and operational performance on actual data and actionable insights derived from that information. By combining modeling technologies to effectively calibrate the data center during the commissioning process and then using these benchmarks in modeling prospective configuration scenarios, data center operators can optimize the efficiency of their facilities prior to the movement or addition of a single rack. In this presentation, Jose Ruiz will share a real world case study that illustrates how predictive analytics can lead to smarter capacity planning and more effective operational decisions.
Chad Hintz, SNIA-ESF Board Member, Technical Solutions Architect, Cisco; David Fair, SNIA-ESF Chair, Intel
Big data and large-scale web services are creating a storage network congestion problem. Join this live Webcast to learn how new architectures can use an innovative congestion control mechanism called CONGA to address congestion. Developed from research done at Stanford, CONGA is a network-based distributed congestion-aware load balancing mechanism. It is being researched for use in next generation data centers to help enhance IP-based storage networks and is becoming available in commercial switches. This Webcast will dive into:
• A definition of CONGA
• How CONGA efficiently handles load balancing and asymmetry without TCP modifications
• CONGA as part of a new data center fabric
• Effects of 40G/100G in these architectures
• The CONGA impact on IP storage networks
Discover the new data center architectures that will support the most demanding applications such as big data analytics and large-scale web services.
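CONGA's core idea, steering traffic onto the least-congested of several equal-cost fabric paths using congestion feedback, can be illustrated with a toy model. This is a conceptual sketch only, not the actual CONGA protocol, which tracks per-destination congestion tables in leaf switches and operates on flowlets in hardware:

```python
class CongestionAwareBalancer:
    """Toy model of congestion-aware load balancing: each path carries
    a congestion metric fed back from the fabric (0.0 = idle,
    1.0 = saturated), and new flowlets go to the least-congested path."""

    def __init__(self, n_paths):
        self.congestion = [0.0] * n_paths

    def feedback(self, path, metric):
        """Update one path's congestion metric from fabric feedback."""
        self.congestion[path] = metric

    def pick_path(self):
        """Steer the next flowlet onto the least-congested path."""
        return min(range(len(self.congestion)),
                   key=lambda p: self.congestion[p])

lb = CongestionAwareBalancer(n_paths=4)
lb.feedback(0, 0.9)   # path 0 is heavily loaded
lb.feedback(1, 0.2)
lb.feedback(2, 0.5)
lb.feedback(3, 0.4)
assert lb.pick_path() == 1  # new flowlets avoid the hot path
```

Contrast this with conventional ECMP, which hashes flows onto paths blindly and so keeps sending traffic into a congested or asymmetric path; the feedback loop above is what lets the fabric route around hotspots without TCP modifications.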
Jason Leiva, Solutions Architect, Veeam & Eric Bassier, Senior Director, Data Center Product, Quantum
Organizations of all sizes face a variety of technical challenges that are forcing them to rethink how they approach storing and protecting their virtual data. Join this webinar to learn how Veeam® and Quantum have teamed up to help organizations of all sizes overcome those obstacles, providing award-winning technology and expertise in backup, recovery, and archive.
Chris Curtis, Senior Vice President, Compass Datacenters
The issues facing prospective data center operators are evolving rapidly. For many years, data center decisions were largely tactical and reactive in nature. With the escalating demand for the immediate delivery of information and content, coupled with the substantial data processing requirements of innovations such as the Internet of Things, data center decisions must now be made under an entirely new paradigm. In this presentation, Chris Curtis will discuss how the changing requirements for data centers necessitate that operators make their decisions within a larger strategic context, to ensure they can address a company’s evolving needs for a decade or longer.
Mike Bainbridge, IT Blogger and Solutions Architect
From ecommerce platform selection to warehouse, order and product management systems, join ecommerce architect specialist Mike Bainbridge as he gives an overview of key software solutions for retailers.
Join Bob Plumridge, SNIA Europe Chairman, as he presents a brief overview of the solid state technologies which are being integrated into Enterprise Storage Systems today, including technologies, benefits, and price/performance.
He will then describe where they fit into typical Enterprise Storage architectures today, with descriptions of specific use cases.
Best practices for achieving an efficient data center
With today’s pressures on lowering our carbon footprint and cost constraints within organizations, IT departments are increasingly in the front line to formulate and enact an IT strategy that greatly improves energy efficiency and the overall performance of data centers.
This channel will cover the strategic issues on ‘going green’ as well as practical tips and techniques for busy IT professionals to manage their data centers. Channel discussion topics will include:
- Data center efficiency, monitoring and infrastructure management;
- Data center design, facilities management and convergence;
- Cooling technologies and thermal management;
- And much more.