Getting More from Less: The Economics of Storage Virtualization
Virtualization in the data center is a stable, proven approach to IT efficiency, from desktops and servers to networks and storage. Regardless of the implementation (host-based, appliance-based, or controller-based), storage virtualization is a core ingredient of economically superior architectures.
Join Justin Augat from Hitachi Data Systems to understand the types of costs that storage virtualization can directly address (and reduce), the qualitative benefits of virtualization, and how to convert those benefits into cost savings. He will also discuss quantitative methods to measure and predict the cost savings that virtualization delivers with respect to data migration, space reclamation, storage consolidation and storage management.
Recorded Jun 13 2012 · 41 mins
Virtualization workloads generate many requirements and challenges for IT departments, including high performance, low latency, high availability and the ability to quickly move and reconfigure workloads based on changing demands. This presentation focuses on best practices for employing a wide array of storage features in the Windows Server platform, ranging from the SMB 3.x protocol to data deduplication, clustering, Hyper-V Replica, and many more. The presentation will begin with suggestions for determining requirements for different kinds of virtual disks and different business workloads. Based on these requirements, we'll drill down into practical advice on how, when, and why these features can help increase service delivery and reduce costs for virtualized environments of all sizes.
Join Anil Desai, independent consultant and author of over 20 technical books on the Windows Server platform, virtualization, databases and IT management best practices. He has over 20 years of experience in architecting, implementing, and managing IT software and datacenter solutions. He has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a twelve-time Microsoft MVP Award recipient (currently Cloud/Datacenter Management).
Different workloads demand different attributes from their storage. These differences lead some to believe flash storage is only good for certain point use cases, like accelerating databases. But the performance of flash systems leads others to claim that a single flash system can support all workloads. The truth, as usual, is somewhere in the middle. Join Storage Switzerland and IBM for this live interactive webinar where we bust another flash myth and help you select the right flash for the right workload for the right reasons.
Most organizations making an investment in NetApp Filers count on the system to store user data and host virtual machine datastores from an environment like VMware. In addition, these organizations want their NetApp systems to do more and serve as the repository for the next wave of unstructured data: data generated by machines. NetApp systems are bursting at the seams, so these organizations are trying to decide what to do next.
To help you find out what to do next, join Storage Switzerland and Caringo for our live webinar and learn:
1. The modern unstructured data use cases
2. The challenges NetApp faces in addressing its customers’ issues
3. Whether other solutions, such as all-flash or object storage, can solve these challenges
4. Making the move: how to migrate from NetApp to other systems
5. How to repurpose, rather than replace, your NetApp
Discover the complexities of licensing database technologies such as Oracle, SQL Server and PostgreSQL on VMware, with particular emphasis on modern converged and hyper-converged platforms. It's vital to ensure your virtual machines stay compliant with your database vendor’s license requirements. Join us to learn about the business and financial risks involved if you don't have a solid plan in place for compliance, as well as explore strategies for controlling and/or reducing costs and limiting organizational risk.
Dustin Laun, Sr. Advisor of Innovation at FCC, Jordan Braunstein, Principal at Visual Integrator
Learn from MuleSoft experts why legacy modernization is a vital step toward digital transformation, and how the Anypoint Platform can support legacy modernization in the federal government.
Dr. Jim Metzler, Co-Founder & Principal Analyst at Ashton, Metzler & Associates; Todd Krautkremer, SVP, Cradlepoint
In less than a decade, virtualization and cloud technologies have transformed enterprise computing from top (applications) to bottom (storage).
During this time, computing has evolved from a tiered and operationally isolated architecture of front-end, application, database and storage servers — each tier having its own teams, processes, tools, and manual functions — to a tightly integrated, fully automated “stack” that’s orchestrated with a common set of people and tools.
Because of this unification and automation of the computing stack, enterprises can now instantly deploy workloads across private and public clouds with ease. While this evolution has been occurring on the computing side of IT, networks have remained largely unchanged. Software-Defined Networking (SDN) and Network Function Virtualization (NFV) technologies are about to change that.
Join Cradlepoint and analyst Dr. Jim Metzler in a discussion about SDN and NFV and how they are creating the new networking stack.
Poor application performance and crashes cost businesses millions of dollars globally. Yet recent surveys show that only 26% of application teams proactively examine user experience metrics in production; 72% of app teams first learn of UX issues through user complaints.
Today’s impatient, intolerant users are quick to abandon slow-performing, crash-prone and error-ridden apps. So it is up to application teams to quickly isolate issues, understand what went wrong, and know how to fix it fast.
Join us for this webinar and learn how HPE AppPulse Trace cuts through the complexity of isolating transaction performance issues. During this live webcast, user experience experts will demonstrate how to correlate performance issues from the user action to service code execution and diagnose issues down to the line of code and log messages.
During this Webinar, you will learn how to:
Quickly drill down to server-side transactions for rapid investigation of performance bottlenecks
Trace transactions from the browser or mobile app all the way to the backend
Trace all aspects of transaction execution including end-to-end flow, code timing, contextual logs, exceptions and database queries
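The correlation idea behind the tracing capabilities above can be sketched in a few lines: a single transaction ID, set at the user action, travels with every downstream call so timings and logs from each tier can be stitched back together. The following Python sketch is purely illustrative; the names (`trace_id`, `record_span`) are assumptions and not the AppPulse Trace API.

```python
import contextvars
import time
import uuid

# One transaction ID per user action, visible to every tier it touches.
# (Illustrative names, not a real AppPulse API.)
trace_id = contextvars.ContextVar("trace_id")
spans = []  # collected timing records, all keyed by the shared trace ID

def record_span(tier, func, *args):
    """Time one operation and tag the record with the current trace ID."""
    start = time.perf_counter()
    result = func(*args)
    spans.append({
        "trace": trace_id.get(),
        "tier": tier,
        "op": func.__name__,
        "ms": (time.perf_counter() - start) * 1000,
    })
    return result

def db_query(sql):
    return f"rows for {sql}"

def service_call(user_action):
    # The backend tier reuses the trace ID set at the user action,
    # so its spans correlate with the originating click.
    return record_span("db", db_query, f"SELECT * WHERE action='{user_action}'")

def handle_user_action(action):
    trace_id.set(uuid.uuid4().hex)  # new ID for each user action
    return record_span("service", service_call, action)

handle_user_action("checkout")
# Every span recorded for this transaction shares one trace ID,
# which is what lets a tool join browser, service, and database timings.
assert len({s["trace"] for s in spans}) == 1
```

With the shared ID in place, sorting spans by trace and timestamp reconstructs the end-to-end flow from user action to database query.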
As traditional enterprises look to experiment with emerging Docker use cases, the need to connect containers to persistent data, or even just to share data between Docker containers, becomes more obvious. However, this expansion has proven to be a challenge for infrastructure that relies on monolithic storage arrays. As a result, enterprises are looking to software-defined storage as a more attractive choice because of its flexibility, programmability and ability to provide well-managed, persistent storage. In this talk, you’ll learn:
• What Docker is and is not, as well as container-specific storage challenges
• Why you need container-specific storage policies, SLAs, and classes of service (CoS)
• How to leverage software-defined storage solutions for Docker and containers
• When to use hyperconverged or hyperscale architectures for your Docker initiatives
Computing models in today’s dynamic data centers and clouds are changing dramatically. Application-centric enterprises are finding that they need to develop nimble operational models for infrastructure, networking, and application services. Application delivery controllers (ADCs) are an important part of the networking and application services considerations for software-defined data centers.
New software-defined load balancers are significantly improving the way that application services are delivered and scaled, while freeing IT from repetitive tasks through intelligent automation. Application developers and lines of business are benefiting from better APIs that align with their goals for continuous integration and delivery (CI/CD) and cloud-native applications.
In this webinar, you’ll learn:
– How to eliminate the overprovisioning and overspending typical of traditional hardware-based load balancing solutions
– How to scale not just load balancers but also applications, elastically and predictively, based on real-time traffic patterns
– How to take advantage of x86 servers, VMs, or containers to deliver application services close to individual applications
– The best ways to support multi-cloud deployments and cloud-native applications
– Ways to troubleshoot applications in minutes with the ability to record and replay traffic events, security and client data
– How to accelerate application services for SDN environments such as Cisco ACI, private clouds such as OpenStack, and container-based microservices applications
The move to agile infrastructure and operations is already happening, and it is now reaching critical networking components in the stack such as ADCs.
Join us for our live webinar on May 11th at 1 p.m. ET and 10 a.m. PT, when Storage Switzerland and Tegile Systems will discuss how the acquisition and operating costs of flash make it feasible to build a private cloud that is responsive to the needs of the business and cost effective.
Storage architectures have evolved to meet ever-changing business demands. Today’s enterprises need the flexibility to place workloads where they make the most sense and achieve objectives for resiliency and growth without introducing unnecessary complexity and cost.
If you have storage initiatives focused on virtualization, backup, archiving, or cloud, attend this session to learn new strategies for building a more agile, multi-site storage infrastructure at lower cost.
As organizations run more mission-critical applications within virtual environments, it's often a challenge to continue to meet performance and availability SLAs. Storage is often the culprit. IT managers must have a keen understanding of the latest advancements in storage technology so they can recommend the best approach moving forward.
In this session, you’ll learn about the latest storage architectures (flash caching, server-side PCIe flash, hybrid, and all-flash) and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can help you meet your performance requirements, drive efficiencies, and deliver high availability for your VMware environment.
In this webinar, Storage Switzerland, Hitachi Data Systems and Brocade discuss why enterprises need to invest in big data analytics, how they can make that investment, and some of the key requirements in designing a system.
In the latest version of HPE NNMi, see how you can seamlessly monitor your physical and virtual network infrastructure end to end. NNMi discovers and visualizes connectivity for virtual network appliances hosted in your VMware environment, and provides root cause analysis and troubleshooting for outages that occur at the virtual network edge. See these features in action, as well as the lineup of new capabilities in NNMi 10.10:
See new mapping and visualizations to explore the virtual network edge
Learn how NNMi tracks workloads as they migrate within the data center
Learn how events, incidents, and root cause analysis capabilities extend into the virtual network edge
Powerful technologies from the core to the edge are enabling new insights and transforming value creation. But these opportunities create new risks and urgently demand innovative approaches to securing our most precious information. Learn how a new architecture of cloud security expertise, endpoints and apps will enable high-confidence computing and deliver security and privacy anywhere, anytime.
Driven by the need for business agility, the infrastructure convergence and hybrid cloud trends are proliferating fast.
However, backup is often overlooked when new converged platforms and cloud services are inserted into existing environments. These new silos complicate management by breaking down existing backup architectures, slowing your data centre modernisation projects and putting your business at risk.
Join us on Wednesday, 20th January and learn how you can implement new data centre modernisation projects efficiently, identify and remove data protection blind spots and ease the transition of your applications to the cloud.
In this interactive webinar we will discuss the storage challenges that VMware creates for IT professionals, and how flash-based storage systems can not only solve storage performance challenges but also simplify storage management and increase storage efficiency. We will also speak with Fritz Gielow, IT Administrator with the County of Nevada, about how flash has solved his organization's VMware storage problems.
Russ Fellows, Senior Partner & Analyst, Evaluator Group
Deploying Solid-State for Virtualized Environments: Use Cases for All Flash, Hybrid and Alternative Storage Implementations
This session dives into common use cases for all-flash and hybrid storage systems in virtualized environments at mid-size and large enterprises. Russ will focus on actual deployments, giving listeners practical guidance on building a solid business case for solid-state, based on findings from enterprise firms as well as hands-on performance testing of multiple systems in head-to-head comparisons.
Review the options and architectures that are best suited for server and desktop virtualization
Understand when and where to deploy solid-state or hybrid storage to maximize your IT budget and ROI
Virtualization is no longer a passing trend. Organizations deploying it across their servers, desktops, storage and networks are experiencing increased performance and decreased costs when it is implemented and managed properly. Join this channel to hear leading experts discuss this maturing technology and how you can create your own software-defined data center.
Join us for the third and final webcast in our series on micro-segmentation, how it protects networks, and how it works with perimeter firewalls. We’ll also discuss its advantages beyond protection in automating security workflows and more.
In this webcast series, we’ve explored the security benefits of micro-segmentation with NSX, notably how it protects data centers inside the perimeter firewall. But did you know that with micro-segmentation, IT can also automate security workflows such as provisioning, moves/adds/changes, threat response, and security policy management? Join us as we discuss:
• How to improve accuracy and gain better overall security in the data center
• Security policy approaches with network virtualization
• How to automate security workflows to gain greater agility
Build a fundamentally more agile, efficient and secure application environment with VMware NSX network virtualization on powerful industry-standard infrastructure featuring Intel® Xeon® processors and Intel® Ethernet 10Gb/40Gb Converged Network Adapters.
Data communication speeds are constantly increasing to keep up with the demand for bandwidth. Ethernet speeds of 100 Gb/s are being deployed, and 400 Gb/s or more are being considered. As speeds increase, the reach of multimode fiber gets shorter. One way to mitigate the shrinking distance is to use the highest-bandwidth fiber. What if we told you that transceivers can help mitigate it as well?
Topics to be discussed include:
- Characteristics of cable, connectivity, and transceivers and how they can maximize network reach and flexibility
- Current trends in Ethernet and Fibre Channel and what is coming in the near future
Telco Cloud represents an enormous opportunity for communications service providers to transform their business practices. By bringing together the best of telco and cloud tools and technologies, communications service providers can deploy network functions anywhere to provide the best user experience without sacrificing service reliability. In this webinar, we will highlight the technical problems and challenges and offer a variety of solutions to address performance, availability, security, manageability, and automation as the audience considers options for transforming their networks in a Telco Cloud environment.
The use of broadband Internet connections in an SD-WAN environment has many benefits; however, for any enterprise, performance and reliability cannot be compromised. An SD-WAN solution must include all the functionality needed to meet these essential requirements, delivering outstanding performance and Quality of Service by:
•Actually improving the quality of the bandwidth you already have, instead of routing around it
•Enabling centralized control and administration of network-wide policies
•Providing detailed visibility into real-time and historical application and network trends
•Allowing for the modular deployment of WAN optimization to ensure performance when you need it, where you need it
This all adds up to an enterprise-grade, performance-centric offering that allows your SD-WAN to rapidly connect users to the applications they need. Deployment times are reduced significantly and enterprises enjoy enhanced performance, visibility and control over the entire network.
Within the financial services industry, middle office analytics and simulations continue to grow in volume and complexity. Massive compute and storage demands cause strain on IT resources. While new technologies promise speed and scalability, evaluating this unique middle office environment requires a look at compliance, risk, and pricing analytics to determine potential gains and losses. In this webinar, IDC – Financial Insights Research Director, Bill Fearnley, looks at current middle office IT workflows supporting analytics, backtesting and financial modeling and evaluates a hybrid cloud infrastructure to support growing demands.
In this webinar, you’ll:
· Hear an IDC Analyst’s view on the current financial services IT environment
· Learn of common challenges and approaches to combat growing strain on compute and storage infrastructure
· Join in a discussion about the viability of enabling cloud services to expand compute and storage capacity
· Gain guidance on how large hedge funds and investment banks are overcoming inherent cloud challenges like latency, data accessibility, and cost management
There has been a great deal of interest in graphene; some would call it hype. But with its flexibility and heat-conduction properties, this atom-thin layer of carbon, touted as the strongest material ever measured, has enormous product and market potential for the ICT industry.
Because graphene is conductive at nano-scale layers, it can be used for lightweight, flexible yet durable display screens, electric circuits and solar cells. It is also currently being made into inks and 3D printable materials. Imagine what this can mean for the design of communications devices, or circuitry, or batteries. Imagine the impact on wearables, the design and development of IoT sensors, or large scale retail store windows. Graphene holds a great deal of promise.
Despite graphene's promise, it has taken longer than expected to turn research and development into commercialized products.
This webcast will explore both the tremendous potential harbored in those structured carbon atoms and the business reality. The focus will be on the use of the material for the ICT industry. We will also look at other use cases that may be the first steps on graphene’s path to commercial application.
- Dr. Stephen Hodge, Research Associate at the Cambridge Graphene Centre, Engineering Department, University of Cambridge
- Anthony Schiavo, Research Associate, Advanced Materials Team, Lux Research, Inc.
- Limor Schafman, Director of Content Development, TIA (Moderator)
Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of happened”. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.
Surveys show that 61% of companies have experienced slow applications after server virtualization, with 77% pointing to I/O problems as the culprit.
Now, companies are looking to take the next step toward their vision of consolidating and reducing the complexity of their infrastructure. But this will happen only if their applications get the I/O performance they need.
This is where DataCore’s Parallel I/O technology comes in. By processing I/Os in parallel across multi-core, multi-processor systems, Parallel I/O delivers industry-leading I/O response times as well as price/performance. The net benefit is that fewer storage nodes can provide much better performance, allowing you to reduce and simplify your infrastructure.
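The general principle at work, illustrated here in plain Python rather than DataCore's actual implementation, is that independent I/O requests serviced concurrently overlap their wait time instead of queuing behind one another on a single path:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Toy model of the parallel-I/O idea (not DataCore's implementation):
# each request spends most of its life waiting on the device, so
# dispatching requests across workers lets those waits overlap.

def io_request(block):
    time.sleep(0.05)          # stand-in for a 50 ms device round trip
    return f"data@{block}"

blocks = range(8)

start = time.perf_counter()
serial = [io_request(b) for b in blocks]          # one request at a time
serial_s = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:   # one worker per request
    parallel = list(pool.map(io_request, blocks))
parallel_s = time.perf_counter() - start

assert serial == parallel     # same results either way
assert parallel_s < serial_s  # overlapping waits cuts total latency
```

Eight serialized 50 ms requests take roughly 400 ms end to end, while the parallel dispatch finishes in roughly the time of one request; the same overlap is what lets a storage node with many cores service far more I/O per second than a single serialized path.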
Do you run a mix of virtualized and diverse workloads, including block storage? Are you looking to increase density and maintain blazingly fast speeds? If so, this webinar is for you!
In this webinar, speakers from DataCore and SanDisk will discuss the performance and economic advantages of combining software-defined storage with all-flash storage. We’ll also share two customer stories about how they were able to:
- Achieve effortless and non-disruptive data migration from magnetic to flash storage
- Prevent storage-related downtime
- Dynamically control the movement of data from flash to high-capacity storage
- Strike the right economic balance between fast performance and low cost
Don’t let data growth and complex workloads slow you down. Attend this webinar and learn about new possibilities.