Optimizing Your Cisco Environment: Driving Innovation in IT’s Era of the New Normal
To increase agility, drive workforce productivity, and better support customers, the business is demanding the delivery of innovative new applications, including collaboration, video, VDI, and everything cloud. All this, however, must be achieved with flat budgets and without introducing additional risk. This is the “New Normal” for IT.
To reduce data center costs and speed service delivery, organizations are adopting converged infrastructure platforms like Cisco Unified Computing System (UCS), while readying the network with the capabilities and capacity needed to deliver dynamic services. Doing this effectively requires not only performance visibility to maintain uptime, but also better insight into where new technology infrastructure will be most effective for the business.
On June 6th, together with Enterprise Management Associates, we’ll discuss the transformative trends and technologies shaping IT and business today. We’ll also show how CA Technologies and Cisco are working together to help our customers accelerate innovation by cost-effectively ensuring quality of service and an optimal customer experience – especially for new applications hosted in the unified data center and delivered across a truly borderless network. During this session you’ll gain valuable insights into how organizations can:
• Leverage Cisco UCS platforms to better support the delivery of new virtualized cloud services the business is demanding
• Maintain and exceed service levels, manage performance and ensure Cisco UCS compute power is there when the business needs it
• Proactively manage the performance of tier 1 applications, virtualized services and infrastructure to increase ROI and accelerate time-to-value of Cisco investments
• Deliver real-time visibility and historical analysis into bandwidth consumption and application response times to drive business performance improvements
• Provide extensible, value-added services including voice and video quality-of-experience management
Recorded Jun 6 2012 · 59 mins
Virtualization workloads generate many requirements and challenges for IT departments, including high performance, low latency, high availability, and the ability to quickly move and reconfigure workloads based on changing demands. This presentation focuses on best practices for employing a wide array of storage features in the Windows Server platform, ranging from the SMB 3.x protocol to data deduplication, clustering, Hyper-V Replica, and many more related features. The presentation will begin with suggestions for determining requirements for different kinds of virtual disks and different business workloads. Based on these requirements, we'll drill down into practical advice on how, when, and why these features can help increase service delivery and reduce costs for virtualized environments of all sizes.
Join Anil Desai, independent consultant and author of over 20 technical books on the Windows Server platform, virtualization, databases, and IT management best practices. He has over 20 years of experience architecting, implementing, and managing IT software and data center solutions, and has worked extensively with IT management, development, and database technology. Anil holds many technical certifications and is a twelve-time Microsoft MVP Award recipient (currently Cloud/Datacenter Management).
Different workloads demand different attributes from their storage. These differences lead some to believe flash storage is only good for certain point use cases, like accelerating databases. But the performance of flash systems leads others to claim that a single flash system can support all workloads. The truth, as usual, is somewhere in the middle. Join Storage Switzerland and IBM for this live interactive webinar where we bust another flash myth and help you select the right flash for the right workload for the right reasons.
Most organizations investing in NetApp Filers count on the system to store user data and host virtual machine datastores for an environment like VMware. In addition, these organizations want their NetApp systems to do more and become the repository for the next wave of unstructured data: data generated by machines. NetApp systems are bursting at the seams, so these organizations are trying to decide what to do next.
To help you find out what to do next, join Storage Switzerland and Caringo for our live webinar and learn:
1. The modern unstructured data use cases
2. The challenges NetApp faces in addressing its customers’ issues
3. Whether other solutions, such as all-flash or object storage, can solve these challenges
4. How to make the move and migrate from NetApp to other systems
5. How to repurpose, instead of replacing, your NetApp
Discover the complexities of licensing database technologies such as Oracle, SQL Server and PostgreSQL on VMware, with particular emphasis on modern converged and hyper-converged platforms. It's vital to ensure your virtual machines stay compliant with your database vendor’s license requirements. Join us to learn about the business and financial risks involved if you don't have a solid plan in place for compliance, as well as explore strategies for controlling and/or reducing costs and limiting organizational risk.
Dustin Laun, Sr. Advisor of Innovation at FCC, Jordan Braunstein, Principal at Visual Integrator
Learn from MuleSoft experts on why legacy modernization is a vital step toward pursuing digital transformation, and how the Anypoint Platform can support legacy modernization in the federal government.
Dr. Jim Metzler, Co-Founder & Principal Analyst at Ashton, Metzler & Associates; Todd Krautkremer, SVP, Cradlepoint
In less than a decade, virtualization and cloud technologies have transformed enterprise computing from top (applications) to bottom (storage).
During this time, computing has evolved from a tiered and operationally isolated architecture of front-end, application, database and storage servers — each tier having its own teams, processes, tools, and manual functions — to a tightly integrated, fully automated “stack” that’s orchestrated with a common set of people and tools.
Because of this unification and automation of the computing stack, enterprises can now instantly deploy workloads across private and public clouds with ease. While this evolution has been occurring on the computing side of IT, networks have remained largely unchanged. Software-Defined Networking (SDN) and Network Function Virtualization (NFV) technologies are about to change that.
Join Cradlepoint and analyst Dr. Jim Metzler in a discussion about SDN and NFV and how they are creating the new networking stack.
Poor application performance and crashes cost businesses millions of dollars globally. Yet recent surveys show that only 26% of application teams proactively examine user experience metrics in production; 72% of app teams first learn of UX issues through user complaints.
Today’s impatient, intolerant user is quick to abandon slow, crashing, error-prone apps. So it is up to application teams to quickly isolate issues, understand what went wrong, and fix it fast.
Join us for this webinar and learn how HPE AppPulse Trace cuts through the complexity of isolating transaction performance issues. During this live webcast, user experience experts will demonstrate how to correlate performance issues from the user action to service code execution and diagnose issues down to the line of code and log messages.
During this Webinar, you will learn how to:
Quickly drill down to server-side transactions for rapid investigation of performance bottlenecks
Trace transactions from the browser or mobile app all the way to the backend
Trace all aspects of transaction execution including end-to-end flow, code timing, contextual logs, exceptions and database queries
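To make the idea of tracing a transaction's timing down to individual code paths concrete, here is a minimal, hypothetical sketch in Python. It is not the HPE AppPulse Trace API, just an illustration of the general technique: each traced function records its name and elapsed time, so the slowest segment of a transaction can be isolated.

```python
import time
from functools import wraps

# Illustrative trace buffer; a real APM agent would ship these records
# to a backend with contextual logs and exception details.
TRACE_LOG = []

def traced(func):
    """Record the wall-clock time spent in each call of func."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            TRACE_LOG.append((func.__name__, elapsed_ms))
    return wrapper

@traced
def query_database(order_id):
    time.sleep(0.02)  # simulate a 20 ms database round trip
    return {"order": order_id}

@traced
def handle_request(order_id):
    # The backend step of a user transaction; its time includes the query.
    return query_database(order_id)

handle_request(42)
# TRACE_LOG now holds per-step timings (innermost call first), so the
# slowest segment of the end-to-end transaction can be identified.
for name, ms in TRACE_LOG:
    print(f"{name}: {ms:.1f} ms")
```

Because the inner call finishes first, the database step appears before the request handler, and comparing the two timings shows how much of the transaction was spent in the backend query.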
As traditional enterprises experiment with emerging Docker use cases, the need to connect containers to persistent data, or even just to share data between Docker containers, becomes more obvious. However, this expansion has proven to be a challenge for infrastructure that relies on monolithic storage arrays. As a result, enterprises are looking to software-defined storage as a more attractive choice because of its flexibility, programmability, and ability to provide well-managed, persistent storage. In this talk, you’ll learn:
• What Docker is and is not, as well as container-specific storage challenges
• Why you need container-specific storage policies, SLAs, and classes of service (CoS)
• How to leverage software-defined storage solutions for Docker and containers
• When to use hyperconverged or hyperscale architectures for your Docker initiatives
Computing models in today’s dynamic data centers and clouds are changing dramatically. Application-centric enterprises are finding that they need to develop nimble operational models for infrastructure, networking, and application services. Application delivery controllers (ADCs) are an important part of the networking and application services considerations for software-defined data centers.
New software-defined load balancers are significantly improving the way that application services are delivered and scaled, while freeing IT from repetitive tasks through intelligent automation. Application developers and lines of business are benefiting from better APIs that align with their goals for continuous integration and delivery (CI/CD) and cloud-native applications.
In this webinar, you’ll learn:
– How to eliminate the overprovisioning and overspending typical of traditional hardware-based load balancing solutions
– How to scale not just load balancers but also applications, elastically and predictively, based on real-time traffic patterns
– How to take advantage of x86 servers, VMs, or containers to deliver application services close to individual applications
– The best ways to support multi-cloud deployments and cloud-native applications
– How to troubleshoot applications in minutes by recording and replaying traffic events, security data, and client data
– How to accelerate application services for SDN environments such as Cisco ACI, private clouds such as OpenStack, and container-based microservices applications
The move to agile infrastructure and operations is already happening, and it is now reaching critical networking components in the stack such as ADCs.
Join us for our live webinar on May 11th at 1 p.m. ET / 10 a.m. PT, when Storage Switzerland and Tegile Systems will discuss how the acquisition and operating costs of flash make it feasible to build a private cloud that is both responsive to the needs of the business and cost effective.
Storage architectures have evolved to meet ever-changing business demands. Today’s enterprises need the flexibility to place workloads where they make the most sense and achieve objectives for resiliency and growth without introducing unnecessary complexity and cost.
If you have storage initiatives focused on virtualization, backup, archiving, or cloud, attend this session to learn new strategies for building a more agile, multi-site storage infrastructure at lower cost.
As organizations run more mission-critical applications within virtual environments, it's often a challenge to continue meeting performance and availability SLAs, and storage is often the culprit. IT managers must have a keen understanding of the latest advancements in storage technology so they can recommend the best approach moving forward.
In this session, you’ll learn about the latest storage architectures (flash caching, server-side PCIe flash, hybrid, and all-flash) and the pros and cons for each. We’ll also discuss how a well-designed infrastructure can help you meet your performance requirements, drive efficiencies, and deliver high availability for your VMware environment.
In this webinar, Storage Switzerland, Hitachi Data Systems, and Brocade discuss why enterprises need to invest in big data analytics, how they can make that investment, and some of the key requirements in designing such a system.
In the latest version of HPE NNMi, see how you can seamlessly monitor your physical and virtual network infrastructure end to end. NNMi discovers and visualizes connectivity for virtual network appliances hosted in your VMware environment, and provides root cause analysis and troubleshooting for outages that occur at the virtual network edge. See these features in action, as well as the lineup of new capabilities in NNMi 10.10:
See new mapping and visualizations to explore the virtual network edge
Learn how NNMi tracks workloads as they migrate within the data center
Learn how events, incidents, and root cause analysis capabilities extend into the virtual network edge
Powerful technologies from the core to the edge are enabling new insights and transforming value creation. But these opportunities create new risks and urgently demand innovative approaches to securing our most precious information. Learn how a new architecture of cloud security expertise, endpoints, and apps will enable high-confidence computing and deliver security and privacy anywhere, anytime.
Driven by the need for business agility, the trends of infrastructure convergence and hybrid cloud are proliferating fast.
However, backup is often overlooked when new converged platforms and cloud services are inserted into existing environments. These new silos break down existing backup architectures and complicate management, slowing down your data centre modernisation projects and putting your business at risk.
Join us on Wednesday, 20th January and learn how you can implement new data centre modernisation projects efficiently, identify and remove data protection blind spots and ease the transition of your applications to the cloud.
In this interactive webinar we will discuss the storage challenges that VMware creates for IT professionals and how flash-based storage systems can not only solve storage performance challenges but also simplify storage management and increase storage efficiency. We will also speak with Fritz Gielow, IT Administrator for the County of Nevada, about how flash has solved the county's VMware storage problems.
Russ Fellows, Senior Partner & Analyst, Evaluator Group
Deploying Solid-State for Virtualized Environments: Use Cases for All Flash, Hybrid and Alternative Storage Implementations
This session dives into common use cases for all-flash and hybrid storage systems in virtualized environments at mid-size and large enterprises. Russ will focus on actual deployments, giving listeners practical guidance on building a solid business case for solid-state, based on findings from enterprise firms and on hands-on performance testing of multiple systems in head-to-head comparisons.
Review the options and architectures that are best suited for server and desktop virtualization
Understand when and where to deploy solid-state or hybrid storage to maximize your IT budget and ROI
Virtualization is no longer a passing trend. Organizations deploying it across their servers, desktops, storage, and networks are seeing increased performance and decreased costs when it is implemented and managed properly. Join this channel to hear leading experts discuss this maturing technology and how you can create your own software-defined data center.
If you are a security integrator then this webinar is for you!
Challenges created by more cameras, higher resolutions, and increasingly complex analytics are creating an influx of data, and managing this infrastructure takes an intelligent, scalable storage platform.
Join us on Tuesday, August 30 at 9:00 AM PST for The Cost Shift Model for Video Storage and Data Management to learn how Quantum is taking a different approach to revolutionize storage solutions within the surveillance and security industry.
Attend this webinar and learn:
- How a multi-tier storage approach is shifting the budget spent in the surveillance market
- How to extend your customers’ surveillance budget
- How you can offer a scalable storage solution without compromising video quality, retention time, or camera streams
Quantum can help you design and implement a scalable storage foundation that will enable you to differentiate your offerings in the market.
Register for this webinar today!
Experts from 6WIND & Radware prove that it is possible to attain and sustain virtualized performance above and beyond industry expectations for NFV using an OpenStack environment. By combining 6WIND Virtual Accelerator™ and Radware Alteon® NG VA, network operators can migrate to high performance vADCs while eliminating PCI passthrough and SR-IOV. The end result is a cost effective move to an NFV architecture without compromising the performance or impacting the virtualization environment that their customers expect.
Is automation anxiety a 'thing'? Or is it just a buzzword?
EM360° talks to Parker Software's Technical Manager, Daniel Horton, to find out what this term is all about and whether there really is anything to panic about.
In the cloud computing era, data growth is exponential. Every day, billions of photos are shared and large amounts of new data are created in multiple formats. Within this cloud of data, the portion with real monetary value is small. To extract the valuable data, big data analytics frameworks like Spark are used; these can run on top of a variety of file systems and databases. To accelerate Spark by 10–1000x, customers are creating solutions like log file accelerators, storage layer accelerators, MLlib (one of the Spark libraries) accelerators, SQL accelerators, and more.
FPGAs (Field Programmable Gate Arrays) are an ideal fit for these types of accelerators, where the workloads are constantly changing. For example, they can accelerate different algorithms on different data based on end users and the time of day, while keeping the same hardware.
This webinar will describe the role of FPGAs in Spark accelerators and present Spark accelerator use cases.
Join this webinar to be certain of making the right decisions on moving resources to the cloud. You’ll see how to evaluate which workloads are candidates for cloud migration, plus measure how efficiently you’re utilizing your own resources.
The CloudPhysics Cost Calculator for Private Cloud lets you apply basic costing models to determine your actual costs per virtual machine (VM) in terms of power, compute resources, memory, storage, licensing, and more to generate a cost baseline.
Now you can apply CloudPhysics rightsizing intelligence to your VMs. See your “as is” costs beside your rightsized costs at peak, 99th percentile, and 95th percentile. Capture savings by reducing workloads to match actual demands and reduce overprovisioning.
When mapping your VMs to their public cloud instances, apply the same peak, 99th percentile, and 95th percentile data to reveal cost difference for private versus public cloud.
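The costing and rightsizing logic described above can be sketched in a few lines of Python. Everything in this sketch is an illustrative assumption, not the CloudPhysics tool: the per-resource prices, the allocated and observed usage figures, and the choice of a nearest-rank percentile.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples (assumed method)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical monthly unit prices ($) for the private cloud.
PRICE = {"vcpu": 25.0, "gib_ram": 5.0, "gib_disk": 0.10}

def monthly_cost(vcpus, ram_gib, disk_gib):
    """Basic per-VM cost baseline from resource allocations."""
    return (vcpus * PRICE["vcpu"]
            + ram_gib * PRICE["gib_ram"]
            + disk_gib * PRICE["gib_disk"])

# A VM allocated 8 vCPU / 32 GiB RAM / 200 GiB disk, whose observed
# CPU demand (in vCPUs) rarely exceeds 2 except for one spike.
cpu_demand = [1.0, 1.2, 0.8, 1.5, 2.0, 1.1, 0.9, 1.3, 1.7, 4.0]

as_is = monthly_cost(8, 32, 200)
# Rightsize CPU to the 95th-percentile demand; assume RAM halves to 16 GiB.
rightsized_vcpus = math.ceil(percentile(cpu_demand, 95))
rightsized = monthly_cost(rightsized_vcpus, 16, 200)

print(f"as-is: ${as_is:.2f}/mo, rightsized (p95): ${rightsized:.2f}/mo")
```

The same per-VM figures, computed at peak, p99, and p95, can then be compared against a public cloud provider's instance pricing to reveal the private-versus-public cost difference.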
Attend this webinar to be sure you’ve optimized decision-making before you move.
Cybersecurity has jumped to the top of companies’ risk agendas after a number of high-profile data breaches and other hacks. In an increasingly digitized world, where data resides in the cloud, on mobile devices, and on the multitude of connected devices enabled by the Internet of Things, the threat vectors are multiplying, threatening firms’ operations and future financial stability.
Organizations with the ability to view cybersecurity breaches as a risk, with associated probabilities and impacts, can strike the right balance between resilience and protection. By bringing together leadership and capabilities across fraud, IT, cybersecurity and operational risk, organizations can connect the dots and manage their GRC program more effectively. Organizations need to employ a proactive approach to review their existing risk management processes, roles and responsibilities with respect to cybersecurity to re-align them into an overall ERM strategy with boardroom backing.
Attend this panel webinar, as we discuss these issues and address ways to develop an evolving GRC program to cope with the growing threat landscape.
The database is the quintessential data dependency for any application. Databases in production environments tend to be performance sensitive and expect consistent and predictable performance from their underlying infrastructure. On the other hand, databases in dev/test environments need to be fast, agile and portable.
Because of this tension, production databases are typically deployed on bare metal servers for maximum performance and predictability, which often leads to underutilized hardware, idle capacity, and poor isolation. Dev/test databases, by contrast, are deployed on VMs, which are fast to deploy, improve hardware utilization and consolidation, are fully isolated, and are easy to move across data centers and clouds, but which suffer from poor performance, hypervisor overhead, and unpredictability.
In this session, we will discuss:
- How NoSQL databases like Cassandra can benefit from container technology
- If the current storage systems can support containerized databases
- How to alleviate data management challenges for large databases
- How the Robin Containerization Platform can deliver bare-metal-like performance, while retaining all virtualization benefits
Join us for the third and final webcast in our series on micro-segmentation, how it protects networks, and how it works with perimeter firewalls. We’ll also discuss its advantages beyond protection in automating security workflows and more.
In this webcast series, we’ve explored the security benefits of micro-segmentation with NSX, notably how it protects data centers inside the perimeter firewall. But did you know that with micro-segmentation, IT can also automate security workflows such as provisioning, moves/adds/changes, threat response, and security policy management? Join us as we discuss:
• How to improve accuracy and gain better overall security in the data center
• Security policy approaches with network virtualization
• How to automate security workflows to gain greater agility
Build a fundamentally more agile, efficient and secure application environment with VMware NSX network virtualization on powerful industry standard infrastructure featuring Intel® Xeon® processors and Intel® Ethernet 10Gb/40Gb Converged Network Adapters.
Data communication speeds are constantly increasing to keep up with the demand for bandwidth. Ethernet speeds of 100 Gb/s are being deployed, and 400 Gb/s or more are being considered. As speeds increase, the reach of multimode fiber gets shorter. One way to mitigate the shrinking distance is to use the highest-bandwidth fiber. But what if we told you that transceivers can help mitigate it as well?
Topics to be discussed include:
- Characteristics of cable, connectivity, and transceivers and how they can maximize network reach and flexibility
- Current trends in Ethernet and Fibre Channel, and what is coming in the near future
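As a rough illustration of why multimode reach shrinks as line rates rise, a first-order model treats the modal-dispersion-limited reach as the fiber's effective modal bandwidth (EMB) divided by the required signal bandwidth. The EMB figures below are typical 850 nm values for OM3/OM4, and the bandwidth factor is an assumption; real reach specifications also depend on transceiver design, line coding, and attenuation, so treat the outputs as illustrative only.

```python
# Typical effective modal bandwidth at 850 nm, in MHz*km.
EMB_MHZ_KM = {"OM3": 2000, "OM4": 4700}

def approx_reach_m(fiber, line_rate_gbps, bw_factor=0.75):
    """First-order reach estimate in meters: EMB / signal bandwidth.

    bw_factor is an assumed ratio of required signal bandwidth to
    line rate (NRZ-style signaling); it is a modeling simplification.
    """
    signal_bw_mhz = line_rate_gbps * 1000 * bw_factor
    return EMB_MHZ_KM[fiber] / signal_bw_mhz * 1000  # km -> m

for fiber in ("OM3", "OM4"):
    for rate in (10, 25):
        print(f"{fiber} @ {rate} Gb/s: ~{approx_reach_m(fiber, rate):.0f} m")
```

The model captures the two levers discussed above: doubling the line rate roughly halves the reach on the same fiber, while moving to a higher-EMB fiber (OM4 versus OM3) extends it proportionally, which is why fiber grade and transceiver characteristics both matter.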
Telco Cloud represents an enormous opportunity for communications service providers to transform their business practices. By bringing together the best of telco and cloud tools and technologies, communications service providers can deploy network functions anywhere to provide the best user experience without sacrificing service reliability. In this webinar, we will highlight the technical problems and challenges and offer a variety of solutions for addressing performance, availability, security, manageability, and automation as providers consider their options for transforming their networks in a Telco Cloud environment.