The Big IT Picture: Where Does Enterprise Software & Infrastructure Fit?
Senior management and operational managers responsible for results in IT need to see the big picture in order to understand how IT is doing. The challenge is that the big picture is made up of many different elements and is therefore not easy to see.
Simply put, the big IT picture is all the elements you need in order to tell a complete story: what it contains, what it does, how it works, what vulnerabilities it has, how much it costs… the list goes on. The big picture is useful because it’s of greater value than the sum of its parts, in the same way water is something more than just two hydrogen atoms bonded to a single atom of oxygen.
The big picture challenge in IT is daunting because it is based on technology and technology changes quickly. The story that you told six months ago is not the story that is true today or the story that will unfold six months from now. In addition, there’s a subtle shift going on in IT departments. It’s not just about the technology anymore, it’s about the business of technology. In the past, people were thrilled if you just gave them the numbers. Now they want to be able to do something with the numbers. These days, the big picture is in effect business intelligence (the information) plus business analytics (what that information means to you).
Where does enterprise software and infrastructure fit in the big IT picture?
Join this webinar to learn more about:
•Enterprise software and infrastructure’s role in, and up through, the 4-tier IT stack: Services, Capabilities, People, Infrastructure
•Progressing towards best practice in the technical and business IT management structure
•Linking enterprise software and infrastructure to business services and customer satisfaction in the big IT picture
•Aligning technical and business IT operations for improved collaboration
•Succeeding in the cultural shift from IT as provider of software and infrastructure to IT as provider of business services and customer happiness
Recorded: Aug 17, 2016 (52 mins)
Fibre Channel has long been known to be a very secure protocol for storage. Even so, there is no such thing as a “perfectly secure” technology, and for that reason it’s important to constantly update and protect against threats.
The sheer variety of environments in which Fibre Channel fabrics are deployed makes it difficult to rely on physical security alone. In practice, different users can access different storage systems, even across fabrics that span several sites. Fibre Channel provides security services to specifically address these concerns and to prevent misconfiguration or access to data by unauthorized people and machines.
This webcast dives deep into the security aspects of Fibre Channel, looking closely at the protocols used to implement security in a Fibre Channel fabric. In particular, we’re going to look at:
•The definitions of the protocols used to authenticate Fibre Channel devices
•The different classes of threats and the mechanisms to protect against them
•What session keys are and how to set them up
•How Fibre Channel negotiates these parameters to ensure frame-by-frame integrity and confidentiality
•How Fibre Channel establishes and distributes policies across a fabric
Please join us to learn more about the technical considerations that Fibre Channel brings to the table to secure and protect your data and information.
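Fibre Channel device authentication (DH-CHAP) rests on a secret-based challenge-response exchange. The sketch below shows that core idea only, using HMAC-SHA256 for brevity; real DH-CHAP uses its own hash choices and an optional Diffie-Hellman strengthening step, and all function names here are invented for illustration.

```python
# Minimal challenge-response sketch in the spirit of DH-CHAP,
# Fibre Channel's secret-based authentication protocol.
# Illustrative only: real DH-CHAP hashes a transaction ID, the
# challenge, and the secret, optionally augmented with a
# Diffie-Hellman exchange.
import hmac, hashlib, os

def make_challenge() -> bytes:
    """Authenticator sends a random challenge to the device."""
    return os.urandom(16)

def respond(challenge: bytes, shared_secret: bytes) -> bytes:
    """Device proves knowledge of the secret without revealing it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, shared_secret: bytes) -> bool:
    """Authenticator recomputes the expected response and compares."""
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

secret = b"zoning-is-not-authentication"
c = make_challenge()
assert verify(c, respond(c, secret), secret)
assert not verify(c, respond(c, b"wrong-secret"), secret)
```

The shared secret never crosses the wire; only the challenge and the keyed digest do, which is what makes the exchange safe to run over an untrusted link.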
Michelle Tidwell, Program Director, IBM; Tom Clark, Distinguished Engineer, IBM; Matt Levan, Storage Solutions Architect, IBM
As enterprises move to a hybrid multi-cloud world, they face many challenges. Deciding which technologies to use is one, but they are also seeing a transformation in traditional IT roles. Storage admins are asked to become more cloud savvy, while new cloud-admin roles are emerging to handle the complexities of deploying simple and efficient clouds. Meanwhile, both roles are asked to architect a self-service environment so that application developers can get the resources they need to develop cutting-edge apps not in weeks, days or hours, but in minutes.
In part one of this three-part series, we covered the high-level aspects of Kubernetes. This presentation will discuss key capabilities IT vendors are creating based on open source technologies such as Docker and Kubernetes to build self-service infrastructure to support hybrid multi-cloud deployments. We’ll cover:
•Persistent storage and how to specify it
•Ensuring application portability between Private and Public Clouds
•Building a self-service infrastructure (Helm, Operators)
•Selecting Block, File, Object (Traditional Storage, SDS)
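To make the first bullet concrete: in Kubernetes, persistent storage is requested through a PersistentVolumeClaim, which a pod then mounts by name. A minimal sketch of such a claim, written as a Python dict for readability (the storage class name "fast-ssd" is a placeholder, not a real offering):

```python
# A PersistentVolumeClaim (PVC) is how a pod asks Kubernetes for
# persistent storage. Sketch of the manifest as a Python dict; the
# storage class "fast-ssd" is hypothetical -- use whatever your
# cluster or cloud provider actually defines.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],   # single-node read/write
        "storageClassName": "fast-ssd",     # hypothetical class name
        "resources": {"requests": {"storage": "10Gi"}},
    },
}

# A pod references the claim by name rather than naming a device,
# which is part of what keeps the application portable across clouds.
volume = {"name": "data", "persistentVolumeClaim": {"claimName": "app-data"}}

assert pvc["spec"]["resources"]["requests"]["storage"] == "10Gi"
```

Because the pod names a claim rather than a specific disk, the same manifest can run against private or public cloud storage backends, which is the portability point in the second bullet.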
Eden Kim, CEO, Calypso Systems; Jim Fister, SNIA Solid State Storage Initiative
Real-world digital workloads often behave very differently from what might be expected. The equipment used in a computing system may function differently than anticipated. Unknown quirks in complicated software and operations running alongside the workload may be doing more or less than the user initially supposed. To truly understand what is happening, the right approach is to test and monitor the systems’ behaviors as real code is executed. By using measured data, designers, vendors and service personnel can pinpoint the actual limits and bottlenecks that a particular workload is experiencing. Join the SNIA Solid State Storage Special Interest Group to learn how to be a part of the real-world workload revolution.
Ed Mazurek, Cisco; John Rodrigues, Broadcom; J Metz, Cisco
In this back-to-basics Fibre Channel webinar, we’re going to be talking about one of the most fundamental functions of the protocol and what makes it so reliable, predictable and secure: Zoning. The ability to ensure that end devices can communicate only with the set of devices explicitly permitted is part of what makes Fibre Channel so powerful, and grouping those connections into zones gives the fabric built-in security.
In this webinar, you’ll learn:
•What is Zoning
•Why you’d want to Zone
•The Different Types of Zoning
•Zoning Configuration Flow
•Consequences of Zoning
•Zoneset Activation Flow
•Zoning best practices for different types of applications
•Advances in Zoning
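Conceptually, a zone is an allow-list: a named set of ports that are permitted to see one another. A toy model of that idea (this is not how a switch implements or enforces zoning, which happens in hardware and is distributed fabric-wide via the active zoneset):

```python
# A zone is, conceptually, a named set of port WWNs allowed to
# communicate; devices that share no active zone cannot see each
# other. Purely illustrative model with made-up WWNs.
zoneset = {
    "zone_db_backup": {"10:00:00:00:c9:00:00:01", "10:00:00:00:c9:00:00:02"},
    "zone_app":       {"10:00:00:00:c9:00:00:01", "10:00:00:00:c9:00:00:03"},
}

def can_communicate(wwn_a: str, wwn_b: str) -> bool:
    """Two devices may talk only if some active zone contains both."""
    return any(wwn_a in members and wwn_b in members
               for members in zoneset.values())

assert can_communicate("10:00:00:00:c9:00:00:01", "10:00:00:00:c9:00:00:02")
assert not can_communicate("10:00:00:00:c9:00:00:02", "10:00:00:00:c9:00:00:03")
```

Note that device 01 appears in both zones, which is typical: a server can be zoned to both its database storage and its backup target without those two targets ever seeing each other.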
Swordfish School: Introduction to SNIA Swordfish™ Features and Profiles
Ready to ride the wave to what’s next in storage management? In this Swordfish School webcast, part of an ongoing series of educational materials to help speed your SNIA Swordfish™ implementation, storage standards expert Richelle Ahlvers (Broadcom Inc.) will introduce the Features and Profiles concepts, describe how they work together, and explain how to use both when implementing Swordfish.
Features are used by implementations to advertise to clients what functionality they support. Profiles describe, down to the individual property level, what functionality an implementation must provide in order to advertise a Feature. Profiles are used for in-depth analysis during development, making it easy for clients to determine which Features to require for different configurations; they are also used to determine certification and conformance requirements.
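The client-side pattern might look like the following sketch. The JSON shape and the feature names here are hypothetical simplifications for illustration, not the actual Swordfish schema:

```python
# Sketch of the Feature-checking idea: an implementation advertises
# the Features it supports, and a client checks for the ones it
# needs before relying on them. The payload shape and feature names
# below are invented stand-ins, not real Swordfish properties.
service_root = {
    "Name": "Example Swordfish Service",
    "Features": ["StorageManagement", "VolumeCreation"],  # illustrative
}

def supports(service: dict, feature: str) -> bool:
    """True if the service advertises the named Feature."""
    return feature in service.get("Features", [])

assert supports(service_root, "VolumeCreation")
assert not supports(service_root, "Replication")
```

The corresponding Profile is what tells an implementer, property by property, what must be present before the Feature may be advertised at all.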
About SNIA Swordfish™
Designed with IT administrators and DevOps engineers in mind to provide simplified and scalable storage management for data center environments, SNIA Swordfish™ is a standard that defines the management of data storage and services as an extension to the Distributed Management Task Force’s (DMTF) Redfish application programming interface specification. Unlike proprietary interfaces, Swordfish is open and easy-to-adopt with broad industry support.
Your one-stop shop for everything SNIA Swordfish is https://www.snia.org/swordfish.
Enterprises have embraced cloud computing to unlock the opportunity offered by digital transformation. The cloud’s flexibility and agility enable enterprises to grow their business without borders and ensure productivity and efficiency.
While enterprise applications continue to migrate to the cloud, the necessary change in the wide area network (WAN) is often overlooked. The hub-and-spoke WAN architecture that served the needs of the enterprise when applications were delivered from the data center must evolve to serve the needs of the era of cloud applications. Software-defined WAN (SD-WAN) is the WAN's response to this paradigm shift in application traffic to the cloud.
While SD-WAN has emerged as a key enabler of secure and seamless direct cloud application access with the benefits of transport independence, better security and path selection, it has brought into focus the importance of the transport underlay and the technology investment — both past and future — that an enterprise needs to consider before adopting SD-WAN.
This webinar spotlights two critical success factors for driving mainstream enterprise adoption of SD-WAN:
*Predictable and robust Internet connectivity
*Investment protection (of installed legacy network equipment or in new technology) as the WAN evolves to support applications delivered from the cloud.
Join us at this webinar as we make sense of SD-WAN adoption for your company.
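The path-selection benefit mentioned above can be sketched as follows; the transports, metrics, and policy thresholds are all illustrative, and real SD-WAN products measure continuously and weigh many more criteria:

```python
# Sketch of SD-WAN path selection: steer an application's traffic
# onto whichever underlay transport currently meets its latency and
# loss policy. All measurements below are made up.
paths = {
    "mpls":     {"latency_ms": 40, "loss_pct": 0.0},
    "internet": {"latency_ms": 25, "loss_pct": 0.5},
    "lte":      {"latency_ms": 80, "loss_pct": 1.5},
}

def select_path(max_latency_ms: float, max_loss_pct: float) -> str:
    """Pick the lowest-latency transport that satisfies the policy."""
    candidates = [name for name, m in paths.items()
                  if m["latency_ms"] <= max_latency_ms
                  and m["loss_pct"] <= max_loss_pct]
    return min(candidates, key=lambda n: paths[n]["latency_ms"])

assert select_path(50, 1.0) == "internet"  # internet wins on latency
assert select_path(50, 0.1) == "mpls"      # loss policy excludes internet
```

The second assertion is the underlay point in a nutshell: a loss-sensitive application can end up pinned to the legacy MPLS circuit, which is why the quality of the Internet transport matters so much to SD-WAN adoption.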
Sathish Gnanasekaran, Broadcom; John Kim, Mellanox; J Metz, Cisco; Tim Lustig, Mellanox
For a long time, the architecture and best practices of storage networks have been relatively well-understood. Recently, however, advanced capabilities have been added to storage that could have broader impacts on networks than we think.
The three main storage network transports (Fibre Channel, Ethernet, and InfiniBand) all have mechanisms to handle increased traffic, but they are not all affected or implemented the same way. For instance, running a protocol such as NVMe over Fabrics can mean very different things on one transport than on another.
Unfortunately, many network administrators may not understand how different storage solutions place burdens upon their networks. As more storage traffic traverses the network, customers face the risk of congestion leading to higher-than-expected latencies and lower-than-expected throughput. Watch this webinar to learn:
•Typical storage traffic patterns
•What incast, head-of-line blocking, congestion, and slow drain are, and when they become problems on a network
•How Ethernet, Fibre Channel, and InfiniBand handle these effects
•The proper role of buffers in handling storage network traffic
•Potential new ways to handle increasing storage traffic loads on the network
After you watch the webcast, check out the Q&A blog http://bit.ly/323kyNj
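As a rough illustration of the incast problem above: many senders burst toward one receiver port at the same instant, and if the combined burst exceeds the port's buffer, the fabric must either drop frames or push back. All numbers below are made up for the arithmetic:

```python
# Back-of-envelope incast sketch: N senders each burst one window of
# data toward a single receiver port simultaneously. If the combined
# burst exceeds the switch port's buffer, frames are dropped (lossy
# Ethernet) or the fabric must push back (credits / PFC / pause).
senders = 32
burst_per_sender_bytes = 64 * 1024        # 64 KiB window each
port_buffer_bytes = 1 * 1024 * 1024       # 1 MiB of buffer on the port

total_burst = senders * burst_per_sender_bytes
overflow = max(0, total_burst - port_buffer_bytes)

assert total_burst == 2 * 1024 * 1024     # 2 MiB arrives at once
assert overflow == 1024 * 1024            # 1 MiB cannot be buffered
```

This is why "just add buffers" only goes so far: the overflow grows linearly with the number of simultaneous senders, and the three transports differ chiefly in how they signal the senders to back off.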
Mahbubul Alam, CTO & CMO, Movimento, an APTIV company
With high-speed connectivity at the heart of connected vehicles, 5G will play a significant role as the industry undergoes major transformation toward fully autonomous vehicles. These vehicles will be required to cooperate with each other and with the infrastructure in a secure and reliable manner with higher sustainable throughput, greater outdoor position accuracy, guaranteed jitter/delivery at a significantly reduced latency and improved vehicle safety even for out-of-coverage communications. This webinar will explore:
1. Real-time teleoperations, putting human in the loop for autonomous vehicles.
2. Legal and lawful intervention of autonomous vehicles.
3. 5G V2X for improved safety of automated driving.
This webinar is brought to you by the Vivit Automation and Cloud Builders Special Interest Group (SIG).
In this webinar, you will learn all about Micro Focus PlateSpin; what it is and how can this tool be used to migrate workload from one place to another. The speaker will also explain how the migration to different platforms is setup. After attending this webinar, you will know:
• The core principles of "lift-and-shift" server migration
• How ATOS is using PlateSpin to successfully migrate customer applications to the cloud and other platforms
• What major features were recently added to PlateSpin and what the future road map looks like
David Chalupsky, Intel; Craig Carlson, Marvell; Peter Onufryck, Microchip; John Kim, Mellanox
In the short period from 2014 to 2018, Ethernet equipment vendors announced big increases in line speeds, shipping 25, 50, and 100 Gigabit-per-second (Gb/s) products and announcing 200/400 Gb/s. At the same time, Fibre Channel vendors launched 32GFC, 64GFC and 128GFC technology, while InfiniBand reached 200 Gb/s (called HDR) speeds.
But who exactly is asking for these faster new networking speeds, and how will they use them? Are there servers, storage, and applications that can make good use of them? How are these new speeds achieved? Are new types of signaling, cables and transceivers required? How will changes in PCIe standards keep up? And do the faster speeds come with different distance limitations?
Watch this SNIA Networking Storage Forum (NSF) webcast to learn how these new speeds are achieved, where they are likely to be deployed for storage, and what infrastructure changes are needed to support them.
After you watch the webcast, check out the Q&A blog at http://bit.ly/2ZPleUr
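To put those line rates in perspective, here is a back-of-envelope calculation of raw transfer time. It ignores protocol overhead and assumes the endpoints can keep up, which in practice is often the real bottleneck:

```python
# Time to move 1 TB at various line rates, ignoring encoding and
# protocol overhead. Purely illustrative arithmetic.
def seconds_to_transfer(bytes_total: float, gbits_per_s: float) -> float:
    """Raw serialization time for a payload at a given line rate."""
    return bytes_total * 8 / (gbits_per_s * 1e9)

one_tb = 1e12                              # 1 TB (decimal)
t25 = seconds_to_transfer(one_tb, 25)
t100 = seconds_to_transfer(one_tb, 100)

assert round(t25) == 320                   # ~5.3 minutes at 25 Gb/s
assert round(t100) == 80                   # ~1.3 minutes at 100 Gb/s
```

The 4x speedup only materializes if the servers, storage media, and PCIe lanes on both ends can actually source and sink data that fast, which is exactly the "who is asking for this" question the webcast addresses.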
Alan Bumgarner, Intel; Alex McDonald, NetApp; John Kim, Mellanox
Traditionally, much of the IT infrastructure that we’ve built over the years can be divided fairly simply into storage (the place we save our persistent data), network (how we get access to the storage and get at our data) and compute (memory and CPU that crunches on the data). In fact, so successful has this model been that a trip to any cloud services provider allows you to order (and be billed for) exactly these three components.
We build effective systems in a cost-optimal way by using appropriate quantities of expensive and fast memory (DRAM for instance) to cache our cheaper and slower storage. But currently fast memory has no persistence at all; it’s only storage that provides the application the guarantee that storing, modifying or deleting data does exactly that.
Memory and storage differ in other ways. For example, we load from memory to registers on the CPU, perform operations there, and then store the results back to memory, using byte addresses. This load/store model is different from storage, where we tend to move data back and forth between memory and storage in large blocks, using an API (application programming interface).
New memory technologies are challenging these assumptions. They look like storage in that they’re persistent, if a lot faster than traditional disks or even Flash based SSDs, but we address them in bytes, as we do memory like DRAM, if more slowly. Persistent memory (PM) lies between storage and memory in latency, bandwidth and cost, while providing memory semantics and storage persistence. In this webcast, SNIA experts will discuss:
•Traditional uses of storage and memory as a cache
•How can we build and use systems based on PM?
•What would a system with storage, persistent memory and DRAM look like?
•Do we need a new programming model to take advantage of PM?
•Interesting use cases for systems equipped with PM
•How we might take better advantage of this new technology
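The load/store versus block distinction described above can be simulated on an ordinary file. A real persistent-memory stack would use a DAX mapping and explicit cache flushes for persistence, which this sketch omits:

```python
# Contrast of the two access models: block I/O through an API call
# versus byte-addressed load/store through a memory mapping.
# Simulated on a regular file; real PM would be DAX-mapped.
import mmap, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pm.img")
with open(path, "wb") as f:
    f.write(b"\0" * 4096)

# Block model: move a whole block through the storage API.
with open(path, "r+b") as f:
    f.seek(0)
    f.write(b"X" * 512)            # write one 512-byte "block"

# Load/store model: map the file and address individual bytes.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 4096)
    m[1000] = ord("Y")             # single-byte store
    byte = m[1000]                 # single-byte load
    m.close()

assert byte == ord("Y")
```

Persistent memory gives you the second model with the durability of the first, which is why the talk asks whether a new programming model is needed to exploit it.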
This webinar is brought to you by the Vivit Testing Quality ALM Special Interest Group (SIG).
Join this webinar to see how BNP Paribas evaluated ALM Octane and is now using the tool to run complete agile testing lifecycles. The testing lifecycle will be presented as a live demo that follows the Behavior-Driven Development (BDD) methodology and uses Gherkin notation for defining test suites.
The testing lifecycle starts with the definition of requirements and user stories, builds heavily on test automation and advanced reporting, manages and synchronizes defects across tools, and feeds learnings from testing back into the requirements. It uses an integrated development and testing infrastructure including products like Confluence, Jira, Micro Focus ALM, Micro Focus ALM Octane, Jenkins, Git, Cucumber, IntelliJ, TestCafe, and others.
BNP Paribas’s journey with agile test automation using ALM Octane started in 2017. The webinar presents in a nutshell how the evaluation of ALM Octane was approached and conducted. It explains how new methods were introduced with the objective of leveraging full tool functionality. The initiative soon helped increase testing efficiency and created important business value.
Webinar participants will learn:
• How BNP Paribas has conducted evaluation of ALM Octane
• Why ALM Octane is much more than just a successor of ALM/Quality Center
• How ALM Octane eases flexible management of its entire surrounding tool chain
• Why a methodology like BDD should precede tools, and not vice versa
• How Gherkin establishes a common language for the entire agile development lifecycle that helps integrate business, development, and testing
• How to use ALM Octane for pipeline management, reporting, defect synchronization, import of performance testing and BPT results, integration of in-sprint testing, etc.
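To illustrate the Gherkin idea in miniature: plain-language Given/When/Then steps are bound to executable test code. Real projects use Cucumber or behave for this; the toy binder below only shows the mechanism, with a made-up banking scenario:

```python
# Minimal sketch of the Gherkin mechanism: each plain-language step
# is parsed and dispatched to code that drives and checks the system
# under test. Cucumber/behave do this with regex-bound step
# definitions; this toy version just keys off the first word.
scenario = """
Given an account with balance 100
When 30 is withdrawn
Then the balance is 70
""".strip().splitlines()

state = {}

def run_step(line: str) -> None:
    words = line.split()
    if words[0] == "Given":
        state["balance"] = int(words[-1])      # set up preconditions
    elif words[0] == "When":
        state["balance"] -= int(words[1])      # perform the action
    elif words[0] == "Then":
        assert state["balance"] == int(words[-1])  # verify outcome

for step in scenario:
    run_step(step)

assert state["balance"] == 70
```

Because the scenario text is readable by business analysts yet executable by the pipeline, the same artifact serves requirements, development, and testing, which is the common-language point in the bullet above.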
Matt Baldwin, NetApp and Former Founder StackPoint Cloud; Ingo Fuchs, NetApp; Mike Jochimsen, Kaminario
Kubernetes (k8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes promises simplified management of cloud workloads at scale, whether on-premises, hybrid, or in a public cloud infrastructure, allowing effortless movement of workloads from cloud to cloud. By some reckonings, it is being deployed at a rate several times faster than virtualization.
In this presentation, we’ll introduce Kubernetes and present use cases that make clear where and why you would want to use it in your IT environment. We’ll also focus on the enterprise requirements of orchestration and containerization, and specifically on the storage aspects and best practices.
•What is Kubernetes? Why would you want to use it?
•How does Kubernetes help in a multi-cloud/private cloud environment?
•How does Kubernetes orchestrate & manage storage? Can Kubernetes use Docker?
•How do we provide persistence and data protection?
•Example use cases
This webinar is brought to you by the Vivit DevOps Special Interest Group (SIG).
SecOps teams are always looking to strike a balance between staying ahead of new threats and not getting burned by older ones, all while having to do a hundred other things at once. The SecOps team at VMware will share their experience adopting a DevOps process for continuous improvement, while working to reduce the extreme utilization level of the Security Operations Center (SOC) team and to improve the overall security posture. The session will offer practical lessons learned from a real 24/7 global SOC, without the fancy buzzwords and feel-good tips that don’t actually work in the real world.
Join this webinar to learn:
• How shifting the method of working has improved Security Monitoring, without having to spend more money
• What to be aware of when adopting new methods or techniques
• How to plan out goals and achieve them in environments where priorities are constantly shifting
Yamini Shastry, Viavi Solutions; David Rodgers, Teledyne LeCroy; Joe Kimpler, ATTO Technology
In the FCIA webcast “Protocol Analysis for High-Speed Fibre Channel Fabrics” experts covered the basics on protocol analysis tools and how to incorporate them into the “best practices” application of SAN problem solving.
Our experts return for this 201 course which will provide a deeper dive into how to interpret the output and results from the protocol analyzers. We will also share insight into using signal jammers and how to use them to correlate error conditions to be able to formulate real time solutions.
Root-cause analysis now encompasses all layers of the fabric architecture, and new storage protocols that depart from the traditional network stack (e.g., FCoE, iWARP, NVMe over Fabrics) complicate analysis. A well-constructed collection of best practices and effective, efficient analysis tools must therefore be developed, along with in-depth knowledge of how to decipher the analytical results and determine potential solutions.
Join us for a deeper dive into Protocol Analysis tools and how to interpret the analytical output from them. We will review:
•Inter-switch links (ISLs) – How to measure and minimize fabric congestion
•Post-capture analysis – Graphing, Trace reading, Performance metrics
•Benefits of purposeful error injection
•More Layer 2-3 and translation layers debug
•Link Services and Extended Link Services – LRR (Link Reset Response)
You can watch the 1st webcast on this topic on-demand at http://bit.ly/2MxsWR7
Alex McDonald, SNIA SSSI Co-Chair (Moderator), Tom Coughlin, Coughlin Associates, Motti Beck, Mellanox Technologies
Join SNIA Solid State Storage Initiative (SSSI) Education Chair and leading analyst Tom Coughlin and SSSI member Motti Beck of Mellanox Technologies for a journey into the requirements and trends in worldwide data storage for entertainment content acquisition, editing, archiving, and digital preservation. This webcast will cover capacity and performance trends and media projections for direct attached storage, cloud, and near-line network storage. It will also include results from a long-running digital storage survey of media and entertainment professionals. Learn what is needed for digital cinema, broadcast, cable, and internet applications and more.
John Burke, CIO and Principal Research Analyst, Nemertes Research
You need to rethink your WAN to survive the next 5 years. We can help show you how.
Think about it: half of your IT services come from the cloud, from folks such as Amazon Web Services, Google Cloud, IBM Cloud, Microsoft Azure and Office365, and Oracle Cloud. Mixing cloud and internal sources, you serve an increasingly scattered and mobile staff. IoT is turning the physical environment into both a provider and a consumer of IT services.
Is the WAN you built for Client/Server really going to serve?
No. IT needs to rethink its WAN and re-engineer the economics of wide-area networking.
Join Nemertes as we bring our WAN technology research study and freshly updated, one-of-a-kind cost and performance benchmarks to bear on the challenges of remaking your WAN to drive success in the cloud age. We'll discuss:
• SD-WAN and the real benefits it can deliver for performance and cost
• Other cloud-friendly network technologies such as direct-connect and WAN-Cloud Exchanges
• Up-to-date cost and provider performance data for MPLS and Internet services.
Philip Kufeldt, Univ. of California, Santa Cruz; Mike Jochimsen, Kaminario; Alex McDonald, NetApp
Cloud data centers are by definition very dynamic. The need for infrastructure availability in the right place at the right time for the right use case is neither as predictable nor as static as it was in traditional data centers. Cloud data centers need to rapidly construct virtual pools of compute, network and storage based on the needs of particular customers or applications, then have those resources flex dynamically and automatically as needs change. To accomplish this, many in the industry espouse composable infrastructure: heterogeneous resources with specific capabilities that can be discovered, managed, and automatically provisioned and re-provisioned through data center orchestration tools. The primary benefit of composable infrastructure is smaller-grained sets of resources that are independently scalable and can be brought together as required. In this webcast, SNIA experts will discuss:
•What prompted the development of composable infrastructure?
•What are the solutions?
•What is composable infrastructure?
•Enabling technologies (not just what’s here, but what’s needed…)
•Status of composable infrastructure standards/products
•What’s on the horizon – 2 years? 5 years?
•What it all means
After you watch the webcast, check out the Q&A blog bit.ly/2EOcAy8
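A toy model of the composable idea described above: discoverable pools of disaggregated resources from which a "composed node" is carved on demand and later returned. Real orchestration-tool APIs are far richer; the pool names and functions here are made up for illustration:

```python
# Toy composable-infrastructure model: shared pools of disaggregated
# resources, claimed into a composed node and released back again.
# All names and quantities are illustrative.
pools = {"cpu_cores": 128, "dram_gb": 1024, "nvme_tb": 64}

def compose(req: dict) -> dict:
    """Claim resources from the shared pools (all-or-nothing)."""
    if any(pools[k] < v for k, v in req.items()):
        raise RuntimeError("insufficient free resources")
    for k, v in req.items():
        pools[k] -= v
    return dict(req)

def decompose(node: dict) -> None:
    """Return a composed node's resources to the pools."""
    for k, v in node.items():
        pools[k] += v

node = compose({"cpu_cores": 16, "dram_gb": 128, "nvme_tb": 8})
assert pools["cpu_cores"] == 112          # resources are now claimed
decompose(node)
assert pools["cpu_cores"] == 128          # and released again
```

The independent-scalability benefit falls out of the model: each pool can be grown on its own, without buying compute you don't need just to get more storage.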
Christine McMonigal, Intel; J Metz, Cisco; Alex McDonald, NetApp
“Why can’t I add a 33rd node?”
One of the great advantages of hyperconverged infrastructure (HCI) is that, relatively speaking, it is extremely easy to set up and manage. In many ways, HCI is the “Happy Meal” of infrastructure, because you have compute and storage in the same box. All you need to do is add networking.
In practice, though, many consumers of HCI have found that the “add networking” part isn’t quite as much of a no-brainer as they thought it would be. Because HCI hides a great deal of the “back end” communication, it’s possible to severely underestimate the requirements necessary to run a seamless environment. At some point, “just add more nodes” becomes a more difficult proposition.
In this webinar, we’re going to take a look behind the scenes and peek behind the GUI, so to speak. We’ll talk about what goes on back there and shine a light behind the bezels to see:
•The impact of metadata on the network
•What happens as we add additional nodes
•How to right-size the network for growth
•Tricks of the trade from the networking perspective to make your HCI work better
Now, not all HCI environments are created equal, so we’ll say in advance that your mileage will necessarily vary. However, understanding some basic concepts of how storage networking impacts HCI performance may be particularly useful when planning your HCI environment, or contemplating whether or not it is appropriate for your situation in the first place.
After you watch the webcast, check out the Q&A blog at http://bit.ly/2Va4wwH
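One reason "add networking" is not a no-brainer: in many HCI designs, every write is replicated to other nodes over the fabric, so east-west traffic grows with the write load, and rebuilds and rebalances add more on top. A rough sizing sketch, with illustrative numbers and a deliberately simplified formula (real HCI products vary in replication strategy, locality, and erasure coding):

```python
# Rough east-west bandwidth estimate for replication traffic in an
# HCI cluster. With replication factor RF, each write keeps one copy
# local and sends RF-1 copies across the network. Illustrative only.
def east_west_gbps(write_mb_s_per_node: float, nodes: int,
                   replication_factor: int) -> float:
    """Replication traffic crossing the network, in Gb/s."""
    copies_over_wire = replication_factor - 1   # local copy stays put
    mb_s = write_mb_s_per_node * nodes * copies_over_wire
    return mb_s * 8 / 1000

# 16 nodes, each ingesting 200 MB/s of writes, RF=3:
assert east_west_gbps(200, 16, 3) == 51.2
```

Over 50 Gb/s of steady-state background traffic before a single client request is served is exactly the kind of hidden load that makes "just add more nodes" harder than it looks.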
Modern data centers are tasked with delivering intelligent multi-media responses to real-time human interactions. Massive amounts of data are being churned and sifted by highly parallel applications, such as Online Data Intensive Services (OLDI) and Artificial Intelligence (AI), which historically required specialized High-Performance Computing (HPC) infrastructure.
New advancements in high-speed distributed solid-state storage, coupled with remote direct memory access (RDMA) and new networking technologies to better manage congestion, are allowing these parallel environments to run atop more generalized next generation Cloud infrastructure. Generalized cloud infrastructure is also being deployed in the telecommunication operator’s central office.
The key to advancing cloud infrastructure to the next level is the elimination of loss in the network; not just packet loss, but throughput loss and latency loss.
There simply should be no loss in the data center network. Congestion is the primary source of loss, and in the network, congestion leads to dramatic performance degradation. This presentation summarizes work from the IEEE 802 Network Enhancements for the Next Decade Industry Connections Activity (Nendica).
The Nendica report describes the need for new technologies to combat loss in the data center network and introduces promising potential solutions.
Updating the network infrastructure for the 21st century
With virtualization and cloud computing revolutionizing the data center, it's time that the network has its own revolution. Join the Network Infrastructure channel for all the hottest topics for network and storage professionals, such as software-defined networking, WAN optimization and more, to maintain performance and service in your infrastructure.
The Big IT Picture: Where Does Enterprise Software & Infrastructure Fit?
Susan Odle, CEO; Pierre Paquette, CTO; Richard Larocque, IT Management Knowledge Director, RAPA Consulting
52 mins