The data center management community focuses on the holistic management and optimization of the data center. From technologies such as virtualization and cloud computing to data center design, colocation, energy efficiency and monitoring, the BrightTALK data center management community provides the most up-to-date and engaging content from industry experts to better your infrastructure and operations. Engage with a community of your peers and industry experts by asking questions, rating presentations and participating in polls during webinars, all while you gain insight that will help you transform your infrastructure into a next generation data center.
IoT has the potential to make us healthy, wealthy, and wise, especially in healthcare. Healthcare is just now adopting IoT to improve patient outcomes and decrease the cost of care.
In this webinar, you’ll learn:
- How to identify if an IoT solution will work for your use case
- What others in healthcare are using IoT for
- The challenges of IoT in healthcare
Modern vehicles are, as Bruce Schneier recently put it, actually computers with wheels rather than cars with a computer added on. Every part of the vehicle's operation is supervised, logged, and managed by digital signals on a complex vehicle network. If you have a crash, your car will tell investigators if you were speeding or swerved to avoid the impact. If you spend too long dawdling at the convenience store instead of visiting your customers, your employer will know about it. If you waste fuel, drive dangerously, or don't turn your lights on when you should, it'll be recorded.
This introduces a lot of familiar debates in security circles. Who owns the data? What counts as personally identifiable? What are acceptable standards for logging, retention, and disclosure? What happens if we get it wrong?
The bad news is the vehicle landscape, like enterprise security, is badly fragmented. The good news is we've learned a lot of useful lessons over the past 20 years which can be brought to bear on the problem, so solving it shouldn't take another 20.
In this presentation we'll review some of the mechanics of how vehicle data is generated, who can see it, and how it can be used and abused. We'll then talk about points of leverage for the industry, the manufacturers, the owners, and law enforcement, and see what common ground exists. Finally, we'll lay out some basic ideas any fleet operator or concerned individual can use to make decisions about what vehicles to use and how to manage the data footprints they generate.
Clearsense is a pioneer in healthcare data science solutions, using Spark Streaming to provide real-time updates to healthcare providers for critical healthcare needs. Clinicians can make timely decisions from the assessment of a patient's risk for Code Blue, sepsis, and other conditions, based on the analysis of streaming physiological monitoring, streaming diagnostic data, and the patient's historical record. Additionally, this technology is used to monitor operational and financial processes for efficiency and cost savings. This talk discusses the architecture needed, and the challenges associated with providing real-time SLAs along with 100% uptime expectations, in a multi-tenant Hadoop cluster.
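To make the idea of continuous risk assessment concrete, here is a minimal, self-contained sketch of sliding-window scoring over streamed vital signs. It is illustrative only: the thresholds and scores are invented for the example (not clinical values), and a production pipeline like the one described would run on Spark Streaming rather than plain Python.

```python
from collections import deque

def score_vitals(hr, resp_rate, spo2):
    """Score one observation; higher means higher risk.
    Threshold values are assumptions for illustration only."""
    score = 0
    if hr > 110 or hr < 50:
        score += 2
    if resp_rate > 24 or resp_rate < 10:
        score += 2
    if spo2 < 92:
        score += 3
    return score

class WindowedRisk:
    """Keep the last `size` scores and flag sustained elevation."""
    def __init__(self, size=5, alert_threshold=4):
        self.window = deque(maxlen=size)
        self.alert_threshold = alert_threshold

    def update(self, hr, resp_rate, spo2):
        # Append the newest score, then alert on the window average
        # so a single noisy reading does not trigger an alarm.
        self.window.append(score_vitals(hr, resp_rate, spo2))
        avg = sum(self.window) / len(self.window)
        return avg >= self.alert_threshold

monitor = WindowedRisk()
readings = [(80, 16, 98), (118, 26, 91), (125, 28, 89),
            (130, 30, 88), (128, 29, 87)]
alerts = [monitor.update(*r) for r in readings]
```

Averaging over a window is one simple way to trade alert latency against false positives; the real system would combine many more signals, including the patient's historical record.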
Tectonic 1.6.4 continues the push to break open the cloud market. The features included in this release increase the portability of applications across hybrid cloud environments. CoreOS is providing an enterprise-ready Kubernetes solution with automated operations that enables a cloud "as a service" operational model regardless of the underlying infrastructure, whether bare-metal, private IaaS, or public IaaS.
Rob Szumski, Tectonic product manager at CoreOS, leads this webinar through the changes in the Tectonic 1.6.4 release. He'll go over the features included from Kubernetes 1.6.4 and how Tectonic is including operators for etcd and other tools.
Software plays an expanding and critical role in the success of future vehicles such as automobiles and trucks. Novel technologies that depend on the flexibility of software create new vulnerabilities and new ways to attack systems. This talk explores the expanding landscape of vulnerabilities that accompany the increasing reliance on software and then examines some key steps to help mitigate the increased risk: development of appropriate requirements from an analysis of risks, techniques that can be applied during development, and evaluation approaches for existing systems. The talk will conclude with a view of emerging approaches to further improve the delivery and sustainment of such critical software.
About the Presenter:
Dr. Mark Sherman is the Director of the Cyber Security Foundations group at CERT within CMU’s Software Engineering Institute. His team focuses on foundational research on the life cycle for building secure software and on data-driven analysis of cyber security. Before coming to CERT, Dr. Sherman was at IBM and various startups, working on mobile systems, integrated hardware-software appliances, transaction processing, languages and compilers, virtualization, network protocols and databases. He has published over 50 papers on various topics in computer science.
As data footprints continue to grow, so do the demands on enterprise storage. Scale-out NAS could be seen as a perfect solution. However, the demands of today's data-intensive, file-based workloads have revealed the limitations of legacy scale-out architectures in scale, performance, and visibility and control.
To help their customers with this challenge, Hewlett Packard Enterprise (HPE), one of the world's premier enterprise technology companies, has partnered with Qumulo. The modern design of the Qumulo Core filesystem, matched with best-of-breed HPE Apollo servers, provides customers with a truly modern scale-out storage solution.
Join HPE and Qumulo as we discuss the attributes of our joint solution and what it means for the modern enterprise technology consumer.
Joel Groen is a seasoned Product Manager at Qumulo with over 15 years of experience building enterprise, cloud, and mobile technology products. At Qumulo, he is focused on driving technical alignments within the storage industry to help companies grow into petabyte scale infrastructures.
Paul Merrifield is the Business Unit Storage CTO for North America at HPE. He is responsible for studying and understanding the broad industry changes impacting information technology, the business implications associated with an industry in transition, and the translation of those challenges into HPE’s technology strategy and point-of-view.
Enterprises aren't the only organizations that can benefit from big data. Medium and large businesses can too. The challenge is, can these businesses build the same big data infrastructure enterprises do? The short answer is, they can't. But the secret is, they don’t have to.
In this webinar, join Storage Switzerland and Avere Systems as they show you how to shift large workload compute processing into the cloud in just 10 minutes. This quick onramp to unlimited cores can level the playing field and improve your ability to compete with the global giants.
The cloud has upended the way your customers want to adopt applications and underlying infrastructures. It’s important that your business stays ahead of this transformation. Your customers are looking to you as their trusted managed services advisor to “make it work.” Join us to learn best practices for transitioning your customers to the cloud.
Fog computing represents a tectonic shift for the future of transaction management, distributed supply chain and overall experience. It blurs the lines between the edge and the cloud and puts the focus on the systems which manage and balance the delivery of coherent, end-to-end sessions and associated transaction level agreements. As a result, this new technology is pervasive in several industries.
Join this panel of experts as they discuss solutions, with specific industry use cases for smart fog in cities, buildings, ports, and maritime environments.
Moderator: Katalin Walcott, Work Group Chair Manageability at OpenFog Consortium & Principal Engineer - IoT/Fog Computing Orchestration Architecture at Intel
- Jeff Fedders, President at OpenFog Consortium & Chief Strategist, IoTG Strategy and Technology Office at Intel
- Mark Dixon, Senior Architect for Smarter Cities at IBM
- Matthew Bailey, President, Powering IoT - Smart City advisor and strategist to governments, technology corporations, and economic development agencies
The SNIA’s Scalable Storage Management Technical Work Group (SSM TWG) has created and published an open industry standard specification for storage management that defines a customer-centric interface for the purpose of managing storage and related data services. This specification builds on the DMTF’s Redfish specification using RESTful methods and JSON formatting. This presentation provides an overview of basic Swordfish and Redfish concepts and shows how Swordfish extends Redfish, with examples of how clients can traverse the models, highlighting how the two standards integrate seamlessly.
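The traversal the presentation describes boils down to following `@odata.id` links from the service root. Here is a minimal sketch of that pattern against a mock, abbreviated in-memory resource tree: a real client would issue HTTP GETs against a live Redfish/Swordfish service, and real payloads carry many more properties than shown.

```python
# Mock stand-in for a Redfish service with the Swordfish
# StorageServices extension; paths and fields are abbreviated.
MOCK_SERVICE = {
    "/redfish/v1": {
        "StorageServices": {"@odata.id": "/redfish/v1/StorageServices"},
    },
    "/redfish/v1/StorageServices": {
        "Members": [{"@odata.id": "/redfish/v1/StorageServices/1"}],
    },
    "/redfish/v1/StorageServices/1": {
        "Id": "1",
        "Volumes": {"@odata.id": "/redfish/v1/StorageServices/1/Volumes"},
    },
    "/redfish/v1/StorageServices/1/Volumes": {
        "Members": [],
    },
}

def get(odata_id):
    """Stand-in for an HTTP GET against the service; a real client
    would fetch and JSON-decode the resource at this path."""
    return MOCK_SERVICE[odata_id]

def list_storage_services(root_path="/redfish/v1"):
    """Follow @odata.id links from the service root to each
    Swordfish storage service and return their Ids."""
    root = get(root_path)
    collection = get(root["StorageServices"]["@odata.id"])
    return [get(m["@odata.id"])["Id"] for m in collection["Members"]]

services = list_storage_services()
```

Because Swordfish resources are linked the same way as native Redfish resources, the same traversal code works across both models, which is what lets the two standards integrate seamlessly from a client's point of view.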
Cohesity is one of the rising stars in the world of data management. They have flipped the data protection market on its ear. In this CEO Series webcast, Arun Taneja, Founder and Consulting Analyst of Taneja Group will interview Mohit Aron, CEO of Cohesity, to understand the concept of Hyperconverged Secondary Storage and why it matters to the industry. We will explore the advantages it provides for your data protection, test/dev, and data analytics workloads and how Cohesity is different from other solutions on the market. It is time to say goodbye to the old, staid methods of protecting data. The traditional methods simply don’t make sense in the new world of Big Data, Multi and Hybrid cloud and web-scale applications. Join the webcast for a whirlwind tour of new ideas and methods in this space.
Be the first to hear about Cisco and NetApp’s exciting new FlexPod solution. This revolutionary design couples operational simplicity, guaranteed QoS, and granular scale-out with FlexPod’s proven architecture. The new solution will appeal to Infrastructure buyers, as well as Cloud and Virtualization buyers, who are building next generation data centers.
• Learn how the solution can provide you with proven performance, agility and value through the latest NetApp and Cisco technologies
• Experience how FlexPod converged infrastructure can reduce risk, deliver faster time to market and improve business outcomes
• See how NetApp technology simplifies storage management and scales on demand
In this webinar, Jason Stamper, analyst for Data Platforms and Analytics at 451 Research, will look at some of the latest trends that are being seen in IoT and specifically analytics at the edge of the network — in other words close to where the data is generated.
He will also identify a number of data platform and analytics themes that are becoming more critical in the IoT era: security and data governance; infrastructure, including edge analytics and serverless computing; data processing; and data integration and messaging.
Storage and infrastructure are going through a lot of changes. With technologies like Software-defined Storage as well as new infrastructure like Hyper-converged, there is a lot for IT admins to consider.
Join us in this webinar where we’ll cover Software-defined Storage, Flash, Hyper-converged and Cloud Storage. You’ll learn why these technologies and infrastructure models are or aren't being utilized by companies, and the areas where they have not lived up to their promises.
How can you better embrace multiple cloud and non-cloud IT environments in order to accelerate your digital business?
How can you succeed in a multi-cloud world? We’ve gathered a range of expert opinions, including the analyst perspective, the Vodafone view, and the outlook of a Vodafone customer CEO.
William Fellowes, Analyst, 451 Research
Neville Roberts, CEO, Planixs
James Griffin, Cloud Evangelist, Vodafone
Key talking points:
-Common challenges of managing IT environments
-What challenges enterprises face in different countries
-How to select the right cloud for the right application
-Choosing the right cloud vendors
-How to manage these environments simply and securely
Hear from Vodafone’s key customer Planixs (http://www.planixs.com) on:
- How they manage multi-cloud environments
- The key things they consider when selecting vendors
The growth in size and complexity of your applications and network infrastructure has a direct impact on your security policy. However, manually managing complex security policies can often result in unnecessary risk, higher costs, and an inability to keep pace with your business.
In this session, we will present an application-centric approach to security policy management that lets you easily and automatically manage complex policies across multiple firewalls. The solution presented will help you improve your security posture and operational agility, ensure continuous compliance, and reduce risk, all while bridging the communication gaps between application and network teams.
Are there basic storage terms you should understand, but maybe you don’t?
Then welcome to this webcast series, “Everything You Always Wanted to Know about Storage, but were too Proud to Ask,” where we’re going to take an irreverent, yet still informative, look at the parts of a storage solution in Data Center architectures. We’ll start with the very basics – The Naming of the Parts. We’ll break down the entire storage picture and identify the places where most of the confusion falls. Join us in this first webcast – Part Chartreuse – where we’ll cover:
•What an initiator is
•What a target is
•What a storage controller is
•What a RAID is, and what a RAID controller is
•What a Volume Manager is
•What a Storage Stack is
Oh, and why is this series named after colors, instead of numbers? Because there is no order - each is a standalone seminar. So don’t let pride get in your way.
If you think Hadoop is not in your future, think again. According to a recent survey, 97% of organizations working with Hadoop anticipate that they will onboard Analytics and BI workloads to Hadoop. When this happens, the companies that have disregarded the Big Data opportunity may be left behind.
The good news is that onboarding your Business Intelligence workloads to Hadoop is not as complicated as it was just a few short years ago. If you understand some key concepts, the transition can be a lot simpler and more successful, allowing you to recycle current skillsets while avoiding both a rip-and-replace of your technical stack and the replacement of business analysts with data scientists.
In this interactive session, Josh Klahr, VP of Product at AtScale, will take you through real-life examples of company successes with BI on Hadoop. He will dissect lessons and learnings gathered along the Hadoop journey. Some of these include:
*Don’t move and copy data
*Don’t have multiple definitions of reality
*Don’t scale up with proprietary hardware
*Don’t lock yourself in proprietary stacks
This session will also offer best practices on what to ‘do’: a good set of rules in what Klahr calls the “Do’s and Don’ts of BI on Hadoop”.
Apstra Operating System (AOS) 1.2 marks a significant milestone for network operations. New operational tooling -- out-of-the-box as well as user-created in Python -- means network engineers will get more done while making fewer mistakes.
Join Caringo Product Manager Eric Dey for a detailed look at the recent changes to Swarm Object Storage and a peek at the product roadmap. You will learn about recent updates to all aspects of Swarm, such as the foundational storage cluster, the user interface, and support for NFSv4 file sharing. Eric will also review the best-of-breed features that can help you reduce your storage TCO when implementing cloud storage for data protection, management, organization, and search at massive scale.
Despite billions of dollars invested in cyber security measures, companies are still falling behind when it comes to cyber attack prevention. Case in point: the recent “WannaCry” malware campaign infected more than 300,000 systems in a short period of time. This is testimony to the fact that investment in a proactive, rather than reactive, defense strategy is acutely needed. Current cyber range solutions are often siloed efforts that take weeks to set up and cover limited scenarios.
Addressing these problems requires a new approach with on demand self-service environments to train incident response teams in a holistic manner across the entire organization and simulate a comprehensive set of attacks on IT infrastructure.
In this webinar you will learn how you can use the power of Quali sandboxes and the Ixia Breaking Point solution to create a Cyber Range training environment that lets you:
-Rapidly provision full-stack, real-world cyber threat environments
-Generate thousands of unique attacks mixed with a large variety of real life traffic profiles
-Produce reports and grades that measure trainees’ ability to neutralize attacks while maintaining traffic continuity
In this talk, Ram will provide a unified framework for Internet of Things, Cyber-Physical Systems, and Smart Networked Systems and Societies, and then discuss the role of ontologies for interoperability.
The Internet, which has spanned several networks in a wide variety of domains, is having a significant impact on every aspect of our lives. These networks are currently being extended to have significant sensing capabilities, with the evolution of the Internet of Things (IoT). With additional control, we are entering the era of Cyber-physical Systems (CPS). In the near future, the networks will go beyond physically linked computers to include multimodal-information from biological, cognitive, semantic, and social networks.
This paradigm shift will involve symbiotic networks of people (social networks), smart devices, and smartphones or mobile personal computing and communication devices that will form smart net-centric systems and societies (SNSS) or Internet of Everything. These devices – and the network -- will be constantly sensing, monitoring, interpreting, and controlling the environment.
A key technical challenge for realizing SNSS/IoE is that the network consists of things (both devices & humans) which are heterogeneous, yet need to be interoperable. In other words, devices and people need to interoperate in a seamless manner. This requires the development of standard terminologies (or ontologies) which capture the meaning and relations of objects and events. Creating and testing such terminologies will aid in effective recognition and reaction in a network-centric situation awareness environment.
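One way to see how a shared ontology enables this interoperability: two devices report the same physical quantities under different local vocabularies, and a mapping onto canonical concepts lets a consumer treat them uniformly. The sketch below is a toy illustration with invented term names; real ontologies also capture relations between concepts, not just synonym mappings.

```python
# Shared ontology: maps each device-local term to a canonical concept.
# All names here are invented for illustration.
ONTOLOGY = {
    "tempC":               "AirTemperature",
    "ambient_temperature": "AirTemperature",
    "rh":                  "RelativeHumidity",
    "humidity_pct":        "RelativeHumidity",
}

def normalize(reading):
    """Map a device-local reading onto shared ontology concepts,
    dropping any terms the ontology does not know about."""
    return {ONTOLOGY[k]: v for k, v in reading.items() if k in ONTOLOGY}

# Two heterogeneous devices describing the same environment:
device_a = {"tempC": 21.5, "rh": 40}
device_b = {"ambient_temperature": 21.7, "humidity_pct": 41}

norm_a, norm_b = normalize(device_a), normalize(device_b)
# After normalization both expose the same concept names,
# so downstream logic can be device-agnostic.
same_schema = set(norm_a) == set(norm_b)
```

Standardizing the terminology, rather than the devices themselves, is what allows heterogeneous things (devices and humans alike) to interoperate seamlessly.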
Before joining the Software and Systems Division (his current position), Ram was the leader of the Design and Process group in the Manufacturing Systems Integration Division, Manufacturing Engineering Lab, where he conducted research on standards for interoperability of computer-aided design systems.
We live in an IoT world. Connected devices now include TVs, refrigerators, security systems, phones, music players, smart assistants, DSL modems, cars, and even toothbrushes. Besides privacy and personal security concerns, these devices pose significant risk of cyber attacks. IoT devices have been used in devastating DDoS attacks that have paralyzed key Internet services, emergency services, and heating systems. In addition to run-of-the-mill hackers and hacktivists, they are the first line of attack in any low-to-medium scale cyber conflict between nation states.
Vulnerable IoT devices represent a direct threat to safety, life, property, business continuity, and the general stability of society.
This talk will discuss the security challenges surrounding IoT devices, and what is needed for a balanced framework that forces vendors to implement a reasonable level of best practice without causing them undue burden and risk.
About the Presenter:
Tatu Ylonen is a cybersecurity pioneer with over 20 years of experience in the field. He invented SSH (Secure Shell), which is the plumbing used to manage most networks, servers, and data centers and to implement automation for cost-effective systems management and file transfers. He has also written several IETF standards, was the principal author of NIST IR 7966, and holds over 30 US patents, including some on the most widely used technologies in reliable telecommunications networks.
Security assessments drastically reduce your organization’s risk of suffering a data breach by identifying poor InfoSec and privacy practices among vendors, partners, contractors, and other third parties.
For most businesses, these assessments are a slow, unscalable, manual process that strains InfoSec teams and creates a backlog of security evaluations.
During this webcast, Jonathan Osmolski, Manager of Enterprise Records and Information Governance at Pekin Insurance, and Hariom Singh, Director of Product Management for Qualys Security Assessment Questionnaire (SAQ) will show you how you can free your organization from unreliable and labor-intensive manual processes, and optimize the accuracy of audit results.
You will learn how Pekin Insurance:
> Replicated its manual 76-question assessment process within SAQ’s web-based UI in just two hours
> Simplified the design, distribution, tracking, and analysis of multiple vendor risk assessment campaigns
> Gained improved visibility into its compliance performance metrics
> Increased the overall productivity and efficiency of its InfoSec team
This webcast will include a live demo and Q&A session.
Digital Service Providers need analytics to improve their operations and to exploit new revenue streams. The higher the quality of network intelligence, the more they can differentiate on their core asset to meet their subscribers’ needs and business partners’ expectations. Network engineering must take critical actions on very complex systems to improve service quality for customers, while Telco marketing needs actionable, self-service, easy analytics to exploit the most effective levers for increasing revenue.
Procera Networks is a 100 MUSD+ company with products serving more than 450 million subscribers across 80 tier-1 operators and 500+ enterprises around the world. Procera leverages big data and advanced analytics to build a complete service network ScoreCard with continuous measurement reports for all types of traffic from all live subscribers, all the time. In this webinar, we will discuss how records are pumped into a central Vertica analytical database from a distributed Packetlogic probe setup and aggregated across 15 different dimensions (e.g. location, handset, customer, tier, topology, line card, VNF) to see the true impact of these dimensions on the quality delivered to subscribers.
We will also discuss how Procera helps Telco companies monetize network data, enabling any “consumer industry” to deliver better-personalized marketing offers based on user habits and content consumption. Procera raises the bar on advanced analytics, using Vertica to store and mine subscriber behavior gathered via a Deep Packet Inspection (DPI) engine that detects 2,900 unique application signatures and 294 content categories. This rich, fine-grained information is used to profile subscribers into relevant “marketing personas” for highly targeted offerings. We will discuss the key critical success factors for executing monetization strategies, drawing on Procera’s experience, and the requirements of the underlying analytics platform.
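The dimensional roll-up described above can be sketched in miniature: per-subscriber quality records grouped and averaged by a chosen set of dimensions. In production this would be SQL (GROUP BY) against an analytical database such as Vertica; the field names and values below are invented for illustration.

```python
from collections import defaultdict

# Toy quality records; a real setup would have millions of rows
# and many more dimensions (location, handset, tier, VNF, ...).
records = [
    {"location": "north", "handset": "X1", "quality": 4.5},
    {"location": "north", "handset": "X2", "quality": 3.0},
    {"location": "south", "handset": "X1", "quality": 4.0},
    {"location": "north", "handset": "X1", "quality": 3.5},
]

def rollup(rows, dimensions):
    """Average the quality score grouped by the given dimensions,
    keyed by the tuple of dimension values."""
    sums = defaultdict(lambda: [0.0, 0])
    for row in rows:
        key = tuple(row[d] for d in dimensions)
        sums[key][0] += row["quality"]
        sums[key][1] += 1
    return {key: total / n for key, (total, n) in sums.items()}

by_location = rollup(records, ["location"])
by_loc_handset = rollup(records, ["location", "handset"])
```

Running the same roll-up over different dimension sets is what reveals which dimension actually drives the quality delivered to subscribers.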
For most enterprises dealing with increased security threats, limiting machine data collection is not an option. But with finite IT budgets, few organizations can continue to absorb the high costs of scaling high-end Network Attached Storage (NAS) or moving to and expanding a block-based storage footprint. Join this webinar to discover options for more cost-effective solutions that enable large-scale machine data ingestion and fast data access for security analytics.
- The common challenges companies see when scaling security workflows
- Why a high-performance cache works to solve these issues
- How to integrate cloud into processing and storage for additional scalability and efficiencies
Presenters will build an actionable framework in just thirty minutes and then take questions.
The latest version of the Data Center Automation (DCA) Suite is now available. DCA is an end-to-end lifecycle management solution for servers, databases and middleware. Now, DCA comes with new features—container-based deployment, unified compliance, and ChatOps collaboration.
Join Nisarg Shah, Director Product Management, as he discusses the latest release, answers your questions, and shares the new DCA UI.
• See the DCA Suite—provisioning, patching, compliance, open API orchestration
• Learn how container-based deployment delivers quick time-to-value
• Discover why automated compliance and remediation matters
• Hear about unified compliance—not just for servers, but across applications
• Understand how ChatOps helps collaboration in the enterprise
• See a demo: DCA compliance in action—PCI compliance scan, auto-remediation, visual compliance dashboard
The volume of data streaming into the data center has been growing exponentially for decades. Bandwidth requirements are expected to continue growing 25 percent to 35 percent per year. At the same time, lower latency requirements continue to escalate. As a result, the design of services and applications—and how they are delivered—is rapidly evolving.
Instead of a single dedicated server, information requests coming into the data center are now fulfilled by multiple servers cooperating in parallel. The traditional three-tier network is quickly being replaced by spine-and-leaf networks. As a result, the physical infrastructure must be able to support higher link speeds and greater fiber density while enabling quick and easy migration to new, more demanding applications.
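To put the growth rates quoted above in perspective, compounding 25 to 35 percent per year roughly triples to quadruples bandwidth demand in five years. A quick sketch of the arithmetic:

```python
def growth_factor(annual_rate, years):
    """Total demand multiplier after compounding
    `annual_rate` growth for `years` years."""
    return (1 + annual_rate) ** years

low = growth_factor(0.25, 5)   # ~3.05x over five years at 25%/yr
high = growth_factor(0.35, 5)  # ~4.48x over five years at 35%/yr
```

This compounding, combined with tightening latency requirements, is why infrastructure planned today has to anticipate several generations of link-speed upgrades.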
This webinar will address:
Solutions, support & decisions
40G or 25G lanes?
Preterminated or field-terminated cables?
Duplex or parallel transmission?
Singlemode, multimode or wideband multimode fiber?
Attendees will earn one BICSI Continuing Education Credit for attending.
Migrating to the cloud can transform an organization. But is your organization ready? It can be a struggle to translate buzz words into a successful paradigm shift from a legacy server/data center model to purchasing in a cloud broker model. And, if your move is not properly planned and effectively executed, your cloud initiative may result in unexpected cost overruns, or worse. A successful roadmap is not just about planning and migration, it’s also about aligning your organization to most effectively use the cloud.
In this live webinar, TierPoint Vice President of Professional Services Matt Brickey and Dell EMC Director of Data Mobility Peter Molnar will share their combined expertise, including client use cases, gained from helping hundreds of clients “cloudify” their applications and data and leverage the cloud to full advantage.
Join Matt and Peter as they answer key questions about creating a successful cloud roadmap, including:
- How to assess your organization’s cloud readiness
- How to evaluate which workloads are the best candidates for the cloud
- In a multi-cloud world, which cloud(s) are right for your organization?
- Crucial migration planning considerations and milestones
In this session, we will bring an NSX infrastructure online in a running physical environment – a powerful example of migrating to a virtual infrastructure with no downtime. We will discuss the linking of physical devices to the edge services gateway and configuring routing protocols on network devices and in the NSX environment. We will also explain why a clear knowledge of your ecosystem, from the physical switch on up, will help you better understand how the overlay and the underlay come together. This master class picks up after the software has been installed, at the point when you need to start securing and passing packets in your environment.
This session will include:
- A deep dive into setting up the physical and virtual network infrastructure.
- Demos you can follow to get NSX quickly running in your environment.
Business is moving faster than ever—and enterprise IT departments are struggling to keep up. Organizations need apps to run faster and more reliably, provision without the hassle, and scale on-demand while staying in budget. Hyper-converged infrastructure (HCI) makes all of that possible. Join us for the webcast to get a closer look at HCI, and find out how it can transform your data center.
You’ll learn about:
—The evolution of the modern data center
—What HCI is and the case for a software-defined approach
—The tangible—and intangible—benefits of HCI
—How HCI works in the real world
As the most recent outbreak of ransomware has proven once again, a debilitating attack can come from anywhere, with any sort of malware, and have a global impact. While headlines and marketing statements constantly shout “Zero Day”, even old malware can be used effectively to wreak havoc in a network if it’s not properly configured and up to date.
The threat landscape never stops evolving and neither should an enterprise’s cyber security strategy. New products, new features and efficient source of threat intelligence are just some of the tools that an enterprise should look for from their security vendors.
This session will look at the evolution of Advanced Threat Protection and how continuous development across the full range of technologies is crucial to maintaining security efficacy.
See the new and improved features in the NVMe version 1.3 specification. NVM Express hosts Jonmichael Hands of Intel, Co-Chairman of the NVMe Marketing Committee, who explains the key changes in the spec. Join to learn the latest in non-volatile memory standards.
Designing and deploying an effective predictive analytics model that is integrated into a company’s daily business operations can be very challenging. Data scientists often use complex machine learning models to exploit large volumes of data from multiple environments and technologies to deliver analytics that the business needs.
Join us on this webcast as we walk you through the data science journey applied to a real case and learn how you can automate the entire data science workflow.
See how the integration of Dataiku, the collaborative data science platform, and Vertica, the ultra-fast analytics database platform with built-in machine learning, can help you speed the deployment of data-intensive predictive analytics.
Learn how to:
• Design connections to existing data sources with Dataiku
• Understand your datasets with built-in charting capabilities and data pre-aggregation
• Reduce the time it takes for the data preparation phase
• Leverage the scalability and speed of Vertica
Enterprise customers and service providers have been researching ways to find a modern-day storage solution so they can move away from traditional monolithic infrastructure environments. Customers have been looking for ways to remove silos, reduce OPEX/CAPEX, and create operational efficiency, all with large scale-out capabilities and cloud agility, without sacrificing performance.
This webinar shares how customers can achieve these goals with a truly software-defined storage solution, and how companies have transformed seamlessly from traditional storage environments to flexible, cloud-like agile environments.
Hear from Steadfast, a leading service provider, which enables its customer base to focus on company priorities and grow the business instead of spending unnecessary time maintaining and managing traditional storage infrastructure.
In this session, you will hear about:
- A modern-day storage-as-a-service infrastructure with high performance, extreme scale-out flexibility and cloud agility
- How to eliminate manual data migration
- Fully automating the storage infrastructure
- Steadfast’s IT transformation
- Ensuring you’re prepared for the next series of challenges
Docker Containers and Kubernetes are taking the industry by storm, promising new levels of efficiency for both application developers and operations teams. Even organizations not engaged in a wholesale re-architecture of their application environments can benefit from desirable properties of containers like packaging, portability and service isolation.
While the benefits are real, many organizations face choppy waters as they evolve from initial public or private cloud deployments to production. While Kubernetes is impressive at abstracting resources and managing containers, new services still need to interact with existing applications, and challenges abound related to multi-tenancy, resource scarcity and ensuring that business priorities are met.
Join us for an informative and insightful look at containers in the enterprise, and learn about new technologies and strategies that can help ensure a smooth and trouble-free evolution as you chart your course from pilot to production. Whether you’re advanced in your use of containers or just getting started, there are important insights to be gained, including:
•451 Group’s latest research and analysis
•The “state of containers” and their adoption in IT
•Case studies and lessons learned from early adopters
•Considerations when deploying applications to production
•Techniques to maximize utilization and optimize expenditures
•Strategies for supporting mixed container and non-container workloads
Gemalto’s Breach Level Index reported 1.4 billion data records compromised worldwide in 2016, up 86% from 2015. Closer to home, there were 44 and 16 voluntarily reported breaches in Australia and New Zealand respectively. With the new Privacy Amendment (Notifiable Data Breaches) Act 2017 in Australia, these numbers are expected to increase dramatically as organisations become required to declare any “eligible data breaches”.
Navigating regulations such as the Australian Privacy Act and the European General Data Protection Regulation (GDPR), and understanding the impact they will have, can be daunting. Organisations must start planning ahead to mitigate the potential risks of non-compliance. The implications of a data breach can go beyond compliance. In 2014, the Target breach had a massive impact on the company’s brand reputation, while last year’s announcement of the Yahoo! data breach cost the company nearly $1.7 billion in stock market value.
During this webinar, Helaine Leggat, a legal expert in data protection regulations, will discuss the Australian Privacy Act Amendment in detail and what it means for businesses in Australia and internationally. Graeme Pyper, Regional Director at Gemalto, will provide recommendations to help prepare for the 2018 deadline. We will share industry best practices and methodologies companies can use to simplify a government audit process. Join our experts to ask questions and learn more about:
•The local and global government data privacy regulations (Australia and Europe)
•Gauging the true cost of a data breach and how to reduce the scope of risk
•Understanding privacy by design throughout business
•Strategies for simplifying operations for regulation and internal audits
•Determining current industry compliance, which may be applicable to the APA and GDPR
Is your disaster recovery (DR) strategy inhibiting your company’s growth? A hybrid DR approach can bring significant benefits but only if you go in with full knowledge and understanding. In this webinar, we’ll outline the four pitfalls that can sink a DR strategy.
How can a better storage infrastructure accelerate time to discovery for genomics research?
Join IDC research directors Dr. Alan Louie and Eric Burgener for an informative session as they discuss the life sciences research landscape and its key processes and requirements. They will share information about how today's data-intensive genomics workflows require high throughput and bandwidth to handle massive data sets in near real time. Brian Schwarz from Pure Storage will also share details about how a purpose-built "big data flash" scale-out NAS platform like FlashBlade can help research teams accelerate time to discovery.
Tune into the conversation to learn:
- Why data storage and life sciences research go hand in hand
- Technologies and best practice methodologies for genomic analysis, including containers and workflows
- How to handle massive data sets in real time
Many organisations would like more complete and accurate IT systems documentation but cannot find a way to move beyond their current methods, such as multiple isolated spreadsheets and diagrams. Our webinar will cover techniques to reduce workload and improve consistency for IT infrastructure technologies such as networks, servers, applications, data centres and cabling. The larger the infrastructure, the greater the benefits of our approach.
As businesses are under increased competitive pressure to transform into digital enterprises, they are relying on technology as never before. IT Service Management, deployed across the enterprise and focused equally on IT and non-IT services, becomes the enabler of digital growth. IT Service Management is required to manage the increased velocity of changes and releases initiating from DevOps and Agile, and all existing processes - such as incident, change and request management - must support Agile. The support methods must fit the evolution of the digital workplace and the changing nature of work. To be able to address new requirements from digital business, ITSM needs holistic automation to become easier, faster and transformative.
In this webinar we will explore how much and what kind of automation is needed in ITSM. You will learn how the HPE IT Service Management Automation solution is easier to manage and use, and faster to install, with the services a digital workforce needs. We will discuss the freedom and benefits you can enjoy with ITSM solutions based on container technology. We will also show a demonstration of a request-to-fulfillment process that can enable your IT to evolve into a strong digital business partner.
In this sixth entry in the popular “Everything You Wanted To Know About Storage But Were Too Proud To Ask” webcast series, we dig into the nitty-gritty storage details that are often taken for granted.
When looking at data from the lens of an application, host, or operating system, it’s easy to forget that there are several layers of abstraction underneath before the actual placement of data occurs. In this webcast we are going to scratch beyond the first layer to understand some of the basic taxonomies of these layers.
In this webcast we will cover:
•Storage APIs and POSIX
•Block, File, and Object storage
•Byte Addressable and Logical Block Addressing
•Log Structures and Journaling Systems
It’s an ambitious project, but these terms and concepts are at the heart of where compute, networking and storage intersect. A good grasp of them helps you decide which type of storage networking to use and understand how data is actually stored behind the scenes.
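One of the distinctions listed above, byte addressability versus logical block addressing, can be illustrated with a short sketch (the function name and 4 KiB block size are illustrative assumptions, not part of the webcast): block devices transfer whole blocks, so a byte-addressed request must be mapped onto the logical blocks that back it.

```python
BLOCK_SIZE = 4096  # a common logical block size; 512 bytes is also widespread

def byte_range_to_lbas(offset, length, block_size=BLOCK_SIZE):
    """Map a byte-addressed request to the logical block addresses backing it.

    Block devices only transfer whole blocks, so a byte range is rounded
    out to block boundaries (which is why small unaligned writes trigger
    a read-modify-write cycle underneath).
    """
    first_lba = offset // block_size
    last_lba = (offset + length - 1) // block_size
    return first_lba, last_lba

# A 100-byte read at byte offset 5000 touches only LBA 1 (bytes 4096-8191)
print(byte_range_to_lbas(5000, 100))  # -> (1, 1)
# The same read straddling a 4 KiB boundary touches two blocks
print(byte_range_to_lbas(4090, 100))  # -> (0, 1)
```

This boundary arithmetic is one reason file systems and journaling layers batch and align I/O before it reaches the block layer.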
Join us on July 6th for this session. We look forward to seeing you there!
Today, with hybrid cloud, colocation sites and managed service options, you can build a robust disaster recovery environment without physically building data centers, saving money and time. In this webinar you’ll hear best practices for leveraging new technologies for an optimal hybrid DR solution.
Want to deploy an enterprise-grade cloud solution without the headaches? Check out this joint webinar from Intel, Supermicro and Canonical on how to design, build and manage your OpenStack or Kubernetes environment with the Supermicro platform and value-added services from Canonical.
You will also learn about some of the most common use cases for Kubernetes and OpenStack: Machine Learning, NFV, CI / CD, and Transcoding.
Supermicro’s solutions offer highly scalable Ultra Enterprise Servers that can have up to 44 Cores, 3TB of memory, high performance NVMe storage on each node, and support for a wide variety of workloads. The solutions also support networking options such as SFP+, 10GBASE-T, 40G and InfiniBand, making them an ideal choice for Canonical’s Foundation cloud deployments.
Using these Supermicro servers, we have built a hyper-converged solution stack that has been tested and validated in Supermicro labs. Customers can choose a best-in-class hardware platform with OIL validation, the leading production OS for OpenStack deployments, and a networking overlay delivered as a fully managed service. Using Juju, the application and service-modeling tool, foundation cloud customers can integrate the infrastructure and operations that they need.
Join Arturo Suarez from Canonical, Srini Bala from Supermicro, and Michael J. Kadera from Intel as they explore a rich landscape of opportunities that combines Juju on Supermicro’s certified platforms to help you tackle the challenges of building and maintaining complex microservices based solutions like OpenStack and Kubernetes.
Data breaches in 2016 got even more personal with big hacks of adult entertainment sites and social media databases. Hackers mined these for gold: valuable data to fuel social engineering attacks, ransom operations and identity theft. According to Gemalto’s Breach Level Index, the number of stolen, compromised or lost records increased by 86% in 2016, while the number of breaches decreased by 4%. Hackers are going after more data than ever before, and they are finding it in large databases that are left relatively insecure.
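The two growth figures above can be combined into a quick back-of-the-envelope check of how much the average haul per breach grew. This is only a sketch: it assumes both percentages apply to the same population of reported breaches.

```python
records_growth = 1.86  # records compromised: up 86% in 2016
breach_growth = 0.96   # number of breaches: down 4% in 2016

# If total records grew 86% while breach count shrank 4%, the average
# number of records taken per breach grew by the ratio of the two.
records_per_breach_growth = records_growth / breach_growth  # ~1.94

print(f"Average records per breach grew by a factor of "
      f"{records_per_breach_growth:.2f}")
```

In other words, the average breach yielded nearly twice as many records as the year before, which is consistent with the claim that attackers are targeting larger, poorly secured databases.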
Whether consciously or not, hackers have grasped the idea of situational awareness. They have figured out how to exploit these golden opportunities by keeping a pulse on what is going on. It seems too simple to be true, but it goes back to the age-old principle that information is power. Getting that information comes from being aware of one’s surroundings. To become situationally aware, companies need to change their mindset: building a walled garden isn’t an option anymore. During the webinar, we will look at the major data breach trends and findings from 2016 and discuss how this information can help develop your situational awareness. Join us as we cover topics like:
-What we can learn from Jason Bourne about knowing one’s surroundings
-What we can learn from hackers to better protect valuable data
-What we as security professionals can do by going back to the basics of accountability, integrity, auditability, availability and confidentiality
-How to change our mindset in a new era of a hacker driven gold rush
Hear from Lee Caswell, VP of Products for storage and availability, about how the latest innovations in VMware vSAN 6.6 are helping customers simplify and accelerate their digital transformation.
VMware vSAN, our award-winning software powering leading hyper-converged infrastructure (HCI) solutions, continues to evolve as a core building block for the Software-Defined Data Center. After Lee’s introduction, vSAN product experts will cover how vSAN enables you to seamlessly extend virtualization to secure hyper-converged storage, to reach your objectives for cost and efficiency, and to scale to tomorrow’s business needs. We’ll also share insights gained from our vSAN beta program where we focused on features and capabilities that informed vSAN 6.6, including security, protection, and management.
Join us to learn how the latest features in vSAN 6.6 can help set your storage and availability agenda in 2017 and beyond.
IT environments are becoming larger and more complex through organic growth as well as acquisition. Accompanying initiatives, such as datacenter migrations, are expensive. Done well, these moves cost $1,200 to $6,000 per server. But when things go wrong? A poorly optimized load balancer could mean downtime for critical application servers, a price that could skyrocket to more than $25,000 per server.
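The per-server figures above make the stakes easy to quantify. A minimal sketch, using a hypothetical fleet of 200 servers (the fleet size is an assumption; the per-server costs come from the paragraph above):

```python
SERVERS = 200  # hypothetical fleet size for illustration

# Per-server migration costs quoted above
well_planned_low = SERVERS * 1_200    # best case, done well
well_planned_high = SERVERS * 6_000   # worst case, still done well
gone_wrong = SERVERS * 25_000         # downtime scenario per server

print(f"Well planned: ${well_planned_low:,} to ${well_planned_high:,}")
print(f"Gone wrong:   over ${gone_wrong:,}")
```

Even at the high end of a well-run migration, the budget is a fraction of what widespread downtime would cost, which is the argument for planning and troubleshooting before cutover.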
What if IT teams could be guided through the process? What if they had a way to map the plan and troubleshoot issues before they became outages?
In this webinar, we'll share a story from an ExtraHop customer that underwent a large datacenter migration after acquiring a new business. Not only did their IT team complete a successful migration, they also decreased troubleshooting time and cost to the company by 85 percent, freeing them to move beyond reactive firefighting to proactive solution building for the business.