The storage community on BrightTALK is made up of thousands of storage and IT professionals. Find relevant webinars and videos on storage architecture, cloud storage, storage virtualization and more presented by recognized thought leaders. Join the conversation by participating in live webinars and round table discussions.
Increasing digitalization, the use of modern imaging-based analysis methods, and legal obligations to retain health data are causing storage requirements in healthcare to explode, posing a major challenge to efficient workflows and data queries.
•Software-based storage solutions – such as SUSE Enterprise Storage – offer a secure and cost-effective alternative without compromising on functionality or security.
•Introducing SAP applications enables fast access and intelligent real-time queries. SUSE Linux Enterprise Server for SAP Applications is the leading platform for SAP and SAP HANA applications.
In this webinar, our experts give an overview of these topics with a focus on the healthcare sector and will be available to answer your questions. Follow-up webinars will explore these topics in more depth.
Amazon’s S3 (Simple Storage Service) is recognized as the de facto standard interface for interacting with object stores. Deploying an object solution at scale requires rich and robust security that both protects data on the infrastructure and ensures only the right level of access is granted to end users. Join us for the second of our S3 webinars, when we discuss all things security related. You will learn:
- Accessing S3 resources using access keys and request signing
- How to use Identity and Access Management
- Supporting external users
- Using code to manage access permissions
- How data is protected in flight and at rest
- Encryption choices: S3-managed keys, a key management service, or customer-supplied keys
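The access-key-and-signing model in the first bullet rests on deriving a short-lived signing key from the long-term secret, so the secret itself never signs a request directly. As a rough illustration, here is the AWS Signature Version 4 signing-key derivation sketched with only the Python standard library; the credentials and string-to-sign below are placeholders, not real values:

```python
import hmac
import hashlib

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key. date is YYYYMMDD; each HMAC-SHA256
    step scopes the key more narrowly (date -> region -> service)."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Placeholder credentials for illustration only.
key = derive_signing_key("EXAMPLE-SECRET", "20160719", "us-east-1", "s3")
# The derived key then signs the canonical string-to-sign for one request.
signature = hmac.new(key, b"string-to-sign", hashlib.sha256).hexdigest()
```

Because the derived key is scoped to a single date, region, and service, a leaked signature cannot be replayed elsewhere, which is the property the webinar's "access keys and signing" discussion builds on.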
Join this webinar to see how the CloudPhysics Public Cloud Planning Rightsizer identifies opportunities to lower your costs of running applications on the public cloud.
The Public Cloud Planning Rightsizer automatically identifies on-premises virtual machines (VMs) that are over-provisioned with more resources (such as CPU and memory) than they use. This lets you match each workload to the ideal cloud instance. Rightsizing reveals the verifiable cost of running workloads in the cloud, so you can finally answer the question, “Will we save money by migrating applications to the cloud?”
This webinar shows how the Public Cloud Planning Rightsizer collects fine-grained resource utilization data from each VM and analyzes that data over time to discover the VM’s actual resource needs. Imagine an on-premises VM configured with 8 vCPUs: if the Rightsizer shows that it has never used more than 2 vCPUs, you can rightsize that VM to a smaller cloud instance and save substantially.
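The core of the analysis described above can be sketched in a few lines: size to a high percentile of observed usage rather than the provisioned amount. The code below is our own hypothetical illustration (the function names and instance catalog are invented, not CloudPhysics' implementation):

```python
# Hypothetical rightsizing sketch: recommend the smallest instance whose
# vCPU count covers the 95th percentile of observed CPU usage.
def p95(samples):
    ordered = sorted(samples)
    return ordered[int(0.95 * (len(ordered) - 1))]

def rightsize(used_vcpu_samples, instances):
    """instances: list of (name, vcpus) tuples, sorted smallest first."""
    need = p95(used_vcpu_samples)
    for name, vcpus in instances:
        if vcpus >= need:
            return name
    return instances[-1][0]  # nothing big enough: take the largest

# A VM provisioned with 8 vCPUs that never used more than 2:
samples = [0.5, 1.2, 1.8, 2.0, 1.1] * 20
catalog = [("small-2vcpu", 2), ("medium-4vcpu", 4), ("large-8vcpu", 8)]
choice = rightsize(samples, catalog)  # recommends the 2-vCPU instance
```

Using a percentile rather than the raw peak keeps one transient spike from inflating the recommendation; a real tool would weigh memory, disk, and network the same way.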
If you’re running Hadoop for big data analytics this webinar is for you!
Currently, there are two competing architectures for how to implement Hadoop Distributed File System (HDFS). The original HDFS approach utilizes storage co-located with the compute servers, but that can often present challenges of wasted compute and/or storage when you scale. An emerging alternative relies on dedicated storage resources shared by the compute cluster, providing a cost-effective and reliable solution.
Join Engineering Fellow and Chief Data Scientist Janet George as she compares and contrasts these two approaches and provides definitive quantitative guidelines to help planners and architects identify the best solutions for your Big Data and analytics needs.
Between these two methods, we’ll compare and contrast:
- Data Reliability and Bit Loss
- Cost of Capacity
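The cost-of-capacity comparison above largely comes down to storage overhead. As a back-of-the-envelope illustration (our own numbers, not figures from the talk): HDFS's default 3x replication keeps three full copies of every block, while a dedicated shared store can use (k data + m parity) erasure coding and keep far more of its raw capacity usable.

```python
def usable_fraction_replication(copies: int) -> float:
    """Usable fraction of raw capacity under n-way replication."""
    return 1.0 / copies

def usable_fraction_erasure(k: int, m: int) -> float:
    """Usable fraction under (k data + m parity) erasure coding."""
    return k / (k + m)

# HDFS default: 3x replication -> about a third of raw capacity is usable.
rep = usable_fraction_replication(3)
# An illustrative erasure-coding profile, e.g. 8 data + 3 parity fragments.
ec = usable_fraction_erasure(8, 3)
```

The trade-off, of course, is that erasure coding spends CPU and network on encode/rebuild work, which is part of the reliability-versus-cost analysis the session walks through.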
The ever changing Cloud Service Provider marketplace is filled with growing opportunities and increasing competition. Mike Slisinger, Cloud Solutions Architect at Nutanix, and Chris Feltham, Cloud Solution Sales Manager at Intel, will discuss how Nutanix and Intel collaborate on cloud technologies and solutions to help Cloud Service Providers solve infrastructure challenges and simplify operations. We will also discuss how current Nutanix and Intel powered Service Providers are building differentiated services that provide true business value to their customers.
Attending this webcast will give Cloud Service Providers a good understanding of how Intel and Nutanix can help reduce the costs of offering cloud services while enabling and growing new revenue streams for their business.
At the EMC World conference this May, we introduced the newest addition to EMC's cloud portfolio – Native Hybrid Cloud™ (NHC™) – an innovative platform for developing and delivering cloud applications. Fully integrated and optimized, the NHC platform lets you innovate and scale those innovations to new levels. It helps IT specialists and operations teams work more closely with developers, enabling rapid application development and launch while providing the necessary level of protection, governance, and visibility across the IT infrastructure. Native Hybrid Cloud helps enterprises realize the dream of cloud-native applications and DevOps practices with an environment that “just works.” The platform comprises the following components, which work seamlessly together:
- Native Hybrid Cloud is built on Pivotal Cloud Foundry®;
- Services for developers and IT operations;
- A flexible choice of IaaS platforms to run on – in the cloud or on physical hardware. Native Hybrid Cloud is based on the VCE® VxRack™ System 1000, a rack-scale hyper-converged infrastructure that delivers fundamentally new IaaS capabilities for developing, deploying, and running applications. Learn more in our webinar!
From managing Big Data to developing new applications, we are driven by open source technologies.
As more traditional companies consume, contribute and generally start engaging with open source, there is a critical need for a coordinated and thought through approach.
Best-in-class companies are creating open source offices that drive engagement with the community, coordinate how the company works with it, and put in place best practices in compliance, community and communication. Join Nithya Ruff, Director of Open Source Strategy, to hear how SanDisk established an open source office and the lessons we've learned.
The role of today’s IT leadership is transforming. In addition to mounting demands, technical challenges and operational considerations, there are economic realities to consider. Budgets are under real pressure and as a result, companies are always looking for ways to increase the efficiency of their IT organizations to hold or reduce costs.
To alleviate some of this pain, new generations of technologies have emerged, such as converged infrastructure, or the Unified Compute Platform (UCP) from Hitachi. This type of infrastructure replaces in-house solutions and provides increased automation and control. But, is technology alone an effective long-term tool to maintain or reduce costs?
IT Economics, from HDS, is a proven methodology that can help identify, measure and materially reduce the costs of IT. When paired with the appropriate technology, purpose-built assessments and services help map costs and plan investments around build strategies. Attend this webinar and learn more about this model, a combination of the right technology and the right strategic considerations for your IT investments.
Join us for a live webcast on July 20th, 2016, with Hitachi’s Chief Economist to gain insight into:
•IT cost and efficiency models, from in-house to Converged Infrastructures.
•A true picture of your current storage and related IT costs, along with accurate future projections.
•A clear path to manage storage and IT costs more effectively, while still meeting increasing demands.
Hyperconverged infrastructures combine compute and storage components into a modular, scale-out platform that typically includes a hypervisor and some comprehensive management software. The technology is usually sold as self-contained appliance modules running on industry-standard server hardware with internal HDDs and SSDs. This capacity is abstracted and pooled into a shared resource for VMs running on each module or ‘node’ in the cluster. Hyperconverged infrastructures are sold as stand-alone appliances or as software that companies or integrators can use to build their own compute environments for private or hybrid clouds, special project infrastructures or departmental/remote office IT systems.
•Understand what hyperconvergence is – and is not
•Understand the capabilities this technology can bring
•Learn where this technology is going
•Learn how and where it is being used in the enterprise
Digital transformation is on the agenda of every company and creates a new focus on agile software development. Join us to learn how platform as a service for software developers and operations (DevOps) transforms the underlying cloud infrastructure. We will cover the IT requirements and the important role of scale-out infrastructure, infrastructure as code and containers for such clouds.
From May 2018, the EU rules on data protection are changing, and all companies with more than 250 employees will need to reassess their practices. What’s more, the penalties for non-compliance are changing too—so now’s the time to get prepared.
The virtualization wave is beginning to stall as companies confront application performance problems that can no longer be addressed effectively.
DataCore’s Parallel I/O breakthrough not only solves the immediate performance problem facing multi-core virtualized environments, but it significantly increases the VM density possible per physical server. In effect, it achieves remarkable cost reductions through maximum utilization of CPUs, memory and storage while fulfilling the promise of virtualization.
Join us for this webinar where we will take an inside look into DataCore’s Parallel I/O technology and show you what it can do for businesses running Microsoft SQL Server to improve the performance of database-driven applications.
Cloud storage has transformed the storage industry; however, interoperability challenges that were overlooked during the initial stages of growth are now emerging as front-and-center issues. Join this Webcast to learn the major challenges faced by businesses that leverage services from multiple cloud providers or move from one provider to another.
The SNIA Cloud Data Management Interface standard (CDMI) addresses these challenges by offering data interoperability between clouds. SNIA and Tata Consultancy Services (TCS) have partnered to create a SNIA CDMI Conformance Test Program to help cloud storage providers achieve CDMI conformance.
As interoperability becomes critical, end user companies should include the CDMI standard in their RFPs and demand conformance to CDMI from vendors.
Join us on July 19th to learn:
•Critical challenges that the cloud storage industry is facing
•Issues in a multi-cloud provider environment
•Addressing cloud storage interoperability challenges
•How the CDMI standard works
•Benefits of CDMI conformance testing
•Benefits for end user companies
What is Digital Transformation? Why is it important? What does it mean for you? Your business? Your future?
Bob Plumridge is the Chief Technology Officer for Hitachi Data Systems in Europe Middle East & Africa. In this role Bob is responsible for aligning technology vision with business strategy and evangelising this vision to the press, analysts, existing and potential customers.
Greg Kinsey heads the Industrial IoT business at Hitachi Insight Group, with overall responsibility for strategy, innovation, product portfolio, and business P&L for digital transformation solutions in the manufacturing industry.
This week on White Space, we look at the safest data center locations in the world, as rated by real estate management firm Cushman & Wakefield.
It will come as no surprise that Iceland comes out on top, while the US and the UK have barely made the top 10.
French data center specialist Data4 is promoting Paris as a global technology hub, where it is planning to invest at least €100 million. Another French data center owned by Webaxys is repurposing old Nissan Leaf car batteries in partnership with Eaton.
We’ve also heard industry body TechUK outline an optimistic vision of Britain outside the EU – as long as the country remains within the single market and subscribes to the principles of the General Data Protection Regulation.
Eric Dey, Caringo Product Manager for Cloud, explains how Caringo's scale-out object storage integrates with Amazon S3 and is fully integrated with Active Directory (AD) and other LDAP servers. With extensive experience in product engineering, Eric will explain how Caringo makes integration simple, even allowing Users (or Admins) to create S3 access keys via our API or our Swarm Content Portal User Interface.
Running Oracle in your data center often presents one of three challenges: storage performance, scale, or complexity. Frustratingly, solving one challenge often leaves you dealing with another – a modern day whack-a-mole. If you’re continually pouring money into new server hardware or software licenses, but still coming up short on performance, this webinar is for you!
Join Rob Callaghan, Sr. Marketing Manager, to learn the 3 steps you need to take to solve these data center challenges without compromising on the others, and the different ways you can apply flash technology to achieve Oracle nirvana.
You’ll leave this webinar knowing how to:
- Reduce Storage Bottlenecks
- Consolidate Hardware
- Increase Performance
Curious about how best to use automation and orchestration for storage service offerings? Or wondering how best to develop a self-service catalog for storage services? How about pricing those services?
Wonder and worry no more. We’ve got the answers to these questions and many more.
On July 14, we’re clearing all the confusion around automation and orchestration for storage systems.
Find out how these technologies are already reshaping customer expectations, and hear how Service Providers have found success. We’ll examine how to ensure you build the services customers want, price them to maximize your ROI, and monitor performance to strengthen customer loyalty and retention.
Richard Hardy - SE Director and Chief Architect, U.S. Service Providers
Matt Robinson - CTO Ambassador, Service Provider Architect
Object storage is a secure, simple, scalable, and cost-effective means of embracing the explosive growth of unstructured data enterprises generate every day.
Many organizations, like large service providers, have already begun to leverage software-defined object storage to support new application development and DevOps projects. Meanwhile, legacy enterprise companies are in the early stages of exploring the benefits of object storage for their particular business and are searching for how they can use cloud object storage to modernize their IT strategies, store and protect data while dramatically reducing the costs associated with legacy storage sprawl.
This Webcast will highlight the market trends driving the adoption of object storage, the definition and benefits of object storage, and the use cases best suited to an underlying object storage infrastructure.
In this webcast you will learn:
•How to accelerate the transition from legacy storage to a cloud object architecture
•The benefits of object storage
•The primary use cases
•How object storage can enable your private, public or hybrid cloud strategy without compromising security, privacy or data governance
Featuring speakers from F5, Illumio, Nutanix, Rubrik, and Workspot. Compare and evaluate 4 leading hyperconverged platform-optimized solutions that expand the capabilities of the Nutanix enterprise cloud platform: F5 application delivery, Illumio adaptive security, Rubrik data protection, and Workspot VDI.
• Workspot's cloud-native, infinitely and instantly scalable orchestration architecture (aka VDI 2.0) enables enterprise-class VDI deployment in hours, in which you can use all your existing infrastructure (apps, desktops and data).
• Rubrik eliminates backup pain with automation, instant recovery, unlimited replication, and data archival at infinite scale -- with zero complexity.
• Visualization 2.0 from Illumio shows you a live, interactive map of all of your application traffic across your data centers and clouds, and identifies applications for secure migration to the Nutanix platform.
• F5 delivers your mission critical applications on an enterprise cloud that uniquely delivers the agility, pay-as-you-grow consumption, and operational simplicity of the public cloud without sacrificing the predictability, security, and control of on-premises infrastructure.
Most organizations investing in NetApp Filers count on the system to store user data and host virtual machine datastores from an environment like VMware. In addition, these organizations want their NetApp systems to do more and become the repository for the next wave of unstructured data: data generated by machines. NetApp systems are bursting at the seams, so these organizations are trying to decide what to do next.
To help you find out what to do next, join Storage Switzerland and Caringo for our live webinar and learn:
1. The modern unstructured data use cases
2. The challenges NetApp faces in addressing its customers’ issues
3. Other solutions: can all-flash or object storage solve these challenges?
4. Making the move: how to migrate from NetApp to other systems
5. How to repurpose, instead of replacing, your NetApp
Applications - the lifeblood of modern business - can be in a sorry state of affairs given today's forced alignment with server, OS, and storage boundaries. This can not only cause deployment delays and complexity, but it also results in underutilized hardware and inflated operational costs. There is a drive to embrace new technologies and methodologies in the enterprise, but this presents significant challenges. Limited application-awareness at the infrastructure level makes it nearly impossible to deliver on the promised SLAs and the tight coupling of applications and underlying operating software (OS or hypervisors) compromises application portability as well as developer productivity.
A growing number of enterprises are turning to application containers to support more efficient and effective development and deployment in an application-centric IT paradigm. By abstracting applications from the underlying infrastructure, containers can simplify application deployment, and enable seamless portability across machines and clouds. Containers can also enable significant cost savings by consolidating multiple applications per machine without compromising performance or predictability. Join us to learn more about container adoption in the enterprise and how a container-based server and storage virtualization environment can help take your software-defined datacenter transformation to the next level of an application-defined datacenter.
In this webinar, representatives of the Professional Services department will explain how the services the department offers can help you build and maintain business resilience and service availability, and ensure security.
The latest intelligent software-defined storage management solution from SUSE® is the first commercially supported solution based on the Jewel release of the Ceph open source project, giving customers early, supported and easy access to rapidly advancing Ceph community innovation.
Enhancements to SUSE Enterprise Storage 3 include early access to the following new features of Ceph:
•POSIX compliant Ceph filesystem (CephFS) adds native filesystem access, so customers now have unified block, object and file access in their SUSE Enterprise Storage cluster.
•Multisite object replication provides an asynchronous active/active multi-cluster environment to ensure replication at distance for improved disaster recovery, along with true long-distance replication for block storage using asynchronous RADOS Block Device (RBD) mirroring.
•A new framework to simplify management by providing the foundation for an advanced graphical user interface management tool (using openATTIC), as well as orchestration of the cluster using Salt.
The industry was surprised when Dell announced its intent to acquire EMC for $67 billion, the largest tech deal ever. Merging two large, stagnant companies with very different cultures and a high level of overlap in products can pose significant challenges.
Join this webinar to learn about:
- The acquisition implications and how it’ll affect your long-term storage investment
- The uncertainty on Dell and EMC’s roadmap and which products will continue to be invested in
- Alternative storage solutions that enable you to transform data into insights and value for your organization
Worried that storage infrastructure can’t support petabyte growth or next-generation workloads? Do you want to move more workloads to the cloud to help reduce costs and enable new opportunities for your business? If so, this webinar is for you!
Red Hat Ceph Storage is a massively scalable (we’re talking petabytes and beyond), software-defined storage solution that delivers unified storage (block, file, object) for your cloud environment. However, the challenge at PB scale is maintaining high performance and data center efficiency. That’s where Red Hat and SanDisk come into play!
Red Hat and SanDisk have partnered to deliver a Ceph-tested, Red Hat approved, and SanDisk flash-accelerated solution that delivers extreme performance, boundless scale, efficiency, and resiliency for Ceph and OpenStack environments. In this webinar Brent Compton of Red Hat and Venkat Kolli of SanDisk will discuss:
•Challenges faced within cloud environments
•Benefits of Red Hat Ceph for file, block and object storage
•Benefits of Running Ceph on the InfiniFlash™ System
•Configuration and use-cases
Don't let limitations stop you, and imagine the impossible today. To petabytes and beyond!
High Availability doesn’t trump Disaster Recovery, and there is nothing simple about creating a recovery capability for your business – unless you have a set of data protection and business continuity services that can be applied intelligently to your workload, managed centrally, and tested non-disruptively. The good news is that developing such a capability, which traditionally meant selecting among multiple point product solutions and then struggling to fit them into a coherent disaster prevention and recovery framework, just got a lot easier.
Join us and learn how DataCore’s Software-Defined and Hyper-Converged Storage platform provides the tools you need and a service management methodology you require to build a fully functional recovery strategy at a cost you can afford.
Join us for this insightful look into object storage for developers with Caringo Product Manager Ryan Meek. Ryan will take a close look at best-of-breed object storage architectures and discuss best practices for product integration through the HTTP REST API and the upcoming Dart SDK module and Search API.
Enterprises are widely adopting hyperconverged infrastructure to transform the way they deliver IT services. At the same time, with dropping prices and increasing storage density, we’ve reached an inflection point that is transforming decisions around all-flash deployments as well. If HCI is the path to the future, shouldn’t your storage decisions reflect that? With emerging technologies such as NVMe and 3D XPoint rapidly coming into the market, this session will dig into the new realities for enterprise datacenters and what could possibly be the ideal way to deploy flash.
Today, companies are increasingly looking into HCI solutions as server virtualization becomes pervasive, the cost of server-side flash drops, and demand increases for operational efficiency without silos.
Join us to learn about HCI trends and VMware hyper-converged software. We’ll discuss how your environment can benefit, and how you can build a simple, efficient and very cost-effective hyper-converged infrastructure—without starting from scratch.
We think differently. We innovate through software and challenge the IT status quo.
We pioneered software-based storage virtualization. Now, we are leading the Software-defined and Parallel Processing revolution. Our Application-adaptive software exploits the full potential of servers and storage to solve data infrastructure challenges and elevate IT to focus on the applications and services that power their business.
DataCore parallel I/O and virtualization technologies deliver the advantages of next generation enterprise data centers – today – by harnessing the untapped power of multicore servers. DataCore software solutions revolutionize performance, cost-savings, and productivity gains businesses can achieve from their servers and data storage.
Join this webinar to meet DataCore, learn about what we do and how we can help your business.
Are you a storage admin running business-critical workloads on vSphere? Replacing a traditional storage environment with hyper-converged infrastructure (HCI) solutions can give you a simpler, more efficient way to manage resources—and eliminate the guesswork that often leads to overprovisioning.
Learn what HCI can do to alleviate some of that pressure. We’ll discuss how VMware Virtual SAN 6.2 powers HCI with a new operational model for shared storage, including features that complement high-end SANs.
By offloading even one virtualized workload from SAN to Virtual SAN, you can reserve more expensive SAN or NAS capacity for higher-value workloads.
Topics in the webcast include:
- The advantages of VM-centric, policy-based storage
- Avoiding frequent storage requests for transient workloads
- Simplifying capacity planning by scaling compute and storage in tandem
- Focusing on optimizing production workloads
See how Virtual SAN hyper-converged storage, easy to learn and with the broadest set of consumption models, powers radically simple HCI solutions that solve critical problems for storage admins.
From Microsoft’s SQL Server® 2005 to the latest SQL Server 2016, many generations of the platform are in use in production environments. If you are evaluating whether or not to migrate your current platforms, or are architecting a brand-new infrastructure on a later generation, this webinar is for you!
In this webinar, Lee Howard, Sr. Solutions Engineer for SanDisk®, will discuss the importance of understanding the current demands of your SQL Server environment and how to leverage enterprise tools like SQLIO, IOMETER, SQLIOSim, and DPACK to measure:
- Memory Utilization
- Queue Depth
Lee will also share 3 key performance symptoms flash can treat – all backed up by quantifiable data. By pinpointing where the main performance bottlenecks lie, you can ensure you’re building a better SQL Server architecture to obtain the greatest performance gains and best ROI.
This is your time to build it better…with a little help from flash storage.
Ethernet technology has been a proven standard for over 30 years, and there are many networked storage solutions based on Ethernet. While storage devices are evolving rapidly with new standards and specifications, Ethernet is moving toward higher speeds as well: 10Gbps, 25Gbps, 50Gbps and 100Gbps, making it time to re-introduce Ethernet networked storage.
This live Webcast will start by providing a solid foundation on Ethernet networked storage and move to the latest advancements, challenges, use cases and benefits. You’ll hear:
•The evolution of storage devices - spinning media to NVM
•New standards: NVMe and NVMe over Fabric
•A retrospective on traditional networked storage, including SAN and NAS
•How new storage devices and standards will impact Ethernet networked storage
•Ethernet based software-defined storage and the hyper-converged model
•A look ahead at new Ethernet technologies optimized for networked storage in the future
Register today for this live Webcast where our experts will be on hand to answer your questions.
OpenStack private cloud deployments can become too complex and unpredictable, often requiring long implementation times and ongoing optimization. In this session, experts from VMware and EMC will discuss how VMware Integrated OpenStack (VIO), coupled with EMC XtremIO all-flash scale-out storage can dramatically simplify OpenStack implementations while delivering enterprise-class reliability, consistent predictable workload performance, and easy scalability.
As enterprise organizations turn towards hyperscale cloud services for DevOps, line of business, and data protection needs, one thing has become clear: IT departments need to maintain control over their data, regardless of where it resides. Join IDC as we explore how NetApp and IBM SoftLayer are enabling a Data Fabric that weaves together on-premises and off-premises systems in a single, centrally-controllable entity. Plus, find out how Data Fabric empowers you to better leverage the scalability and efficiency of hybrid cloud while still ensuring security.
Curtis Price, Program Vice President for Infrastructure Services, IDC
Melanie Posey, Research Vice President of Hosting and Managed Network Services programs, IDC
Louise Ledeen, Strategic Sales Executive - Cloud Solutions and Services for IBM Global Alliance, NetApp
Jarod Rodriguez, Cloud Business Architect - Service Providers, NetApp
In an increasingly connected world, enterprises need the ability to leverage leading-edge technology to respond faster to change, improve key business processes and access the real-time information that enables innovative action. For organisations considering how to take advantage of SAP S/4HANA as the digital core to drive business transformation, the right strategy can accelerate the journey.
Join this webinar to understand:
•The role of S/4HANA in the digital economy
•How a business outcomes approach can help focus IT priorities
•Practical considerations for planning your journey to S/4HANA and beyond
Although most VMware Virtual SAN design and sizing exercises are straightforward, careful planning at the outset can further reduce future operational effort. In Part 1 of this two-part webinar, we’ll discuss deployment options for Virtual SAN and the advantages of each.
We’ll start with a brief introduction to VMware Hyper-Converged Software (HCS), composed of vSphere, vCenter and Virtual SAN, and then discuss how its openness and flexibility allows it to be deployed a variety of ways to meet your specific needs:
•VxRail, a turn-key HCI appliance
•Virtual SAN Ready Nodes, a pre-certified hardware stack ready to run HCS
•Build Your Own, with complete flexibility to choose from certified components (any server that can run vSphere can be turned into an HCI solution)
Understanding key criteria for each will help you reach the same end goal of having an HCI solution with all the ongoing benefits around cost, flexibility, and performance—no matter which option you choose.
For many firms, particularly smaller and medium-sized ones, disaster recovery (DR) systems are seen as necessary but expensive and they are low on the list of priorities. This was certainly true in previous years, when backup/DR involved copying data regularly to a duplicate set of hardware, often in a second datacenter. However, with the growth of cloud and prevalence of colocation, there are ever more options for backup and DR, including DR-as-a-Service options that can be much more cost-effective.
However, IT decision makers have been primed to proceed with caution when it comes to the transition of services from inside their four walls to a cloud provider’s data center – and for good reason. The dizzying array of services offered today, and the perceived loss of control, weigh heavily on the minds of admins and leadership alike.
If disaster recovery is important to you, but you’re unsure how to navigate the litany of products and services available today, get started by understanding the basics, and learn about what your peers are doing to solve the real world problems of DR.
In this webinar we’ll take a look at 3 key considerations when doing DR planning:
1. How have cloud services changed the possibilities for RTO/RPO objectives?
2. Why are latency and carrier options important to DR planning?
3. Why (and how) are other organizations leveraging colocation as part of their overall DR strategy?
Regardless of your industry, databases form the core of your profitability. Whether online transaction processing systems, Big Data analytics systems, or reporting systems, databases manage your most important information – the kind of data that directly supports decisions and provides immediate feedback on business actions and results. The performance of databases has a direct bearing on the profitability of your organization, so smart IT planners are always looking for ways to improve the performance of databases and the apps that use them.
Join Augie Gonzalez, Subject Matter Expert at DataCore, to see whether hyperconvergence holds an answer to reducing latency and driving performance in database operations. But be careful: not all hyper-converged solutions show dramatic improvements across the I/O path.