The World Leaders in Active Transactional Data Replication
Once believed to be impossible, WANdisco's patented technology allows Big Data to be stored and queried with absolute reliability and security, unleashing limitless possibilities for innovation. That's Hadoop without limits. We cover topics such as hardening Hadoop for the enterprise, simplifying audit and compliance, and getting the most out of your multi-data center Hadoop investment. These interactive presentations are targeted at enterprise architects and IT infrastructure staff who are designing and implementing big data environments with Hadoop, HBase and related technologies.
Russ Hill, Account & Renewals Manager Americas at WANdisco, and Owen Ofiesh, Software Configuration Manager at MaxLinear
Join us to learn how MaxLinear relies on WANdisco to improve productivity with Subversion Multisite delivering results such as:
- A 24/7 continuous integration environment with zero Subversion downtime
- Improved administrative efficiencies with Access Control
- Elimination of the effects of network failures and the dependency on legacy backup procedures
- Overcoming the challenges with Subversion mirrors
About the Presenters:
Russ Hill, Account & Renewals Manager Americas at WANdisco. Russ Hill works with our existing SCM install base as an account manager and renewals specialist. He works closely with the WANdisco Professional Services team on all SCM service opportunities in North America and is currently responsible for all new SCM opportunities within the Americas.
Owen Ofiesh, Software Configuration Manager at MaxLinear. Owen Ofiesh is the Software Configuration Manager for MaxLinear, a global chip design firm. With over 15 years of experience in configuration management, he has a strong background in many of the most common SCM tools and platforms. Owen has worked with WANdisco Subversion MultiSite for over six years and has a deep understanding of how it compares and contrasts with other SCM tools.
Join this discussion with expert panel Paul Scott-Murphy of WANdisco, Jean-Pierre Dijcks from Oracle, and Nick Collins of Accenture, to learn how to:
- Deploy Oracle Big Data Appliance (BDA) and Big Data Cloud Service (BDCS) in environments running any mix of HCFS-compatible distributions, with a path to full cluster migration with no downtime and no data loss.
- Meet enterprise SLAs with Oracle Maximum Availability Architecture.
- Replicate selected data among multiple big data systems and verify that they remain consistent regardless of where they are ingested or changed.
- Replicate data at any geographic distance with significantly lower RPO and RTO than traditional approaches.
- Complete data transfers in approximately half the time of DistCp, regardless of the load imposed on the cluster.
- Overcome the limitations of traditional approaches that leverage DistCp or dual-ingest methods.
About the Presenters:
Paul Scott-Murphy, VP of Product Management at WANdisco. Paul Scott-Murphy has overall responsibility for defining and managing WANdisco's product strategy and for delivering products to market successfully. This includes directing the product management team, defining requirements, managing and prioritizing features, maintaining roadmaps, and coordinating product releases with customer and partner requirements, user testing, and feedback.
Jean-Pierre Dijcks, Master Product Manager at Oracle. Jean-Pierre is a product manager with 15 years of experience in enterprise software and enterprise data, currently responsible for all product management aspects (technology, go-to-market, enablement, etc.) of Oracle BDA and BDCS.
Nick Collins, Principal Applications Systems Analyst at MD Anderson Cancer Center. Nicholas Collins is a Principal Applications Systems Analyst at MD Anderson Cancer Center, where he chairs architecture for the Department of Clinical Analytics and Informatics. He has worked with Oracle technologies for over ten years and is a Master-level CDMP.
Join us as Paul Scott-Murphy, WANdisco VP of Product Management, discusses disaster recovery for Hadoop. Learn how to fully operationalize Hadoop to exceed the most demanding SLAs across clusters running any mix of distributions any distance apart, including how to:
- Enable continuous read/write access to data for automated forward recovery in the event of an outage
- Eliminate the expense of hardware and other infrastructure normally required for DR on-premises
- Handle out-of-sync conditions with guaranteed consistency across clusters
- Prevent administrator error leading to extended downtime and data loss during disaster recovery
Cloud migration and hybrid cloud with no downtime and no disruption:
If business-critical applications with continually changing data are really moving to the cloud, the typical lift-and-shift approach of copying your data onto an appliance, shipping it to the cloud vendor, and loading it onto their storage days later isn't going to work. Nor will the one-way batch replication solutions that can't maintain consistency between on-premises and cloud storage. Join us as we discuss how to migrate to the cloud without production downtime and, post-migration, deploy a true hybrid cloud, elastic data center solution that turns the cloud into a real-time extension of your on-premises environment. These capabilities enable a host of use cases, including using the cloud for offsite disaster recovery with no downtime and no data loss.
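The consistency problem with one-way batch replication can be seen in a toy sketch (plain Python, not WANdisco code; all names here are invented for illustration): if the source keeps changing while a batch copy runs, the resulting copy matches no single point-in-time state of the source.

```python
# Toy illustration: a one-way batch copy of a changing dataset ends up
# inconsistent with its source. Not real replication code.

def batch_copy(source, writes_during_copy):
    """Copy `source` key by key, applying concurrent writes mid-copy."""
    target = {}
    keys = list(source)          # snapshot of keys at copy start
    midpoint = len(keys) // 2
    for i, key in enumerate(keys):
        if i == midpoint:
            # Business-critical data keeps changing while the copy runs.
            source.update(writes_during_copy)
        target[key] = source[key]
    return target

source = {"a": 1, "b": 2, "c": 3, "d": 4}
copy = batch_copy(source, writes_during_copy={"a": 99, "e": 5})

# Keys copied before the mid-copy write ("a") hold stale values, and keys
# created during the copy ("e") are missing entirely.
print(copy == source)  # False
```

Active replication avoids this by coordinating each change as it happens rather than reconciling divergent copies after the fact, which is the distinction the discussion above draws between batch tools and a true hybrid deployment.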
Fusion® 2.10, the new major release from WANdisco, adds support for seamless data replication at petabyte scale from NetApp network file system (NFS) devices to any mix of on-premises and cloud environments. NetApp devices can continue processing normal operations while WANdisco Fusion® replicates data in phases with guaranteed consistency and no disruption to target environments, including those of cloud storage providers. This new capability supports hybrid cloud use cases such as on-demand burst-out processing for data analytics and offsite disaster recovery with no downtime and no data loss.
Join James Malone, Google Cloud Dataproc Product Manager, and Paul Scott-Murphy, WANdisco VP of Product Management, as they explain how to address the challenges of operating hybrid environments that span Google and on-premises services, showing how active data replication that guarantees consistency can work at scale. Register now to learn how to provide local speed of access to data across all environments, allowing hybrid solutions to leverage the power of Google Cloud.
In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop they look forward to getting rid of the ETL infrastructure.
But is it that simple? Some companies are finding that in order to move data between clusters for backup or aggregation purposes, whether on-premises or to the cloud, they are building systems that look an awful lot like ETL.
The cloud greatly extends disaster recovery options, yields significant cost savings by removing the need for DR hardware and support staff on-premises, and provides insurance against a total on-premises infrastructure failure. However, solutions available for cloud DR vary greatly, directly impacting the amount of downtime and data loss experienced after an outage. Join us as we review the solutions available and explain how the cloud can be used for on-premises system DR with virtually zero downtime and data loss.
Hadoop clusters are often built around commodity storage, but architects now have a wide selection of Big Data storage choices, including solid-state or spinning disk for clusters and enterprise storage for compatibility layers and connectors.
In this webinar, our CTO will review the storage options available to Hadoop architects and provide recommendations for each use case, including an active-active replication option that makes data available across multiple storage systems.