WANdisco | Big Data & Cloud

  • Keeping Subversion In Sync Across Global Data Centers: Case Study with MaxLinear
    Keeping Subversion In Sync Across Global Data Centers: Case Study with MaxLinear Russ Hill, Account & Renewals Manager Americas at WANdisco, Owen Ofiesh, Software Configuration Manager at MaxLinear Recorded: Aug 24 2017 45 mins
    Join us to learn how MaxLinear relies on WANdisco to improve productivity with Subversion MultiSite, delivering results such as:

    - A 24/7 continuous integration environment with zero Subversion downtime
    - Improved administrative efficiencies with Access Control
    - Elimination of the effects of network failures and the dependency on legacy backup procedures
    - Overcoming the challenges of Subversion mirrors

    About the Presenters:

    Russ Hill, Account & Renewals Manager Americas at WANdisco. Russ Hill works with our existing SCM install base as an account manager and renewals specialist. He works closely with the WANdisco Professional Services team on all SCM service opportunities in North America and is currently responsible for all new SCM opportunities within the Americas.

    Owen Ofiesh, Software Configuration Manager at MaxLinear. Owen Ofiesh is the Software Configuration Manager for MaxLinear, a global chip design firm. With over 15 years of experience in configuration management, he has a strong background in many of the most common SCM tools and platforms. Owen has worked with WANdisco Subversion MultiSite for over six years and has a deep understanding of how it compares and contrasts with other SCM tools.
  • Maximum Availability Architecture for Oracle BDA and BDCS
    Maximum Availability Architecture for Oracle BDA and BDCS Paul Scott-Murphy, Product Management, WANdisco, Jean-Pierre Dijcks, Product Manager, Oracle, Nick Collins, Analyst, Accenture Recorded: Jul 20 2017 55 mins
    Join this discussion with expert panel Paul Scott-Murphy of WANdisco, Jean-Pierre Dijcks from Oracle, and Nick Collins of Accenture, to learn how to:

    - Deploy Oracle Big Data Appliance (BDA) and Big Data Cloud Service (BDCS) in environments running any mix of HCFS compatible distributions with a path for full cluster migration with no downtime and no data loss.
    - Meet enterprise SLAs with Oracle Maximum Availability Architecture.
    - Replicate selected data among multiple big data systems and verify that they remain consistent regardless of where they are ingested or changed.
    - Replicate data at any geographic distance with low RPO and RTO.
    - Complete data transfers in approximately half the time of DistCp, regardless of the load imposed on the cluster.
    - Overcome the limitations of traditional approaches that leverage DistCp or dual-ingest methods.

    About the Presenters:

    Paul Scott-Murphy, VP of Product Management at WANdisco. Paul Scott-Murphy has overall responsibility for the definition and management of WANdisco's product strategy, the delivery of product to market, and its success. This includes direction of the product management team, product strategy, requirements definition, feature management and prioritization, roadmaps, coordination of product releases with customer and partner requirements, and user testing and feedback.

    Jean-Pierre Dijcks, Master Product Manager at Oracle. A highly experienced product manager with 15 years in enterprise software and enterprise data, Jean-Pierre is currently responsible for all product management aspects (technology, GTM, enablement, etc.) of Oracle BDA and BDCS.

    Nick Collins, Principal Applications Systems Analyst at Accenture. Nicholas Collins is a Principal Applications Systems Analyst at MD Anderson Cancer Center, where he serves as the chair of architecture for the Department of Clinical Analytics and Informatics. He has worked with Oracle technologies for over ten years and is a Master Level CDMP.
  • Disaster Recovery for Hadoop
    Disaster Recovery for Hadoop Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud Recorded: May 11 2017 40 mins
    Join us as Paul Scott-Murphy, WANdisco VP of Product Management, discusses disaster recovery for Hadoop. Learn how to fully operationalize Hadoop to exceed the most demanding SLAs across clusters running any mix of distributions any distance apart, including how to:

    - Enable continuous read/write access to data for automated forward recovery in the event of an outage
    - Eliminate the expense of hardware and other infrastructure normally required for DR on-premises
    - Handle out-of-sync conditions with guaranteed consistency across clusters
    - Prevent administrator error leading to extended downtime and data loss during disaster recovery
  • Cloud migration & hybrid cloud with no downtime and no disruption
    Cloud migration & hybrid cloud with no downtime and no disruption Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud and James Curtis, 451 Research Senior Analyst Recorded: Apr 13 2017 46 mins
    Cloud migration and hybrid cloud with no downtime and no disruption:
    If business-critical applications with continually changing data are really moving to the cloud, the typical lift-and-shift approach of copying your data onto an appliance and shipping it to the cloud vendor to load onto their storage days later isn't going to work. Nor will one-way batch replication solutions that can't maintain consistency between on-premises and cloud storage. Join us as we discuss how to migrate to the cloud without production downtime and, post-migration, deploy a true hybrid cloud, elastic data center solution that turns the cloud into a real-time extension of your on-premises environment. These capabilities enable a host of use cases, including using the cloud for offsite disaster recovery with no downtime and no data loss.
  • Continuous Replication and Migration for Network File Systems
    Continuous Replication and Migration for Network File Systems Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud Recorded: Apr 11 2017 43 mins
    Fusion® 2.10, the new major release from WANdisco, adds support for seamless data replication at petabyte scale from Network File Systems for NetApp devices to any mix of on-premises and cloud environments. NetApp devices are now able to continue processing normal operations while WANdisco Fusion® allows data to replicate in phases with guaranteed consistency and no disruption to target environments, including those of cloud storage providers. This new capability supports hybrid cloud use cases for on-demand burst-out processing for data analytics and offsite disaster recovery with no downtime and no data loss.
  • Building a truly hybrid cloud with Google Cloud
    Building a truly hybrid cloud with Google Cloud James Malone, Google Cloud Dataproc Product Manager and Paul Scott-Murphy, WANdisco VP of Product Management Recorded: Mar 30 2017 50 mins
    Join James Malone, Google Cloud Dataproc Product Manager and Paul Scott-Murphy, WANdisco VP of Product Management, as they explain how to address the challenges of operating hybrid environments that span Google and on-premises services, showing how active data replication that guarantees consistency can work at scale. Register now to learn how to provide local speed of access to data across all environments, allowing hybrid solutions to leverage the power of Google Cloud.
  • ETL and big data: Building simpler data pipelines
    ETL and big data: Building simpler data pipelines Paul Scott-Murphy Recorded: Feb 14 2017 61 mins
    In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop they look forward to getting rid of the ETL infrastructure.

    But is it that simple? Some companies are finding that in order to move data between clusters for backup or aggregation purposes, whether on-premises or to the cloud, they are building systems that look an awful lot like ETL.
  • Using the cloud for on-premises disaster recovery
    Using the cloud for on-premises disaster recovery Paul Scott-Murphy, VP Product Management Recorded: Jan 26 2017 53 mins
    The cloud greatly extends disaster recovery options, yields significant cost savings by removing the need for DR hardware and support staff on-premises, and provides insurance against a total on-premises infrastructure failure. However, solutions available for cloud DR vary greatly, directly impacting the amount of downtime and data loss experienced after an outage. Join us as we review the solutions available and explain how the cloud can be used for on-premises system DR with virtually zero downtime and data loss.
  • Big data storage: Options and recommendations
    Big data storage: Options and recommendations Jagane Sundar, WANdisco CTO Recorded: Jan 11 2017 41 mins
    Hadoop clusters are often built around commodity storage, but architects now have a wide selection of Big Data storage choices, including solid-state or spinning disk for clusters and enterprise storage for compatibility layers and connectors.

    In this webinar, our CTO will review the storage options available to Hadoop architects and provide recommendations for each use case, including an active-active replication option that makes data available across multiple storage systems.
  • Big data replication to Amazon S3
    Big data replication to Amazon S3 Paul Scott-Murphy, VP Product Management Recorded: Dec 14 2016 45 mins
    Paul Scott-Murphy, WANdisco VP of Product Management, will explain the benefits of moving to the cloud and review the AWS tools available for cloud migration and hybrid cloud deployments.