
Three ways to get a handle on data governance in hybrid cloud environments

Join experts from New Context and WANdisco as they explain the tools and techniques for securely managing big data in hybrid cloud environments.

Attendees will learn what's required for effective data governance with the highest levels of availability and performance on-premises and in the cloud.
Recorded: Nov 3 2016 37 mins
Presented by
Experts from New Context and WANdisco
  • Disaster Recovery for Hadoop Recorded: May 11 2017 40 mins
    Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud
    Join us as Paul Scott-Murphy, WANdisco VP of Product Management, discusses disaster recovery for Hadoop. Learn how to fully operationalize Hadoop to exceed the most demanding SLAs across clusters running any mix of distributions any distance apart, including how to:

    - Enable continuous read/write access to data for automated forward recovery in the event of an outage
    - Eliminate the expense of hardware and other infrastructure normally required for DR on-premises
    - Handle out-of-sync conditions with guaranteed consistency across clusters
    - Prevent administrator errors that lead to extended downtime and data loss during disaster recovery
  • Cloud migration & hybrid cloud with no downtime and no disruption Recorded: Apr 13 2017 46 mins
    Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud and James Curtis, 451 Research Senior Analyst
    If business-critical applications with continually changing data are really moving to the cloud, the typical lift-and-shift approach of copying your data onto an appliance and shipping it to the cloud vendor to load onto their storage days later isn’t going to work. Nor will the one-way batch replication solutions that can’t maintain consistency between on-premises and cloud storage.

    Join us as we discuss how to migrate to the cloud without production downtime and, after migration, deploy a true hybrid cloud, elastic data center solution that turns the cloud into a real-time extension of your on-premises environment. These capabilities enable a host of use cases, including using the cloud for offsite disaster recovery with no downtime and no data loss.
  • Continuous Replication and Migration for Network File Systems Recorded: Apr 11 2017 43 mins
    Paul Scott-Murphy, WANdisco VP Product Management Big Data/Cloud
    Fusion® 2.10, the new major release from WANdisco, adds support for seamless data replication at petabyte scale from Network File System (NFS) storage on NetApp devices to any mix of on-premises and cloud environments. NetApp devices are now able to continue processing normal operations while WANdisco Fusion® allows data to replicate in phases with guaranteed consistency and no disruption to target environments, including those of cloud storage providers. This new capability supports hybrid cloud use cases for on-demand burst-out processing for data analytics and offsite disaster recovery with no downtime and no data loss.
  • Building a truly hybrid cloud with Google Cloud Recorded: Mar 30 2017 50 mins
    James Malone, Google Cloud Dataproc Product Manager and Paul Scott-Murphy, WANdisco VP of Product Management
    Join James Malone, Google Cloud Dataproc Product Manager, and Paul Scott-Murphy, WANdisco VP of Product Management, as they explain how to address the challenges of operating hybrid environments that span Google and on-premises services, showing how active data replication that guarantees consistency can work at scale. Register now to learn how to provide local speed of access to data across all environments, allowing hybrid solutions to leverage the power of Google Cloud.
  • ETL and big data: Building simpler data pipelines Recorded: Feb 14 2017 61 mins
    Paul Scott-Murphy
    In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop they look forward to getting rid of the ETL infrastructure.

    But is it that simple? Some companies are finding that in order to move data between clusters for backup or aggregation purposes, whether on-premises or to the cloud, they are building systems that look an awful lot like ETL.
  • Using the cloud for on-premises disaster recovery Recorded: Jan 26 2017 53 mins
    Paul Scott-Murphy, VP Product Management
    The cloud greatly extends disaster recovery options, yields significant cost savings by removing the need for DR hardware and support staff on-premises, and provides insurance against a total on-premises infrastructure failure. However, solutions available for cloud DR vary greatly, directly impacting the amount of downtime and data loss experienced after an outage. Join us as we review the solutions available and explain how the cloud can be used for on-premises system DR with virtually zero downtime and data loss.
  • Big data storage: Options and recommendations Recorded: Jan 11 2017 41 mins
    Jagane Sundar, WANdisco CTO
    Hadoop clusters are often built around commodity storage, but architects now have a wide selection of Big Data storage choices, including solid-state or spinning disk for clusters and enterprise storage for compatibility layers and connectors.

    In this webinar, our CTO will review the storage options available to Hadoop architects and provide recommendations for each use case, including an active-active replication option that makes data available across multiple storage systems.
  • Big data replication to Amazon S3 Recorded: Dec 14 2016 45 mins
    Paul Scott-Murphy, VP Product Management
    Paul Scott-Murphy, WANdisco VP of Product Management, will explain the benefits of moving to the cloud and review the AWS tools available for cloud migration and hybrid cloud deployments.
  • Three ways to get a handle on data governance in hybrid cloud environments Recorded: Nov 3 2016 37 mins
    Experts from New Context and WANdisco
    Join experts from New Context and WANdisco as they explain the tools and techniques for securely managing big data in hybrid cloud environments.

    Attendees will learn what's required for effective data governance with the highest levels of availability and performance on-premises and in the cloud.
  • Big data storage: options & recommendations Recorded: Oct 20 2016 51 mins
    Jagane Sundar, WANdisco CTO
    Hadoop clusters are often built around commodity storage, but architects now have a wide selection of Big Data storage choices, including solid-state or spinning disk for clusters and enterprise storage for compatibility layers and connectors.

    In this webinar, our experts will review the storage options available to Hadoop architects and provide recommendations for each use case, including an active-active replication option that makes data available across multiple storage systems.
  • New Hive and performance features in WANdisco Fusion 2.9 Recorded: Oct 6 2016 47 mins
    WANdisco
    WANdisco Fusion 2.9 delivers new levels of performance and scalability, with a number of enhancements that make it easier to support hybrid big data deployments with continuous and consistent access across any mix of on-premises and cloud environments.

    One of the most significant new features is the application of WANdisco’s patented active transactional replication to the Hive metastore, used to support familiar SQL-like access to Hadoop. Changes made to table definitions and other objects in the Hive metastore are replicated to Hive metastore instances deployed with other Hadoop clusters. Data added to the underlying Hadoop cluster referencing the new object definitions is replicated to other Hadoop clusters in a subsequent step, to guarantee consistent access across clusters and locations. (A simplified sketch of this two-step ordering appears after this list.)

    WANdisco Fusion 2.9’s patented technology also:

    • Enables throughput in excess of 100,000 transactions per minute - well beyond the largest enterprise requirements
    • Ensures operations are not affected by transient network outages or “flapping WAN” links
    • Supports scenarios in hybrid cloud deployments where on-premises environments cannot accept inbound network connection requests

    Join us on October 6th at 10AM Pacific, 1PM Eastern as we present and demo the latest release of WANdisco Fusion.
  • New Hive and performance features in WANdisco Fusion 2.9 (EMEA) Recorded: Oct 5 2016 50 mins
    Paul Scott-Murphy, VP Product Management
    WANdisco Fusion 2.9 delivers new levels of performance and scalability, with a number of enhancements that make it easier to support hybrid big data deployments with continuous and consistent access across any mix of on-premises and cloud environments.

    One of the most significant new features is the application of WANdisco’s patented active transactional replication to the Hive metastore, used to support familiar SQL-like access to Hadoop. Changes made to table definitions and other objects in the Hive metastore are replicated to Hive metastore instances deployed with other Hadoop clusters. Data added to the underlying Hadoop cluster referencing the new object definitions is replicated to other Hadoop clusters in a subsequent step, to guarantee consistent access across clusters and locations.

    WANdisco Fusion 2.9’s patented technology also:

    • Enables throughput in excess of 100,000 transactions per minute - well beyond the largest enterprise requirements
    • Ensures operations are not affected by transient network outages or “flapping WAN” links
    • Supports scenarios in hybrid cloud deployments where on-premises environments cannot accept inbound network connection requests

    Join us on October 5th as we present and demo the latest release of WANdisco Fusion.
  • Migrating your big data infrastructure to cloud Recorded: Sep 8 2016 59 mins
    GigaOM analyst William McKnight with experts from Qubole and WANdisco
    GigaOM analyst William McKnight will be joined by experts from Qubole and WANdisco, who will explain the benefits of moving to the cloud and review the tools available for cloud migration and hybrid cloud deployments.

    Learn what's required to avoid the downtime and business disruption that often accompany cloud migration projects.

    Limited Time Offer - View Qubole and WANdisco's Special Quick Start Package: http://bit.ly/2cGupC6
  • Deploying mission critical applications on Hadoop, on-premises and in the cloud Recorded: Jul 21 2016 63 mins
    Jim Wankowski, IBM Worldwide Cloud Data Services and James Campigli, Co-Founder and Chief Product Officer of WANdisco
    Global enterprises have quietly funneled enormous amounts of data into Hadoop over the last several years. Hadoop has transformed the way organizations deal with big data. By making vast quantities of rich unstructured and semi-structured data quickly and cheaply accessible, Hadoop has opened up a host of analytic capabilities that were never possible before, driving business value.

    The challenges have revolved around operationalizing Hadoop to enterprise standards and leveraging cloud-based Hadoop-as-a-service (HaaS) options that offer a vast array of analytics applications and processing capacity that would be impossible to deploy and maintain in-house.

    This webcast will explain how solutions from IBM and WANdisco address these challenges by supporting:

    - Continuous availability with guaranteed data consistency across Hadoop clusters any distance apart, both on-premises and in the cloud.
    - Migration to cloud without downtime and hybrid cloud for burst-out processing and offsite disaster recovery.
    - Flexibility to eliminate Hadoop distribution vendor lock-in and support migration to cloud without downtime or disruption.
    - IBM's BigInsights in the cloud, and BigSQL, which allows you to run standard ANSI-compliant SQL against your Hadoop data.
  • Build An Effective, Fast And Secure Data Engine With Hortonworks & WANdisco Recorded: Jun 23 2016 45 mins
    Dave Russell, Hortonworks and Mark Lewis, WANdisco
    Data is coming from everywhere. The first challenge is just being able to get hold of it, curate and convey it in a secure and transparent manner. Hortonworks DataFlow (HDF) is the tool that collects data at the edge, processes and secures data in motion, and brings data into your data-at-rest platform (HDP).

    Once you have your data collected in a valuable data lake, you need resilience, control over its location, and safety against failure. That’s where WANdisco Fusion & Hortonworks HDP come in. With WANdisco Fusion on HDP, an enterprise can now build an effective, fast and secure data engine out of multiple Hadoop clusters, getting the most business value out of its HDP deployment with a reliable and high-performing Big Data service.

    Join Hortonworks & WANdisco in this webinar to learn how to make this a reality.
  • Bringing Hadoop into the Banking Mainstream Recorded: Jun 9 2016 60 mins
    James Curtis (451 Research) & Jim Campigli (WANdisco)
    Global banks have the most rigorous availability, performance and data security standards. Join 451 Research and WANdisco as we explore the cutting-edge techniques leading financial services firms are using to fully operationalize Hadoop to meet these standards and leap ahead of their competition. Register for this webinar and get the free white paper entitled "Bringing Hadoop into the Banking Mainstream."
  • Making Hybrid Cloud a Reality Recorded: Apr 21 2016 39 mins
    Jim Campigli and Jagane Sundar
    Solutions for seamlessly moving data between on-premises and cloud environments are virtually non-existent. This webinar explains how to achieve a true hybrid cloud deployment that supports on-demand burst-out processing, in which data moves in and out of the cloud as it changes, and enables the cloud to be used for offsite disaster recovery without downtime or data loss.
  • ETL and Big Data: Building Simpler Data Pipelines Recorded: Feb 11 2016 53 mins
    Chris Almond, Solutions Architect
    In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop they look forward to getting rid of the ETL infrastructure.

    But is it that simple? Some companies are finding that in order to move data between clusters for backup or aggregation purposes, they are building systems that look an awful lot like ETL.
  • No More DR Sites Recorded: Oct 22 2015 42 mins
    Brett Rudenstein
    Disaster recovery sites are typically underutilized, with idle hardware and software that are used only in an emergency. Why let your valuable resources remain idle?

    In this webinar, you’ll learn how you can take full advantage of the resources in what would ordinarily be your DR site by using active-active replication to provide full utilization as well as complete failover with lower RPO and RTO.
  • EMEA/APAC - Hadoop Migration and Upgrade without Downtime or Data Loss Recorded: Oct 8 2015 41 mins
    Paul Scott-Murphy
    Migrating your Hadoop cluster between versions or distributions is difficult. It is a critical process that, if done incorrectly, can lead to the loss of data, system downtime, and disruption of business activities.

    In this webinar, learn how you can mitigate the risk in a migration by developing a comprehensive migration strategy and leveraging tools like those from WANdisco to simplify and automate your migration.
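The Fusion 2.9 entries above describe a two-step replication order for Hive: table and object definitions in the Hive metastore are replicated to peer metastores first, and the underlying data follows in a subsequent step, so no cluster sees data that references an unknown definition. The short Python sketch below illustrates only that ordering idea; Cluster, Metastore, replicate_ddl, and replicate_data are hypothetical stand-ins for illustration, not WANdisco Fusion APIs.

```python
# Hypothetical sketch of metadata-before-data replication (not WANdisco's implementation).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Metastore:
    """Stand-in for a Hive metastore: table name -> column definitions."""
    tables: Dict[str, List[str]] = field(default_factory=dict)


@dataclass
class Cluster:
    """Stand-in for a Hadoop cluster with a metastore and table data."""
    name: str
    metastore: Metastore = field(default_factory=Metastore)
    data: Dict[str, List[str]] = field(default_factory=dict)


def replicate_ddl(source: Cluster, peers: List[Cluster], table: str, columns: List[str]) -> None:
    """Step 1: apply the table definition on the source and every peer metastore."""
    for cluster in [source, *peers]:
        cluster.metastore.tables[table] = columns


def replicate_data(source: Cluster, peers: List[Cluster], table: str, rows: List[str]) -> None:
    """Step 2: copy the underlying data only after every target knows the table."""
    for cluster in [source, *peers]:
        if table not in cluster.metastore.tables:
            raise RuntimeError(f"{cluster.name}: table {table!r} missing; refusing to write data")
        cluster.data.setdefault(table, []).extend(rows)


if __name__ == "__main__":
    on_prem = Cluster("on-prem")
    cloud = Cluster("cloud")

    replicate_ddl(on_prem, [cloud], "trades", ["id INT", "symbol STRING", "qty INT"])
    replicate_data(on_prem, [cloud], "trades", ["1,ACME,100", "2,WAND,250"])

    # The cloud cluster received the table definition before any data arrived.
    print(cloud.metastore.tables["trades"])
    print(cloud.data["trades"])
```

The point of the ordering is simply that the definition always lands before data that depends on it; the actual product also provides consistency guarantees across WAN outages, which this sketch does not attempt to model.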
The World Leaders in Active Transactional Data Replication
WANdisco's patented technology makes possible what was once believed impossible: storing and querying Big Data with absolute reliability and security, unleashing limitless possibilities for innovation. That's Hadoop without limits. We cover topics such as hardening Hadoop for the enterprise, simplifying audit and compliance, and getting the most out of your multi-data center Hadoop investment. These interactive presentations are targeted at enterprise architects and IT infrastructure staff who are designing and implementing big data environments with Hadoop, HBase and related technologies.
