Hybrid Architecture for Big Data: On-Premises and Public Cloud

Presented by

Anant Chintamaneni, Vice President, Products, BlueData; Jason Schroedl, Vice President, Marketing, BlueData

About this talk

Watch this on-demand webinar to learn how to deploy Hadoop, Spark, and other Big Data tools in a hybrid cloud architecture. More and more organizations are using AWS and other public clouds for Big Data analytics and data science. But most enterprises have a mix of Big Data workloads and use cases: some on-premises, some in the public cloud, or a combination of the two. How do you support the needs of your data science and analyst teams in this new reality?

In this webinar, we discussed how to:

- Spin up instant Spark, Hadoop, Kafka, and Cassandra clusters – with Jupyter, RStudio, or Zeppelin notebooks
- Create environments once and run them on any infrastructure, using Docker containers
- Manage workloads in the cloud or on-prem from a common self-service user interface and admin console
- Ensure enterprise-grade authentication, security, access controls, and multi-tenancy

Don’t miss this webinar on how to provide on-demand, elastic, and secure environments for Big Data analytics – in a hybrid architecture.
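To make the "containerized environments" idea concrete: the same pattern the webinar describes can be sketched with a minimal Docker Compose definition. This is an illustrative example only — it uses the public `jupyter/all-spark-notebook` image rather than BlueData's platform, and the service name and volume path are assumptions for the sketch:

```yaml
# Illustrative sketch (not BlueData's product configuration): a single-node
# Spark + Jupyter environment built from a public Docker image. The same
# definition runs unchanged on a laptop, an on-premises server, or a cloud VM.
version: "3"
services:
  spark-notebook:
    image: jupyter/all-spark-notebook
    ports:
      - "8888:8888"   # Jupyter notebook web UI
      - "4040:4040"   # Spark application UI
    volumes:
      - ./notebooks:/home/jovyan/work   # persist notebooks on the host
```

Running `docker compose up` and opening `localhost:8888` gives a ready-to-use Spark notebook environment — a small-scale version of the "create environments once and run them on any infrastructure" approach discussed in the talk.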

Hewlett Packard Enterprise (HPE) is transforming how enterprises deploy AI / Machine Learning (ML) and Big Data analytics. HPE’s container-based software platform makes it easier, faster, and more cost-effective for enterprises to innovate with AI / ML and Big Data technologies – either on-premises, in the public cloud, or in a hybrid architecture. With HPE, our customers can spin up containerized environments within minutes, providing their data scientists with on-demand access to the applications, data, and infrastructure they need.