
Succeeding with a Cloud Data Lake - from Architecture to Operations

To be successful, data lakes must evolve to support organizations’ ever-growing needs for real-time data; new exploration, discovery, and analysis; and batch and streaming data pipelines. Whether you’re thinking about complementing your data warehouse with a data lake, moving your on-premises data lake to the cloud, or already operating a cloud data lake, this webinar is a must-attend.

We’ll share key lessons learned over the last 18 months working with companies like Gannett, Nextdoor, Expedia, Zillow, and others that are running cloud data lakes at massive scale and delivering remarkable returns. We will also share best practices for building a cloud data lake operation, from people and tools to processes.

In this webinar, we’ll cover:
- Benefits of building a data lake in the cloud
- How to set the foundation for your data lake, including storage, access, metadata, and more (see the sketch after this list)
- Best practices for governing your data lake (privacy, security, financial governance)
- Tools required for managing and processing data in your data lake
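
As an illustration of the storage and metadata foundation item above, here is a minimal sketch of registering raw files already landed in object storage as an external metastore table, so every engine sees the same schema and location. The database name, table schema, and s3:// path are placeholders, and it assumes a Spark session with Hive metastore support; it is generic Spark SQL, not anything Qubole-specific.

    from pyspark.sql import SparkSession

    # A Spark session with Hive support, so the table lands in the shared metastore
    spark = SparkSession.builder.appName("lake_foundation").enableHiveSupport().getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS raw")

    # Register files already sitting in cloud storage as an external table;
    # the data stays in place and other engines (Presto, Hive) can query it too.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS raw.events (
            event_id STRING,
            event_ts TIMESTAMP,
            payload  STRING
        )
        PARTITIONED BY (event_date DATE)
        STORED AS PARQUET
        LOCATION 's3://my-data-lake/raw/events/'
    """)

Once registered, new partitions can be picked up with MSCK REPAIR TABLE raw.events or an explicit ALTER TABLE ... ADD PARTITION as data lands.
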
Recorded: Nov 7 2019 45 mins
Presented by Rangasayee Chandrasekaran and Akil Murali from Qubole

Channel profile
  • Mastering Data Governance on Cloud Data Lakes with Multiple Engines Nov 20 2019 6:00 pm UTC 60 mins
    Dhiraj Sehgal, Director of Product Marketing & Akil Murali, Director of Product Management, Security and Governance at Qubole
    As more organizations run ETL workloads, analytics, and machine learning on data residing in data lakes, there are inherent privacy and integrity risks that must be addressed. How, then, should organizations preserve privacy and control access to this data in line with regulations such as GDPR and CCPA?

    While most organizations have put some data governance measures in place for their data lakes, current file-level security measures and accepted best practices are not sufficient to meet data privacy and integrity requirements.

    In this webinar, Qubole data privacy and integrity experts will cover:

    - Maintaining data integrity and keeping sensitive information safe irrespective of open-source engine
    - Providing granular data access controls and the ability to mask data with Apache Ranger (see the sketch after this list)
    - Avoiding lost updates, dirty reads, stale reads and enforcing app-specific integrity constraints
    - Complying with “right to be forgotten” and “right to be erased” by ensuring that data in the data lake is current and deleted when necessary
    - A demo of Qubole’s built-in Apache Ranger and ACID support for data privacy and integrity
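
    As a rough illustration of the Ranger-based masking mentioned above, the sketch below creates a column-masking policy through Ranger's public REST API. The Ranger URL, service name, database/table/column names, group, and credentials are all placeholders, and the JSON field names should be verified against your Ranger version; this is a generic Apache Ranger example, not Qubole's built-in integration.

        import requests

        RANGER_URL = "http://ranger.example.com:6080"   # placeholder Ranger admin endpoint

        # A data-masking policy: analysts only ever see the last four digits of ssn.
        policy = {
            "service": "hive_prod",                     # placeholder Ranger service name
            "name": "mask_customer_ssn",
            "policyType": 1,                            # 1 = data-masking policy
            "resources": {
                "database": {"values": ["sales"]},
                "table":    {"values": ["customers"]},
                "column":   {"values": ["ssn"]},
            },
            "dataMaskPolicyItems": [{
                "accesses": [{"type": "select", "isAllowed": True}],
                "groups": ["analysts"],
                "dataMaskInfo": {"dataMaskType": "MASK_SHOW_LAST_4"},
            }],
        }

        resp = requests.post(f"{RANGER_URL}/service/public/v2/api/policy",
                             json=policy, auth=("admin", "changeme"))
        resp.raise_for_status()
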
  • Succeeding with a Cloud Data Lake - from Architecture to Operations Recorded: Nov 7 2019 45 mins
    Rangasayee Chandrasekaran and Akil Murali from Qubole
    To be successful, data lakes must evolve to support organizations’ ever-growing needs for real-time data; new exploration, discovery, and analysis; and batch and streaming data pipelines. Whether you’re thinking about complementing your data warehouse with a data lake, moving your on-premises data lake to the cloud, or already operating a cloud data lake, this webinar is a must-attend.

    We’ll share key lessons learned over the last 18 months working with companies like Gannett, Nextdoor, Expedia, Zillow, and others that are running cloud data lakes at massive scale and delivering remarkable returns. We will also share best practices for building a cloud data lake operation, from people and tools to processes.

    In this webinar, we’ll cover:
    - Benefits of building a data lake in the cloud
    - How to set the foundation for your data lake, including storage, access, metadata, and more
    - Best practices for governing your data lake (privacy, security, financial governance)
    - Tools required for managing and processing data in your data lake
  • Best Practices: How To Build Scalable Data Pipelines for Machine Learning Recorded: Oct 10 2019 41 mins
    Jorge Villamariona and Pradeep Reddy, Qubole
    Data engineers today serve a wider audience than just a few years ago. Companies now need to apply machine learning (ML) techniques on their data in order to remain relevant. Among the new challenges faced by data engineers is the need to build and fill data lakes, as well as to reliably deliver complete, large-volume data sets so that data scientists can train more accurate models.

    Aside from dealing with larger data volumes, these pipelines need to be flexible in order to accommodate the variety of data and the high processing velocity required by the new ML applications. Qubole addresses these challenges by providing an auto-scaling cloud-native platform to build and run these data pipelines.

    In this webinar we will cover:
    - Some of the typical challenges faced by data engineers when building pipelines for machine learning.
    - Typical uses of the various Qubole engines to address these challenges.
    - Real-world customer examples
  • Key Differences Between On-Prem and Cloud Data Platforms Recorded: Oct 3 2019 47 mins
    Purvang Parikh, Qubole
    Cloud service models have become the new norm for enterprise deployments in almost every category — and big data is no exception. The separation of storage and compute in the cloud affords unparalleled scale, efficiency, and economics compared to on-premises solutions.

    If you are using Cloudera, Hortonworks or MapR, you should attend this webinar to learn the key differences between on-premises and cloud solutions, considerations for selecting cloud data lakes and data warehouses, and how to build the right architecture for your organization’s analytics and machine learning needs.

    In this webinar, we’ll cover:

    - Difference between hosting an on-premises data platform in the cloud versus adopting a cloud-native architecture for data processing in the cloud
    - How a cloud data lake architecture differs from cloud data warehouses
    - How to move your data to the cloud and leverage big data engines like Apache Spark, Presto, Hive and more
    - Avoiding security and cost pitfalls that can derail your migration to the cloud
    - Demo of Qubole’s cloud-native platform
  • Right Tool for the Job: Using Qubole Presto for Interactive and Ad-Hoc Queries Recorded: Oct 3 2019 57 mins
    Goden Yao, Product Manager at Qubole
    Presto is the go-to query engine of Qubole customers for interactive and reporting use cases due to its excellent performance and ability to join unstructured and structured data in seconds. Many Qubole customers use Presto along with their favorite BI tools, such as PowerBI, Looker and Tableau to explore data and run queries.

    Two key criteria to look for in a query engine for interactive analytics are performance and cost. You want best-in-class performance to meet the short deadlines of interactive workloads, while reducing and/or controlling costs.

    Qubole Presto is a cloud-optimized version of open source Presto, with enhancements that improve performance, reliability and cost. In this webinar, we’ll cover:

    - When to use Presto versus other engines like Apache Spark
    - How to enable self-service access to your data lake
    - The key advantages of Qubole Presto over Open Source Presto
    - Live demo of running interactive and ad hoc queries using Qubole Presto
    - How customers like iBotta, Tivo and Return Path leverage Qubole Presto
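
    For readers who want to try this outside a BI tool, here is a minimal sketch of running an interactive query against a Presto endpoint from Python using the presto-python-client package. The host, port, user, catalog, schema, and table are placeholders; point them at your own Presto (or Qubole Presto) endpoint.

        import prestodb  # pip install presto-python-client

        conn = prestodb.dbapi.connect(
            host="presto.example.com",   # placeholder coordinator host
            port=8080,
            user="analyst",
            catalog="hive",
            schema="default",
        )
        cur = conn.cursor()
        cur.execute("""
            SELECT event_type, count(*) AS events
            FROM web_events            -- placeholder table in the data lake
            WHERE event_date = DATE '2019-11-01'
            GROUP BY event_type
            ORDER BY events DESC
            LIMIT 10
        """)
        for row in cur.fetchall():
            print(row)
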
  • Right Tool for the Job: Running Apache Spark at Scale in the Cloud Recorded: Sep 26 2019 49 mins
    Ashwin Chandra Putta, Sr. Product Manager at Qubole
    Apache Spark is a powerful open source engine used for processing complex, memory-intensive workloads. However, running Apache Spark in the cloud can be complex and challenging. Qubole has re-engineered Apache Spark, optimizing its performance and efficiency while reducing administrative overhead. Today, Qubole runs some of the world’s largest Apache Spark clusters in the cloud.

    In this webinar, we’ll take a deeper look at the use cases for Apache Spark, including ETL and machine learning, and compare Apache Spark on Qubole versus Open Source Apache Spark. We’ll cover:

    - Why Apache Spark is essential for big data processing
    - How to deploy Spark at scale in the cloud and enable all data users
    - The enhancements made to Qubole Spark
    - A live demo and real-world examples of Apache Spark on Qubole
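
    As a minimal sketch of the kind of ETL workload discussed above, the PySpark job below reads raw JSON from object storage, cleans it, and writes date-partitioned Parquet back to the lake. Paths and column names are placeholders, and the job is generic Apache Spark rather than anything Qubole-specific.

        from pyspark.sql import SparkSession, functions as F

        spark = SparkSession.builder.appName("events_etl").getOrCreate()

        # Raw, semi-structured events landed by an upstream ingestion job (placeholder path)
        events = spark.read.json("s3://my-data-lake/raw/events/")

        cleaned = (events
                   .filter(F.col("event_type").isNotNull())
                   .withColumn("event_date", F.to_date("event_ts")))

        # Write curated, columnar data partitioned by date for downstream ML and analytics
        (cleaned.write
                .mode("overwrite")
                .partitionBy("event_date")
                .parquet("s3://my-data-lake/curated/events/"))
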
  • Leveraging Streaming and Batch Data Sets for ML Applications Recorded: Sep 25 2019 32 mins
    Jorge Villamariona and Ojas Mulay from Qubole
    Data engineering is fast emerging as the most critical function in analytics and machine learning programs. The ability to build and manage data pipelines for streaming and batch data sets is critical for the downstream success of your ML applications.

    In this webinar, you will learn how to use Qubole’s cloud-native platform to acquire and transform data sets for data science and analytics, make data sets available to different users, and fully leverage your data lake throughout your organization. Our experts will also walk through a real-world example of how to use Apache Spark and Airflow, as well as Notebooks, to build an end-to-end solution.

    Attendees will learn how to:

    + Ingest data to/from a cloud storage data lake
    + Perform interactive data analysis and build AI/ML models
    + Transform data sets with Spark and build interactive dashboards
    + Seamlessly interact with other data sources
    + Deploy end-to-end data pipeline using Apache Airflow
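
    To show what the Airflow piece of such a pipeline can look like, here is a minimal DAG sketch that ingests raw data and then runs a Spark transformation once a day. The DAG id, schedule, and the two scripts it calls are placeholders, and the import path assumes an Airflow 1.10-era layout (in Airflow 2.x it becomes airflow.operators.bash).

        from datetime import datetime
        from airflow import DAG
        from airflow.operators.bash_operator import BashOperator  # airflow.operators.bash in Airflow 2.x

        dag = DAG(
            dag_id="events_pipeline",
            start_date=datetime(2019, 11, 1),
            schedule_interval="@daily",
            catchup=False,
        )

        # Placeholder commands: pull raw data into the lake, then run the Spark job on it
        ingest = BashOperator(task_id="ingest_raw",
                              bash_command="python ingest_raw.py",
                              dag=dag)
        transform = BashOperator(task_id="spark_transform",
                                 bash_command="spark-submit transform_events.py",
                                 dag=dag)

        ingest >> transform
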
  • Mastering Data Discovery on Cloud Data Lakes Recorded: Sep 19 2019 44 mins
    Rangasayee Chandrasekaran, Product Manager, Qubole
    In order to capture and analyze new and different types of data, corporations are augmenting their data warehouses and data marts with cloud data lakes. Certainly, capturing new and different types of data is important, but providing access to all users, providing tools that allow them to work the way they already do, and deriving value from those datasets remains the ultimate goal.

    In this webinar, we will outline data processing challenges faced by analysts in the enterprise and present a live demo of Qubole's Workbench—a powerful user interface that reduces time-to-insight by extending Qubole's multi-engine capabilities to data analysts and data scientists. Workbench enables data discovery by combining unstructured, semi-structured, and structured data in data lakes or data warehouses for analytics, machine learning, or processing with engines such as Apache Spark.

    Attendees will learn:
    -- Common data processing challenges for analytics
    -- The value of data lakes
    -- Best practices for working with structured and semi-structured datasets
    -- When to use Apache Spark, Presto and other engines
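
    As a small, engine-agnostic sketch of the discovery workflow described above, the PySpark snippet below joins semi-structured JSON clickstream data with a structured, curated customer table straight from the lake. Paths, table layouts, and column names are placeholders.

        from pyspark.sql import SparkSession

        spark = SparkSession.builder.appName("data_discovery").getOrCreate()

        # Semi-structured clickstream as it landed (placeholder path)
        clicks = spark.read.json("s3://my-data-lake/raw/clickstream/")
        # Structured, curated dimension data (placeholder path)
        customers = spark.read.parquet("s3://my-data-lake/curated/customers/")

        clicks.createOrReplaceTempView("clicks")
        customers.createOrReplaceTempView("customers")

        spark.sql("""
            SELECT c.segment, COUNT(*) AS page_views
            FROM clicks k
            JOIN customers c ON k.customer_id = c.customer_id
            GROUP BY c.segment
            ORDER BY page_views DESC
        """).show()
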
  • Data Engineering Pitfalls and How to Avoid Them Recorded: Sep 12 2019 45 mins
    Jorge Villamariona & Minesh Patel from Qubole
    Successful data engineering requires a wide range of technical skills to build and maintain diverse data pipelines. Whether your data engineering team builds traditional data pipelines that feed business intelligence reports and dashboards or leading edge streaming applications, data users throughout your organization rely on your ability to create consistent and reliable pipelines to feed any and all business applications.

    In this webinar you will learn how to avoid some of the most common data engineering pitfalls such as:

    -- Data team misalignment
    -- Not fully understanding your data customers
    -- Not using the right tools for the job
    -- Always pursuing home-grown solutions

    This webinar will cover simple yet practical solutions to these all-too-common challenges faced by data engineering teams.
  • Comparing, Contrasting and Selecting Engines and Clusters (Abstract) (AWS) Recorded: Aug 14 2019 65 mins
    Alex Aidun - Instructor, Purvang Parikh - SA
    * Understand the similarities and differences between the Engines and the Clusters
    * List the relevant use cases and when to use each Engine / Cluster
    * Identify the starting instance type for Engines / Clusters and use cases
  • Mastering Data Discovery on Cloud Data Lakes Recorded: Aug 14 2019 45 mins
    Rangasayee Chandrasekaran, Product Manager, Qubole
    In order to capture and analyze new and different types of data, corporations are augmenting their data warehouses and data marts with cloud data lakes. Certainly, capturing new and different types of data is important, but providing access to all users, providing tools that allow them to work the way they already do, and deriving value from those datasets remains the ultimate goal.

    In this webinar, we will outline data processing challenges faced by analysts in the enterprise and present a live demo of Qubole's Workbench—a powerful user interface that reduces time-to-insight by extending Qubole's multi-engine capabilities to data analysts and data scientists. Workbench enables data discovery by combining unstructured, semi-structured, and structured data in data lakes or data warehouses for analytics, machine learning, or processing with engines such as Apache Spark.

    Attendees will learn:
    -- Common data processing challenges for analytics
    -- The value of data lakes
    -- Best practices for working with structured and semi-structured datasets
    -- When to use Apache Spark, Presto and other engines
  • Migrating to a Modern Cloud-Native Data Lake with Microsoft Azure and Qubole Recorded: Jul 30 2019 60 mins
    Jeff King, Sr. Program Manager at Microsoft & Anita Thomas, Principal Product Manager at Qubole
    Cloud service models have become the new norm for enterprise deployments in almost every category — and big data is no exception. As the volume, variety, and velocity of data increase exponentially, the cloud offers a more efficient and cost-effective option for managing the unpredictable and bursty workloads associated with big data compared to traditional on-premises data centers.

    Organizations looking to scale their big data projects and implement a data-driven business culture can do so with greater ease on the cloud. However, adopting a cloud deployment model requires a cloud-first re-architecture and a platform approach rather than a simple lift and shift of data applications and pipelines.

    Join experts from Microsoft and Qubole as they discuss the modern cloud-native data lake architecture, how it contrasts with cloud data warehouses, and how the use of Azure Data Lake Storage and Qubole can deliver secure, enterprise-scale analytics and machine learning. In this webinar, you'll learn:

    - Benefits of migrating to a modern cloud-native data lake
    - Choosing the right data architecture
    - Getting your data lake right with Azure ADLS and Qubole
    - Defining per-user data access controls on ADLS using Active Directory
    - Demo
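
    As a rough sketch of the ADLS piece, the snippet below configures a Spark session to read from an Azure Data Lake Storage Gen2 container using an Azure AD service principal (the OAuth client-credentials flow). The account, container, tenant, and app credentials are placeholders, and per-user credential passthrough with Active Directory is configured differently; verify the fs.azure.* keys against the hadoop-azure documentation for your distribution.

        from pyspark.sql import SparkSession

        ACCOUNT = "mydatalake"                          # placeholder ADLS Gen2 account name
        HOST = f"{ACCOUNT}.dfs.core.windows.net"

        spark = (SparkSession.builder.appName("adls_access")
                 .config(f"spark.hadoop.fs.azure.account.auth.type.{HOST}", "OAuth")
                 .config(f"spark.hadoop.fs.azure.account.oauth.provider.type.{HOST}",
                         "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
                 .config(f"spark.hadoop.fs.azure.account.oauth2.client.id.{HOST}", "<app-id>")
                 .config(f"spark.hadoop.fs.azure.account.oauth2.client.secret.{HOST}", "<app-secret>")
                 .config(f"spark.hadoop.fs.azure.account.oauth2.client.endpoint.{HOST}",
                         "https://login.microsoftonline.com/<tenant-id>/oauth2/token")
                 .getOrCreate())

        # Read curated data from a container in the account (placeholder container/path)
        df = spark.read.parquet(f"abfss://curated@{HOST}/sales/")
        df.show(5)
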
  • Data Engineering Pitfalls and How to Avoid Them Recorded: Jul 11 2019 46 mins
    Jorge Villamariona & Minesh Patel from Qubole
    Successful data engineering requires a wide range of technical skills to build and maintain diverse data pipelines. Whether your data engineering team builds traditional data pipelines that feed business intelligence reports and dashboards or leading edge streaming applications, data users throughout your organization rely on your ability to create consistent and reliable pipelines to feed any and all business applications.

    In this webinar you will learn how to avoid some of the most common data engineering pitfalls such as:

    -- Data team misalignment
    -- Not fully understanding your data customers
    -- Not using the right tools for the job
    -- Always pursuing home-grown solutions

    This webinar will cover simple yet practical solutions to these all-too-common challenges faced by data engineering teams.
  • Enterprise-Scale Big Data Analytics on Google Cloud Platform (GCP) Recorded: Jun 19 2019 57 mins
    Naveen Punjabi from Google & Anita Thomas from Qubole
    As companies scale their data infrastructure on Google Cloud, they need a self-service data platform with integrated tools that enables easier, more collaborative processing of big data workloads.

    Join Qubole and Google experts to learn:

    - Why a unified experience with native notebooks, a command workbench, and integrated Apache Airflow is a must for enabling data engineers and data scientists to collaborate using the tools, languages, and engines they are familiar with.

    - The importance of enhanced versions of Apache Spark, Hadoop, Hive and Airflow, along with dedicated support and specialized engineering teams by engine, for your big data analytics projects.

    - How workload-aware autoscaling, aggressive downscaling, intelligent Preemptible VM support, and other administration capabilities are critical for proper scalability and reduced TCO.

    - How you can deliver day-1 self-service access to process the data in your GCP data lake or BigQuery data warehouse, with enterprise-grade security, as sketched below.
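
    As a small sketch of that day-1 self-service access on GCP, the snippet below runs a query against a BigQuery table using the google-cloud-bigquery client and application-default credentials. The project, dataset, and table names are placeholders; the same pattern applies whether the data sits in BigQuery or in an external table over a GCS data lake.

        from google.cloud import bigquery  # pip install google-cloud-bigquery

        client = bigquery.Client()  # picks up application-default credentials

        query = """
            SELECT country, COUNT(*) AS orders
            FROM `my-project.sales.orders`   -- placeholder project.dataset.table
            GROUP BY country
            ORDER BY orders DESC
            LIMIT 10
        """
        for row in client.query(query).result():
            print(row.country, row.orders)
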
  • Right Tool for the Job: Running Apache Spark at Scale in the Cloud Recorded: May 30 2019 50 mins
    Ashwin Chandra Putta, Sr. Product Manager at Qubole
    Apache Spark is a powerful open source engine used for processing complex, memory-intensive workloads. However, running Apache Spark in the cloud can be complex and challenging. Qubole has re-engineered Apache Spark, optimizing its performance and efficiency while reducing administrative overhead. Today, Qubole runs some of the world’s largest Apache Spark clusters in the cloud.

    In this webinar, we’ll take a deeper look at the use cases for Apache Spark, including ETL and machine learning, and compare Apache Spark on Qubole versus Open Source Apache Spark. We’ll cover:

    - Why Apache Spark is essential for big data processing
    - How to deploy Spark at scale in the cloud and enable all data users
    - The enhancements made to Qubole Spark
    - A live demo and real-world examples of Apache Spark on Qubole
  • Right Tool for the Job: Using Qubole Presto for Interactive and Ad-Hoc Queries Recorded: May 23 2019 58 mins
    Goden Yao, Product Manager at Qubole
    Presto is the go-to query engine of Qubole customers for interactive and reporting use cases due to its excellent performance and ability to join unstructured and structured data in seconds. Many Qubole customers use Presto along with their favorite BI tools, such as PowerBI, Looker and Tableau to explore data and run queries.

    Two key criteria to look for in a query engine for interactive analytics are performance and cost. You want best-in-class performance to meet the short deadlines of interactive workloads, while reducing and/or controlling costs.

    Qubole Presto is a cloud-optimized version of open source Presto, with enhancements that improve performance, reliability and cost. In this webinar, we’ll cover:

    - When to use Presto versus other engines like Apache Spark
    - How to enable self-service access to your data lake
    - The key advantages of Qubole Presto over Open Source Presto
    - Live demo of running interactive and ad hoc queries using Qubole Presto
    - How customers like iBotta, Tivo and Return Path leverage Qubole Presto
  • Why You Need a Cloud Platform to Succeed with Big Data Recorded: Jan 24 2019 54 mins
    Matheen Raza and Sandeep Dabade from Qubole
    As the volume, variety, and velocity of data increase, the cloud is the most efficient and cost-effective option for machine learning and advanced analytics. Organizations looking to scale their big data projects can do so more easily with a cloud-native data platform.

    Qubole provides a single platform for data engineers, analysts, and scientists that supports multiple use cases -- from machine learning to predictive analytics. The platform saves organizations up to 50 percent in data processing costs by leveraging multiple engines like Apache Spark, Presto, and Hive, and automatically provisions, manages, and optimizes cloud resources.

    Join experts from Qubole as they demonstrate how to get the most out of your data on the cloud. In this webinar, you'll learn:

    - The benefits of a single platform and centralized access to data
    - How to pick the right data processing engines and tools
    - How to save money with intelligent cluster management and financial governance
    - Key considerations to evaluate cloud data platforms
  • Delivering Self-Service Analytics and Discovery from your Data Lake Recorded: Dec 13 2018 31 mins
    Jorge Villamariona, Qubole
    As corporations augment their corporate data warehouses and data marts with cloud data lakes in order to support new big data requirements, the question of how to grant governed access to those data lakes becomes more pressing. Certainly, capturing new and different types of data is important, but deriving value from those datasets remains the ultimate goal.

    Whether data lake consumers write SQL or leverage third-party BI and visualization tools, what matters is that they can continue to be productive using the skills and tools they already know. The difference is that those tools and skills should now be used with back-end engines that can help them quickly sift through petabytes of data while also supporting fast interactive queries.

    This means that in order for those data lake investments to succeed, it is important for data admins to provide: SQL access to all authorized data, support for BI tools, cross-team collaboration capabilities, and governed self-service.

    In this webinar we will cover:
    - Data collaboration and access using SQL
    - Tools that enable fast self-service for different teams
    - Considerations for choosing the right SQL back-end for your use case
  • Best Practices: How To Build Scalable Data Pipelines for Machine Learning Recorded: Nov 28 2018 42 mins
    Jorge Villamariona and Pradeep Reddy, Qubole
    Data engineers today serve a wider audience than just a few years ago. Companies now need to apply machine learning (ML) techniques on their data in order to remain relevant. Among the new challenges faced by data engineers is the need to build and fill data lakes, as well as to reliably deliver complete, large-volume data sets so that data scientists can train more accurate models.

    Aside from dealing with larger data volumes, these pipelines need to be flexible in order to accommodate the variety of data and the high processing velocity required by the new ML applications. Qubole addresses these challenges by providing an auto-scaling cloud-native platform to build and run these data pipelines.

    In this webinar we will cover:
    - Some of the typical challenges faced by data engineers when building pipelines for machine learning.
    - Typical uses of the various Qubole engines to address these challenges.
    - Real-world customer examples
  • Keeping Costs Under Control When Processing Big Data in the Cloud Recorded: Nov 13 2018 48 mins
    Amit Duvedi and Balaji Mohanam, Qubole
    The biggest mistake businesses make when spending on data processing services in the cloud is assuming that the cloud will lower their overall cost. While the cloud has the potential to offer better economics both in the short and long term, the bursty nature of big data processing requires following cloud engineering best practices, such as upscaling and downscaling infrastructure and leveraging the spot market for the best pricing, to realize those economics.

    Businesses also fail to appreciate the potential for runaway costs in a 100% variable-cost environment, something they rarely have to worry about in a fixed-cost, on-premises environment. In the absence of financial governance, companies leave themselves vulnerable to cost overruns, where even a single rogue query can result in tens of thousands of dollars in unbudgeted spend.

    In this webinar you’ll learn how to:

    - Identify areas of cost optimization to drive maximum performance for the lowest TCO
    - Monitor total costs at the application, user, and account level
    - Provide admins the ability to control and design the infrastructure spend
    - Automatically optimize clusters for lower infrastructure spend based on custom-defined parameters
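
    As a generic illustration of cost monitoring outside any particular platform, the sketch below pulls daily spend grouped by a cost-allocation tag from the AWS Cost Explorer API via boto3. The tag key (cluster_owner), date range, and granularity are placeholders, and it assumes such tags are applied to your clusters.

        import boto3

        ce = boto3.client("ce")  # AWS Cost Explorer

        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": "2019-11-01", "End": "2019-11-08"},
            Granularity="DAILY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "TAG", "Key": "cluster_owner"}],  # hypothetical cost-allocation tag
        )

        # Print spend per day per tagged owner so runaway clusters surface quickly
        for day in resp["ResultsByTime"]:
            for group in day["Groups"]:
                print(day["TimePeriod"]["Start"],
                      group["Keys"][0],
                      group["Metrics"]["UnblendedCost"]["Amount"])
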
Elemental to Big Data
At our core, we are a team of engineers who eat, sleep, and live big data. We believe that ubiquitous access to information is the key to unlocking a company's success. To achieve this, a big data platform must be agile, flexible, scalable, and proactive to anticipate a company's needs.
