Not Your Father's Database

Presented by

Vida Ha

About this talk

This session will cover a series of use cases where you can store your data cheaply in files and analyze it with Apache Spark, as well as use cases where you want to store your data in a different data source and access it with Spark DataFrames. Here's an example outline of the topics that will be covered in the talk.

Use cases for storing data in file systems for use with Apache Spark:

1. Analyzing a large set of data files.
2. Doing ETL of a large amount of data.
3. Applying machine learning and data science to a large dataset.
4. Connecting BI/visualization tools to Apache Spark to analyze large datasets internally.

Use cases for storing data in databases for use with Apache Spark:

1. Random access, frequent inserts, and updates of rows in SQL tables. Databases have better performance for these use cases.
2. Supporting incremental updates of databases into Spark. It's not performant to update Spark SQL tables backed by files. Instead, you can use message queues with Spark Streaming, or do an incremental select, to make sure your Spark SQL tables stay up to date with your production databases.
3. External reporting with many concurrent requests. While Spark's ability to cache your file data in memory allows fast interactive querying, that may not be optimal for supporting many concurrent requests. It's better to use Spark to ETL your data into summary tables, or some other format, in a traditional database to serve your reports if you have many concurrent users to support.
4. Searching content. A Spark job can certainly be written to filter or search files for any content you'd like, but ElasticSearch is a specialized engine designed to return search results more quickly.
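The first file-backed use case, analyzing a set of data files with Spark SQL, can be sketched as follows. This is a minimal sketch assuming PySpark is available in local mode; a tiny in-memory DataFrame stands in for what would normally be a directory of files read with something like `spark.read.json(...)`, and the column names (`method`, `status`) are illustrative.

```python
from pyspark.sql import SparkSession

# Local-mode session; on a cluster you would typically omit .master("local[*]").
spark = SparkSession.builder.master("local[*]").appName("file-analysis").getOrCreate()

# Stand-in for a large directory of log files; in practice you would read the
# whole directory at once, e.g. spark.read.json("/path/to/logs/") (path illustrative).
events = spark.createDataFrame(
    [("GET", 200), ("GET", 500), ("POST", 200)],
    ["method", "status"],
)
events.createOrReplaceTempView("events")

# Interactive SQL directly over the data -- no database load step required.
counts = spark.sql("SELECT status, COUNT(*) AS n FROM events GROUP BY status")
result = {row["status"]: row["n"] for row in counts.collect()}
# result == {200: 2, 500: 1}
```

Because Spark treats a directory of files as one logical table, the same query works unchanged whether the input is three rows or terabytes of logs.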
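The incremental-select approach mentioned above follows a high-water-mark pattern: each sync pulls only rows changed since the newest timestamp already loaded, rather than re-reading the whole production table. A sketch of the pattern in plain Python, with an in-memory list standing in for the production table (all names and timestamps are illustrative; in practice the filter would be pushed down to the database in the JDBC query):

```python
def incremental_rows(table, last_loaded_ts):
    """Return only rows updated strictly after the high-water mark."""
    return [row for row in table if row["updated_at"] > last_loaded_ts]

# Illustrative stand-in for a production database table.
production = [
    {"id": 1, "updated_at": 100},
    {"id": 2, "updated_at": 200},
    {"id": 3, "updated_at": 300},
]

# The first sync loads everything; later syncs pass the newest timestamp seen.
first_sync = incremental_rows(production, 0)
high_water = max(row["updated_at"] for row in first_sync)   # 300
second_sync = incremental_rows(production, high_water)      # [] -- nothing new yet
```

The same idea drives the message-queue variant: instead of polling with a timestamp predicate, each change event arrives on the queue and Spark Streaming applies it to the table.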
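The reporting use case above hinges on pre-aggregation: Spark does the heavy ETL once, and the small summary table is what the reporting database serves to many concurrent users. The aggregation step, sketched in plain Python with illustrative field names (a real job would do this in Spark and write the result out via JDBC):

```python
from collections import defaultdict

def build_summary(events):
    """Collapse raw events into a small table keyed by (day, status), so a
    reporting database can answer dashboard queries without scanning raw data."""
    summary = defaultdict(int)
    for event in events:
        summary[(event["day"], event["status"])] += 1
    return dict(summary)

# Illustrative raw events; in production this would be billions of rows.
raw = [
    {"day": "2016-06-01", "status": 200},
    {"day": "2016-06-01", "status": 200},
    {"day": "2016-06-01", "status": 500},
]
summary_table = build_summary(raw)
# summary_table == {("2016-06-01", 200): 2, ("2016-06-01", 500): 1}
```

The summary is orders of magnitude smaller than the raw data, so a traditional database can serve it to many concurrent report requests cheaply.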