Querying hundreds of petabytes of data demands fast query execution, especially as data accumulates over time. Keeping queries efficient is a challenge: tables gradually collect many small files, and the data may no longer be optimally organized.
In this talk, we will cover:
- Apache Iceberg table format
- Problems in the data lake: small files and poorly organized data
- Techniques such as partitioning, compaction, and metrics filtering
- Overlapping metrics problem
- Solving it with sorting and Z-order clustering
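To give a flavor of the overlapping metrics problem before the talk: query engines use per-file min/max column statistics to skip files, but when files are written unsorted, their value ranges overlap and few files can be skipped. The sketch below is plain Python with hypothetical file names and values, not Iceberg's actual metadata structures; it only illustrates the pruning logic.

```python
# Illustrative sketch (no Iceberg dependency) of min/max metrics filtering.
# File names, ranges, and the query predicate are hypothetical.

def prune(files, lo, hi):
    """Keep only files whose [min, max] range could contain values in [lo, hi]."""
    return [f for f in files if f["max"] >= lo and f["min"] <= hi]

# Unsorted writes: every file spans nearly the whole value range,
# so even a narrow query must scan all of them.
unsorted = [
    {"name": "f1", "min": 1, "max": 98},
    {"name": "f2", "min": 3, "max": 97},
    {"name": "f3", "min": 2, "max": 99},
]

# After sorting (or Z-order clustering on multiple columns), the ranges
# are disjoint and the same metrics filter skips most files.
sorted_files = [
    {"name": "f1", "min": 1, "max": 33},
    {"name": "f2", "min": 34, "max": 66},
    {"name": "f3", "min": 67, "max": 99},
]

print(len(prune(unsorted, 40, 45)))      # -> 3 (all files scanned)
print(len(prune(sorted_files, 40, 45)))  # -> 1 (two files skipped)
```

Sorting a single column tightens ranges for that column; Z-order clustering extends the same benefit to queries filtering on several columns at once.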