Traditionally, data has been produced and stored across multiple sources, and given the high cost of
traditional data warehousing systems, only pristine, curated data could be used for analytics and reporting.
When data analysts needed to derive business value from this data, they required communication across teams, integrations between products, and a high level of coordination to get the information they needed.
There was a clear need for a central platform serving as the single source of truth for all data and
providing quick access to it in a self-service manner. With the advent of Big Data and cloud
systems, storage and compute became cheaper, which means that even raw and intermediate data can be made available for analytics by bringing it to a central location. This led to the
evolution of the enterprise data lake within organizations. The concept of the data lake has itself evolved, from a
Hadoop-based on-premises data lake to a cloud-based data lake built on object storage, with compute
provisioned only when the need arises.
The Big Data and Cloud Engineering team at Abzooba has played a key role in helping organizations achieve their goal of digital transformation. We have been a trusted partner for our clients on their Data Modernization, Cloud Migration, and Advanced Analytics journeys, which has also helped us learn best practices for designing cloud-native big data solutions.
This webinar will cover the following topics:
1. Best practices for designing a data lake solution
2. Introduction to Olive and Pine, building blocks for Data Lakes
3. Demo on Olive (Data Ingestion Framework)
About Bijoy Bora: Bijoy is the SVP & Global Head of the BD&C Engineering practice at Abzooba Inc. He is also a Professor of Practice in the MS Data Science program at the School of Engineering & Computer Science, University of the Pacific, San Francisco, CA. Bijoy was also a co-founder of Zaloni Inc.