Best Practices for Building a Fast and Reliable IoT Data Pipeline

Presented by

Ryan Murray, Chris Furlong, Jeff King

About this talk

The amount of data produced by IoT is expected to reach 4.4 zettabytes in 2020, up from just 0.1 zettabytes in 2013. The fundamental principle of IoT is making swift, data-driven decisions, and all of this data is valuable only if it can be analyzed. Enterprises need to collect data from multiple IoT devices and store it in a data lake with the ultimate goal of analyzing it and gaining insights. Sounds simple, right? Unfortunately, setting up a fast and reliable data pipeline that lets enterprises obtain value from their IoT data can be overwhelmingly complex and costly. Join subject matter experts from Microsoft, Software AG, and Dremio as we explore these challenges and best practices for addressing them.

What you will learn:

- Strategies for building a scalable, cost-effective data lake architected for large-scale analytics
- Best practices for storing data emitted from IoT devices in a highly efficient format suited to analytical queries
- How to run ad hoc queries as well as more sophisticated analytical queries directly against IoT data stored in the data lake
- How to build a data pipeline that empowers data scientists to aggregate and analyze IoT and business data from multiple sources for maximum insight
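To make the ingestion side of such a pipeline concrete, here is a minimal sketch of two patterns the talk's topics imply: date-based partitioning of device telemetry (the layout analytical engines use to prune scans) and micro-batching writes so the lake is not flooded with tiny files. This is an illustration only, not the presenters' actual pipeline; the `MicroBatcher` class, field names (`device_id`, `ts`, `temp_c`), and partition layout are all hypothetical, and a real pipeline would flush to a columnar format such as Parquet in cloud storage rather than to an in-memory list.

```python
from collections import defaultdict
from datetime import datetime, timezone

def partition_key(msg):
    """Derive a date-based partition path (e.g. 'date=2020-01-15/device=sensor-1')
    from a telemetry message; query engines can prune partitions on this layout."""
    ts = datetime.fromtimestamp(msg["ts"], tz=timezone.utc)
    return f"date={ts:%Y-%m-%d}/device={msg['device_id']}"

class MicroBatcher:
    """Buffer incoming messages per partition and flush once a batch-size
    threshold is reached, avoiding many tiny files in the data lake."""
    def __init__(self, batch_size=100):
        self.batch_size = batch_size
        self.buffers = defaultdict(list)
        self.flushed = []  # (partition, records) pairs; stand-in for file writes

    def ingest(self, msg):
        key = partition_key(msg)
        self.buffers[key].append(msg)
        if len(self.buffers[key]) >= self.batch_size:
            self.flush(key)

    def flush(self, key):
        # In practice this would write one Parquet file under the partition path.
        self.flushed.append((key, self.buffers.pop(key)))

batcher = MicroBatcher(batch_size=2)
for i in range(4):
    batcher.ingest({"device_id": "sensor-1", "ts": 1579046400 + i, "temp_c": 20 + i})
print(batcher.flushed)  # two flushed batches of two records each
```

The batch-size threshold trades ingestion latency against file count: larger batches mean fewer, bigger files that scan faster, at the cost of fresher data arriving later.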
