In this online talk, Technology Evangelist Kai Waehner will discuss and demo how you can leverage technologies such as TensorFlow with your Kafka deployments to build a scalable, mission-critical machine learning infrastructure for ingesting, preprocessing, training, deploying and monitoring analytic models.
He will explain the challenges of and best practices for building a scalable machine learning infrastructure using Confluent Cloud on Google Cloud Platform (GCP), Confluent Cloud on AWS, and on-premises deployments.
The discussed architecture will include capabilities such as scalable data preprocessing for training and predictions, a combination of different deep learning frameworks, data replication between data centers, intelligent real-time microservices running on Kubernetes, and local deployment of analytic models for offline predictions.
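To make the "local deployment of analytic models" idea concrete, here is a minimal sketch of a Kafka-native microservice that loads a pre-trained TensorFlow model and scores events as they arrive, so predictions happen inside the streaming application rather than via a remote model server. The topic names, model path, and feature layout below are hypothetical placeholders, not artifacts from the talk.

# Minimal sketch: a Kafka microservice that loads a trained TensorFlow model
# locally and scores each incoming event in real time.
# Topic names, model path, and feature layout are hypothetical.
import json

import numpy as np
import tensorflow as tf
from confluent_kafka import Consumer, Producer

model = tf.keras.models.load_model("models/churn_model")  # pre-trained model

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "prediction-service",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["customer-events"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    features = np.array([event["features"]], dtype=np.float32)
    score = float(model.predict(features, verbose=0)[0][0])
    producer.produce("predictions", key=msg.key(),
                     value=json.dumps({"id": event["id"], "score": score}))
    producer.poll(0)  # serve delivery callbacks

Because the model is embedded in the microservice itself, predictions keep working even when the model-training environment or cloud services are unreachable, which is what enables offline predictions.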
Join us to learn about the following:
-Extreme scalability and unique features of Confluent Cloud
-Building and deploying analytic models using TensorFlow, Confluent Cloud and GCP components such as Google Storage, Google ML Engine, Google Cloud AutoML and Google Kubernetes Engine in a hybrid cloud environment (see the sketch after this list)
-Leveraging the Kafka ecosystem and Confluent Platform in hybrid infrastructures
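As a counterpart to the embedded-model sketch above, the following sketch illustrates the other deployment option touched on in the agenda: the Kafka microservice sends each event to a model served behind a remote REST endpoint (for example, a model deployed to a managed prediction service in the cloud) and writes the result back to a Kafka topic. The endpoint URL, request payload, and topic names are assumptions for illustration only.

# Minimal sketch of the alternative to local model deployment: the Kafka
# microservice calls a model served behind a remote REST endpoint
# (for example, a model deployed to a managed prediction service).
# The endpoint URL, request payload, and topic names are hypothetical.
import json

import requests
from confluent_kafka import Consumer, Producer

PREDICT_URL = "https://example-model-endpoint/v1/predict"  # hypothetical

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "remote-prediction-service",
    "auto.offset.reset": "earliest",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["customer-events"])

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    event = json.loads(msg.value())
    # Remote request-response call per event: simpler to operate and scale
    # independently, but adds network latency compared to an embedded model.
    resp = requests.post(PREDICT_URL,
                         json={"instances": [event["features"]]},
                         timeout=5)
    resp.raise_for_status()
    score = resp.json()["predictions"][0]
    producer.produce("predictions", key=msg.key(),
                     value=json.dumps({"id": event["id"], "score": score}))
    producer.poll(0)

Choosing between the embedded and the remote variant is one of the trade-offs (latency versus operational simplicity) the talk weighs for hybrid cloud environments.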