Industry analysts predict that Deep Learning will account for the majority of cloud workloads, and that training of Deep Learning models will represent the majority of server applications in the next few years. Among Deep Learning workloads, foundation models (a new class of AI models trained on broad data, typically via self-supervision, with billions of parameters) are expected to consume the majority of the infrastructure.
This webcast will discuss how Deep Learning models are gaining prominence across industries and provide examples of the benefits of AI adoption. We'll enumerate considerations for selecting Deep Learning infrastructure in on-premises and cloud datacenters, assess various solution approaches, and identify the challenges enterprises face in adopting AI and Deep Learning technologies. We'll answer questions like:
· What benefits are enterprises enjoying from innovations in AI, Machine Learning, and Deep Learning?
· How should cost, performance, and flexibility be traded off when designing deep learning infrastructure?
· How are organizations leveraging cloud-native platforms such as Kubernetes to reduce the complexity of managing rapidly evolving AI software stacks (TensorFlow, PyTorch, etc.)?
· What are the challenges in operationalizing Deep Learning infrastructure?
· How can Deep Learning solutions scale?
· Besides cost, time-to-train, data storage capacity, and data bandwidth, what else should be considered when selecting Deep Learning infrastructure?