Enhance LLMs with RAG and Accelerate Enterprise AI with Pure and NVIDIA

Presented by

Anuradha Karuppiah - NVIDIA, Calvin Nieh - Pure Storage, Robert Alvarez - Pure Storage

About this talk

Generative AI delivers clear benefits and ROI for enterprises when paired with retrieval-augmented generation (RAG). RAG produces company-specific responses by enhancing generic large language models (LLMs) with proprietary data. This session shows how an enterprise implementation of RAG with Pure Storage® and NVIDIA speeds up data processing, increases scalability, and provides real-time responses more easily than creating custom LLMs from scratch. Attend to get technical insight and see a demonstration of distributed and accelerated GenAI RAG pipelines:

- Learn the benefits of enhancing LLMs with RAG for enterprise-scale GenAI applications
- Understand how to accelerate the RAG pipeline and deliver enhanced insight using NVIDIA NeMo Retriever microservices and the Pure Storage FlashBlade//S™
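The core RAG idea described above can be sketched in a few lines: retrieve the most relevant proprietary document for a query, then augment the LLM prompt with it. This is a toy illustration, not the session's actual implementation; production pipelines such as those built on NVIDIA NeMo Retriever use dense vector embeddings and GPU-accelerated search rather than the simple keyword-overlap retrieval assumed here, and the documents and function names are hypothetical.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, documents: list[str]) -> str:
    """Toy retriever: return the document with the most tokens in
    common with the query (real systems use vector similarity)."""
    q = tokenize(query)
    return max(documents, key=lambda d: len(q & tokenize(d)))

def build_prompt(query: str, context: str) -> str:
    """Augment the user query with retrieved company-specific context
    before sending it to a generic LLM."""
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")

# Hypothetical proprietary knowledge base.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The FlashBlade//S array supports NFS and S3 object protocols.",
]

query = "What protocols does FlashBlade support?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

Because the prompt now carries the retrieved company data, a generic LLM can answer with specifics it was never trained on, which is the advantage over building a custom model from scratch.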
