AI is no longer a side project, enterprise security standards should apply
As enterprises embrace generative AI and Large Language Models (LLMs), the need for robust, real-time AI runtime security becomes paramount. From prompt injection attacks and data leakage to malicious URLs and training data poisoning, the threat landscape for LLMs is evolving, and so must your defenses.
In this on-demand session, learn how Palo Alto Networks AI Runtime Security API Intercept and NVIDIA NeMo Guardrails work together to deliver a defense-in-depth approach for securing GenAI applications. You’ll discover how to implement semantic guardrails, enforce data privacy policies, and insert custom security controls directly into your LLM pipelines, ensuring protection at every layer, from the user prompt to the model response
● How NeMo Guardrails orchestrates and enforces conversation-level policies for LLMs.
● How API Intercept provides dynamic, inline threat detection against prompt injections, data exfiltration, and AI-specific exploits.
Whether you're an AI application developer or a cybersecurity leader, this webinar will equip you with actionable tools and proven strategies to secure AI deployments, protect sensitive data, and maintain trust in your GenAI systems.
Rajath Narasimha is a Senior Product Marketing Manager at NVIDIA, driving the go-to-market strategy for Foundation Models and GenAI safety and security. Previously, he held product management and marketing roles for over a decade, building and launching datacenter, cloud, and HPC/AI products at leading semiconductor companies. He holds a master's degree in Electrical and Computer Engineering from San Francisco State University.
Speakers:
Jason Roberts, Senior Partner Engineer, Cloud & AI Security. Palo Alto Networks
Tom Prenderville, Technical Director, Technology Partnerships. Palo Alto Networks
Rajath Narasimha, Senior Product Marketing Manager, Foundation Models. NVIDIA