The rise of Generative AI (GenAI) and Large Language Models (LLMs) has transformed how businesses operate, but it also introduces new security risks. Sensitive data can inadvertently leak during interactions with these tools, as seen in high-profile incidents such as the Samsung data leak (https://mashable.com/article/samsung-chatgpt-leak-details).
Moreover, traditional Data Loss Prevention (DLP) tools are ill-equipped to detect and prevent such leaks in LLM transactions, creating significant security blind spots.
In this session, we will examine a few real-world examples of AI-related data exposure, explore why LLM transactions pose unique challenges for existing security frameworks, and discuss practical approaches to mitigating these risks. Attendees will gain insights into evolving their DLP strategies to address the dynamic threats posed by GenAI tools in enterprise environments.