Confidential computing introduces new methods for securing processes while they run, isolating code and data from attackers, insider threats, and exposure to service providers or unauthorized oversight. This makes it well suited to complex problems facing enterprises today, from close collaboration on sensitive code and data between mutually untrusting parties, to generative AI, where rights management, privacy, and control are essential for success and for avoiding common issues such as model poisoning, data leakage, model compromise, and theft.
This session dives into these challenges and illustrates the essential need to protect running code and data, particularly with emerging LLM technology. It includes demonstrations and use cases from financial services and healthcare, showing how protecting models, data, and code with a confidential computing-based zero-trust architecture can help organizations meet the pressures of cloud and AI adoption with a much-reduced risk profile.