Generative AI and ML promise to change how every business function operates, including GRC. While the potential for productivity gains is clear, the risks these technologies introduce, and the ways to address them, are still being understood. As the technology advances, risk management grows in importance, and requirements continue to increase in both number and visibility. To encourage the secure use of AI, we need new ways of governing AI risk.
This session will explore how traditional governance processes are evolving, including policies, training and awareness, third-party risk, data governance, access control, and monitoring. We'll also examine several novel and emerging techniques for mitigating AI risk, such as enforcing security-centric prompts and model guidance, and the opportunity for prompt libraries and enterprise controls to enable secure AI use.
Let’s explore the future of this trend and the guardrails to help your organization navigate this next major evolution of technology.
Learning objectives for this session:
- Assess the risks that come with generative AI and AI/ML-based technologies.
- Explore how to extend your existing governance processes to cover AI risks.
- Discuss emerging practices for effectively integrating security into AI-enabled use cases.
*CPE: This is a webinar recording. Credits are not offered for on-demand viewing.