Jim Reavis, Co-Founder & CEO, CSA & Prof. Yu Chien Siang, Chief Innovation & Trust Officer, Amaris AI
Traditionally, cyber security systems have incorporated AI capabilities to detect unknown malware and zero-day attacks, and to analyse logs for anomalies that reveal fraud, insider attacks and denial-of-service incidents. However, these sophisticated AI systems are themselves attackable via adversarial examples. For instance, an attacker could craft inputs that bypass an email anti-phishing filter, a physical camera-monitoring system or an IoT control system, thereby subverting and evading the enterprise security monitoring infrastructure. This presentation therefore addresses the evolution of AI robustness and security: best practices and design principles for operating AI securely, how to measure the strength of an AI model, and the common body of knowledge on adversarial attacks. Importantly, we need to know how to defend future Smart Nation and strategic AI systems well, and how to manage fast-developing AI risks and vulnerabilities.
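As background to the adversarial examples mentioned above, the minimal sketch below shows one well-known attack, the fast gradient sign method (FGSM), applied to a generic PyTorch image classifier. The classifier, input tensor and epsilon value are illustrative assumptions, not part of the presentation material.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, label, epsilon=0.03):
    """Craft an adversarial image with the Fast Gradient Sign Method (FGSM).

    A small, human-imperceptible perturbation of the input pixels can be
    enough to flip the classifier's prediction.
    """
    image = image.clone().detach().requires_grad_(True)

    # Forward pass and loss against the true label
    logits = model(image)
    loss = F.cross_entropy(logits, label)

    # Gradient of the loss with respect to the input pixels
    model.zero_grad()
    loss.backward()

    # Step each pixel in the direction that increases the loss
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage: `classifier` is any trained image model,
# `x` a normalised image batch in [0, 1], `y` its true labels.
# x_adv = fgsm_adversarial_example(classifier, x, y)
# print(classifier(x_adv).argmax(dim=1))  # often differs from y
```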