Generative AI tools such as GitHub Copilot and ChatGPT seem to hold promise for developers looking to write code more efficiently and find quick answers to programming questions. But especially in these early days, carefree reliance on such tools can introduce a range of issues related to software functionality, licensing, and security. Superficially valid suggestions can result in vulnerable code that increases risk and requires additional remediation work down the line. And that’s even before considering the potential for abuse if such tools are used irresponsibly or with malicious intent.
To systematically catch vulnerabilities introduced by AI-generated application code, AppSec teams can use techniques such as dynamic application security testing (DAST) and software composition analysis (SCA), running automated checks in the development pipeline.
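To illustrate the SCA side of such pipeline checks, a build gate can compare pinned dependency versions against a vulnerability feed and fail the build on any match. The sketch below is hypothetical: the in-memory `ADVISORIES` table and the `examplelib` package stand in for the real advisory database an SCA tool would query.

```python
# Minimal sketch of an SCA-style pipeline gate (illustrative only).
# ADVISORIES is a hypothetical stand-in for a real vulnerability feed.
ADVISORIES = {
    # package name -> list of (vulnerable_version, advisory_id)
    "examplelib": [("1.2.0", "DEMO-2023-0001")],
}

def parse_requirements(lines):
    """Parse simple 'name==version' pins, skipping comments and blanks."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.lower()] = version
    return pins

def sca_gate(requirement_lines):
    """Return advisory IDs matching any pinned dependency version."""
    findings = []
    for name, version in parse_requirements(requirement_lines).items():
        for vuln_version, advisory in ADVISORIES.get(name, []):
            if version == vuln_version:
                findings.append(advisory)
    return findings

if __name__ == "__main__":
    reqs = ["examplelib==1.2.0", "safe-pkg==2.0.1"]
    # A non-empty result means the pipeline step should fail the build.
    print(sca_gate(reqs))
```

In a real pipeline this logic lives inside a dedicated SCA tool rather than hand-rolled code; the point is that the check runs automatically on every build, before vulnerable AI-suggested dependencies reach production.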
The webcast featuring Invicti will examine how DAST and other methods of application security testing and analysis can help mitigate the security risks associated with AI-generated code. It will also warn viewers about other potential AI dangers that developers should look out for, including:
- Importing AI-suggested libraries that don’t exist (but can be spoofed by malicious actors)
- Privacy concerns surrounding AI engine queries
- Superficially correct code that introduces business logic vulnerabilities
- Possible code licensing violations
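The first danger above (AI-suggested libraries that don't exist but can be registered by attackers) can be partially mitigated by checking suggested dependency names against an allowlist of packages your team has already vetted, flagging unknown names as well as near-misses of known ones. Below is a minimal sketch under stated assumptions: the `APPROVED_PACKAGES` allowlist is hypothetical, and a real project would draw it from an internal package index.

```python
import difflib

# Hypothetical allowlist of packages the team has already vetted.
APPROVED_PACKAGES = {"requests", "numpy", "flask", "sqlalchemy"}

def vet_suggestions(suggested):
    """Classify AI-suggested package names: approved, lookalike, or unknown.

    Lookalikes (close matches to an approved name) are the classic
    typosquatting/spoofing vector and deserve extra scrutiny.
    """
    report = {}
    for name in suggested:
        lowered = name.lower()
        if lowered in APPROVED_PACKAGES:
            report[name] = "approved"
        elif difflib.get_close_matches(lowered, APPROVED_PACKAGES, n=1, cutoff=0.8):
            report[name] = "lookalike"  # e.g. 'reqeusts' vs 'requests'
        else:
            report[name] = "unknown"    # verify it exists before installing
    return report
```

Anything marked "unknown" should be confirmed against the official package registry before installation, since an attacker can publish a malicious package under a commonly hallucinated name.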
This webcast has been produced in collaboration with SC Media, part of Cyber Risk Alliance.