Developing responsible AI is a commitment shared across the industry, and one that depends on collaboration among partners. With the cybersecurity field facing an estimated 3.5 million unfilled jobs, not every organization can afford to build a specialized team to run red team exercises against its AI and large language model (LLM) products.
To help close this gap, Microsoft's AI Red Team is sharing its learnings and best practices with the cybersecurity community and is launching PyRIT (Python Risk Identification Toolkit for Generative AI), an industry-first automation framework built by the team to bolster the safety and security of LLM endpoints.
PyRIT is an open-source toolkit that enables security professionals and machine learning engineers to proactively identify risks in their generative AI systems. Proven in more than 60 red teaming exercises against generative AI systems, it is a powerful supplement to manual testing, not a replacement for it: by adapting its tactics to the target system's responses, it surfaces vulnerabilities faster and helps security professionals reach their objectives more efficiently.
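As a rough illustration of the workflow, the minimal sketch below follows the quickstart pattern from PyRIT's published examples: point a prompt target at the LLM endpoint under test, then let an orchestrator send a batch of probing prompts and record the results. The class and method names (AzureOpenAIChatTarget, PromptSendingOrchestrator, send_prompts_async, get_memory) reflect the project's examples at the time of writing and may differ across versions; the environment variable names and probe strings are placeholders.

```python
import asyncio
import os

from pyrit.common import default_values
from pyrit.orchestrator import PromptSendingOrchestrator
from pyrit.prompt_target import AzureOpenAIChatTarget

# Load endpoint settings (keys, deployment names) from a local .env file.
default_values.load_default_env()

# The "target" is the generative AI endpoint under test.
target = AzureOpenAIChatTarget(
    deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],  # placeholder
    endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],                # placeholder
    api_key=os.environ["AZURE_OPENAI_KEY"],                      # placeholder
)


async def main() -> None:
    # The orchestrator handles sending each probe and logging the exchange.
    with PromptSendingOrchestrator(prompt_target=target) as orchestrator:
        probes = [
            "placeholder probe prompt 1",  # stand-in for a real risk dataset
            "placeholder probe prompt 2",
        ]
        await orchestrator.send_prompts_async(prompt_list=probes)

        # Every request/response pair is persisted to PyRIT's memory store,
        # so results can be reviewed and triaged after the run.
        for entry in orchestrator.get_memory():
            print(entry)


asyncio.run(main())
```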
In the upcoming webinar, participants will learn how to use PyRIT effectively to red team generative AI systems, including setting up targets, working with datasets, exploring different attack strategies (one example is sketched below), and using the built-in memory functionality. The session is an opportunity to learn from industry best practices on empowering red teams and strengthening organizational security.
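To make the "attack strategies" topic concrete, here is one hedged example: PyRIT ships prompt converters that transform each probe before it reaches the target, for instance base64-encoding it to test whether content filters handle obfuscated input. Names such as Base64Converter and the prompt_converters parameter follow PyRIT's published examples at the time of writing and may change between releases; the setup, environment variables, and probe text are placeholders.

```python
import asyncio
import os

from pyrit.common import default_values
from pyrit.orchestrator import PromptSendingOrchestrator
from pyrit.prompt_converter import Base64Converter
from pyrit.prompt_target import AzureOpenAIChatTarget

default_values.load_default_env()

target = AzureOpenAIChatTarget(
    deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],  # placeholder
    endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],                # placeholder
    api_key=os.environ["AZURE_OPENAI_KEY"],                      # placeholder
)


async def main() -> None:
    # Converters rewrite each probe before it is sent, so the same dataset
    # can be replayed under different attack strategies without edits.
    with PromptSendingOrchestrator(
        prompt_target=target,
        prompt_converters=[Base64Converter()],
    ) as orchestrator:
        await orchestrator.send_prompts_async(
            prompt_list=["placeholder probe prompt"]  # stand-in dataset
        )
        # Responses land in the same memory store, so encoded and plain
        # runs can be compared side by side.
        for entry in orchestrator.get_memory():
            print(entry)


asyncio.run(main())
```

Swapping in a different converter, or stacking several, is the design idea behind this interface: the dataset and the delivery strategy vary independently, which is what makes broad, repeatable coverage practical for a small team.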