Microsoft has released an open-source tool designed to help developers assess the security of the artificial intelligence systems they are working on. Called Counterfit, the project is now available on GitHub.
Microsoft itself already uses Counterfit to test its own AI models as part of the company's red-team operations. Other parts of the organization are also exploring the use of the tool in AI development.
Microsoft simulating cyberattacks with Counterfit
According to Microsoft's documentation on GitHub, Counterfit is a command-line tool and generic automation layer for assessing the security of machine-learning systems.
This allows developers to simulate cyberattacks against AI systems to test their security. Anyone can download the tool and deploy it either through Azure Cloud Shell, where it runs in the browser, or locally in an Anaconda Python environment.
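For the local route, the repository README outlined a short setup sequence at release. The sketch below follows those steps; exact Python versions and commands may have changed since, so treat it as illustrative rather than authoritative.

```bash
# Create and activate an Anaconda environment for Counterfit
# (steps per the project README at release; exact versions may differ)
conda create --yes -n counterfit python=3.7
conda activate counterfit

# Clone the repository and install its dependencies
git clone https://github.com/Azure/counterfit.git
cd counterfit
pip install -r requirements.txt

# Launch the Counterfit command-line interface
python counterfit.py
```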
“Our tool makes published attack algorithms accessible to the security community and helps provide an extensible interface from which to build, manage and launch attacks on AI models,” Microsoft said.
The tool comes preloaded with sample attack algorithms. Security professionals can also use the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools for testing purposes.
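For context, cmd2 is a general-purpose Python framework for building interactive command-line applications, and Counterfit's console is built on it. The sketch below is a generic cmd2 application, not Counterfit's actual command set; the `scan` command and its output are hypothetical, shown only to illustrate how a cmd2-based console exposes commands that scripts and other tools can drive.

```python
import sys
import cmd2


class DemoConsole(cmd2.Cmd):
    """A minimal cmd2 console. The command below is hypothetical and
    only illustrates the framework Counterfit's interface builds on."""

    prompt = "demo> "

    def do_scan(self, args: cmd2.Statement) -> None:
        """Hypothetical command: pretend to probe a named target model."""
        self.poutput(f"scanning target: {args}")


if __name__ == "__main__":
    sys.exit(DemoConsole().cmdloop())
```

Because a cmd2 console also reads commands from piped standard input and plain-text scripts, an external tool can drive it non-interactively, for example `printf 'scan creditmodel\nquit\n' | python demo_console.py`, which is the general mechanism by which other offensive tools can hook into a cmd2-based console.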
According to ITPro, Microsoft developed the tool out of a need to assess its own systems for vulnerabilities. Counterfit began as a set of attack scripts written to target individual AI models and gradually evolved into an automation tool for attacking multiple AI systems at scale.
The company says it has worked with several of its partners, customers, and government entities to test the tool with AI models in their environments.