The American intelligence agency NSA, together with the FBI and authorities from Australia, Canada, New Zealand and the UK, has published a document of best practices for the secure deployment of AI systems (pdf). According to the agency, attackers targeting AI systems can use attack vectors unique to those systems as well as traditional techniques.
“Because of the multitude of attack vectors, defenses must be diverse and comprehensive,” the document states. “Advanced attackers often combine several, more complex attack vectors. Such combinations can penetrate multiple layers of defense.” The agency therefore urges organizations to implement a number of best practices.
These include securing and hardening the infrastructure on which the AI system will run, testing the AI models in use, being able to perform automatic rollbacks, not deploying the AI model straight into the production environment, checking open APIs, actively monitoring how the AI model behaves, and conducting audits and penetration tests; a simple illustration of such monitoring follows below. It is also recommended to choose an AI supplier that takes a ‘secure by design’ approach and actively examines the security of its own systems.
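As a rough illustration of what “actively monitoring how the AI model behaves” can look like in practice, the minimal Python sketch below compares a model’s recent output scores against a baseline recorded at deployment and flags drift that might warrant an alert or rollback. This example is not taken from the agencies’ document; the threshold and variable names are hypothetical.

```python
# Illustrative sketch (not from the agencies' guidance): flag drift in a
# deployed model's output distribution as a trigger for review or rollback.
# DRIFT_THRESHOLD and the score lists are hypothetical placeholders.
from statistics import mean

DRIFT_THRESHOLD = 0.15  # hypothetical tolerance for change in mean score


def drift_detected(baseline_scores: list[float], live_scores: list[float]) -> bool:
    """Return True if the live output distribution deviates from the baseline."""
    return abs(mean(live_scores) - mean(baseline_scores)) > DRIFT_THRESHOLD


# Example: scores captured at deployment versus recent production scores.
baseline = [0.82, 0.79, 0.85, 0.81]
recent = [0.61, 0.58, 0.66, 0.63]

if drift_detected(baseline, recent):
    print("Model behaviour has drifted: alert operators and consider a rollback.")
```

In a real deployment this check would typically run continuously against logged predictions and feed into the alerting and rollback mechanisms the guidance describes.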