
NIST Describes AI Manipulation Threats and Mitigation Strategies

The National Institute of Standards and Technology (NIST) published a new report that describes the types of cyberattacks that manipulate the behavior of AI systems, as well as potential mitigations for these threats and the limitations of various approaches.

Computer scientists at NIST, with collaborators in academia and industry, wrote the new report, titled Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), to support the development of trustworthy AI. The publication can help government and commercial organizations put NIST’s AI Risk Management Framework into practice.

The report’s summary states:

  • AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
  • New guidance documents the types of these attacks, along with mitigation approaches.
  • No foolproof method exists as yet for protecting AI from misdirection, and AI developers and users should be wary of any who claim otherwise.

The report considers four major types of attack: evasion, poisoning, privacy, and abuse. It classifies them according to criteria such as the attacker's goals and objectives, capabilities, and knowledge, and it provides recommendations for mitigating these risks and enhancing the security and trustworthiness of AI systems.
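
To make one of these categories concrete: in an evasion attack, an adversary adds small, carefully chosen perturbations to an input so that a trained model misclassifies it. The sketch below illustrates the idea with the fast gradient sign method (FGSM), a well-known evasion technique from the adversarial machine learning literature; the tiny logistic-regression model, its random weights, and the epsilon budget here are hypothetical values chosen purely for illustration and are not drawn from the NIST report.

import numpy as np

# Hypothetical "trained" logistic-regression model: for illustration we
# just use random weights and a fixed bias instead of fitting to data.
rng = np.random.default_rng(0)
w = rng.normal(size=8)   # illustrative learned weights
b = 0.1                  # illustrative learned bias

def predict_proba(x):
    """Probability the model assigns to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y_true, epsilon=0.25):
    """Fast Gradient Sign Method: nudge each feature of x in the
    direction that increases the model's loss, bounded by epsilon
    per feature."""
    p = predict_proba(x)
    # For binary cross-entropy, the gradient of the loss w.r.t. the
    # input x is (p - y_true) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

x = rng.normal(size=8)   # a clean input and its assumed true label
y = 1.0
x_adv = fgsm_perturb(x, y)

print(f"clean prediction:       {predict_proba(x):.3f}")
print(f"adversarial prediction: {predict_proba(x_adv):.3f}")

Even with a small per-feature budget, the signed perturbation pushes every feature in the loss-increasing direction at once, which is why the model's confidence in the true label drops sharply. This fragility under adversarially chosen inputs is exactly the kind of behavior the report's taxonomy of evasion attacks describes.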

Info Byte Source: NIST.gov