Adversarial machine learning

Adversarial machine learning studies the interplay between machine learning systems and the adversaries who seek to subvert them. It investigates how malicious actors can exploit and manipulate machine learning procedures, often with the goal of evading detection or inducing misclassification. The field covers a broad range of attack methods, from disguising spam messages to tampering with autonomous vehicle systems. Crucially, its focus is not only on identifying and understanding these threats, but also on designing and deploying robust defenses, including multi-stage countermeasures, noise-detection techniques, and methods for assessing the impact of attacks. Continued research in this area is essential for ensuring the security and dependability of machine learning systems.

Adversarial machine learning is the study of attacks on machine learning algorithms and of defenses against such attacks. A survey from May 2020 found that practitioners report a dire need for better protection of machine learning systems in industrial applications.

Most machine learning techniques are designed for specific problem sets under the assumption that the training and test data are generated from the same statistical distribution (the IID assumption). In practical high-stakes applications, however, this assumption is often violated: adversaries may deliberately supply fabricated data that breaks it, as the sketch below illustrates.
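
To make this concrete, here is a minimal sketch of a gradient-based evasion attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights, bias, input, and perturbation budget below are all assumed purely for illustration, not taken from any real system.

```python
# Minimal FGSM-style evasion sketch against a toy logistic-regression
# model, using only NumPy. All numbers below are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed "trained" model parameters and a clean input with true label 1.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.4, 0.2, -0.3])
y = 1.0

# Gradient of the cross-entropy loss with respect to the input:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM step: nudge each coordinate by epsilon in the direction that
# increases the loss, keeping the perturbation small and hard to notice.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction      :", sigmoid(w @ x + b))      # above 0.5 -> class 1
print("adversarial prediction:", sigmoid(w @ x_adv + b))  # below 0.5 -> class 0
```

Although each coordinate of the input moves by only epsilon, the predicted class flips: the perturbed input no longer comes from the distribution the model was trained on, which is exactly the violation described above.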

The most common attacks in adversarial machine learning include evasion attacks, data-poisoning attacks, Byzantine attacks, and model extraction.
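
Whereas evasion happens at inference time, data poisoning targets the training phase. Below is a minimal sketch of a label-flipping poisoning attack against a toy logistic-regression setup; the synthetic dataset, the 40% flip rate, and the training hyperparameters are illustrative assumptions, not values from any real system.

```python
# Minimal label-flipping data-poisoning sketch, using only NumPy.
# The data, flip rate, and training setup are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # examples per class

# Two Gaussian clusters: class 0 around (-1, -1), class 1 around (+1, +1).
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def train_logreg(X, y, lr=0.5, steps=2000):
    """Fit logistic regression with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y == 1))

# Poison the training set: flip 40% of class-0 labels (rows 0..n-1) to 1,
# which drags the learned decision boundary into class 0's region.
y_poisoned = y.copy()
flip = rng.choice(n, size=int(0.4 * n), replace=False)
y_poisoned[flip] = 1.0

# Fresh, clean test data for evaluating both models.
X_test = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y_test = np.concatenate([np.zeros(n), np.ones(n)])

for name, labels in [("clean", y), ("poisoned", y_poisoned)]:
    w, b = train_logreg(X, labels)
    print(f"trained on {name} labels -> test accuracy {accuracy(w, b, X_test, y_test):.2f}")
```

Because the flipped labels drag the learned boundary toward class 0, clean class-0 test points near the boundary start being misclassified, so the poisoned model's test accuracy drops relative to the cleanly trained one.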
