Adversarial Exploitation of Policy Imitation

Can someone copy a black-box robot brain just by watching how it acts? This study says yes.

Researchers Vahid Behzadan and William Hsu show that deep reinforcement learning (DRL) agents—used in games, robotics, and trading—can be cloned via policy imitation. An attacker who can repeatedly query an agent (ask for its action in many situations) can build a new dataset and learn a look‑alike policy. That replica then enables black‑box attacks that push the original agent toward bad decisions, threatening both confidentiality (the policy can be stolen) and integrity (it can be manipulated).
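To make the extraction step concrete, here is a minimal Python sketch of the general idea (not the paper's exact procedure): the attacker samples states, queries the black-box agent for its action in each, and fits a surrogate policy by behavioral cloning. The query_target_policy stub, the state dimensionality, and the classifier choice are all illustrative assumptions.

```python
# Hedged sketch of imitation-based policy extraction: query a black-box agent
# for actions at sampled states, then fit a surrogate policy on the recorded
# state-action pairs via behavioral cloning.
import numpy as np
from sklearn.neural_network import MLPClassifier

def query_target_policy(state):
    """Placeholder for the black-box agent's action interface (assumed access)."""
    # In a real attack this would be a remote query; here we fake a decision rule.
    return int(state.sum() > 0)

rng = np.random.default_rng(0)

# 1. Build an extraction dataset by repeatedly querying the target agent.
states = rng.normal(size=(5000, 4))            # sampled observation vectors
actions = np.array([query_target_policy(s) for s in states])

# 2. Behavioral cloning: supervised learning of the look-alike (surrogate) policy.
surrogate = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
surrogate.fit(states, actions)

# 3. The surrogate approximates the target and can seed black-box attacks,
#    e.g. crafting adversarial observations against it and transferring them.
test_states = rng.normal(size=(1000, 4))
agreement = (surrogate.predict(test_states) ==
             np.array([query_target_policy(s) for s in test_states])).mean()
print(f"surrogate/target action agreement: {agreement:.2%}")
```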

Why it matters: Unlike classic classifier theft, this attack leverages imitation learning—a standard DRL tool—so extraction is practical without access to rewards or model internals.

Mitigations discussed include rate‑limiting and auditing queries, adding randomness or noise to outputs, watermarking behaviors, detecting imitation attempts, and training with adversarial scenarios.
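As a toy illustration of one of these mitigations, output randomization, the defender can answer a small fraction of queries with a random action so the attacker's extraction dataset becomes noisier. The function name and the 10% noise rate below are assumptions, not values from the paper.

```python
# Hedged illustration of output randomization as an extraction defense:
# occasionally return a perturbed action instead of the agent's true choice.
import numpy as np

rng = np.random.default_rng(1)

def randomized_response(true_action, n_actions, epsilon=0.1):
    """With probability epsilon, answer a query with a random action instead."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return true_action

# Example: the agent's true choice is action 2 out of 4 possible actions.
print(randomized_response(true_action=2, n_actions=4))
```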

Paper: http://arxiv.org/abs/1906.01121v1

Register: https://www.AiFeta.com

#AI #ReinforcementLearning #Security #AdversarialML #ModelExtraction #ImitationLearning #DRL
