Poster
Boosting Adversarial Transferability through Augmentation in Hypothesis Space
Yu Guo · Weiquan Liu · Qingshan Xu · Shijun Zheng · Shujun Huang · Yu Zang · Siqi Shen · Chenglu Wen · Cheng Wang
Abstract:
Adversarial examples can mislead deep neural networks into incorrect predictions with subtle perturbations. Notably, adversarial examples crafted for one model can also deceive other models, a phenomenon known as the transferability of adversarial examples. To improve transferability, existing research has designed various mechanisms centered on the complex interactions between data and models, but the resulting gains are relatively limited. Moreover, because these methods are typically designed for a specific data modality, their applicability to other modalities is greatly restricted. In this work, we observe a mirroring relationship between model generalization and adversarial example transferability. Motivated by this observation, we propose an augmentation-based attack, called OPS (Operator-Perturbation-based Stochastic optimization), which constructs a stochastic optimization problem from input transformation operators and random perturbations and solves it to generate adversarial examples with better transferability. Extensive experiments on both images and 3D point clouds demonstrate that OPS significantly outperforms existing SOTA methods in terms of both performance and cost, showcasing the universality and superiority of our approach.
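To make the idea concrete, here is a minimal sketch of an OPS-style attack for images. The abstract does not specify the operator family, loss, or update rule, so the choices below (random rescaling as the transformation operator, additive Gaussian noise as the random perturbation, an I-FGSM-style signed-gradient update, and the function name `ops_style_attack`) are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: stochastic optimization over randomly transformed and
# perturbed copies of the input. All operators and hyperparameters are
# assumptions; the paper's actual OPS formulation may differ.
import torch
import torch.nn.functional as F

def ops_style_attack(model, x, y, eps=8/255, alpha=2/255, steps=10,
                     n_samples=5, noise_std=0.05):
    """Average the loss gradient over several randomly transformed and
    perturbed copies of the input (a stochastic surrogate objective),
    then take a signed-gradient step projected onto the L_inf ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(n_samples):
            # Input transformation operator: random rescale, then resize
            # back (assumption; any differentiable operator could be used).
            scale = torch.empty(1).uniform_(0.8, 1.2).item()
            size = [max(1, int(round(s * scale))) for s in x.shape[-2:]]
            x_t = F.interpolate(x_adv, size=size, mode="bilinear",
                                align_corners=False)
            x_t = F.interpolate(x_t, size=x.shape[-2:], mode="bilinear",
                                align_corners=False)
            # Random perturbation applied to the transformed input.
            x_t = x_t + noise_std * torch.randn_like(x_t)
            loss = F.cross_entropy(model(x_t), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Signed ascent on the averaged gradient, then projection.
            x_adv = x_adv + alpha * (grad / n_samples).sign()
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

# Hypothetical usage with a surrogate classifier and [0, 1] images:
# model = torchvision.models.resnet50(weights="IMAGENET1K_V2").eval()
# x_adv = ops_style_attack(model, images, labels)
```

Since only the transformation operators are modality-specific, extending such a scheme from images to 3D point clouds would presumably amount to swapping in point-cloud operators, consistent with the cross-modal claim in the abstract.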