Learning to Transform Dynamically for Better Adversarial Transferability

Rongyi Zhu · Zeliang Zhang · Susan Liang · Zhuo Liu · Chenliang Xu

Arch 4A-E Poster #5
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


Adversarial examples, crafted by adding perturbations imperceptible to humans, can deceive neural networks. Recent studies have identified adversarial transferability, i.e., the ability of adversarial examples crafted on one model to attack other models. To enhance such transferability, existing input transformation-based methods diversify the input data with transformation augmentations. However, their effectiveness is limited by the finite number of available transformations. In our study, we introduce a novel approach named Learning to Transform (L2T). L2T increases the diversity of transformed images by selecting the optimal combination of operations from a pool of candidates, consequently improving adversarial transferability. We formulate the selection of optimal transformation combinations as a trajectory optimization problem and employ a reinforcement learning strategy to solve it effectively. Comprehensive experiments on the ImageNet dataset, as well as practical tests with Google Vision and GPT-4V, show that L2T surpasses current methods in enhancing adversarial transferability, confirming its effectiveness and practical significance.
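The abstract describes selecting transformation sequences via reinforcement learning. The following is a minimal illustrative sketch of that idea, not the authors' implementation: a categorical policy over a small pool of toy transformations is updated with REINFORCE so that op sequences yielding higher reward (standing in for the attack loss on a surrogate model) become more likely. The transformation pool, reward function, and all hyperparameters here are assumptions for illustration.

```python
import numpy as np

# Illustrative candidate pool; the paper's actual operation pool differs.
TRANSFORMS = {
    "identity": lambda x: x,
    "flip":     lambda x: x[:, ::-1],          # horizontal flip
    "scale":    lambda x: x * 0.9,             # intensity scaling
    "shift":    lambda x: np.roll(x, 1, axis=1),  # circular shift
}
OPS = list(TRANSFORMS)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def sample_trajectory(logits, length, rng):
    """Sample a sequence (trajectory) of op indices from the policy."""
    probs = softmax(logits)
    return rng.choice(len(OPS), size=length, p=probs), probs

def apply_trajectory(x, idx):
    """Apply the sampled transformation combination to an input."""
    for i in idx:
        x = TRANSFORMS[OPS[i]](x)
    return x

def reinforce_step(logits, x, reward_fn, length=3, lr=0.5, n=16, rng=None):
    """One REINFORCE update: raise the probability of op sequences
    whose transformed input earns above-baseline reward."""
    rng = rng or np.random.default_rng(0)
    samples, rewards = [], []
    for _ in range(n):
        idx, probs = sample_trajectory(logits, length, rng)
        samples.append((idx, probs))
        rewards.append(reward_fn(apply_trajectory(x.copy(), idx)))
    baseline = np.mean(rewards)  # variance-reduction baseline
    grads = np.zeros_like(logits)
    for (idx, probs), r in zip(samples, rewards):
        for i in idx:
            g = -probs.copy()
            g[i] += 1.0          # grad of log p(op i) w.r.t. logits
            grads += (r - baseline) * g
    return logits + lr * grads / n
```

In the actual method, the reward would be the attack loss of the transformed adversarial example on surrogate models; here any scalar function of the transformed input (e.g. `lambda z: float(z[0, 0])`) suffices to exercise the update.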
