Poster

Data-free Universal Adversarial Perturbation with Pseudo-semantic Prior

Chanhui Lee · Yeonghwan Song · Jeany Son


Abstract:

Data-free Universal Adversarial Perturbation (UAP) is an image-agnostic adversarial attack that deceives deep neural networks using a single perturbation generated solely from random noise, without any data priors. However, traditional data-free UAP methods often suffer from limited transferability due to the absence of semantic information in random noise. To address this, we propose a novel data-free universal attack that recursively generates a pseudo-semantic prior from the UAPs themselves, enriching the semantic content within the data-free UAP framework. Our method is based on the observation that UAPs inherently contain latent semantic information: by capturing a diverse range of semantics through region sampling, the generated UAP can act as an alternative data prior. We further introduce a sample reweighting technique that emphasizes hard examples by focusing on samples that are least affected by the UAP. Leveraging the semantic information in the pseudo-semantic prior, we also incorporate input transformations, which are typically ineffective in data-free UAPs due to the lack of semantic content in random priors, to boost black-box transferability. Comprehensive experiments on ImageNet show that our PSP-UAP achieves a state-of-the-art average fooling rate by a substantial margin, significantly improves attack transferability across various CNN architectures compared to existing data-free UAP methods, and even surpasses data-dependent UAP methods.
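The core loop described in the abstract (use the current UAP as its own data prior via region sampling, reweight samples the UAP barely affects, then ascend a fooling objective) can be sketched as follows. This is a minimal toy illustration only, assuming a fixed linear "classifier," nearest-neighbour resizing, a simple logit-ascent loss, and an L-inf budget `eps`; the paper's actual models, losses, and input transformations are not specified here, and all function and parameter names are hypothetical.

```python
# Toy sketch of the pseudo-semantic-prior idea (hypothetical simplification).
import numpy as np

rng = np.random.default_rng(0)

SIDE = 16
D = SIDE * SIDE      # flattened "image" size
C = 5                # number of classes
W = rng.normal(size=(D, C))   # stand-in for a fixed, pretrained classifier

def logits(x):
    return x @ W

def sample_regions(uap, n):
    """Region sampling: crop random sub-regions of the current UAP and
    resize them (nearest-neighbour here) back to full size, so the UAP
    itself serves as a pseudo-semantic data prior."""
    img = uap.reshape(SIDE, SIDE)
    outs = []
    for _ in range(n):
        s = rng.integers(SIDE // 2, SIDE)            # crop size
        y0 = rng.integers(0, SIDE - s + 1)
        x0 = rng.integers(0, SIDE - s + 1)
        crop = img[y0:y0 + s, x0:x0 + s]
        idx = np.arange(SIDE) * s // SIDE            # nearest-neighbour map
        outs.append(crop[np.ix_(idx, idx)].ravel())
    return np.stack(outs)

def attack_step(uap, eps=0.1, lr=0.01, n=8):
    """One update: build pseudo-inputs from the UAP, upweight samples the
    UAP fails to fool (hard examples), and ascend a fooling objective."""
    x = sample_regions(uap, n)
    clean = logits(x).argmax(1)
    fooled = logits(x + uap).argmax(1) != clean
    # sample reweighting: unfooled (hard) samples get higher weight
    w = np.where(fooled, 0.5, 1.5)
    # gradient of the negative clean-class logit w.r.t. the perturbation
    grad = np.zeros(D)
    for i in range(n):
        grad -= w[i] * W[:, clean[i]]
    uap = uap + lr * np.sign(grad)
    return np.clip(uap, -eps, eps)   # stay inside the L-inf budget

uap = rng.uniform(-0.01, 0.01, size=D)   # start from pure random noise
for _ in range(20):
    uap = attack_step(uap)
```

The recursive structure is the key point: each iteration's pseudo-inputs are sampled from the perturbation produced so far, so no external data is ever needed. Input transformations (e.g. scaling or cropping of the perturbed pseudo-inputs) would slot into `attack_step` before the forward pass; they are omitted above for brevity.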
