

Paper in Workshop: 7th Safe Artificial Intelligence for All Domains (SAIAD)

Universal Shape of Strong Remote Adversarial Patches for Object Detection with Convolutional Neural Networks

Kento Oonishi · Tsunato Nakai


Abstract: In this study, we investigate the shapes of strong adversarial patches against convolutional neural networks (CNNs), which are commonly used in object detection systems. Adversarial patches pose a significant threat to CNNs, particularly when they can be placed at locations remote from the target objects (remote adversarial patches), making it crucial to estimate their attack effectiveness. Existing research has primarily focused on rectangular remote adversarial patches, but stronger shapes are expected to exist. Prior studies have derived strong shapes for adversarial patches tailored to individual images and have demonstrated that strong adversarial patches are not limited to rectangles. However, because these patches are specialized to individual images rather than universal, their attack effect is limited. In this study, we aim to derive new shapes of remote adversarial patches that are effective across all images (universal). First, we demonstrate that when a CNN is composed primarily of convolutional layers with 3$\times$3 filters and a stride of one, the effect of an adversarial patch diffuses approximately concentrically around each pixel. Furthermore, we demonstrate that the shapes of strong remote adversarial patches depend on the structure of the CNN. Finally, for an object detection setting that mimics autonomous driving, we show that strong remote adversarial patches are crescent-shaped.
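The claim that patch influence diffuses approximately concentrically through stacked 3$\times$3, stride-1 convolutions can be illustrated numerically. The sketch below is not from the paper: it uses a uniform 3$\times$3 kernel as a hypothetical stand-in for the per-layer magnitude of influence, propagates a single-pixel perturbation through several such layers, and prints how the influence decays with distance from that pixel; by the central limit theorem the resulting map approaches an isotropic Gaussian, i.e., roughly concentric spread.

```python
import numpy as np
from scipy.signal import convolve2d

# Single-pixel perturbation on a small input grid.
H = W = 33
delta = np.zeros((H, W))
delta[H // 2, W // 2] = 1.0

# Hypothetical per-layer "influence" kernel for a 3x3, stride-1 convolution.
kernel = np.ones((3, 3)) / 9.0

# Propagate the perturbation through 8 stacked layers.
influence = delta
for _ in range(8):
    influence = convolve2d(influence, kernel, mode="same")

# Compare influence at equal distances in different directions from the
# perturbed pixel: the values are similar, indicating roughly concentric decay.
center = influence[H // 2, W // 2]
print("4 px right :", influence[H // 2, W // 2 + 4] / center)
print("4 px up    :", influence[H // 2 - 4, W // 2] / center)
print("(3,3) diag :", influence[H // 2 + 3, W // 2 + 3] / center)
```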
