Good Can Sometimes Be Bad: A Unified Attack against 3D Point Cloud Classifiers via Flexible Isotropic Resampling
Abstract
To ensure the robustness of 3D point cloud deep neural networks (3D DNNs), both 3D adversarial attacks, which target the inference stage, and backdoor attacks, which target the training stage, have been well studied. The success of either attack usually requires specific permissions that the attacker must possess. However, in practical scenarios the obtainable permissions are uncertain because the deployment environment changes, which renders existing, separately designed adversarial or backdoor attacks ineffective. To solve this issue, this paper proposes a unified attack, named UAtt3D, that can serve as both a 3D point cloud backdoor attack and an adversarial attack. Furthermore, we observe that existing attacks ensure stealthiness by limiting the undesirable perturbation. This strategy moves each point as little as possible, which restricts the attack intensity and is therefore unsuitable for a unified attack; moreover, the remaining malicious perturbation inevitably degrades the quality of the 3D point cloud. Our UAtt3D thus explores a new avenue to stealthiness that improves the quality of the attacked point cloud rather than decreasing it. Specifically, to jointly account for the feature movement of adversarial attacks and the backdoor feature learning of backdoor attacks, we design a flexible isotropic resampling that realigns the positions of most points based on surface approximation and ray sampling. Fine-tuning the resampled point cloud then yields the adversarial and backdoored point clouds. Extensive experiments suggest that the proposed UAtt3D achieves outstanding stealthiness compared with existing adversarial and backdoor attacks, from both subjective and objective perspectives, while remaining competitive in attack efficiency.