CamPI: Physical Adversarial Examples through Camera Power Signal Injection
Abstract
Physical adversarial examples pose a concrete threat to real‑world computer vision systems. Existing works mainly generate physical adversarial examples by affixing patches or projecting light onto targets, which are usually visible and can expose the malicious intent. In this work, we reveal a new attack surface that generates invisible adversarial examples by injecting signals into the camera's power supply. We analyze the mechanism by which injected signals induce structured stripe patterns in captured images and demonstrate the feasibility of controllable, fine-grained injection through signal modulation. We develop a simulation model to emulate the physically injected perturbation, and propose end-to-end optimization methodologies in both white-box and black-box settings to generate the injection signal parameters. We perform a simulated evaluation across seven classification models and carry out physical signal injection experiments with the optimized signals. The results show that physical adversarial examples generated through camera power signal injection can effectively disrupt computer vision performance. Our work introduces a new methodology for crafting physical adversarial examples, underscoring the need to secure computer vision systems in the physical world.
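The abstract does not specify the form of the simulation model for power-signal-induced stripe perturbations. As a purely illustrative sketch, one might model the injected perturbation as additive sinusoidal luminance stripes whose rows follow the sensor readout; the function name and the amplitude/frequency/phase parameterization below are assumptions, not the paper's actual model:

```python
import numpy as np

def inject_stripes(image, amplitude=0.1, freq=8.0, phase=0.0):
    """Hypothetical stripe-perturbation model (not the paper's): add
    horizontal sinusoidal luminance stripes to a float image in [0, 1].

    amplitude -- peak stripe intensity offset
    freq      -- number of stripe cycles over the image height
    phase     -- phase offset of the stripe pattern (radians)
    """
    h = image.shape[0]
    rows = np.arange(h)
    # One intensity offset per row, broadcast across columns and channels,
    # mimicking a row-wise (rolling-shutter-like) readout artifact.
    stripe = amplitude * np.sin(2 * np.pi * freq * rows / h + phase)
    perturbed = image + stripe[:, None, None]
    return np.clip(perturbed, 0.0, 1.0)

# Usage: perturb a uniform gray test image.
img = np.full((64, 64, 3), 0.5, dtype=np.float32)
adv = inject_stripes(img, amplitude=0.2, freq=4.0)
```

Under such a differentiable model, the stripe parameters (amplitude, frequency, phase) could in principle serve as the optimization variables for white-box or black-box search, consistent with the end-to-end optimization the abstract describes.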