
Overload: Latency Attacks on Object Detection for Edge Devices

Erh-Chung Chen · Pin-Yu Chen · I-Hsin Chung · Che-Rung Lee

Arch 4A-E Poster #48
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT


The deployment of deep-learning-based applications has become essential owing to the growing demand for intelligent services. In this paper, we investigate latency attacks on deep learning applications. Unlike common adversarial attacks aimed at misclassification, the goal of latency attacks is to increase the inference time, which may prevent applications from responding to requests within a reasonable time. This kind of attack is applicable to a wide range of applications, and we use object detection to demonstrate how such attacks work. We also design a framework named Overload to generate latency attacks at scale. Our method is based on a newly formulated optimization problem and a novel technique called spatial attention. The attack escalates the computing cost required at inference time, thereby extending the inference time of object detection; it poses a significant threat, especially to systems with limited computing resources. We conducted experiments using YOLOv5 models on an Nvidia NX. Compared to existing methods, our attack is simpler and more effective. The experimental results show that, under a latency attack, the inference time of a single image can be increased to ten times that of the normal setting. Moreover, our findings pose a potential new threat to all object detection tasks requiring non-maximum suppression (NMS), as our attack is NMS-agnostic.
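The abstract's core observation is that the post-processing cost of object detection depends on the *number* of candidate boxes the model emits, not just on image size. A minimal sketch (not the paper's method; `iou`, `nms_cost`, and the box layouts below are illustrative assumptions) shows why flooding a detector with many non-overlapping candidates blows up greedy NMS: when nothing can be suppressed, the number of pairwise IoU comparisons grows quadratically in the candidate count.

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms_cost(boxes, scores, iou_thresh=0.5):
    # Greedy NMS that also counts how many pairwise IoU comparisons it performs.
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep, comparisons = [], 0
    while order:
        i, rest = order[0], order[1:]
        keep.append(i)
        comparisons += len(rest)
        # Keep only candidates that do NOT overlap the current top box.
        order = [j for j in rest if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep, comparisons

n = 200
scores = [1.0 - k * 0.001 for k in range(n)]

# Benign-like case: heavily overlapping candidates; NMS suppresses
# almost everything after the first pass (n - 1 comparisons).
same = [[0.0, 0.0, 5.0, 5.0]] * n
keep_same, cost_same = nms_cost(same, scores)

# Attack-like case: many disjoint candidates; nothing can be suppressed,
# so the loop runs n times and performs n * (n - 1) / 2 comparisons.
spread = [[10.0 * k, 0.0, 10.0 * k + 5.0, 5.0] for k in range(n)]
keep_spread, cost_spread = nms_cost(spread, scores)

print(len(keep_same), cost_same)      # 1 box kept, 199 comparisons
print(len(keep_spread), cost_spread)  # 200 boxes kept, 19900 comparisons
```

This 100x gap in comparison counts at the same candidate budget is the kind of asymmetry a latency attack exploits: the adversarial input only needs to change what the detector proposes, not what the user sees.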
