Poster
Tightening Robustness Verification of MaxPool-based Neural Networks via Minimizing the Over-Approximation Zone
Yuan Xiao · Yuchen Chen · Shiqing Ma · Chunrong Fang · Tongtong Bai · Mingzheng Gu · Yuxin Cheng · Yanwei Chen · Zhenyu Chen
The robustness of neural network classifiers is important in safety-critical domains and can be quantified by robustness verification. Efficient and scalable verification techniques are typically sound but incomplete, so the improvement of verified robustness results is the key criterion for evaluating incomplete verification approaches. The multivariate MaxPool function is widely adopted yet challenging to verify. In this paper, we present Ti-Lin, a robustness verifier for MaxPool-based CNNs with Tight Linear Approximation. In line with the goal of minimizing the over-approximation zone of the non-linear functions of CNNs, we are the first to propose provably neuron-wise tightest linear bounds for the MaxPool function. With these linear bounds, we can certify larger robustness results for CNNs. We evaluate the effectiveness of Ti-Lin on different verification frameworks with open-source benchmarks, including LeNet, PointNet, and networks trained on the MNIST, CIFAR-10, Tiny ImageNet, and ModelNet40 datasets. Experimental results show that Ti-Lin significantly outperforms state-of-the-art methods across all networks, with up to 78.6% improvement in certified accuracy at almost the same time cost as the fastest tool. Our code is available at https://anonymous.4open.science/r/Ti-Lin-cvpr-72EE.
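To make the idea of neuron-wise linear bounds concrete, the sketch below shows one simple pair of sound (but deliberately loose, textbook-style) linear bounds for z = max(x_1, ..., x_n) when each input lies in an interval [l_i, u_i]. These are not Ti-Lin's provably tightest bounds; they only illustrate the shape of the problem, where the gap between the two bounding planes is the over-approximation zone a verifier wants to minimize. All function names here are hypothetical.

```python
import random

def maxpool_linear_bounds(l, u):
    """Illustrative neuron-wise linear bounds for z = max(x_1, ..., x_n)
    with each x_i in [l_i, u_i] (NOT the bounds proposed in the paper).

    Lower bound: z >= x_k, where k maximizes l_i. This is always sound,
    since max(x_1, ..., x_n) >= x_k for any k.
    Upper bound: z <= max_i u_i, a constant plane, sound since every
    input is at most its own upper bound.

    Returns ((lower_coeffs, lower_const), (upper_coeffs, upper_const)),
    each describing a bound of the form sum_i c_i * x_i + const.
    """
    n = len(l)
    k = max(range(n), key=lambda i: l[i])
    lower = ([1.0 if i == k else 0.0 for i in range(n)], 0.0)
    upper = ([0.0] * n, max(u))
    return lower, upper

def bounds_are_sound(l, u, trials=10000, seed=0):
    """Randomly sample the input box and check both bounds hold."""
    rng = random.Random(seed)
    (lc, lb), (uc, ub) = maxpool_linear_bounds(l, u)
    for _ in range(trials):
        x = [rng.uniform(li, ui) for li, ui in zip(l, u)]
        z = max(x)
        lo = sum(c * xi for c, xi in zip(lc, x)) + lb
        hi = sum(c * xi for c, xi in zip(uc, x)) + ub
        if not (lo <= z <= hi):
            return False
    return True
```

Tighter bounds (such as the paper's) replace these planes with ones that hug the max function more closely over the input box, shrinking the over-approximation zone and thus certifying larger robustness radii.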