Verifying Neural Network Robustness with Dual Perturbations
Abstract
Safety-critical deep learning systems must be robust against real-world corruptions that combine spatially correlated distortions with independent noise. Current deep neural network verification methods handle these perturbation types separately: they check either independent pixel-wise perturbations or restricted convolutional transformations drawn from predefined patterns. This gap prevents assessing robustness under realistic conditions where both perturbation types occur simultaneously. To address these limitations, we propose VeriDou, a framework that introduces (i) universal convolutional perturbations, which enable verification over continuous spaces of spatial distortions, and (ii) dual perturbations, which jointly capture convolutional distortions and independent pixel-level variations. Our evaluation on a diverse set of benchmarks comprising 14,340 instances shows that VeriDou's dual-perturbation analysis finds substantially more adversarial examples on networks that existing methods certified as highly robust. This demonstrates that VeriDou explores a broader range of unsafe regions and thereby strengthens the formal assessment of robustness.
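To make the dual-perturbation model concrete, the following is a minimal illustrative sketch (not VeriDou's verification algorithm) of what a single dual perturbation looks like: a spatially correlated distortion expressed as a 2D convolution, followed by independent pixel-wise noise bounded in the L-infinity norm. The kernel, noise bound, and image shape are hypothetical choices for illustration.

```python
import numpy as np

def dual_perturb(image, kernel, noise_bound, rng):
    """Apply a convolutional distortion followed by bounded
    independent pixel noise -- the two perturbation types the
    abstract describes, combined on a single input."""
    h, w = image.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    # Spatially correlated distortion: 2D convolution with `kernel`.
    distorted = np.empty_like(image)
    for i in range(h):
        for j in range(w):
            distorted[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    # Independent pixel-wise noise, bounded in the L-infinity norm.
    noise = rng.uniform(-noise_bound, noise_bound, size=image.shape)
    return np.clip(distorted + noise, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((8, 8))
blur = np.full((3, 3), 1.0 / 9.0)  # hypothetical averaging (blur) kernel
perturbed = dual_perturb(image, blur, noise_bound=0.05, rng=rng)
print(perturbed.shape)  # (8, 8)
```

A verifier in this setting would reason over the continuous ranges of the kernel entries and the noise bound simultaneously, rather than sampling individual perturbed images as this sketch does.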