ALLNet: Multi-task Dense Prediction for Degraded Images
Abstract
Multi-task dense prediction aims to address multiple pixel-level tasks simultaneously through a unified network for visual scene understanding. However, adverse environmental conditions degrade input images, limiting the generalization and practicality of such methods. To address this, we propose ALLNet, a novel framework that explicitly models degradation patterns and integrates multi-task collaborative information. Specifically, we design a Mixture of Adaptive Experts (MaE) restoration component, built on the Mixture-of-Experts (MoE) paradigm, which enhances degraded features through dynamic routing and guides task-specific feature extraction. Furthermore, we formulate a Task-aware Collaborative Refinement (TCR) module that captures global semantic correlations and cross-task dependencies, enabling bidirectional collaboration between restoration features and task-specific features on degraded images. To the best of our knowledge, this is the first attempt at multi-task dense prediction under image degradation. Experimental results on degraded NYUD-v2 and PASCAL-Context benchmarks demonstrate that ALLNet significantly outperforms existing methods in degraded scenarios.
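The abstract does not specify the MaE's internals. As a rough illustration of the MoE-style dynamic routing it describes, the PyTorch sketch below gates several convolutional experts per input image; all names (MaEBlock, num_experts), shapes, and the residual design are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of MoE-style dynamic routing over image features.
# Hypothetical design: a gating network produces per-image softmax weights
# over convolutional experts; outputs are combined as a weighted sum.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaEBlock(nn.Module):
    """Hypothetical Mixture-of-Adaptive-Experts block (illustrative only)."""
    def __init__(self, channels: int, num_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_experts)
        )
        # Gate: global average pooling -> one logit per expert, per image.
        self.gate = nn.Linear(channels, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); routing weights: (B, E)
        weights = F.softmax(self.gate(x.mean(dim=(2, 3))), dim=-1)
        # Stack expert outputs into (B, E, C, H, W), weight, and sum over E.
        outs = torch.stack([expert(x) for expert in self.experts], dim=1)
        y = (weights[:, :, None, None, None] * outs).sum(dim=1)
        return x + y  # residual connection keeps the enhancement stable

if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(MaEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

Soft (dense) routing is used here for simplicity; an actual MoE restoration component might instead use sparse top-k routing so that only a few experts fire per degradation pattern.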