Discovering Adaptive Task Dependencies for Efficient Multi-Task Representation Compression
Abstract
Traditional image compression prioritizes pixel fidelity but often preserves details irrelevant to downstream vision tasks. Compressing task-specific representations instead aligns better with task semantics, yet redundant information persists across correlated tasks. Existing multi-task compression methods typically rely on static dependency structures, leading to redundant bit allocation and suboptimal rate–distortion performance. We present Adaptive Task Dependency Compression (ATDC), a framework that models per-image task relationships and encodes representations along an adaptive directed acyclic graph (DAG). ATDC infers pairwise task predictability via a learned correlation matrix, constructs a dynamic DAG that determines the compression order, and encodes each task conditionally on its predecessors, removing predictive redundancy and enabling asymmetric information sharing across tasks. Experiments on the Taskonomy dataset demonstrate consistent gains in rate–distortion efficiency and task accuracy over both human-oriented codecs and state-of-the-art multi-task compression methods. The learned DAGs reveal interpretable, content-dependent task hierarchies, establishing adaptive dependency modeling as a principled paradigm for multi-task representation compression.
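The abstract specifies the correlation-matrix-to-DAG step only at a high level. As a minimal illustrative sketch (the function name `build_task_dag`, the ranking heuristic, and the pruning threshold are our assumptions, not details given in the paper), one way to turn a learned per-image predictability matrix into an acyclic compression order is:

```python
import numpy as np

def build_task_dag(corr: np.ndarray, threshold: float = 0.5):
    """Sketch: derive a task DAG from a learned predictability matrix.

    corr[i, j] scores how well task i's representation predicts task j's
    (the thresholding rule and ranking heuristic here are assumptions).
    Tasks are ranked by total outgoing predictive power, and edges are
    oriented only from higher-ranked to lower-ranked tasks, so the
    resulting graph is acyclic by construction.
    """
    n = corr.shape[0]
    rank = [int(t) for t in np.argsort(-corr.sum(axis=1))]  # most predictive first
    position = {t: k for k, t in enumerate(rank)}
    edges = [(i, j)
             for i in range(n) for j in range(n)
             if i != j and position[i] < position[j] and corr[i, j] >= threshold]
    return rank, edges

# Usage: since every edge points from a higher-ranked task to a lower-ranked
# one, `rank` is itself a valid topological order and can serve directly as
# the per-image compression order; each task is then encoded conditionally
# on its DAG parents.
rng = np.random.default_rng(0)
corr = rng.random((4, 4))  # stand-in for the learned per-image matrix
order, edges = build_task_dag(corr)
parents = {t: [u for u, v in edges if v == t] for t in order}
```

Encoding tasks in `order` while conditioning each task's entropy model on `parents[t]` realizes the predecessor-conditional coding described above, under these assumed construction rules.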