Virtual Immunohistochemistry Staining with Dual-Aligned Multi-Task Feature Guidance
Abstract
In hematoxylin-eosin (H&E) to virtual immunohistochemistry (IHC) staining, paired images enable supervised learning but suffer from inherent spatial misalignment, which limits the effectiveness of pixel-level constraints. Auxiliary tasks have therefore been increasingly employed alongside paired data to provide complementary supervision. However, existing methods largely overlook the rich semantic information embedded in auxiliary-task models. This paper proposes a novel framework for virtual IHC staining guided by dual-aligned multi-task features, which fully exploits the semantic cues from auxiliary tasks. To realize effective guidance, we address two obstacles: (1) the spatial mismatch between paired H&E and IHC feature representations, and (2) the task gap between auxiliary-task features and virtual staining features. To resolve the spatial mismatch, we generate an alignment matrix that aligns H&E and IHC features. Specifically, we first introduce structure-enhanced learning to restore semantic consistency in regions degraded by inaccurate staining in virtual IHC images. We then cluster features from virtual IHC and real IHC images separately and establish semantic correspondences through an active-passive matching mechanism, which ensures that only semantically aligned regions are matched and reduces the impact of staining variability on the alignment matrix. To bridge the task gap, we introduce a task-gap alignment module trained under the principle that auxiliary features are considered aligned if they improve the performance of the virtual IHC staining model. Extensive experiments on two public datasets covering four biomarkers demonstrate the effectiveness of our framework. Our code will be made publicly available.