

Transcriptomics-guided Slide Representation Learning in Computational Pathology

Guillaume Jaume · Lukas Oldenburg · Anurag Vaidya · Richard J. Chen · Drew F. K. Williamson · Thomas Peeters · Andrew Song · Faisal Mahmood

Arch 4A-E Poster #175
Thu 20 Jun 10:30 a.m. PDT — noon PDT
Oral presentation: Orals 3C Medical and Physics-based vision
Thu 20 Jun 9 a.m. PDT — 10:30 a.m. PDT


Self-supervised learning (SSL) has been successful in building patch embeddings of small histology images (e.g., 224 x 224 pixels), but scaling these models to learn slide embeddings from the entirety of giga-pixel whole-slide images (WSIs) remains challenging. Here, we leverage complementary information from gene expression profiles to guide slide representation learning using multimodal pre-training. Expression profiles constitute highly detailed molecular descriptions of a tissue that we hypothesize offer a strong task-agnostic training signal for learning slide embeddings. Our slide and expression (S+E) pre-training strategy, called TANGLE, employs modality-specific encoders, the outputs of which are aligned via contrastive learning. TANGLE was pre-trained on samples from three different organs: liver (n=6,597 S+E pairs), breast (n=1,020), and lung (n=1,012) from two different species (Homo sapiens and Rattus norvegicus). Across three independent test datasets consisting of 1,265 breast WSIs, 1,946 lung WSIs, and 4,584 liver WSIs, TANGLE shows significantly better few-shot performance compared to supervised and SSL baselines. When assessed using prototype-based classification and slide retrieval, TANGLE also shows a substantial performance improvement over all baselines. Code will be made available upon acceptance.
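The abstract states that the slide and expression encoders are aligned via contrastive learning. As the code is not yet released, the sketch below is an assumption: a minimal NumPy implementation of a CLIP-style symmetric InfoNCE objective over paired slide and expression embeddings, with hypothetical function names and without the encoder architectures or training loop.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    # Project each embedding onto the unit sphere before computing similarities.
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def symmetric_contrastive_loss(slide_emb, expr_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss over a batch of (slide, expression) pairs.

    slide_emb, expr_emb: (batch, dim) arrays; row i of each array forms a
    positive pair, all other rows in the batch serve as negatives.
    Hypothetical sketch -- TANGLE's actual objective may differ in detail.
    """
    s = l2_normalize(slide_emb)
    e = l2_normalize(expr_emb)
    logits = (s @ e.T) / temperature          # (batch, batch) cosine similarities
    n = logits.shape[0]
    idx = np.arange(n)                        # positives lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the slide->expression and expression->slide directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Usage: aligned pairs should incur a lower loss than mismatched ones.
rng = np.random.default_rng(0)
slides = rng.normal(size=(8, 16))
loss_aligned = symmetric_contrastive_loss(slides, slides + 0.01 * rng.normal(size=(8, 16)))
loss_random = symmetric_contrastive_loss(slides, rng.normal(size=(8, 16)))
```

Minimizing this loss pulls each slide embedding toward the expression profile of the same sample while pushing it away from the other profiles in the batch, which is how the pre-training can transfer to few-shot classification and slide retrieval without task labels.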
