

Brain Decodes Deep Nets

Huzheng Yang · James Gee · Jianbo Shi

Arch 4A-E Poster #340
award Highlight
Fri 21 Jun 10:30 a.m. PDT — noon PDT


We developed a tool for visualizing and analyzing large pre-trained vision models by mapping them onto the brain, thus exposing their hidden internals. Our innovation arises from a surprising usage of brain encoding: predicting brain fMRI measurements in response to images. We report two findings. First, explicit mapping between the brain and deep-network features across dimensions of space, layers, scales, and channels is crucial. This mapping method, FactorTopy, is plug-and-play for any deep network; with it, one can paint a picture of the network onto the brain (literally!). Second, our visualization shows how different training methods matter: they lead to remarkable differences in hierarchical organization and scaling behavior, which grow with more data or network capacity. It also provides insight into fine-tuning: how pre-trained models change when adapting to small datasets. Our method is practical: only 3K images are enough to learn a network-to-brain mapping.
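To make the idea of brain encoding concrete, here is a minimal sketch of the standard setup the abstract builds on: regress per-voxel fMRI responses onto frozen deep-network image features and score held-out predictions. This is an illustrative assumption-level toy with synthetic data, not the paper's FactorTopy mapping; all array names and sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 300, 64, 10

# Stand-ins for real data: deep-network features per image, fMRI per image.
X = rng.standard_normal((n_images, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = X @ W_true + 0.1 * rng.standard_normal((n_images, n_voxels))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X^T X + alpha I)^{-1} X^T Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

def voxel_corr(a, b):
    """Per-voxel Pearson correlation between two (images, voxels) arrays."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

# Train on the first 200 images, evaluate on the held-out 100.
W = fit_ridge(X[:200], Y[:200])
pred = X[200:] @ W
r = voxel_corr(pred, Y[200:])
print("mean held-out voxel correlation:", r.mean())
```

In real encoding-model work, `X` would come from a frozen pre-trained network's activations and `Y` from measured fMRI; the per-voxel correlations are what get painted onto cortex as a visualization.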
