

Poster

Learning Large-Factor EM Image Super-Resolution with Generative Priors

Jiateng Shou · Zeyu Xiao · Shiyu Deng · Wei Huang · ShiPeiyao · Ruobing Zhang · Zhiwei Xiong · Feng Wu


Abstract: As the mainstream technique for capturing images of biological specimens at nanometer resolution, electron microscopy (EM) is extremely time-consuming when scanning wide field-of-view (FOV) specimens. In this paper, we investigate the challenging task of large-factor EM image super-resolution (EMSR), which holds great promise for reducing scanning time, relaxing acquisition conditions, and expanding the imaging FOV. By exploiting the repetitive structures and volumetric coherence of EM images, we propose the first generative learning-based framework for large-factor EMSR. Specifically, motivated by the predictability of repetitive structures and textures in EM images, we first learn a discrete codebook in the latent space to represent high-resolution (HR) cell-specific priors, together with a latent vector indexer that maps low-resolution (LR) EM images to their corresponding latent vectors in a generative manner. After incorporating the generative cell-specific priors from HR EM images through a multi-scale prior fusion module, we further deploy multi-image feature alignment and fusion to exploit the inter-section coherence of volumetric EM data. Extensive experiments demonstrate that our framework outperforms advanced single-image and video super-resolution methods for 8× and 16× EMSR (i.e., with 64× and 256× less data acquired, respectively), achieving superior visual reconstruction quality and downstream segmentation accuracy on benchmark EM datasets. Code is available at https://github.com/jtshou/GPEMSR.
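
For readers unfamiliar with codebook-based generative priors, the sketch below illustrates the core idea described in the abstract: a discrete codebook of HR cell-specific priors and an indexer that maps LR EM features to codebook entries, followed by a simple fusion step. This is a minimal illustrative sketch, not the authors' implementation; all module names, tensor sizes, and the concat-and-convolve fusion are assumptions standing in for the paper's multi-scale prior fusion module (see the released code at the link above for the actual method).

```python
# Minimal sketch (hypothetical, not the authors' code) of a codebook prior
# plus an LR-to-code indexer, in PyTorch.
import torch
import torch.nn as nn


class CodebookPrior(nn.Module):
    """Discrete latent codebook representing HR cell-specific priors (VQ-style lookup)."""

    def __init__(self, num_codes: int = 1024, code_dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)

    def lookup(self, indices: torch.Tensor) -> torch.Tensor:
        # indices: (B, H, W) integer code indices -> (B, code_dim, H, W) prior features
        return self.codebook(indices).permute(0, 3, 1, 2)


class LatentIndexer(nn.Module):
    """Predicts, per spatial location of an LR feature map, which codebook entry to retrieve."""

    def __init__(self, in_channels: int = 64, num_codes: int = 1024):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_codes, 1),
        )

    def forward(self, lr_feat: torch.Tensor) -> torch.Tensor:
        logits = self.head(lr_feat)   # (B, num_codes, H, W)
        return logits.argmax(dim=1)   # (B, H, W) predicted code indices


if __name__ == "__main__":
    prior = CodebookPrior()
    indexer = LatentIndexer()
    # A plain concat + conv stands in here for the paper's multi-scale prior fusion.
    fuse = nn.Conv2d(64 + 256, 64, 3, padding=1)

    lr_feat = torch.randn(1, 64, 32, 32)   # features extracted from an LR EM image
    codes = indexer(lr_feat)               # predicted codebook indices
    hr_prior = prior.lookup(codes)         # retrieved HR cell-specific priors
    fused = fuse(torch.cat([lr_feat, hr_prior], dim=1))
    print(fused.shape)                     # torch.Size([1, 64, 32, 32])
```

In the full framework, such fused features would additionally be aligned and fused across neighboring sections of the EM volume to exploit inter-section coherence before upsampling to the 8× or 16× target resolution.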
