

Poster

SuperLightNet: Lightweight Parameter Aggregation Network for Multimodal Brain Tumor Segmentation

Feng Yu · Jiacheng Cao · Li Liu · Minghua Jiang


Abstract:

Multimodal 3D segmentation of MRI brain tumors involves a large number of 3D convolution operations, which demand substantial computational resources and high-performance hardware. The key challenge is how to minimize the network's computational load while maintaining high accuracy. To address this issue, a novel lightweight parameter aggregation network (SuperLightNet) is proposed, whose efficient encoder and decoder achieve high accuracy at low computational cost. A random multi-view drop encoder learns the spatial structure of multimodal images through a random multi-view approach, avoiding the high computational time complexity of recent transformer- and Mamba-based methods. A learnable residual skip decoder incorporates learnable residual and group skip weights, addressing the reduced computational efficiency caused by overly heavy convolution and deconvolution decoders. Experimental results on the BraTS2019 and BraTS2021 datasets show that, compared with state-of-the-art methods, the proposed approach reduces parameter count by 95.59%, improves computational efficiency by 96.78%, improves memory access performance by 96.86%, and achieves an average performance gain of 0.21%. Code is available at https://github.com/WTU1020-Medical-Segmentation/SuperLightNet.
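The weighted residual-and-skip fusion described in the abstract can be pictured with a short PyTorch sketch. This is a minimal illustration only, assuming scalar learnable weights on the decoder's residual path and the encoder's skip path; the names LearnableResidualSkipBlock, res_weight, and skip_weight are hypothetical and are not taken from the SuperLightNet repository.

    # Minimal sketch (not the authors' code): a decoder block that fuses its
    # residual path and an encoder skip path with learnable scalar weights.
    import torch
    import torch.nn as nn

    class LearnableResidualSkipBlock(nn.Module):
        def __init__(self, channels: int):
            super().__init__()
            # Learnable scalar weights for the residual and skip contributions
            # (illustrative assumption of how "learnable residual and group
            # skip weights" might look in the simplest scalar case).
            self.res_weight = nn.Parameter(torch.ones(1))
            self.skip_weight = nn.Parameter(torch.ones(1))
            # A single lightweight 3D convolution in place of a heavy
            # convolution/deconvolution decoder stack.
            self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            self.act = nn.GELU()

        def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
            # Weighted sum of the decoder feature and the encoder skip feature,
            # followed by a residual refinement.
            fused = self.res_weight * x + self.skip_weight * skip
            return x + self.act(self.conv(fused))

    if __name__ == "__main__":
        block = LearnableResidualSkipBlock(channels=16)
        x = torch.randn(1, 16, 8, 8, 8)     # decoder feature map (B, C, D, H, W)
        skip = torch.randn(1, 16, 8, 8, 8)  # matching encoder feature map
        print(block(x, skip).shape)         # torch.Size([1, 16, 8, 8, 8])

For the actual encoder and decoder definitions, refer to the repository linked above.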
