Block-based Learned Image Compression without Blocking Artifacts
Jong Wook Kim ⋅ Suyong Bahk ⋅ TaeHwa Lee ⋅ HyunDong CHO ⋅ Donghyun Kim ⋅ Sung-Chang Lim ⋅ Jin Soo Choi ⋅ Hui Yong Kim
Abstract
Learned Image Compression (LIC) outperforms traditional codecs but suffers from excessive peak memory usage when handling high-resolution images. Consequently, block-based LIC has been studied to reduce peak memory and computational costs; however, this approach often introduces blocking artifacts that degrade visual quality. To mitigate this, the JPEG-AI standard introduced a patch-based scheme in which overlapped blocks are coded independently using empirically determined overlap sizes. However, the experimental search for optimal overlaps is time-consuming and does not guarantee blocking-free reconstruction. To address these limitations, we propose an analytic framework that models overlap propagation through convolutional and transposed convolutional layers to precisely determine the minimal overlaps required for blocking-free reconstruction. Based on the calculated minimum overlaps, we provide a block-based implementation methodology for the convolutional networks used in most CNN-based LIC models. Applied to four CNN-based LIC models on 4K images partitioned into 256$\times$256 blocks, our method achieves rate–distortion performance identical to full-image coding while reducing average peak memory usage to 18.7\% (encoder) and 17.9\% (decoder), with an average computational cost of only 4.23\% and 2.34\%, respectively. Notably, the proposed block-based framework requires no re-training of the original model. Furthermore, it can be applied to most CNN-based image-processing neural networks without any performance degradation.
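The core analytic idea, that the overlap needed for blocking-free reconstruction follows from how receptive fields grow through stacked convolutional layers, can be illustrated with a standard receptive-field calculation. This is only a hedged sketch: the `min_overlap` helper, the simplified formula (which ignores boundary padding and transposed-convolution effects), and the example layer configuration are illustrative assumptions, not the paper's exact framework.

```python
def min_overlap(layers):
    """Estimate the per-side input overlap for blocking-free block coding.

    layers: list of (kernel_size, stride) tuples for a chain of
    convolutional layers, given in input-to-output order.

    Uses the standard receptive-field recurrence: each layer enlarges
    the receptive field by (k - 1) times the cumulative stride (jump).
    The per-side overlap is half the receptive-field radius, so every
    output pixel inside a block sees the same input context it would
    in full-image coding (boundary padding effects are ignored here).
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # receptive field grows by (k-1) * jump
        jump *= s             # cumulative stride of the layer stack
    return (rf - 1) // 2

# Hypothetical analysis transform: four stride-2 convolutions with
# 5x5 kernels, a layout common in CNN-based LIC encoders.
encoder = [(5, 2)] * 4
print(min_overlap(encoder))  # per-side overlap in input pixels
```

Under this simplified model, the four-layer example requires a 30-pixel overlap on each block side; an empirically chosen overlap smaller than this value could not reproduce full-image outputs exactly, which motivates computing the overlap analytically rather than searching for it experimentally.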