Poster
Towards Lossless Implicit Neural Representation via Bit Plane Decomposition
Woo Kyoung Han · Byeonghun Lee · Hyunmin Cho · Sunghoon Im · Kyong Hwan Jin
We quantify the upper bound on the size of an implicit neural representation (INR) model from a digital perspective. This upper bound grows exponentially with the required bit-precision. To address this, we present a bit-plane decomposition method in which the INR predicts individual bit-planes, producing the same effect as reducing the upper bound on model size. We validate our hypothesis that reducing the upper bound leads to faster convergence at constant model size. Our method achieves lossless representation in 2D image and audio fitting, even for high bit-depth signals such as 16-bit, which was previously unachievable. We are also the first to identify a bit bias, whereby the INR prioritizes the most significant bit (MSB). We extend the application of INR to bit-depth expansion, lossless image compression, and extreme network quantization. Source code is included in the supplementary material.
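To make the core idea concrete, here is a minimal sketch of bit-plane decomposition itself, not the authors' implementation: a B-bit signal is split into B binary planes, and reassembling all planes recovers the signal exactly, which is why fitting each binary plane exactly yields a lossless representation. The function names and the 16-bit toy example are our own assumptions for illustration.

```python
import numpy as np

def to_bit_planes(x: np.ndarray, bit_depth: int = 16) -> np.ndarray:
    """Decompose an unsigned-integer signal into binary bit-planes.

    Returns an array of shape (bit_depth, *x.shape) whose k-th slice
    holds the k-th bit of every sample (k = 0 is the least significant).
    """
    planes = [(x >> k) & 1 for k in range(bit_depth)]
    return np.stack(planes).astype(np.uint8)

def from_bit_planes(planes: np.ndarray) -> np.ndarray:
    """Reassemble the original signal from its bit-planes (lossless)."""
    bit_depth = planes.shape[0]
    x = np.zeros(planes.shape[1:], dtype=np.uint32)
    for k in range(bit_depth):
        x |= planes[k].astype(np.uint32) << k
    return x

# Round trip on a random 16-bit "image": reconstruction is bit-exact,
# so an INR that fits every binary plane perfectly is lossless overall.
img = np.random.randint(0, 2**16, size=(32, 32), dtype=np.uint16)
planes = to_bit_planes(img, bit_depth=16)
assert np.array_equal(from_bit_planes(planes), img.astype(np.uint32))
```

Under this framing, the network only ever has to output values in {0, 1} per plane rather than one of 2^16 levels, which is the sense in which the decomposition acts like a reduction of the model-size upper bound.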