Residual Decoder Adapter: ID-Preserving Tokenizer Adaptation for Autoregressive Text Rendering
Abstract
Visual Autoregressive (AR) models generate images by predicting discrete tokens that are decoded by a visual tokenizer. Despite strong overall image generation ability, they still underperform on text rendering, producing blurred strokes and disrupted letter shapes. In this work, we trace this limitation to the visual tokenizer, which struggles to reconstruct fine-grained detail. Improving the tokenizer directly is straightforward but expensive, as it necessitates retraining both the tokenizer and the AR model. Can we improve the text rendering performance of AR models without retraining the existing tokenizer and AR model? To this end, we propose the Residual Decoder Adapter (\method), which upgrades an existing tokenizer post hoc without changing its token space. Specifically, it refines the decoder output of the visual tokenizer by introducing two novel components: (i) a paired codebook that shares the token distribution with the original one; and (ii) a parallel branch that learns the small differences (residuals) between the reconstructed image and the ground-truth image in pixel space. This residual design allows us to enhance the tokenizer non-invasively while preserving compatibility with prior AR models. \method substantially improves text rendering: for instance, it boosts the OCR accuracy of finetuned Janus-Pro from 24.52\% to 58.26\% (TextVisionBlend) and from 12.75\% to 36.81\% (StyledTextSynth) on the competitive TextAtlas benchmark. Code will be released.
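To make the residual design described above concrete, the following is a minimal, hypothetical sketch of how such an adapter could refine a frozen tokenizer decoder's output: a paired codebook indexed by the same token IDs feeds a small parallel branch that predicts a pixel-space residual added to the original reconstruction. All class, module, and argument names here are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualDecoderAdapter(nn.Module):
    """Sketch: refine a frozen tokenizer decoder with a residual branch,
    leaving the token space (and hence the AR model) untouched."""

    def __init__(self, frozen_tokenizer, codebook_size, embed_dim, hidden_dim=128):
        super().__init__()
        self.tokenizer = frozen_tokenizer          # original decoder, kept frozen
        for p in self.tokenizer.parameters():
            p.requires_grad_(False)
        # Paired codebook: indexed by the same token IDs as the original codebook
        self.paired_codebook = nn.Embedding(codebook_size, embed_dim)
        # Parallel branch: predicts a pixel-space residual from the paired embeddings
        self.residual_branch = nn.Sequential(
            nn.Conv2d(embed_dim, hidden_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden_dim, 3, kernel_size=3, padding=1),
        )

    def forward(self, token_ids):
        # token_ids: (B, H, W) discrete indices from the unchanged tokenizer / AR model
        base_image = self.tokenizer.decode(token_ids)              # coarse reconstruction
        z = self.paired_codebook(token_ids).permute(0, 3, 1, 2)    # (B, C, H, W)
        residual = self.residual_branch(z)
        if residual.shape[-2:] != base_image.shape[-2:]:
            residual = F.interpolate(residual, size=base_image.shape[-2:],
                                     mode="bilinear", align_corners=False)
        return base_image + residual                               # refined output
```

Under this sketch, only the paired codebook and residual branch would be trained, e.g. with a pixel-space reconstruction loss between the refined output and the ground-truth image, so the existing tokenizer and AR model remain untouched at inference time.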