VITAL: Vision-Encoder-centered Pre-training for LMMs in Visual Quality Assessment
Ziheng Jia ⋅ Linhan Cao ⋅ Jinliang Han ⋅ Zicheng Zhang ⋅ Jiaying Qian ⋅ Jiarui Wang ⋅ Zijian Chen ⋅ Guangtao Zhai ⋅ Xiongkuo Min
Abstract
Developing a robust visual quality assessment (VQualA) large multimodal model (LMM) requires achieving **versatility**, **powerfulness**, and **transferability**. However, existing VQualA LMMs typically focus on a single task and rely on full-parameter fine-tuning, which makes them prone to overfitting on specific modalities or task types, thereby limiting their generalization capacity and transferability. To address this, we propose a **vision-encoder-centered generative pre-training** pipeline and develop the **VITAL-Series** LMMs. (1) We adopt a machine-executed annotation–scrutiny paradigm, constructing over 4.5M vision–language (VL) pairs, the **largest VQualA training dataset to date**. (2) We employ a multi-task training workflow that simultaneously enhances the model's quantitative scoring precision and strengthens its capability for quality interpretation across both image and video modalities. (3) Building upon the vision encoder, we realize **efficient model-zoo extension**: the model zoo exhibits strong zero-shot performance, and each paired decoder requires only a swift warm-up on less than 1/1000 of the pre-training data to achieve performance comparable to its fully trained counterpart. Overall, our work lays a cornerstone for advancing toward a **foundation LMM for VQualA**.
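To make the model-zoo extension in point (3) concrete, the sketch below illustrates the general idea of pairing a shared, frozen pre-trained vision encoder with a new lightweight decoder and warming up only that decoder on a small data subset. This is an illustrative assumption of how such a pipeline could look in PyTorch, not the authors' released implementation; all class and function names here are hypothetical.

```python
# Minimal sketch (hypothetical names): a shared pre-trained vision encoder with a
# swappable decoder head. Extending the model zoo only warms up the new decoder
# while the encoder stays frozen, mirroring the abstract's description at a high level.
import torch
import torch.nn as nn


class VisionEncoder(nn.Module):
    """Stand-in for the quality-aware vision encoder obtained from pre-training."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(dim), nn.GELU())

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        return self.backbone(pixels)  # quality-aware visual features


class QualityDecoder(nn.Module):
    """Lightweight decoder head; each model-zoo member pairs one with the shared encoder."""
    def __init__(self, dim: int = 1024, vocab: int = 32000):
        super().__init__()
        self.proj = nn.Linear(dim, vocab)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.proj(feats)


def warm_up_new_decoder(encoder, decoder, loader, steps: int = 1000, lr: float = 1e-4):
    """Warm up only the new decoder on a small data subset; the encoder is frozen."""
    encoder.eval()
    for p in encoder.parameters():
        p.requires_grad_(False)
    opt = torch.optim.AdamW(decoder.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    step = 0
    while step < steps:
        for pixels, targets in loader:
            with torch.no_grad():
                feats = encoder(pixels)          # reuse frozen visual features
            loss = loss_fn(decoder(feats), targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
            step += 1
            if step >= steps:
                break
    return decoder
```

Under this assumed setup, the frozen encoder carries the quality knowledge learned during pre-training, which is why a brief decoder warm-up on a tiny fraction of the data can approach the performance of a fully trained pairing.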