KVSmooth: Mitigating Hallucination in Multi-modal Large Language Models through Key-Value Smoothing
Siyu Jiang ⋅ Feiyang Chen ⋅ Xiaojin Zhang ⋅ Kun He
Abstract
Despite the significant progress of Multi-modal Large Language Models (MLLMs) across diverse tasks, hallucination, i.e., the generation of visually inconsistent objects, attributes, or relations, remains a major obstacle to their reliable deployment. Unlike pure language models, MLLMs must ground their generation in visual inputs; however, existing models often suffer from semantic drift during decoding, causing outputs to diverge from the visual facts as the sequence length increases. To address this, we propose KVSmooth, a training-free, plug-and-play method that mitigates hallucination by performing attention-entropy-guided adaptive smoothing on hidden states. Specifically, KVSmooth applies an exponential moving average (EMA) to both keys and values in the KV-Cache, while dynamically quantifying the sink degree of each token via the entropy of its attention distribution to adaptively adjust the smoothing strength. Unlike computationally expensive retraining or contrastive decoding methods, KVSmooth operates efficiently at inference time without additional training or model modification. Extensive experiments demonstrate that KVSmooth significantly reduces hallucination ($\mathit{CHAIR}_{S}$ from $41.8 \rightarrow 18.2$) while improving overall performance ($F_1$ score from $77.5 \rightarrow 79.2$), achieving higher precision and recall simultaneously, whereas prior methods often sacrifice one for the other, validating the effectiveness and generality of our method.
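To make the abstract's description concrete, the following is a minimal, illustrative sketch of entropy-guided EMA smoothing over a cached key/value sequence. It is not the authors' implementation: the function name, tensor layout, the use of per-key received attention as the "sink degree" proxy, the entropy normalization, and the mapping from entropy to smoothing strength (alpha_min, alpha_max) are all assumptions made for illustration; the paper's exact formulation may differ.

```python
import torch


def entropy_guided_kv_smoothing(keys, values, attn, alpha_min=0.1, alpha_max=0.9, eps=1e-9):
    """Illustrative sketch (not the paper's code) of attention-entropy-guided
    EMA smoothing of cached keys and values.

    keys, values: [batch, heads, seq, head_dim]
    attn:         [batch, heads, seq, seq], each query row sums to 1.
    """
    B, H, S, D = keys.shape

    # Assumption: use the distribution of attention each key *receives* across
    # queries as a proxy for its sink degree. Low entropy -> attention is
    # concentrated on few queries -> treat the token as a stronger sink.
    recv = attn.transpose(-1, -2)                                  # [B, H, S_keys, S_queries]
    recv = recv / (recv.sum(dim=-1, keepdim=True) + eps)
    entropy = -(recv * torch.log(recv + eps)).sum(dim=-1)          # [B, H, S]
    entropy = entropy / torch.log(torch.tensor(float(S)))          # normalize to [0, 1]

    # Map entropy to a per-token keep ratio: diffuse attention (high entropy)
    # keeps the token largely intact; sink-like tokens (low entropy) are
    # smoothed more heavily toward the running average.
    alpha = alpha_min + (alpha_max - alpha_min) * entropy          # [B, H, S]

    smoothed_k = torch.empty_like(keys)
    smoothed_v = torch.empty_like(values)
    smoothed_k[..., 0, :] = keys[..., 0, :]
    smoothed_v[..., 0, :] = values[..., 0, :]
    for t in range(1, S):
        a = alpha[..., t].unsqueeze(-1)                            # [B, H, 1]
        # Exponential moving average along the sequence dimension.
        smoothed_k[..., t, :] = a * keys[..., t, :] + (1 - a) * smoothed_k[..., t - 1, :]
        smoothed_v[..., t, :] = a * values[..., t, :] + (1 - a) * smoothed_v[..., t - 1, :]
    return smoothed_k, smoothed_v
```

Because the operation only rewrites the cached keys and values at inference time, a sketch like this would be applied per layer after each attention step, which is consistent with the training-free, plug-and-play framing in the abstract.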