Concept Bottleneck Models (CBMs) provide an interpretable framework for neural networks by mapping visual features to predefined, human-understandable concepts. However, the application of CBMs is often constrained by insufficient concept annotations. Recently, multi-modal pre-trained models have shown promise in reducing annotation costs by aligning visual representations with textual concept embeddings. Nevertheless, the quality and completeness of the predefined concepts significantly affect the performance of CBMs. In this work, we propose the Hybrid Concept Bottleneck Model (HybridCBM), a novel CBM framework that addresses the challenge of incomplete predefined concepts. Our method consists of two main components: a Static Concept Bank and a Dynamic Concept Bank. The Static Concept Bank is constructed directly from concepts generated by large language models (LLMs), while the Dynamic Concept Bank employs learnable vectors that continuously capture complementary, valuable concepts during training. After training, a pre-trained translator converts these vectors into human-understandable concepts, further enhancing model interpretability. Notably, HybridCBM is highly flexible and can be easily applied to any CBM to improve performance. Experimental results across multiple datasets demonstrate that HybridCBM outperforms current state-of-the-art methods and achieves results comparable to black-box models. Additionally, we propose novel metrics for evaluating the quality of the learned concepts, showing that they perform comparably to the predefined concepts.
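To make the architecture concrete, the sketch below illustrates one plausible reading of the hybrid bottleneck: image features are scored against a fixed static concept bank (e.g., text embeddings of LLM-generated concepts) concatenated with a set of learnable dynamic concept vectors, and a linear head maps the concept scores to class logits. This is not the authors' released code; the dimensions, cosine-similarity scoring, and initialization are illustrative assumptions.

```python
# Minimal sketch of a hybrid concept bottleneck (assumptions, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridConceptBottleneck(nn.Module):
    def __init__(self, feat_dim, static_bank, num_dynamic, num_classes):
        super().__init__()
        # Static bank: fixed concept embeddings (e.g., from LLM-generated text),
        # shape (num_static, feat_dim); stored as a non-trainable buffer.
        self.register_buffer("static_bank", F.normalize(static_bank, dim=-1))
        # Dynamic bank: learnable concept vectors updated during training,
        # shape (num_dynamic, feat_dim).
        self.dynamic_bank = nn.Parameter(torch.randn(num_dynamic, feat_dim) * 0.02)
        num_concepts = static_bank.size(0) + num_dynamic
        # Interpretable linear layer from concept scores to class logits.
        self.classifier = nn.Linear(num_concepts, num_classes)

    def concept_scores(self, image_feats):
        # Cosine similarity between image features and every concept vector.
        image_feats = F.normalize(image_feats, dim=-1)
        bank = torch.cat(
            [self.static_bank, F.normalize(self.dynamic_bank, dim=-1)], dim=0
        )
        return image_feats @ bank.t()  # (batch, num_static + num_dynamic)

    def forward(self, image_feats):
        scores = self.concept_scores(image_feats)
        return self.classifier(scores), scores

# Example with made-up sizes: 512-d image features, 200 static concepts,
# 32 learnable dynamic concepts, 100 classes.
model = HybridConceptBottleneck(512, torch.randn(200, 512), 32, num_classes=100)
logits, scores = model(torch.randn(8, 512))
```

Under this reading, the dynamic vectors would later be mapped back to text by the pre-trained translator described above, so that the learned concepts remain inspectable alongside the predefined ones.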