

Benchmarking Segmentation Models with Mask-Preserved Attribute Editing

Zijin Yin · Kongming Liang · Bing Li · Zhanyu Ma · Jun Guo

Arch 4A-E Poster #284
Fri 21 Jun 10:30 a.m. PDT — noon PDT


When deploying segmentation models in practice, it is critical to evaluate their behavior in varied and complex scenes. Different from previous evaluation paradigms, which only consider global attribute variations (e.g., adverse weather), we investigate both local and global attribute variations for robustness evaluation. To achieve this, we construct a mask-preserved attribute editing pipeline that edits the visual attributes of real images with precise control of structural information, so the original segmentation labels can be reused for the edited images. Using this pipeline, we construct a benchmark covering both object and image attributes (e.g., color, material, pattern, style). We evaluate a broad variety of semantic segmentation models, spanning from conventional close-set models to recent open-vocabulary large models, on their robustness to different types of variations. We find that both local and global attribute variations affect segmentation performance, and that model sensitivity diverges across variation types. We argue that local attributes are as important as global attributes and should be considered in the robustness evaluation of segmentation models. Code:
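The evaluation protocol the abstract describes — because the edit preserves object structure, the original label map is reused for both the unedited and the attribute-edited image, and robustness is measured as the performance drop between them — can be sketched as follows. This is a minimal illustration, not the authors' released code; the names `miou`, `robustness_drop`, and the stand-in `model` are hypothetical.

```python
import numpy as np

def miou(pred, label, num_classes):
    """Mean intersection-over-union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, label == c).sum()
        union = np.logical_or(pred == c, label == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

def robustness_drop(model, original, edited, label, num_classes):
    """Performance drop under a mask-preserved attribute edit.

    Since the edit leaves the scene's structure (and hence the
    segmentation masks) unchanged, the same `label` map serves as
    ground truth for both images.
    """
    base = miou(model(original), label, num_classes)
    shifted = miou(model(edited), label, num_classes)
    return base - shifted
```

A model that is robust to the attribute variation yields a drop near zero; larger drops indicate sensitivity to that variation type.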
