HG-Lane: High-Fidelity Generation of Lane Scenes under Adverse Weather and Lighting Conditions without Re-annotation
Abstract
Lane detection is a crucial task in autonomous driving and is essential for ensuring the safe operation of vehicles. However, current datasets such as CULane and TuSimple contain relatively little data under adverse weather conditions, such as rain, snow, and fog, which makes detection models unreliable in these conditions and can lead to serious safety-critical failures on the road. To this end, we propose \textbf{\textit{HG-Lane}}, a \textbf{H}igh-fidelity \textbf{G}eneration framework for \textbf{Lane} Scenes under adverse weather and lighting conditions, which requires neither re-annotation nor retraining. Based on our framework, we further construct a benchmark of 30,000 images covering adverse weather and lighting conditions. Experimental results demonstrate that our method consistently and significantly improves the performance of all evaluated lane detection networks. Taking the state-of-the-art CLRNet as an example, the overall mF1 on our benchmark increases by 20.87%, and the F1@50 scores for the overall, normal, snow, rain, fog, night, and dusk categories increase by 19.75%, 8.63%, 38.8%, 14.96%, 26.84%, 21.5%, and 12.04%, respectively. Code and dataset are included in the supplementary materials.