HTNav: A Hybrid Navigation Framework with Tiered Structure for Urban Aerial Vision-and-Language Navigation
Abstract
Inspired by the general Vision-and-Language Navigation (VLN) task, aerial VLN has attracted widespread attention owing to its application value in areas such as logistics delivery and urban inspection. However, existing methods face several challenges in complex urban environments: insufficient generalization to unseen scenes, suboptimal long-distance path planning, and inadequate understanding of spatial continuity. To address these challenges, we propose HTNav, a new collaborative navigation framework that combines Imitation Learning (IL) and Reinforcement Learning (RL) in a hybrid IL-RL paradigm. The framework adopts a staged training mechanism that stabilizes the basic navigation policy while enhancing its environmental exploration capability. By integrating a tiered decision-making mechanism, it enables collaborative interaction between macro-level path planning and fine-grained action control. Furthermore, a map representation learning module is introduced to deepen the understanding of spatial continuity in open domains. On the CityNav benchmark, our method achieves state-of-the-art performance across all scene levels and task difficulties. Experimental results demonstrate that the framework significantly improves navigation precision and robustness in complex urban environments.