CoFL: Continuous Flow Fields for Language-Conditioned Navigation
1️⃣ One-sentence summary
This paper presents CoFL, an end-to-end navigation model that maps a bird's-eye-view observation and a language instruction directly to a continuous flow field, yielding smooth, reactive robot trajectories and outperforming existing methods in both simulated and real-world experiments.
Language-conditioned navigation pipelines often rely on brittle modular components or costly action-sequence generation. To address these limitations, we present CoFL, an end-to-end policy that directly maps a bird's-eye view (BEV) observation and a language instruction to a continuous flow field for navigation. Instead of predicting discrete action tokens or sampling action chunks via iterative denoising, CoFL outputs instantaneous velocities that can be queried at arbitrary 2D projected locations. Trajectories are obtained by numerical integration of the predicted field, producing smooth motion that remains reactive under closed-loop execution. To enable large-scale training, we build a dataset of over 500k BEV image-instruction pairs, each procedurally annotated with a flow field and a trajectory derived from BEV semantic maps built on Matterport3D and ScanNet. By training on a mixed distribution, CoFL significantly outperforms modular Vision-Language Model (VLM)-based planners and generative policy baselines on strictly unseen scenes. Finally, we deploy CoFL zero-shot in real-world experiments with overhead BEV observations across multiple layouts, maintaining reliable closed-loop control and a high success rate.
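The abstract describes querying instantaneous velocities at arbitrary 2D locations and integrating them into a trajectory. The following is a minimal sketch of that integration step, assuming a hypothetical `flow_field` function as a stand-in for the learned policy (in CoFL this would be predicted from the BEV observation and instruction; here it is a toy analytic field pointing toward a fixed goal):

```python
import math

def flow_field(pos, goal=(5.0, 5.0)):
    """Toy stand-in for the learned field: unit-speed velocity toward a goal.
    CoFL would instead predict this velocity from a BEV image and an
    instruction; the query interface (any 2D point -> velocity) is the same."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist < 1e-6:
        return (0.0, 0.0)
    return (dx / dist, dy / dist)

def integrate_trajectory(start, n_steps=100, dt=0.1):
    """Roll out a trajectory by Euler integration of the velocity field."""
    x, y = start
    traj = [(x, y)]
    for _ in range(n_steps):
        vx, vy = flow_field((x, y))  # query the field at the current location
        x, y = x + dt * vx, y + dt * vy
        traj.append((x, y))
    return traj

traj = integrate_trajectory((0.0, 0.0))
```

Because the field can be re-queried at every step, the same mechanism supports closed-loop execution: if the robot is perturbed off the planned path, the next velocity query starts from the new position.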
Source: arXiv:2603.02854