Found-RL: Foundation Model-Enhanced Reinforcement Learning for Autonomous Driving
1️⃣ One-sentence summary
This paper presents a platform named Found-RL that efficiently injects the knowledge of large vision-language models into reinforcement learning through an asynchronous batch inference framework and multiple supervision mechanisms, significantly improving the sample efficiency and semantic understanding of autonomous driving policies while preserving real-time inference speed.
Reinforcement Learning (RL) has emerged as a dominant paradigm for end-to-end autonomous driving (AD). However, RL suffers from sample inefficiency and a lack of semantic interpretability in complex scenarios. Foundation models, particularly Vision-Language Models (VLMs), can mitigate this by offering rich, context-aware knowledge, yet their high inference latency hinders deployment in high-frequency RL training loops. To bridge this gap, we present Found-RL, a platform tailored to efficiently enhance RL for AD using foundation models. A core innovation is the asynchronous batch inference framework, which decouples heavy VLM reasoning from the simulation loop, resolving latency bottlenecks to support real-time learning. We introduce diverse supervision mechanisms: Value-Margin Regularization (VMR) and Advantage-Weighted Action Guidance (AWAG), which distill expert-like VLM action suggestions into the RL policy. Additionally, we adopt high-throughput CLIP for dense reward shaping. We address CLIP's dynamic blindness via Conditional Contrastive Action Alignment, which conditions prompts on discretized speed/command and yields a normalized, margin-based bonus from context-specific action-anchor scoring. Found-RL provides an end-to-end pipeline for fine-tuned VLM integration and shows that a lightweight RL model can approach the performance of billion-parameter VLMs while sustaining real-time inference (approx. 500 FPS). Code, data, and models will be publicly available at this https URL.
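The abstract's Conditional Contrastive Action Alignment can be illustrated with a minimal sketch. This is not the paper's implementation: the speed bins, command set, action anchors, prompt template, `build_prompts` and `margin_bonus` helpers, and the softmax temperature are all illustrative assumptions; the raw similarities would come from a CLIP image-text encoder in practice.

```python
import numpy as np

# Hypothetical discretization of the driving context (assumed, not from the paper).
SPEED_BINS = ["stopped", "slow", "fast"]
COMMANDS = ["follow lane", "turn left", "turn right"]
ACTIONS = ["accelerate", "brake", "steer left", "steer right", "keep course"]

def build_prompts(speed_bin: str, command: str) -> list[str]:
    """Condition each action-anchor prompt on the discretized speed/command."""
    ctx = f"the ego vehicle is {speed_bin} and is instructed to {command}"
    return [f"{ctx}; the correct maneuver is to {a}" for a in ACTIONS]

def margin_bonus(anchor_sims, action_idx: int, temperature: float = 0.1) -> float:
    """Normalized, margin-based bonus from context-specific anchor scores.

    anchor_sims: raw CLIP image-text similarities, one per action anchor.
    Returns a bonus in (-1, 1): positive when the taken action's anchor
    dominates, negative when some other anchor scores higher.
    """
    sims = np.asarray(anchor_sims, dtype=np.float64)
    probs = np.exp(sims / temperature)
    probs /= probs.sum()                        # softmax-normalize anchor scores
    taken = probs[action_idx]
    best_other = np.max(np.delete(probs, action_idx))
    return float(np.tanh(taken - best_other))   # squashed margin in (-1, 1)
```

In an RL loop, this bonus would be added to the environment reward each step, so the dense CLIP signal shapes the policy without replacing the task reward.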
Source: arXiv:2602.10458