DiffAttn: Diffusion-Based Drivers' Visual Attention Prediction with LLM-Enhanced Semantic Reasoning
1️⃣ One-Sentence Summary
This paper proposes a new framework, DiffAttn, that uses a diffusion model to predict where drivers look while driving and incorporates a large language model to strengthen the understanding of safety-critical road information, achieving the best prediction results to date across multiple benchmarks.
Drivers' visual attention provides critical cues for anticipating latent hazards and directly shapes decision-making and control maneuvers, where its absence can compromise traffic safety. To emulate drivers' perception patterns and advance visual attention prediction for intelligent vehicles, we propose DiffAttn, a diffusion-based framework that formulates this task as a conditional diffusion-denoising process, enabling more accurate modeling of drivers' attention. To capture both local and global scene features, we adopt a Swin Transformer as the encoder and design a decoder that combines a Feature Fusion Pyramid for cross-layer interaction with dense, multi-scale conditional diffusion, jointly enhancing denoising learning and modeling fine-grained local and global scene contexts. Additionally, a large language model (LLM) layer is incorporated to enhance top-down semantic reasoning and improve sensitivity to safety-critical cues. Extensive experiments on four public datasets demonstrate that DiffAttn achieves state-of-the-art (SoTA) performance, surpassing most video-based, top-down-feature-driven, and LLM-enhanced baselines. Our framework further supports interpretable, driver-centric scene understanding and has the potential to improve in-cabin human-machine interaction, risk perception, and driver-state measurement in intelligent vehicles.
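To make the "conditional diffusion-denoising process" mentioned in the abstract concrete, here is a minimal sketch of the standard formulation such frameworks build on: an attention (saliency) map is progressively noised by a forward process, and a denoiser conditioned on scene features is trained to predict the added noise. The schedule, map size, conditioning vector, and the trivial zero predictor below are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100                                  # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)     # cumulative product \bar{alpha}_t

def q_sample(x0, t, noise):
    """Forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    a = alphas_bar[t]
    return np.sqrt(a) * x0 + np.sqrt(1.0 - a) * noise

# Toy stand-ins: an 8x8 "attention map" and a scene-feature condition vector.
x0 = rng.random((8, 8))
cond = rng.random(16)

t = 50
eps = rng.standard_normal(x0.shape)
xt = q_sample(x0, t, eps)                # noisy attention map at step t

# A real model would be a conditional network eps_theta(x_t, t, cond);
# a zero predictor here just makes the noise-prediction MSE loss explicit.
eps_pred = np.zeros_like(xt)
loss = float(np.mean((eps - eps_pred) ** 2))
```

At inference time, the same schedule is run in reverse: starting from pure noise, the conditional denoiser iteratively removes predicted noise to produce the attention map for the given scene.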
From arXiv: 2603.28251