A4-Agent: An Agentic Framework for Zero-Shot Affordance Reasoning
1️⃣ One-Sentence Summary
This paper proposes A4-Agent, a zero-shot agentic framework that coordinates three pre-trained foundation models (one that imagines the interaction process, one that reasons about which object part to interact with, and one that precisely localizes the interaction region). Without any additional training, it predicts interaction regions across diverse objects and environments more accurately than supervised methods, addressing the poor generalization of existing models.
Affordance prediction, which identifies interaction regions on objects based on language instructions, is critical for embodied AI. Prevailing end-to-end models couple high-level reasoning and low-level grounding into a single monolithic pipeline and rely on training over annotated datasets, which leads to poor generalization on novel objects and unseen environments. In this paper, we move beyond this paradigm by proposing A4-Agent, a training-free agentic framework that decouples affordance prediction into a three-stage pipeline. Our framework coordinates specialized foundation models at test time: (1) a $\textbf{Dreamer}$ that employs generative models to visualize $\textit{how}$ an interaction would look; (2) a $\textbf{Thinker}$ that utilizes large vision-language models to decide $\textit{what}$ object part to interact with; and (3) a $\textbf{Spotter}$ that orchestrates vision foundation models to precisely locate $\textit{where}$ the interaction area is. By leveraging the complementary strengths of pre-trained models without any task-specific fine-tuning, our zero-shot framework significantly outperforms state-of-the-art supervised methods across multiple benchmarks and demonstrates robust generalization to real-world settings.
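The abstract names the roles of the three stages but not their interfaces or the data passed between them. The Python sketch below is a hypothetical illustration of how such a test-time, training-free orchestration could be wired together; the class and method names (`Dreamer.imagine`, `Thinker.reason`, `Spotter.ground`, `A4Pipeline`) are assumed placeholders, not the authors' actual API.

```python
from dataclasses import dataclass
from typing import Any, Protocol

Image = Any  # stand-in for an image type, e.g. a PIL image or numpy array
Mask = Any   # stand-in for a pixel-level affordance map

class Dreamer(Protocol):
    """Stage 1 (hypothetical interface): a frozen generative model."""
    def imagine(self, image: Image, instruction: str) -> Image:
        """Visualize HOW the instructed interaction would look."""
        ...

class Thinker(Protocol):
    """Stage 2 (hypothetical interface): a frozen vision-language model."""
    def reason(self, image: Image, imagined: Image, instruction: str) -> str:
        """Decide WHAT object part to interact with (e.g. 'handle', 'lid')."""
        ...

class Spotter(Protocol):
    """Stage 3 (hypothetical interface): frozen vision foundation models."""
    def ground(self, image: Image, part: str) -> Mask:
        """Locate WHERE the named part is in the image."""
        ...

@dataclass
class A4Pipeline:
    """Test-time orchestration: every stage wraps a pre-trained model,
    and no component is fine-tuned on affordance annotations."""
    dreamer: Dreamer
    thinker: Thinker
    spotter: Spotter

    def predict(self, image: Image, instruction: str) -> Mask:
        imagined = self.dreamer.imagine(image, instruction)       # how
        part = self.thinker.reason(image, imagined, instruction)  # what
        return self.spotter.ground(image, part)                   # where
```

The point of the decoupling is visible in the control flow: high-level reasoning (stages 1 and 2) produces only a part name, so the low-level grounding stage can be swapped or upgraded independently of the reasoning models.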
Source: arXiv: 2512.14442