arXiv submission date: 2026-02-02
📄 Abstract - A2Eval: Agentic and Automated Evaluation for Embodied Brain

Current embodied VLM evaluation relies on static, expert-defined, manually annotated benchmarks that exhibit severe redundancy and coverage imbalance. This labor-intensive paradigm drains computational and annotation resources, inflates costs, and distorts model rankings, ultimately stifling iterative development. To address this, we propose Agentic Automatic Evaluation (A2Eval), the first agentic framework that automates benchmark curation and evaluation through two collaborative agents. The Data Agent autonomously induces capability dimensions and assembles a balanced, compact evaluation suite, while the Eval Agent synthesizes and validates executable evaluation pipelines, enabling fully autonomous, high-fidelity assessment. Evaluated across 10 benchmarks and 13 models, A2Eval compresses evaluation suites by 85%, reduces overall computational costs by 77%, and delivers a 4.6x speedup while preserving evaluation quality. Crucially, A2Eval corrects systematic ranking biases, improves human alignment to Spearman's rho=0.85, and maintains high ranking fidelity (Kendall's tau=0.81), establishing a new standard for high-fidelity, low-cost embodied assessment. Our code and data will be made public soon.
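The abstract only names the two agents, so here is a minimal, hypothetical Python sketch of what the Data Agent's curation step could look like: group benchmark items by an induced capability dimension, then sample an equal number from each group to obtain a balanced, compact suite. The dimension names, item format, and `curate_suite` function are illustrative assumptions, not the paper's actual interface.

```python
import random
from collections import defaultdict

def curate_suite(items, per_dimension=20, seed=0):
    """Return a compact, dimension-balanced subset of `items`.

    items: list of dicts with at least a "dimension" key.
    (Hypothetical sketch of a Data-Agent-style curation step;
    not the paper's API.)
    """
    rng = random.Random(seed)
    by_dim = defaultdict(list)
    for item in items:
        by_dim[item["dimension"]].append(item)

    suite = []
    for dim, group in sorted(by_dim.items()):
        k = min(per_dimension, len(group))  # cap at group size
        suite.extend(rng.sample(group, k))
    return suite

if __name__ == "__main__":
    # Toy corpus: 1000 items heavily skewed toward one dimension,
    # mimicking the coverage imbalance the abstract describes.
    dims = ["navigation"] * 700 + ["manipulation"] * 200 + ["planning"] * 100
    corpus = [{"id": i, "dimension": d} for i, d in enumerate(dims)]
    suite = curate_suite(corpus, per_dimension=20)
    print(len(suite), "items kept out of", len(corpus))  # 60 out of 1000
```

Equal per-dimension sampling is just one plausible reading of "balanced, compact"; the paper's agents presumably induce the dimensions themselves rather than taking them as labels.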

Top tags: agents, model evaluation, benchmark
Detailed tags: embodied AI, automatic evaluation, benchmark curation, agentic framework, cost reduction

A2Eval: Agentic and Automated Evaluation for Embodied Brain


1️⃣ One-Sentence Summary

This paper proposes an automated evaluation framework called A2Eval, which uses two collaborating agents to automatically assemble balanced test suites and execute evaluations. It sharply reduces the cost and time of traditional embodied-AI model evaluation while correcting ranking biases, making the results more reliable and efficient.
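To make the fidelity numbers concrete, the sketch below shows how a compressed suite's model ranking could be checked against a reference (e.g., full-suite or human) ranking using the same two statistics the abstract reports. The scores are made up for illustration; only the choice of metrics mirrors the paper.

```python
from scipy.stats import spearmanr, kendalltau

# Hypothetical check: model scores from a reference ranking vs. the
# compressed suite. The values are invented for illustration.
full_suite_scores = [0.81, 0.74, 0.69, 0.66, 0.58, 0.41]  # reference
compact_scores    = [0.79, 0.75, 0.65, 0.67, 0.55, 0.40]  # compressed suite

rho, _ = spearmanr(full_suite_scores, compact_scores)
tau, _ = kendalltau(full_suite_scores, compact_scores)
print(f"Spearman's rho = {rho:.2f}, Kendall's tau = {tau:.2f}")
```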

Source: arXiv 2602.01640