arXiv submission date: 2026-01-13
📄 Abstract - The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in the Workplace Scenarios

The rapid evolution of Multi-modal Large Language Models (MLLMs) has advanced workflow automation; however, existing research mainly targets performance upper bounds in static environments, overlooking robustness for stochastic real-world deployment. We identify three key challenges: dynamic task scheduling, active exploration under uncertainty, and continuous learning from experience. To bridge this gap, we introduce M3E, a dynamic evaluation environment that simulates a "trainee" agent continuously exploring a novel setting. Unlike traditional benchmarks, M3E evaluates agents along three dimensions: (1) context-aware scheduling for streaming tasks with varying priorities; (2) prudent information acquisition to reduce hallucination via active exploration; and (3) continuous evolution by distilling generalized strategies from rule-based, dynamically generated tasks. Experiments show that cutting-edge agents have significant deficiencies in dynamic environments, especially in active exploration and continual learning. Our work establishes a framework for assessing agent reliability, shifting evaluation from static tests to realistic, production-oriented scenarios. Our codes are available at this https URL
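To make the first evaluation dimension concrete, here is a minimal, hypothetical sketch of priority-based scheduling over a stream of incoming tasks. This is not code from the paper: the `Task` fields, `run_scheduler` function, and the arrival batches are illustrative assumptions only.

```python
import heapq
from dataclasses import dataclass, field

# Hypothetical illustration of "context-aware scheduling for streaming tasks
# with varying priorities". Names and structure are assumptions, not M3E's API.

@dataclass(order=True)
class Task:
    priority: int                       # lower value = more urgent
    name: str = field(compare=False)    # excluded from priority comparison

def run_scheduler(arrival_batches):
    """Interleave newly arriving tasks with pending work, always acting on
    the most urgent pending task first."""
    pending = []
    for batch in arrival_batches:
        for task in batch:
            heapq.heappush(pending, task)
        # The agent handles one task per "tick", then checks for new arrivals.
        if pending:
            current = heapq.heappop(pending)
            print(f"working on: {current.name} (priority {current.priority})")
    # Drain whatever remains once the stream ends.
    while pending:
        print(f"working on: {heapq.heappop(pending).name}")

run_scheduler([
    [Task(2, "summarize meeting notes")],
    [Task(1, "reply to urgent email"), Task(3, "archive old files")],
    [],
])
```

The point of the sketch is that the agent cannot simply process tasks in arrival order: a lower-priority task already in progress may be preempted in planning by a more urgent one that arrives mid-stream, which is the kind of behavior the benchmark's scheduling dimension is described as probing.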

Top tags: agents, benchmark, model evaluation
Detailed tags: dynamic evaluation, active exploration, continual learning, task scheduling, workflow automation

The Agent's First Day: Benchmarking Learning, Exploration, and Scheduling in the Workplace Scenarios


1️⃣ One-sentence summary

This paper introduces M3E, a dynamic evaluation environment for testing AI agents on dynamic task scheduling, active exploration, and continual learning in simulated real-world workplace scenarios, and finds that current state-of-the-art models still show clear deficiencies in these abilities.

Source: arXiv 2601.08173