LATTICE: Evaluating Decision Support Utility of Crypto Agents
1️⃣ One-Sentence Summary
This paper introduces the LATTICE benchmark, which uses six evaluation dimensions, sixteen task types, and automatic scoring by large language models to systematically measure crypto agents' ability to support decision-making in realistic user scenarios; tests on six real-world crypto copilots reveal key differences in decision support quality across agents.
We introduce LATTICE, a benchmark for evaluating the decision support utility of crypto agents in realistic user-facing scenarios. Prior crypto agent benchmarks mainly focus on reasoning-based or outcome-based evaluation, but do not assess agents' ability to assist user decision-making. LATTICE addresses this gap by: (1) defining six evaluation dimensions that capture key decision support properties; (2) proposing 16 task types that span the end-to-end crypto copilot workflow; and (3) using LLM judges to automatically score agent outputs based on these dimensions and tasks. Crucially, the dimensions and tasks are designed to be evaluable at scale using LLM judges, without relying on ground truth from expert annotators or external data sources. In lieu of these dependencies, LATTICE's LLM judge rubrics can be continually audited and updated given new dimensions, tasks, criteria, and human feedback, thus promoting reliable and extensible evaluation. While other benchmarks often compare foundation models sharing a generic agent framework, we use LATTICE to assess production-level agents used in actual crypto copilot products, reflecting the importance of orchestration and UI/UX design in determining agent quality. In this paper, we evaluate six real-world crypto copilots on 1,200 diverse queries and report breakdowns across dimensions, tasks, and query categories. Our experiments show that most of the tested copilots achieve comparable aggregate scores, but differ more significantly on dimension-level and task-level performance. This pattern suggests meaningful trade-offs in decision support quality: users with different priorities may be better served by different copilots than the aggregate rankings alone would indicate. To support reproducible research, we open-source all LATTICE code and data used in this paper.
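The abstract describes LLM judges scoring agent outputs along six dimensions, with results then broken down by dimension, task, and in aggregate. A minimal sketch of how such per-query judge scores might be rolled up into dimension-level and aggregate scores is shown below; the dimension names and scoring scale here are illustrative placeholders, not the actual LATTICE rubric.

```python
from statistics import mean

# Hypothetical dimension names: LATTICE defines six evaluation
# dimensions, but this sketch uses illustrative stand-ins.
DIMENSIONS = ["accuracy", "relevance", "completeness",
              "actionability", "risk_awareness", "clarity"]

def aggregate_scores(judgments):
    """Roll per-query LLM-judge scores up into dimension-level
    means and a single aggregate score.

    `judgments` is a list of dicts, one per evaluated query,
    mapping dimension name -> judge score (e.g. on a 1-5 scale).
    """
    per_dimension = {
        dim: mean(j[dim] for j in judgments) for dim in DIMENSIONS
    }
    overall = mean(per_dimension.values())
    return per_dimension, overall

# Example: two judged queries for one copilot.
judgments = [
    {d: 4 for d in DIMENSIONS},
    {d: 2 for d in DIMENSIONS},
]
per_dim, overall = aggregate_scores(judgments)
```

The dimension-level breakdown is what surfaces the trade-offs the paper reports: two copilots with similar `overall` values can still differ sharply in individual `per_dim` entries.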
Source: arXiv: 2604.26235