📄 Paper Summary
LiveTradeBench: Seeking Real-World Alpha with Large Language Models
1️⃣ One-Sentence Summary
This paper introduces LiveTradeBench, a live trading benchmark for evaluating the decision-making ability of large language models in dynamic financial markets. It finds that high scores on conventional static benchmarks do not imply strong real trading performance, exposing a capability gap for AI models under real-world uncertainty.
2️⃣ Abstract
Large language models (LLMs) achieve strong performance across benchmarks--from knowledge quizzes and math reasoning to web-agent tasks--but these tests occur in static settings, lacking real dynamics and uncertainty. Consequently, they evaluate isolated reasoning or problem-solving rather than decision-making under uncertainty. To address this, we introduce LiveTradeBench, a live trading environment for evaluating LLM agents in realistic and evolving markets. LiveTradeBench follows three design principles: (i) Live data streaming of market prices and news, eliminating dependence on offline backtesting and preventing information leakage while capturing real-time uncertainty; (ii) a portfolio-management abstraction that extends control from single-asset actions to multi-asset allocation, integrating risk management and cross-asset reasoning; and (iii) multi-market evaluation across structurally distinct environments--U.S. stocks and Polymarket prediction markets--differing in volatility, liquidity, and information flow. At each step, an agent observes prices, news, and its portfolio, then outputs percentage allocations that balance risk and return. Using LiveTradeBench, we run 50-day live evaluations of 21 LLMs across families. Results show that (1) high LMArena scores do not imply superior trading outcomes; (2) models display distinct portfolio styles reflecting risk appetite and reasoning dynamics; and (3) some LLMs effectively leverage live signals to adapt decisions. These findings expose a gap between static evaluation and real-world competence, motivating benchmarks that test sequential decision making and consistency under live uncertainty.
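The abstract describes the per-step protocol: the agent observes prices, news, and its current portfolio, then outputs percentage allocations over assets (plus cash). A minimal sketch of that loop, using hypothetical names (`Observation`, `decide`, `normalize_allocations` are illustrative, not the benchmark's actual API):

```python
# Illustrative sketch of one LiveTradeBench-style decision step.
# All class/function names here are assumptions for exposition.
from dataclasses import dataclass

@dataclass
class Observation:
    prices: dict     # asset symbol -> latest price
    news: list       # recent headlines visible to the agent
    portfolio: dict  # asset symbol -> current % allocation (includes "CASH")

def normalize_allocations(raw: dict) -> dict:
    """Clip negative weights and rescale so the percentages sum to 100."""
    clipped = {asset: max(0.0, w) for asset, w in raw.items()}
    total = sum(clipped.values())
    if total == 0:
        return {"CASH": 100.0}  # degenerate case: hold everything in cash
    return {asset: 100.0 * w / total for asset, w in clipped.items()}

def decide(obs: Observation) -> dict:
    # Placeholder policy: in an actual run, the observation would be serialized
    # into an LLM prompt and the model's reply parsed into raw weights.
    raw = {asset: 1.0 for asset in obs.prices}
    raw["CASH"] = 1.0
    return normalize_allocations(raw)

obs = Observation(
    prices={"AAPL": 210.0, "NVDA": 130.0},
    news=["..."],
    portfolio={"CASH": 100.0},
)
alloc = decide(obs)
print(alloc)  # equal-weight split across AAPL, NVDA, and CASH
```

The normalization step reflects the constraint implied by "percentage allocations": outputs must be non-negative and sum to 100%, so any raw model output needs clipping and rescaling before execution.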