LLM-as-a-Judge for Time Series Explanations
1️⃣ One-Sentence Summary
This paper proposes and validates a new approach: using large language models to directly assess whether textual explanations generated from time series data are correct, without any ground-truth answers, and finds that LLMs are more reliable as "judges" than as "generators."
Evaluating the factual correctness of LLM-generated natural language explanations grounded in time series data remains an open challenge. Although modern models generate textual interpretations of numerical signals, existing evaluation methods are limited: reference-based similarity metrics and consistency-checking models require ground-truth explanations, while traditional time series methods operate purely on numerical values and cannot assess free-form textual reasoning. Thus, no general-purpose method exists to directly verify whether an explanation is faithful to the underlying time series data without predefined references or task-specific rules. We study large language models as both generators and evaluators of time series explanations in a reference-free setting, where, given a time series, a question, and a candidate explanation, the evaluator assigns a ternary correctness label based on pattern identification, numeric accuracy, and answer faithfulness, enabling principled scoring and comparison. To support this, we construct a synthetic benchmark of 350 time series cases across seven query types, each paired with correct, partially correct, and incorrect explanations. We evaluate models across four tasks: explanation generation, relative ranking, independent scoring, and multi-anomaly detection. Results show a clear asymmetry: generation is highly pattern-dependent and exhibits systematic failures on certain query types, with accuracies ranging from 0.00 to 0.12 for Seasonal Drop and Volatility Shift, versus 0.94 to 0.96 for Structural Break, while evaluation is more stable, with models correctly ranking and scoring explanations even when their own outputs are incorrect. These findings demonstrate the feasibility of data-grounded, LLM-based evaluation for time series explanations and highlight the potential of LLMs as reliable evaluators of data-grounded reasoning in the time series domain.
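The abstract describes an evaluator that assigns a ternary correctness label from three criteria: pattern identification, numeric accuracy, and answer faithfulness. A minimal sketch of how such sub-judgments might be aggregated into the ternary label is shown below; the aggregation rule (all three met → correct, none met → incorrect, otherwise partially correct) is an assumption for illustration, not the paper's actual rubric.

```python
from dataclasses import dataclass


@dataclass
class JudgeCriteria:
    """Sub-judgments an LLM judge might emit for one candidate explanation.

    The three criteria names come from the abstract; representing them as
    booleans is a simplifying assumption.
    """
    pattern_identified: bool    # did the explanation name the right pattern?
    numerically_accurate: bool  # are the cited values/magnitudes correct?
    answer_faithful: bool       # does the conclusion follow from the data?


def ternary_label(c: JudgeCriteria) -> str:
    """Aggregate the three criteria into a ternary correctness label.

    Hypothetical rule: all criteria met -> "correct", none met ->
    "incorrect", any mixture -> "partially_correct".
    """
    met = sum([c.pattern_identified, c.numerically_accurate, c.answer_faithful])
    if met == 3:
        return "correct"
    if met == 0:
        return "incorrect"
    return "partially_correct"


# Example: a candidate that identifies the pattern but gets the numbers wrong.
label = ternary_label(
    JudgeCriteria(pattern_identified=True,
                  numerically_accurate=False,
                  answer_faithful=True)
)
print(label)  # partially_correct
```

In practice the sub-judgments would come from prompting an LLM judge with the time series, the question, and the candidate explanation; the benchmark's paired correct / partially correct / incorrect explanations then give a natural target for checking the judge's labels.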
Source: arXiv:2604.02118