arXiv submission date: 2026-02-05
📄 Abstract - BLITZRANK: Principled Zero-shot Ranking Agents with Tournament Graphs

Large language models have emerged as powerful zero-shot rerankers for retrieval-augmented generation, offering strong generalization without task-specific training. However, existing LLM reranking methods either rely on heuristics that fail to fully exploit the information revealed by each ranking decision or are inefficient when they do. We introduce a tournament graph framework that provides a principled foundation for $k$-wise reranking. Our key observation is that each $k$-document comparison reveals a complete tournament of $\binom{k}{2}$ pairwise preferences. These tournaments are aggregated into a global preference graph, whose transitive closure yields many additional orderings without further model invocations. We formalize when a candidate's rank is certifiably determined and design a query schedule that greedily maximizes information gain towards identifying the top-$m$ items. Our framework also gracefully handles non-transitive preferences - cycles induced by LLM judgments - by collapsing them into equivalence classes that yield principled tiered rankings. Empirically, across 14 benchmarks and 5 LLMs, our method achieves Pareto dominance over existing methods: matching or exceeding accuracy while requiring 25-40% fewer tokens than comparable approaches, and 7$\times$ fewer than pairwise methods at near-identical quality.
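
The core mechanism described above can be made concrete with a short sketch. The snippet below is a minimal illustration, not the paper's implementation: `add_tournament`, `inferred_preferences`, and `tiered_ranking` are hypothetical names, and `networkx` is assumed only as a convenient source of graph utilities. It shows how one $k$-wise comparison expands into $\binom{k}{2}$ pairwise edges, how the transitive closure of the aggregated graph yields extra orderings without further model calls, and how preference cycles would collapse into strongly connected components to form tiers.

```python
"""Minimal sketch of the tournament-graph idea (assumptions, not the authors' code)."""
from itertools import combinations

import networkx as nx  # assumed dependency for graph utilities


def add_tournament(graph: nx.DiGraph, ranked_docs: list[str]) -> None:
    """Record the C(k,2) pairwise preferences implied by one k-wise LLM call.

    `ranked_docs` is the model's ordering of a k-document window, best first.
    An edge u -> v means "u is preferred to v".
    """
    for winner, loser in combinations(ranked_docs, 2):
        graph.add_edge(winner, loser)


def inferred_preferences(graph: nx.DiGraph) -> nx.DiGraph:
    """Transitive closure: orderings implied by chains of observed preferences,
    obtained without any additional model invocations."""
    return nx.transitive_closure(graph, reflexive=False)


def tiered_ranking(graph: nx.DiGraph) -> list[set[str]]:
    """Collapse preference cycles (non-transitive LLM judgments) into
    equivalence classes (strongly connected components) and order the
    resulting tiers by a topological order of the condensation DAG."""
    condensation = nx.condensation(graph)  # DAG over SCCs
    return [set(condensation.nodes[c]["members"])
            for c in nx.topological_sort(condensation)]


if __name__ == "__main__":
    g = nx.DiGraph()
    # Two hypothetical k=3 comparisons returned by an LLM reranker.
    add_tournament(g, ["d1", "d3", "d5"])   # d1 > d3 > d5
    add_tournament(g, ["d3", "d2", "d4"])   # d3 > d2 > d4

    closure = inferred_preferences(g)
    # d1 > d2 follows from d1 > d3 and d3 > d2, with no extra LLM call.
    print(closure.has_edge("d1", "d2"))     # True

    # All tiers are singletons here because no cycles were observed.
    print(tiered_ranking(g))
```

When cycles do appear, `nx.condensation` merges their members into one tier, which is how the tiered ranking described in the abstract would emerge from the same data structure.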

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: zero-shot ranking, retrieval-augmented generation, tournament graph, preference aggregation, efficiency

BLITZRANK: Principled Zero-shot Ranking Agents with Tournament Graphs


1️⃣ One-sentence summary

This paper proposes a new tournament-graph-based ranking method for LLMs: by carefully scheduling multi-document comparisons, it keeps accuracy high while substantially cutting computation, letting large language models rerank retrieved documents far more efficiently. A small sketch of such a comparison schedule follows.
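
To complement the tournament sketch above, here is an equally hedged illustration of a greedy query schedule: pick the next k-document batch that would resolve the most pairs whose order the closure does not yet imply. Counting unresolved pairs is a simplified stand-in for the paper's information-gain objective toward the top-$m$ items, and `unresolved_pairs` / `next_batch` are hypothetical names.

```python
"""Greedy batch selection sketch (simplified proxy, not the paper's objective)."""
from itertools import combinations

import networkx as nx  # assumed dependency, as in the previous sketch


def unresolved_pairs(closure: nx.DiGraph, batch: tuple[str, ...]) -> int:
    """Count pairs in `batch` whose relative order is not implied by the closure."""
    return sum(
        1
        for u, v in combinations(batch, 2)
        if not closure.has_edge(u, v) and not closure.has_edge(v, u)
    )


def next_batch(graph: nx.DiGraph, candidates: list[str], k: int) -> tuple[str, ...]:
    """Greedily pick the k-document batch that would resolve the most unknown
    pairs. Exhaustive over C(n, k) batches: fine for illustration, too slow
    for large candidate pools."""
    closure = nx.transitive_closure(graph, reflexive=False)
    return max(
        combinations(candidates, k),
        key=lambda batch: unresolved_pairs(closure, batch),
    )


# Example: given the graph `g` built earlier, choose the next 3-document query.
# next_batch(g, ["d1", "d2", "d3", "d4", "d5"], k=3)
```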

Source: arXiv: 2602.05448