arXiv submission date: 2026-04-21
📄 Abstract - Continuous Semantic Caching for Low-Cost LLM Serving

As Large Language Models (LLMs) become increasingly popular, caching responses so that they can be reused by users with semantically similar queries has become a vital strategy for reducing inference costs and latency. Existing caching frameworks have proposed to decide which query responses to cache by assuming a finite, known universe of discrete queries and learning their serving costs and arrival probabilities. As LLMs' pool of users and queries expands, however, such an assumption becomes increasingly untenable: real-world LLM queries reside in an infinite, continuous embedding space. In this paper, we establish the first rigorous theoretical framework for semantic LLM response caching in continuous query space under uncertainty. To bridge the gap between discrete optimization and continuous representation spaces, we introduce dynamic $\epsilon$-net discretization coupled with Kernel Ridge Regression. This design enables the system to formally quantify estimation uncertainty and generalize partial feedback on LLM query costs across continuous semantic query neighborhoods. We develop both offline learning and online adaptive algorithms optimized to reduce switching costs incurred by changing the cached responses. We prove that our online algorithm achieves a sublinear regret bound against an optimal continuous oracle, which reduces to existing bounds for discrete query models. Extensive empirical evaluations demonstrate that our framework approximates the continuous optimal cache well while also reducing computational and switching overhead compared to existing methods.
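The abstract's core mechanism is to discretize the continuous embedding space with an ε-net and use Kernel Ridge Regression to generalize observed serving costs to nearby queries. Below is a minimal, hypothetical sketch of that idea in Python (NumPy only); the greedy net construction, the RBF kernel, the class name `KRRCostModel`, and all parameter values are illustrative assumptions, not the paper's actual algorithm or its regret-optimal cache-update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=1.0):
    # Squared Euclidean distances between all row pairs of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def build_eps_net(points, eps):
    """Greedy epsilon-net: keep a point only if it lies farther than eps
    from every representative chosen so far. Every input point then ends
    up within eps of some representative."""
    reps = []
    for x in points:
        if all(np.linalg.norm(x - r) > eps for r in reps):
            reps.append(x)
    return np.array(reps)

class KRRCostModel:
    """Kernel ridge regression over query embeddings: estimates serving
    cost from partial feedback and generalizes it to semantic neighbors."""
    def __init__(self, gamma=1.0, lam=0.1):
        self.gamma, self.lam = gamma, lam

    def fit(self, X, y):
        # alpha = (K + lam * I)^{-1} y
        K = rbf_kernel(X, X, self.gamma)
        self.X = X
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self

    def predict(self, Xq):
        return rbf_kernel(Xq, self.X, self.gamma) @ self.alpha

# Toy demo: 200 query embeddings in R^8 with a smooth stand-in cost.
queries = rng.normal(size=(200, 8))
costs = np.linalg.norm(queries, axis=1)

net = build_eps_net(queries, eps=2.0)            # discrete decision set
model = KRRCostModel(gamma=0.5, lam=0.1).fit(queries[:100], costs[:100])
est = model.predict(net)                         # estimated cost per net point

# Cache the k representatives with the highest estimated serving cost,
# i.e. those whose cache hits would save the most compute.
k = 5
to_cache = net[np.argsort(est)[-k:]]
```

This sketch only covers the offline estimation step; the paper's online algorithm additionally accounts for switching costs and uncertainty quantification when updating the cached set.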

Top-level tags: llm, systems
Detailed tags: semantic caching, continuous embedding, kernel ridge regression, online learning, inference optimization

面向低成本大模型服务的连续语义缓存 / Continuous Semantic Caching for Low-Cost LLM Serving


1️⃣ One-sentence summary

This work proposes the first theoretical framework for semantic caching over a continuous query space: using dynamic discretization and kernel ridge regression, it lets an LLM serving system efficiently reuse prior responses, substantially reducing compute cost while providing guarantees on the quality of its online caching decisions.

Source: arXiv: 2604.20021