Accuracy-Delay Trade-Off in LLM Offloading via Token-Level Uncertainty
1️⃣ One-Sentence Summary
This paper proposes an intelligent offloading framework based on token-level uncertainty that dynamically decides whether to run large language model inference locally or on an edge server, effectively reducing latency in multi-user environments while preserving accuracy.
Large language models (LLMs) offer significant potential for intelligent mobile services but are computationally intensive for resource-constrained devices. Mobile edge computing (MEC) allows such devices to offload inference tasks to edge servers (ESs), yet introduces latency due to communication and server-side queuing, especially in multi-user environments. In this work, we propose an uncertainty-aware offloading framework that dynamically decides whether to perform inference locally or offload it to the ES, based on token-level uncertainty and resource constraints. We define a margin-based token-level uncertainty metric and demonstrate its correlation with model accuracy. Leveraging this metric, we design a greedy offloading algorithm (GOA) that minimizes delay while maintaining accuracy by prioritizing offloading for high-uncertainty queries. Our experiments show that GOA consistently achieves a favorable trade-off, outperforming baseline strategies in both accuracy and latency across varying user densities, and operates with practical computation time. These results establish GOA as a scalable and effective solution for LLM inference in MEC environments.
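To make the mechanism concrete, here is a minimal Python sketch of what a margin-based token-level uncertainty metric and a greedy, capacity-constrained offloading rule could look like. The function names (`margin_uncertainty`, `greedy_offload`), the one-minus-top-2-gap formulation, the mean aggregation over tokens, and the capacity-only constraint are all illustrative assumptions; the paper's exact metric definition and delay model may differ.

```python
import torch

def margin_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Per-token margin uncertainty: 1 minus the gap between the top-2
    softmax probabilities, so a small margin means high uncertainty.
    `logits` has shape (seq_len, vocab_size)."""
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values   # (seq_len, 2)
    margin = top2[:, 0] - top2[:, 1]      # in [0, 1]
    return 1.0 - margin                   # in [0, 1]

def query_uncertainty(logits: torch.Tensor) -> float:
    """Aggregate token-level uncertainty into one score per query
    (mean over tokens; the paper's aggregation may differ)."""
    return margin_uncertainty(logits).mean().item()

def greedy_offload(scores: list[float], capacity: int) -> set[int]:
    """Toy greedy rule in the spirit of GOA: rank queries by uncertainty
    and offload the most uncertain ones while the edge server has slots
    left; the rest run locally. The real GOA also accounts for
    communication and queuing delay, omitted here."""
    ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return set(ranked[:capacity])

# Example: 5 users, edge server can serve 2 concurrently.
scores = [0.12, 0.87, 0.45, 0.91, 0.30]
print(greedy_offload(scores, capacity=2))  # {1, 3}: the most uncertain queries
```

The intuition this sketch captures is the paper's core trade-off: high-uncertainty queries are the ones where the (more accurate) edge model is worth the communication and queuing delay, while confident queries can stay on-device at near-zero delay cost.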
Source: arXiv: 2602.07958