
arXiv submission date: 2026-02-17
📄 Abstract - Latency-aware Human-in-the-Loop Reinforcement Learning for Semantic Communications

Semantic communication promises task-aligned transmission but must reconcile semantic fidelity with stringent latency guarantees in immersive and safety-critical services. This paper introduces a time-constrained human-in-the-loop reinforcement learning (TC-HITL-RL) framework that embeds human feedback, semantic utility, and latency control within a semantic-aware Open Radio Access Network (O-RAN) architecture. We formulate semantic adaptation driven by human feedback as a constrained Markov decision process (CMDP) whose state captures semantic quality, human preferences, queue slack, and channel dynamics, and solve it via a primal-dual proximal policy optimization (PPO) algorithm with action shielding and latency-aware reward shaping. The resulting policy preserves PPO-level semantic rewards while tightening the variability of both air-interface and near-real-time RAN intelligent controller processing budgets. Simulations over point-to-multipoint links with heterogeneous deadlines show that TC-HITL-RL consistently meets per-user timing constraints, outperforms baseline schedulers in reward, and stabilizes resource consumption, providing a practical blueprint for latency-aware semantic adaptation.
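The abstract names three mechanisms: latency-aware reward shaping via a dual variable, projected dual ascent on the latency constraint, and action shielding when queue slack is exhausted. A minimal sketch of how these pieces typically fit together in a primal-dual CMDP setup is shown below; the function names, the scalar latency budget, and the "override to a safe action" rule are illustrative assumptions, not the paper's actual implementation.

```python
def shaped_reward(semantic_reward: float, latency_cost: float, lam: float) -> float:
    """Latency-aware reward shaping: the Lagrangian reward r - lambda * c,
    where c is the per-step latency cost and lambda is the dual variable."""
    return semantic_reward - lam * latency_cost

def dual_update(lam: float, avg_latency_cost: float, budget: float,
                step: float = 0.05) -> float:
    """Projected dual ascent: increase lambda when the average latency cost
    exceeds the budget, and project back onto lambda >= 0 otherwise."""
    return max(0.0, lam + step * (avg_latency_cost - budget))

def shield_action(action: int, queue_slack: float, safe_action: int = 0) -> int:
    """Action shielding: when queue slack (time remaining before a deadline
    violation) is exhausted, override the policy with a safe fallback action."""
    return action if queue_slack > 0 else safe_action

# Toy primal-dual loop: lambda grows while the (fixed) cost violates the budget,
# which in a full implementation would steer the PPO policy toward lower latency.
lam, budget = 0.0, 1.0
for _ in range(10):
    lam = dual_update(lam, avg_latency_cost=1.5, budget=budget)
print(lam)  # lambda has risen from 0.0 to penalize the latency violation
```

In a full implementation the shaped reward would feed the PPO policy-gradient update (the primal step), while `dual_update` runs at a slower timescale over batch-averaged costs; the sketch only shows the constraint-handling scaffolding around PPO, not PPO itself.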

Top-level tags: systems, reinforcement learning, agents
Detailed tags: semantic communication, human-in-the-loop, latency control, constrained Markov decision process, radio access network

Latency-aware Human-in-the-Loop Reinforcement Learning for Semantic Communications


1️⃣ One-sentence summary

This paper proposes a reinforcement learning framework that combines human feedback with latency control to optimize the transmission quality and resource efficiency of semantic communication systems while meeting strict timing requirements.

Source: arXiv:2602.15640