arXiv submission date: 2026-02-08
📄 Abstract - Graph-Enhanced Deep Reinforcement Learning for Multi-Objective Unrelated Parallel Machine Scheduling

The Unrelated Parallel Machine Scheduling Problem (UPMSP) with release dates, setups, and eligibility constraints presents a significant multi-objective challenge. Traditional methods struggle to balance minimizing Total Weighted Tardiness (TWT) and Total Setup Time (TST). This paper proposes a Deep Reinforcement Learning framework using Proximal Policy Optimization (PPO) and a Graph Neural Network (GNN). The GNN effectively represents the complex state of jobs, machines, and setups, allowing the PPO agent to learn a direct scheduling policy. Guided by a multi-objective reward function, the agent simultaneously minimizes TWT and TST. Experimental results on benchmark instances demonstrate that our PPO-GNN agent significantly outperforms a standard dispatching rule and a metaheuristic, achieving a superior trade-off between both objectives. This provides a robust and scalable solution for complex manufacturing scheduling.
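The abstract states that the agent is guided by a multi-objective reward function that simultaneously minimizes Total Weighted Tardiness (TWT) and Total Setup Time (TST). The paper does not give the exact formula here, so the following is a minimal sketch under the common assumption of a weighted-sum reward computed from the per-step increase of each objective; the function name and the weights `w_twt`/`w_tst` are illustrative, not from the paper.

```python
# Hypothetical sketch of a weighted multi-objective reward for a
# scheduling step: the agent is penalized by the weighted increase in
# Total Weighted Tardiness (TWT) and Total Setup Time (TST) caused by
# its latest dispatching decision. Weights and the incremental form
# are assumptions for illustration only.

def multi_objective_reward(prev_twt: float, prev_tst: float,
                           new_twt: float, new_tst: float,
                           w_twt: float = 0.7, w_tst: float = 0.3) -> float:
    """Return the negative weighted increase in TWT and TST."""
    delta_twt = new_twt - prev_twt  # tardiness added by this step
    delta_tst = new_tst - prev_tst  # setup time added by this step
    return -(w_twt * delta_twt + w_tst * delta_tst)

# Example: a decision adds 5 units of tardiness and 2 of setup time.
r = multi_objective_reward(prev_twt=10.0, prev_tst=4.0,
                           new_twt=15.0, new_tst=6.0)
print(r)  # -4.1
```

A reward of this shape lets a single scalar signal trade off the two objectives; adjusting `w_twt` and `w_tst` would move the learned policy along the TWT/TST trade-off curve.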

Top-level tags: reinforcement learning, systems, machine learning
Detailed tags: scheduling optimization, graph neural networks, proximal policy optimization, multi-objective learning, manufacturing systems

Graph-Enhanced Deep Reinforcement Learning for Multi-Objective Unrelated Parallel Machine Scheduling


1️⃣ One-Sentence Summary

This paper proposes a new method combining graph neural networks and reinforcement learning to solve a complex multi-objective production scheduling problem; it simultaneously and efficiently reduces job tardiness and machine setup time, outperforming both a traditional dispatching rule and a metaheuristic.

Source: arXiv 2602.08052