arXiv submission date: 2026-03-15
📄 Abstract - Learning to Order: Task Sequencing as In-Context Optimization

Task sequencing (TS) is one of the core open problems in Deep Learning, arising in a plethora of real-world domains, from robotic assembly lines to autonomous driving. Unfortunately, prior work has not convincingly demonstrated the ability of meta-learned TS methods to generalize to new TS problems given only a few initial demonstrations. In this paper, we demonstrate that deep neural networks can meta-learn over an infinite prior of synthetically generated TS problems and achieve few-shot generalization. We meta-learn a transformer-based architecture over datasets of sequencing trajectories generated from a prior distribution that samples sequencing problems as paths in directed graphs. In a large-scale experiment, we provide ample empirical evidence that our meta-learned models discover optimal task sequences significantly faster than non-meta-learned baselines.
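The abstract describes a prior that samples sequencing problems as paths in directed graphs. The paper does not specify the sampler; the following is a minimal illustrative sketch, assuming a simple random-DAG prior where an edge u → v means task u must precede task v, and a valid demonstration trajectory is any topological ordering (computed here with Kahn's algorithm).

```python
import random

def sample_ts_problem(n_tasks=6, edge_prob=0.3, seed=0):
    """Sample a synthetic task-sequencing problem as a random DAG.

    Edges only go from lower to higher task index, which guarantees
    acyclicity; (u, v) means task u must be executed before task v.
    """
    rng = random.Random(seed)
    edges = set()
    for u in range(n_tasks):
        for v in range(u + 1, n_tasks):
            if rng.random() < edge_prob:
                edges.add((u, v))
    return edges

def valid_order(n_tasks, edges):
    """Return one precedence-respecting task sequence (Kahn's topological sort)."""
    indeg = [0] * n_tasks
    for _, v in edges:
        indeg[v] += 1
    ready = [u for u in range(n_tasks) if indeg[u] == 0]
    order = []
    while ready:
        u = ready.pop()
        order.append(u)
        for a, b in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    ready.append(b)
    return order

edges = sample_ts_problem()
order = valid_order(6, edges)
# every precedence constraint is respected in the sampled trajectory
assert all(order.index(u) < order.index(v) for (u, v) in edges)
```

Many such (problem, trajectory) pairs would form the synthetic meta-training dataset; the transformer then conditions on a few demonstrations in context to order tasks in unseen problems.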

Top-level tags: machine learning, agents, systems
Detailed tags: task sequencing, meta-learning, few-shot generalization, transformer, directed graphs

Learning to Order: Task Sequencing as In-Context Optimization


1️⃣ One-sentence summary

This paper proposes a new method: by meta-training a neural network on a large collection of synthetically generated task-sequencing problems, the model can quickly adapt to new tasks and find their optimal execution order, more efficiently than conventional methods.

Source: arXiv 2603.14550