arXiv submission date: 2026-02-16
📄 Abstract - Universal Algorithm-Implicit Learning

Current meta-learning methods are constrained to narrow task distributions with fixed feature and label spaces, limiting applicability. Moreover, the current meta-learning literature uses key terms like "universal" and "general-purpose" inconsistently and lacks precise definitions, hindering comparability. We introduce a theoretical framework for meta-learning which formally defines practical universality and introduces a distinction between algorithm-explicit and algorithm-implicit learning, providing a principled vocabulary for reasoning about universal meta-learning methods. Guided by this framework, we present TAIL, a transformer-based algorithm-implicit meta-learner that functions across tasks with varying domains, modalities, and label configurations. TAIL features three innovations over prior transformer-based meta-learners: random projections for cross-modal feature encoding, random injection label embeddings that extrapolate to larger label spaces, and efficient inline query processing. TAIL achieves state-of-the-art performance on standard few-shot benchmarks while generalizing to unseen domains. Unlike other meta-learning methods, it also generalizes to unseen modalities, solving text classification tasks despite training exclusively on images, handles tasks with up to 20$\times$ more classes than seen during training, and provides orders-of-magnitude computational savings over prior transformer-based approaches.
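To make the feature- and label-encoding ideas concrete, below is a minimal NumPy sketch (our own illustration, not the authors' implementation) of how fixed random projections can map features of arbitrary dimensionality into a shared token space, and how per-task random label embeddings avoid committing to a fixed label-space size. The dimensions, function names, and the additive label-injection scheme are assumptions made for illustration only.

```python
# Hypothetical sketch of random-projection feature encoding and random label
# embeddings, in the spirit of TAIL's description; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # shared token dimensionality consumed by the transformer

def random_projection(x, d_model, rng):
    """Project features of arbitrary input dimensionality to d_model dims.

    A fresh projection is drawn here for simplicity; in practice one would
    fix the projection per task or per modality.
    """
    d_in = x.shape[-1]
    W = rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_model))
    return x @ W

def random_label_embeddings(n_classes, d_model, rng):
    """One random vector per class; works for any number of classes."""
    return rng.normal(size=(n_classes, d_model))

# "Image" support features (e.g. 512-dim) and "text" support features
# (e.g. 300-dim) both land in the same 64-dim space, so a single
# transformer can consume either modality.
img_feats = rng.normal(size=(5, 512))
txt_feats = rng.normal(size=(5, 300))
img_tokens = random_projection(img_feats, d_model, rng)
txt_tokens = random_projection(txt_feats, d_model, rng)

# Support labels are encoded by adding their random class embedding to the
# feature token; query tokens would carry no label embedding.
labels = np.array([0, 1, 2, 3, 4])
label_emb = random_label_embeddings(n_classes=5, d_model=d_model, rng=rng)
support_tokens = img_tokens + label_emb[labels]

print(img_tokens.shape, txt_tokens.shape, support_tokens.shape)  # all (5, 64)
```

Because the label embeddings are drawn at random rather than learned per class, nothing in this encoding ties the model to the number of classes seen during training, which is the property that lets such a scheme extrapolate to larger label spaces.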

Top-level tags: meta-learning, machine learning theory
Detailed tags: universal meta-learning, algorithm-implicit learning, transformer meta-learner, cross-modal generalization, few-shot learning

Universal Algorithm-Implicit Learning


1️⃣ One-sentence summary

This paper proposes a theoretical framework that formally defines "universality" in meta-learning and, building on it, develops TAIL, a Transformer-based model that learns efficiently across different domains, data modalities, and label configurations, achieving state-of-the-art performance on few-shot learning benchmarks and demonstrating strong cross-modal and cross-scale generalization.

Source: arXiv:2602.14761