arXiv submission date: 2025-12-18
📄 Abstract - Next-Embedding Prediction Makes Strong Vision Learners

Inspired by the success of generative pretraining in natural language, we ask whether the same principles can yield strong self-supervised visual learners. Instead of training models to output features for downstream use, we train them to generate embeddings to perform predictive tasks directly. This work explores such a shift from learning representations to learning models. Specifically, models learn to predict future patch embeddings conditioned on past ones, using causal masking and stop gradient, which we refer to as Next-Embedding Predictive Autoregression (NEPA). We demonstrate that a simple Transformer pretrained on ImageNet-1k with next embedding prediction as its sole learning objective is effective - no pixel reconstruction, discrete tokens, contrastive loss, or task-specific heads. This formulation retains architectural simplicity and scalability, without requiring additional design complexity. NEPA achieves strong results across tasks, attaining 83.8% and 85.3% top-1 accuracy on ImageNet-1K with ViT-B and ViT-L backbones after fine-tuning, and transferring effectively to semantic segmentation on ADE20K. We believe generative pretraining from embeddings provides a simple, scalable, and potentially modality-agnostic alternative to visual self-supervised learning.
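The abstract's core objective, predicting each patch embedding from the preceding ones under a causal mask, with a stop gradient on the targets, can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the target for position t+1 is the patch embedding itself with gradients detached, a smooth-L1 loss, and raster-scan patch ordering; the class name and hyperparameters are hypothetical.

```python
# Minimal next-embedding prediction sketch (illustration only, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NextEmbeddingPredictor(nn.Module):
    """Causal Transformer over patch embeddings that predicts the next embedding."""

    def __init__(self, img_size=224, patch_size=16, dim=768, depth=12, heads=12):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # ViT-style stem: patchify via a strided convolution.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.pos_embed = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, dim)  # maps hidden state t to predicted embedding t+1

    def forward(self, images):
        # (B, 3, H, W) -> (B, N, dim), patches in raster-scan order (assumption).
        x = self.patch_embed(images).flatten(2).transpose(1, 2)
        tokens = x + self.pos_embed
        # Causal mask: position t attends only to positions <= t.
        n = self.num_patches
        causal = torch.triu(
            torch.full((n, n), float("-inf"), device=images.device), diagonal=1)
        h = self.blocks(tokens, mask=causal)
        # Predict embedding t+1 from hidden state t; drop the last position.
        pred = self.head(h[:, :-1])
        # Stop gradient on the targets (the abstract names stop gradient; this
        # particular target construction and loss are assumptions).
        target = x[:, 1:].detach()
        return F.smooth_l1_loss(pred, target)


# Usage: one pretraining step on a random batch.
model = NextEmbeddingPredictor()
loss = model(torch.randn(4, 3, 224, 224))
loss.backward()
```

The sketch only captures the causal next-embedding-prediction structure described in the abstract; the paper's actual target encoder, loss, and patch ordering may differ.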

Top-level tags: computer vision, model training, machine learning
Detailed tags: self-supervised learning, generative pretraining, vision transformer, embedding prediction, autoregressive models

Next-Embedding Prediction Makes Strong Vision Learners


1️⃣ One-Sentence Summary

This paper proposes a method called NEPA that trains a model to predict the embedding of the next image patch from the preceding ones, much like predicting the next word in a sentence, and shows that this objective alone, without complex additional design, yields a strong general-purpose vision model.


Source: arXiv:2512.16922