arXiv submission date: 2026-03-25
📄 Abstract - PointRFT: Explicit Reinforcement Fine-tuning for Point Cloud Few-shot Learning

Understanding spatial dynamics and semantics in point clouds is fundamental for comprehensive 3D comprehension. While reinforcement learning algorithms such as Group Relative Policy Optimization (GRPO) have recently achieved remarkable breakthroughs in large language models by incentivizing reasoning capabilities through strategic reward design, their potential remains largely unexplored in the 3D perception domain. This naturally raises a pivotal question: can RL-based methods effectively empower 3D point cloud fine-tuning? In this paper, we propose PointRFT, the first reinforcement fine-tuning paradigm tailored specifically for point cloud representation learning. We select three prevalent 3D foundation models and devise specialized accuracy and dispersion reward functions to stabilize training and mitigate distribution shifts. Through comprehensive few-shot classification experiments comparing distinct training paradigms, we demonstrate that PointRFT consistently outperforms vanilla supervised fine-tuning (SFT) across diverse benchmarks. Furthermore, when organically integrated into a hybrid Pretraining-SFT-RFT paradigm, the representational capacity of point cloud foundation models is substantially unleashed, achieving state-of-the-art performance, particularly under data-scarce scenarios.
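The abstract names two reward components, an accuracy reward and a dispersion reward, fed into GRPO's group-relative update, but gives no formulas. The sketch below is only an illustrative guess at how such rewards might look: the entropy-based `dispersion_reward` and all function names are assumptions, while `grpo_advantages` follows the standard group-relative normalization (reward standardized against the sampled group's mean and standard deviation) that defines GRPO.

```python
import math


def accuracy_reward(pred_label: int, true_label: int) -> float:
    """1.0 for a correct class prediction, 0.0 otherwise (simplest accuracy reward)."""
    return 1.0 if pred_label == true_label else 0.0


def dispersion_reward(logits: list[float]) -> float:
    """Hypothetical dispersion term: entropy of the softmax distribution,
    discouraging the fine-tuned model from collapsing onto a single class."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(p * math.log(p + 1e-12) for p in probs)


def grpo_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO's group-relative normalization: each sampled output's advantage
    is its reward standardized against the group's mean and std."""
    mean = sum(rewards) / len(rewards)
    std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
    return [(r - mean) / (std + eps) for r in rewards]
```

In an RFT loop, one would sample a group of predictions per point cloud, score each with a weighted sum of these rewards, and weight the policy-gradient update by the resulting advantages; the weighting scheme here is likewise an assumption, not the paper's exact recipe.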

Top-level tags: computer vision, model training, reinforcement learning
Detailed tags: point cloud, few-shot learning, fine-tuning, 3D perception, representation learning

PointRFT: Explicit Reinforcement Fine-tuning for Point Cloud Few-shot Learning


1️⃣ One-sentence summary

This paper is the first to bring reinforcement learning into the fine-tuning of 3D point cloud models. By designing dedicated reward mechanisms, it markedly improves recognition performance when data are scarce, performing especially well on few-shot learning tasks.

Source: arXiv:2603.23957