📄 Abstract - Scaling Spatial Intelligence with Multimodal Foundation Models

Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.7% on VSI-Bench, 43.3% on MMSI, 85.6% on MindCube, 54.6% on ViewSpatial, and 50.1% on SITE, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En). More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization capabilities enabled by diverse data training, analyze the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate potential downstream applications. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction.
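Since the abstract states that all newly trained models are publicly released, the sketch below shows how one might query such a checkpoint with a spatial question via the Hugging Face transformers API. The model ID, image path, and question are hypothetical placeholders, and the exact chat-message schema differs across Qwen3-VL-, InternVL3-, and Bagel-based releases; this assumes a recent transformers version with a standard vision-language chat interface.

```python
# Minimal sketch: asking a spatial question to a released SenseNova-SI checkpoint.
# MODEL_ID is a hypothetical placeholder; substitute the actual released checkpoint name.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

MODEL_ID = "org/sensenova-si-checkpoint"  # hypothetical placeholder

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# A single-image spatial query; the message format assumes a LLaVA-style chat template.
image = Image.open("room.jpg")  # placeholder image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Which object is closer to the camera, the chair or the lamp?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output = model.generate(**inputs, max_new_tokens=64)
answer = processor.decode(
    output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(answer)
```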

Top-level tags: multi-modal, model training, model evaluation
Detailed tags: spatial intelligence, multimodal foundation models, data scaling, visual understanding, benchmark evaluation

📄 Paper Summary

Scaling Spatial Intelligence with Multimodal Foundation Models


1️⃣ One-Sentence Summary

This paper introduces the SenseNova-SI model family, which strengthens the spatial understanding of multimodal foundation models by curating a dataset of eight million diverse samples, achieving leading performance across multiple spatial-intelligence benchmarks while preserving strong general multimodal understanding.

