arXiv submission date: 2026-03-18
📄 Abstract - Universal Skeleton Understanding via Differentiable Rendering and MLLMs

Multimodal large language models (MLLMs) exhibit strong visual-language reasoning, yet remain confined to their native modalities and cannot directly process structured, non-visual data such as human skeletons. Existing methods either compress skeleton dynamics into lossy feature vectors for text alignment, or quantize motion into discrete tokens that generalize poorly across heterogeneous skeleton formats. We present SkeletonLLM, which achieves universal skeleton understanding by translating arbitrary skeleton sequences into the MLLM's native visual modality. At its core is DrAction, a differentiable, format-agnostic renderer that converts skeletal kinematics into compact image sequences. Because the pipeline is end-to-end differentiable, MLLM gradients can directly guide the rendering to produce task-informative visual tokens. To further enhance reasoning capabilities, we introduce a cooperative training strategy: Causal Reasoning Distillation transfers structured, step-by-step reasoning from a teacher model, while Discriminative Finetuning sharpens decision boundaries between confusable actions. SkeletonLLM demonstrates strong generalization on diverse tasks including recognition, captioning, reasoning, and cross-format transfer -- suggesting a viable path for applying MLLMs to non-native modalities. Code will be released upon acceptance.
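To make the core idea of DrAction concrete, here is a minimal, illustrative sketch (not the paper's code) of a differentiable skeleton renderer: each 2D joint is splatted onto an image as a Gaussian, so every pixel varies smoothly with the joint coordinates and gradients from a downstream image-space loss can flow back to the skeleton. The function names (`render_skeleton`, `loss`) and all parameters are hypothetical choices for this toy example.

```python
import numpy as np

def render_skeleton(joints, size=32, sigma=1.5):
    """Soft-rasterize 2D joints into an image by splatting one Gaussian per joint.

    Unlike a hard 0/1 rasterizer, every pixel is a smooth function of the
    joint coordinates, so the render is differentiable w.r.t. `joints`.
    """
    ys, xs = np.mgrid[0:size, 0:size].astype(float)  # pixel-coordinate grids
    img = np.zeros((size, size))
    for x, y in joints:  # joints given in pixel coordinates
        img += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return img

def loss(joints, target):
    """Squared-error loss between the rendered image and a target image."""
    return float(((render_skeleton(joints) - target) ** 2).sum())

# Toy 3-joint "skeleton"; the target is the same pose with joint 0 shifted
# right by one pixel, standing in for a task signal coming from the MLLM.
joints = np.array([[8.0, 8.0], [16.0, 20.0], [24.0, 10.0]])
target_joints = joints.copy()
target_joints[0, 0] += 1.0
target = render_skeleton(target_joints)

# Central-difference gradient of the loss w.r.t. joint 0's x-coordinate:
# because the renderer is smooth, this gradient exists and is informative.
eps = 1e-4
jp, jm = joints.copy(), joints.copy()
jp[0, 0] += eps
jm[0, 0] -= eps
grad_x = (loss(jp, target) - loss(jm, target)) / (2 * eps)
print(grad_x < 0)  # True: the gradient pushes joint 0 toward the target
```

In the paper's pipeline the loss would instead come from the MLLM's training objective, and the renderer operates on full skeleton sequences; the point of the sketch is only that soft rasterization makes the skeleton-to-image step end-to-end differentiable.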

Top-level tags: multi-modal llm, model training
Detailed tags: skeleton understanding, differentiable rendering, multimodal reasoning, visual-language models, cross-format transfer

Universal Skeleton Understanding via Differentiable Rendering and MLLMs


1️⃣ One-sentence summary

This paper proposes SkeletonLLM, a method that uses a differentiable, format-agnostic renderer to convert diverse skeleton action data into image sequences, enabling multimodal large language models that natively handle only images and text to directly understand and reason about human motion, and yielding strong general-purpose performance across recognition, captioning, and cross-format transfer tasks.

Source: arXiv 2603.18003