arXiv submission date: 2026-03-26
📄 Abstract - Unleashing Guidance Without Classifiers for Human-Object Interaction Animation

Generating realistic human-object interaction (HOI) animations remains challenging because it requires jointly modeling dynamic human actions and diverse object geometries. Prior diffusion-based approaches often rely on hand-crafted contact priors or human-imposed kinematic constraints to improve contact quality. We propose LIGHT, a data-driven alternative in which guidance emerges from the denoising pace itself, reducing dependence on manually designed priors. Building on diffusion forcing, we factor the representation into modality-specific components and assign individualized noise levels with asynchronous denoising schedules. In this paradigm, cleaner components guide noisier ones through cross-attention, yielding guidance without auxiliary classifiers. We find that this data-driven guidance is inherently contact-aware, and can be enhanced when training is augmented with a broad spectrum of synthetic object geometries, encouraging invariance of contact semantics to geometric diversity. Extensive experiments show that pace-induced guidance more effectively mirrors the benefits of contact priors than conventional classifier-free guidance, while achieving higher contact fidelity, more realistic HOI generation, and stronger generalization to unseen objects and tasks.
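The core mechanism — per-modality noise levels with asynchronous denoising schedules, so one component is always cleaner than the other at a shared step — can be sketched as follows. This is a minimal illustration under assumed names (`async_noise_schedule`, the `offset` parameter), not the paper's actual implementation; in the full model the cleaner component would condition the noisier one through cross-attention at each step.

```python
import numpy as np

def async_noise_schedule(num_steps: int, offset: int, total_levels: int = 1000):
    """Assign individualized noise levels to two modalities.

    The leading modality (e.g. human motion) runs `offset` noise levels
    ahead of the lagging one (e.g. object state), so at every shared
    denoising step it is cleaner and can guide the noisier component.
    Names and the linear schedule are illustrative assumptions.
    """
    # Shared base schedule: noise level decreases from max to 0.
    base = np.linspace(total_levels - 1, 0, num_steps).astype(int)
    lead = np.clip(base - offset, 0, total_levels - 1)  # cleaner component
    lag = base                                          # noisier component
    return lead, lag

# At step t, a denoiser would receive (x_lead, lead[t]) and (x_lag, lag[t]),
# with cross-attention from the cleaner branch into the noisier one.
lead, lag = async_noise_schedule(num_steps=50, offset=100)
```

Because `lead` is clipped at 0, the leading modality simply finishes early and then serves as a clean conditioning signal for the remaining steps of the lagging modality.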

Top-level tags: computer vision, model training, multi-modal
Detailed tags: human-object interaction, diffusion models, contact-aware guidance, asynchronous denoising, video generation

Unleashing Guidance Without Classifiers for Human-Object Interaction Animation


1️⃣ One-sentence summary

This paper proposes a new method called LIGHT, which lets the model learn to generate realistic human-object interaction animations by controlling the denoising pace itself, removing the need for hand-crafted contact rules or auxiliary classifiers, and thereby handling diverse object shapes and complex interaction tasks more effectively.

Source: arXiv:2603.25734