arXiv submission date: 2026-04-16
📄 Abstract - Prompt-to-Gesture: Measuring the Capabilities of Image-to-Video Deictic Gesture Generation

Gesture recognition research, unlike NLP, continues to face acute data scarcity, with progress constrained by the need for costly human recordings or image-processing approaches that cannot generate authentic variability in the gestures themselves. Recent advances in image-to-video foundation models have enabled the generation of photorealistic, semantically rich videos guided by natural language. These capabilities open up new possibilities for creating effort-free synthetic data, raising the critical question of whether generative video models can augment and complement traditional human-generated gesture data. In this paper, we introduce and analyze prompt-based video generation to construct a realistic deictic gesture dataset and rigorously evaluate its effectiveness for downstream tasks. We propose a data generation pipeline that produces deictic gestures from a small number of reference samples collected from human participants, providing an accessible approach that can be leveraged both within and beyond the machine learning community. Our results demonstrate that the synthetic gestures not only align closely with real ones in terms of visual fidelity but also introduce meaningful variability and novelty that enrich the original data, further supported by the superior performance of various deep models trained on a mixed dataset. These findings highlight that image-to-video techniques, even at this early stage, offer a powerful zero-shot approach to gesture synthesis with clear benefits for downstream tasks.

Top-level tags: video generation · aigc · multi-modal
Detailed tags: gesture synthesis · synthetic data · image-to-video · deictic gestures · data augmentation

Prompt-to-Gesture: Measuring the Capabilities of Image-to-Video Deictic Gesture Generation


1️⃣ One-sentence summary

This paper proposes a method that uses image-to-video generation models to automatically synthesize realistic and diverse deictic-gesture video data from only a small number of real human gesture samples, and shows experimentally that this synthetic data effectively improves performance on downstream gesture recognition tasks.

Source: arXiv:2604.14953