arXiv submission date: 2026-01-15
📄 Abstract - FlowAct-R1: Towards Interactive Humanoid Video Generation

Interactive humanoid video generation aims to synthesize lifelike visual agents that can engage with humans through continuous and responsive video. Despite recent advances in video synthesis, existing methods often grapple with the trade-off between high-fidelity synthesis and real-time interaction requirements. In this paper, we propose FlowAct-R1, a framework specifically designed for real-time interactive humanoid video generation. Built upon an MMDiT architecture, FlowAct-R1 enables streaming synthesis of video of arbitrary duration while maintaining low-latency responsiveness. We introduce a chunkwise diffusion forcing strategy, complemented by a novel self-forcing variant, to alleviate error accumulation and ensure long-term temporal consistency during continuous interaction. By leveraging efficient distillation and system-level optimizations, our framework achieves a stable 25 fps at 480p resolution with a time-to-first-frame (TTFF) of only around 1.5 seconds. The proposed method provides holistic and fine-grained full-body control, enabling the agent to transition naturally between diverse behavioral states in interactive scenarios. Experimental results demonstrate that FlowAct-R1 achieves exceptional behavioral vividness and perceptual realism, while maintaining robust generalization across diverse character styles.
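The abstract's "chunkwise" streaming idea can be illustrated with a minimal sketch: video is generated one fixed-size chunk of frames at a time, each chunk conditioned on the previously generated chunk, so the stream can continue indefinitely. Everything below (chunk size, frame shape, step count, the `denoise_chunk` stand-in) is a hypothetical illustration, not the paper's actual model or API; frame dimensions are shrunk far below 480p to keep the sketch fast.

```python
import numpy as np

CHUNK_FRAMES = 8            # frames denoised per chunk (assumed)
H, W, C = 32, 32, 3         # toy frame size (real system targets 480p)
DENOISE_STEPS = 4           # few-step distilled sampler (assumed)

def denoise_chunk(noisy, context):
    """Stand-in for the distilled diffusion model: one noisy chunk in,
    one clean chunk out, conditioned on the previous chunk."""
    out = noisy
    for _ in range(DENOISE_STEPS):
        # A real model would predict the clean chunk from (noisy, context);
        # here we simply blend toward the context to keep the sketch runnable.
        out = 0.5 * out + 0.5 * context
    return out

def stream(num_chunks, seed=0):
    """Generate `num_chunks` chunks sequentially, feeding each output back
    in as conditioning for the next (a self-forcing-style rollout)."""
    rng = np.random.default_rng(seed)
    context = np.zeros((CHUNK_FRAMES, H, W, C), dtype=np.float32)
    chunks = []
    for _ in range(num_chunks):
        noisy = rng.standard_normal(context.shape).astype(np.float32)
        clean = denoise_chunk(noisy, context)
        chunks.append(clean)
        context = clean  # condition the next chunk on our own output
    return chunks

chunks = stream(3)
```

Conditioning each chunk on the model's own previous output (rather than ground truth) is what lets training match the error distribution seen at inference, which is the motivation the abstract gives for the self-forcing variant.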

Top-level tags: video generation, aigc, agents
Detailed tags: interactive video, humanoid agents, real-time synthesis, temporal consistency, full-body control

FlowAct-R1: Towards Interactive Humanoid Video Generation


1️⃣ One-sentence summary

This paper proposes FlowAct-R1, a new framework that generates lifelike humanoid-character videos able to interact with users continuously in real time, delivering low latency and a smooth interactive experience while preserving high visual quality.

Source: arXiv:2601.10103