📄 Abstract - Block Cascading: Training-Free Acceleration of Block-Causal Video Models

Block-causal video generation faces a stark speed-quality trade-off: small 1.3B models manage only 16 FPS, while large 14B models crawl at 4.5 FPS, forcing users to choose between responsiveness and quality. Block Cascading significantly mitigates this trade-off through training-free parallelization. Our key insight is that future video blocks do not need fully denoised current blocks to begin generation. By starting block generation with partially denoised context from predecessors, we transform sequential pipelines into parallel cascades in which multiple blocks denoise simultaneously. With 5 GPUs exploiting this temporal parallelism, we achieve roughly 2x acceleration across all model scales: 1.3B models accelerate from 16 to 30 FPS, and 14B models from 4.5 to 12.5 FPS. Beyond inference speed, Block Cascading eliminates the KV-recaching overhead (~200 ms) incurred during context switches in interactive generation. Extensive evaluations across multiple block-causal pipelines demonstrate no significant loss in generation quality when switching from block-causal to Block Cascading inference. Project Page: this https URL
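The wall-clock benefit of starting each block on partially denoised context can be seen with a toy scheduling model. This is a hedged sketch, not the paper's implementation: the function names, block counts, step counts, and start offset below are all hypothetical illustration values, and the sketch only models timing, not the actual denoising.

```python
# Toy timing model of Block Cascading (illustrative assumption, not the
# authors' code). Each video block needs `steps` denoising steps.
# Sequential pipeline: block i+1 waits until block i is fully denoised.
# Cascaded pipeline: block i+1 starts once block i has completed `offset`
# steps, consuming its partially denoised output as context, so several
# blocks denoise at the same time on different devices.

def sequential_wallclock(num_blocks: int, steps: int) -> int:
    # Total wall-clock steps when blocks run strictly one after another.
    return num_blocks * steps

def cascaded_wallclock(num_blocks: int, steps: int, offset: int) -> int:
    # Block i starts at time i * offset and runs for `steps` steps,
    # so the last block finishes at (num_blocks - 1) * offset + steps.
    return (num_blocks - 1) * offset + steps

if __name__ == "__main__":
    blocks, steps, offset = 10, 50, 10  # hypothetical numbers
    seq = sequential_wallclock(blocks, steps)   # 500
    par = cascaded_wallclock(blocks, steps, offset)  # 140
    print(f"sequential={seq}, cascaded={par}, speedup={seq / par:.2f}x")
```

In this toy setting the cascade finishes in 140 step-times instead of 500; the achievable speedup in practice is bounded by how many blocks can overlap (the paper reports ~2x with 5 GPUs) and by how early a block can safely start on partially denoised context.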

Top tags: video generation, model training, systems
Detailed tags: block-causal models, inference acceleration, parallel denoising, video generation, training-free optimization

📄 Paper Summary

Block Cascading: Training-Free Acceleration of Block-Causal Video Models


1️⃣ One-Sentence Summary

This work proposes a video-generation acceleration technique that requires no additional training: by letting multiple video blocks denoise in parallel, it roughly doubles generation speed while preserving quality, mitigating the trade-off in which large models must sacrifice speed for quality.

