
arXiv submission date: 2026-03-16
📄 Abstract - Flash-Unified: A Training-Free and Task-Aware Acceleration Framework for Native Unified Models

Native unified multimodal models, which integrate both generative and understanding capabilities, face substantial computational overhead that hinders their real-world deployment. Existing acceleration techniques typically employ a static, monolithic strategy, ignoring the fundamental divergence in computational profiles between iterative generation tasks (e.g., image generation) and single-pass understanding tasks (e.g., VQA). In this work, we present the first systematic analysis of unified models, revealing pronounced parameter specialization, where distinct neuron sets are critical for each task. This implies that, at the parameter level, unified models have implicitly internalized separate inference pathways for generation and understanding within a single architecture. Based on these insights, we introduce a training-free and task-aware acceleration framework, FlashU, that tailors optimization to each task's demands. Across both tasks, we introduce Task-Specific Network Pruning and Dynamic Layer Skipping, aiming to eliminate inter-layer and task-specific redundancy. For visual generation, we implement a time-varying control signal for the guidance scale and a temporal approximation for the diffusion head via Diffusion Head Cache. For multimodal understanding, building upon the pruned model, we introduce Dynamic Token Pruning via a V-Norm Proxy to exploit the spatial redundancy of visual inputs. Extensive experiments on Show-o2 demonstrate that FlashU achieves 1.78$\times$ to 2.01$\times$ inference acceleration across both understanding and generation tasks while maintaining SOTA performance, outperforming competing unified models and validating our task-aware acceleration paradigm. Our code is publicly available at this https URL.
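The abstract's Dynamic Token Pruning step ranks visual tokens by a "V-Norm Proxy" and discards the least important ones to exploit spatial redundancy. The paper does not spell out the exact criterion here, but a minimal sketch of the plausible idea, scoring each visual token by the L2 norm of its attention value vector and keeping only the top fraction, could look like the following (the function name, `keep_ratio` parameter, and masking scheme are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def vnorm_token_prune(value_states, visual_mask, keep_ratio=0.5):
    """Hypothetical sketch of V-Norm-based token pruning.

    Scores each token by the L2 norm of its value projection and keeps
    only the top `keep_ratio` fraction of *visual* tokens; text tokens
    are never pruned.

    value_states: (num_tokens, head_dim) attention value vectors
    visual_mask:  (num_tokens,) bool, True where the token is visual
    Returns sorted indices of the tokens to keep.
    """
    norms = np.linalg.norm(value_states, axis=-1)   # per-token V-norm proxy
    visual_idx = np.nonzero(visual_mask)[0]
    text_idx = np.nonzero(~visual_mask)[0]
    k = max(1, int(keep_ratio * visual_idx.size))   # visual tokens to keep
    top_visual = visual_idx[np.argsort(norms[visual_idx])[-k:]]
    return np.sort(np.concatenate([text_idx, top_visual]))
```

Under this sketch, a sequence with 2 text tokens and 6 visual tokens pruned at `keep_ratio=0.5` would retain both text tokens plus the 3 highest-norm visual tokens, shrinking subsequent attention cost without any retraining.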

Top-level tags: multi-modal model training, model evaluation
Detailed tags: model acceleration, multimodal models, inference optimization, task-aware pruning, computational efficiency

Flash-Unified: A Training-Free and Task-Aware Acceleration Framework for Native Unified Models


1️⃣ One-Sentence Summary

This paper proposes an acceleration framework called FlashU that requires no additional training. By analyzing how different tasks in a unified model (such as image generation and visual question answering) depend on different subsets of parameters, it dynamically prunes parameters and skips redundant computation, roughly doubling inference speed while maintaining state-of-the-art performance.

Source: arXiv:2603.15271