SparseDVFS: Sparse-Aware DVFS for Energy-Efficient Edge Inference
1️⃣ One-sentence summary
This paper proposes SparseDVFS, a method that identifies the sparsity level of each operator in a neural network and uses it to fine-tune hardware frequencies, substantially reducing the energy consumption of AI inference on edge devices while preserving performance.
Deploying deep neural networks (DNNs) on power-sensitive edge devices presents a formidable challenge. While Dynamic Voltage and Frequency Scaling (DVFS) is widely employed for energy optimization, traditional model-level scaling is too coarse to capture intra-inference variations, whereas fine-grained operator-level scaling suffers from prohibitive performance degradation due to significant hardware switching latency. This paper presents SparseDVFS, a fine-grained, sparse-aware DVFS framework designed for energy-efficient edge inference. Our key insight is that operator sparsity is a primary metric for hardware frequency modulation. By distinguishing between compute-bound dense operators and memory-bound sparse operators, the system can apply specialized frequency triplets to maximize energy efficiency. To overcome switching overheads and component interference, SparseDVFS incorporates three key innovations: (1) an offline modeler that establishes a deterministic mapping between operator sparsity and optimal frequency triplets (CPU/GPU/EMC) via white-box timeline analysis; (2) a runtime graph partitioner that uses a greedy merging heuristic to aggregate operators into super-blocks, balancing scaling granularity against DVFS switching latency through a latency amortization constraint; and (3) a unified co-governor that employs a frequency unified scaling engine (FUSE) and a look-ahead instruction queue to eliminate antagonistic effects between independent controllers and hide hardware transition latencies. Extensive evaluations show that SparseDVFS achieves an average 78.17% energy-efficiency gain over state-of-the-art solutions while maintaining a superior 14% cost-gain ratio.
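The abstract's second innovation, greedy merging under a latency amortization constraint, can be illustrated with a minimal sketch. The paper's actual algorithm is not given here, so all names, thresholds, and the cut rule below are hypothetical: a new super-block is opened only when an operator's sparsity diverges from the current block's and the block is already long enough that one DVFS switch is a small fraction of its runtime.

```python
from dataclasses import dataclass

# Hypothetical constants, not taken from the paper.
SWITCH_LATENCY_MS = 0.5    # assumed cost of one DVFS frequency transition
MAX_OVERHEAD_RATIO = 0.05  # switch cost must stay below 5% of block runtime
SPARSITY_TOL = 0.2         # operators within this sparsity gap share a block

@dataclass
class Op:
    name: str
    latency_ms: float
    sparsity: float  # fraction of zeros, in [0, 1]

def partition(ops):
    """Greedily merge consecutive operators into super-blocks.

    A block is cut only when the incoming operator's sparsity differs from
    the block's and the block's accumulated latency already amortizes one
    frequency switch; otherwise the operator is absorbed into the block.
    """
    blocks, current = [], []
    for op in ops:
        if not current:
            current = [op]
            continue
        block_latency = sum(o.latency_ms for o in current)
        similar = abs(op.sparsity - current[0].sparsity) <= SPARSITY_TOL
        can_amortize = SWITCH_LATENCY_MS / block_latency <= MAX_OVERHEAD_RATIO
        if similar or not can_amortize:
            current.append(op)   # keep growing the super-block
        else:
            blocks.append(current)  # close block; next one pays the switch
            current = [op]
    if current:
        blocks.append(current)
    return blocks
```

Each resulting super-block would then receive a single CPU/GPU/EMC frequency triplet looked up from the offline sparsity model, so the switch cost is paid once per block rather than once per operator.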
Source: arXiv: 2603.21908