
🤖 Systems
📄 Abstract - UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs

Deploying large language models (LLMs) on mobile platforms faces significant challenges due to the limited memory and shared computational resources of the device. Resource availability is further constrained by the device's current workload, adding uncertainty to model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. In our proposed joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, quantization-aware singular value decomposition (SVD) to minimize quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Our framework performs weight-sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates up to 35%. Our experiments show that quantized and pruned models achieve a memory reduction of 4x-5.7x and a token-throughput improvement of 2.7x-3.4x, maintaining accuracy within 5% of the original models at 15% pruning across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: this https URL.
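
The abstract combines post-training quantization with low-rank (SVD) compression at a configurable rank. Below is a minimal, purely illustrative sketch of that general idea, not the UniQL method itself: a weight matrix is factored by SVD, truncated to a configurable rank, and both factors are quantized to int8. All function names and the rank-selection heuristic are hypothetical.

```python
# Illustrative sketch only: low-rank SVD compression of a weight matrix
# combined with simple symmetric int8 post-training quantization.
# Not the UniQL implementation; names are hypothetical.
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor int8 quantization: returns int8 values and a scale."""
    scale = np.max(np.abs(x)) / 127.0 + 1e-12
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

def lowrank_quantize(W: np.ndarray, keep_ratio: float = 0.85):
    """Factor W ~= U_r @ V_r with rank r = keep_ratio * full rank,
    then quantize both factors to int8. (1 - keep_ratio) plays the role
    of a configurable pruning rate."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    r = max(1, int(keep_ratio * len(S)))
    U_r = U[:, :r] * np.sqrt(S[:r])            # split singular values across factors
    V_r = np.sqrt(S[:r])[:, None] * Vt[:r, :]
    return quantize_int8(U_r), quantize_int8(V_r)

def reconstruct(qU, sU, qV, sV) -> np.ndarray:
    """Dequantize the factors and rebuild the approximate weight matrix."""
    return dequantize(qU, sU) @ dequantize(qV, sV)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 512)).astype(np.float32)
    (qU, sU), (qV, sV) = lowrank_quantize(W, keep_ratio=0.85)   # ~15% rank pruning
    W_hat = reconstruct(qU, sU, qV, sV)
    print("relative error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

Storing two quantized low-rank factors instead of the full-precision matrix is what yields the kind of memory reduction the abstract reports; UniQL's actual pipeline additionally uses structured weight sorting and quantization-aware SVD, which this sketch does not attempt to reproduce.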

Top-level tags: llm, model training, systems
Detailed tags: model compression, quantization, low-rank compression, edge computing, post-training

UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs


1️⃣ One-sentence summary

This paper proposes a unified framework called UniQL that combines quantization with low-rank compression. Model optimization is completed in a single cloud-side pass, enabling large language models to run efficiently on edge devices such as phones, substantially reducing model size and improving throughput while largely preserving accuracy.
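
The summary also highlights that the pruning rate is configurable on the device itself (up to 35%). A hedged sketch of how a loader might pick such a rate from the currently free memory follows; the function name and the assumption that compressed size shrinks roughly in proportion to the pruning rate are illustrative simplifications, not UniQL's actual loading logic.

```python
# Hedged sketch (names and the linear-size assumption are illustrative, not UniQL's):
# pick an on-device pruning rate from currently free memory at load time.
def choose_pruning_rate(free_bytes: int, full_model_bytes: int,
                        max_rate: float = 0.35) -> float:
    """Smallest pruning rate that lets the model fit, capped at max_rate.

    Assumes model size shrinks roughly in proportion to the pruning rate,
    which is a simplification for illustration only.
    """
    if full_model_bytes <= free_bytes:
        return 0.0
    needed = 1.0 - free_bytes / full_model_bytes
    return min(max_rate, needed)

# Example: a 6 GB compressed model on a device with 5 GB currently free.
print(choose_pruning_rate(5 * 2**30, 6 * 2**30))   # ~0.17 -> prune ~17%
```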


📄 Open original PDF