🤖 Systems
📄 Abstract - LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs

The rapid progress of large language models (LLMs) has advanced numerous applications, yet efficient single-batch inference remains vital for on-device intelligence. While FPGAs offer fine-grained data control and high energy efficiency, recent GPU optimizations have narrowed their advantage, especially under arithmetic-based computation. To overcome this, we leverage FPGAs' abundant on-chip memory to shift LLM inference from arithmetic- to memory-based computation through table lookups. We present LUT-LLM, the first FPGA accelerator enabling 1B+ LLM inference via vector-quantized memory operations. Our analysis identifies activation-weight co-quantization as the most effective scheme, supported by (1) bandwidth-aware parallel centroid search, (2) efficient 2D table lookups, and (3) a spatial-temporal hybrid design minimizing data caching. Implemented on an AMD V80 FPGA for a customized Qwen 3 1.7B model, LUT-LLM achieves 1.66x lower latency than AMD MI210 and 1.72x higher energy efficiency than NVIDIA A100, scaling to 32B models with 2.16x efficiency gain over A100.
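To make the memory-based computation idea concrete, below is a minimal NumPy sketch of table-lookup matrix-vector multiplication with jointly quantized activations and weights, in the spirit of the abstract's activation-weight co-quantization, centroid search, and 2D table lookups. All names and sizes (D_SUB, K_A, nearest, lut, ...) are illustrative assumptions, and the random codebooks stand in for trained ones; this is not the paper's actual FPGA datapath.

```python
# Sketch: replace an arithmetic matmul with precomputed 2D lookup tables.
# Assumed setup (not from the paper): activations and weights are split into
# sub-vectors, each quantized to a centroid; the dot product of every
# (activation centroid, weight centroid) pair is precomputed offline.
import numpy as np

rng = np.random.default_rng(0)

D = 64          # hidden dimension
D_SUB = 8       # sub-vector length (D is split into D // D_SUB groups)
N_OUT = 16      # output features
K_A = 16        # activation centroids per group
K_W = 16        # weight centroids per group
G = D // D_SUB  # number of sub-vector groups

# Toy weight matrix and a single activation vector
W = rng.standard_normal((N_OUT, D)).astype(np.float32)
x = rng.standard_normal(D).astype(np.float32)

# --- Offline: build per-group codebooks and the 2D dot-product tables ---
# (random codebooks keep the sketch short; a real design would train them,
#  e.g. with k-means, which also controls the approximation error)
a_codebooks = rng.standard_normal((G, K_A, D_SUB)).astype(np.float32)
w_codebooks = rng.standard_normal((G, K_W, D_SUB)).astype(np.float32)

def nearest(codebook, v):
    """Index of the centroid closest to sub-vector v (centroid search)."""
    return int(np.argmin(np.sum((codebook - v) ** 2, axis=1)))

# Quantize each weight sub-vector offline
w_codes = np.empty((N_OUT, G), dtype=np.int32)
for o in range(N_OUT):
    for g in range(G):
        w_codes[o, g] = nearest(w_codebooks[g], W[o, g * D_SUB:(g + 1) * D_SUB])

# 2D lookup tables: lut[g, i, j] = <a_codebooks[g, i], w_codebooks[g, j]>
lut = np.einsum("gid,gjd->gij", a_codebooks, w_codebooks)

# --- Online: centroid search for the activation, then table lookups ---
a_codes = np.array(
    [nearest(a_codebooks[g], x[g * D_SUB:(g + 1) * D_SUB]) for g in range(G)]
)

y_lut = np.zeros(N_OUT, dtype=np.float32)
for o in range(N_OUT):
    # one table read per group instead of D multiply-accumulates
    y_lut[o] = sum(lut[g, a_codes[g], w_codes[o, g]] for g in range(G))

y_exact = W @ x  # reference arithmetic result; accuracy depends on the codebooks
print(np.max(np.abs(y_lut - y_exact)))
```

The point of the rewrite is that the online work per output feature drops from D multiply-accumulates to G table reads and additions, which is why FPGA on-chip memory, rather than DSP arithmetic, becomes the dominant resource.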

Top-level tags: systems, model training, machine learning
Detailed tags: fpga acceleration, memory-based computation, vector quantization, efficient inference, hardware optimization

📄 Paper Summary

LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs


1️⃣ One-sentence Summary

This work proposes LUT-LLM, a method that shifts large language model computation from conventional arithmetic operations to memory-based lookup-table operations, achieving lower-latency and more energy-efficient model inference on FPGAs than on high-end GPUs.

