arXiv submission date: 2026-01-22
📄 Abstract - Towards Automated Kernel Generation in the Era of LLMs

The performance of modern AI systems is fundamentally constrained by the quality of their underlying kernels, which translate high-level algorithmic semantics into low-level hardware operations. Achieving near-optimal kernels requires expert-level understanding of hardware architectures and programming models, making kernel engineering a critical but notoriously time-consuming and non-scalable process. Recent advances in large language models (LLMs) and LLM-based agents have opened new possibilities for automating kernel generation and optimization. LLMs are well-suited to compress expert-level kernel knowledge that is difficult to formalize, while agentic systems further enable scalable optimization by casting kernel development as an iterative, feedback-driven loop. Rapid progress has been made in this area. However, the field remains fragmented, lacking a systematic perspective for LLM-driven kernel generation. This survey addresses this gap by providing a structured overview of existing approaches, spanning LLM-based approaches and agentic optimization workflows, and systematically compiling the datasets and benchmarks that underpin learning and evaluation in this domain. Moreover, key open challenges and future research directions are further outlined, aiming to establish a comprehensive reference for the next generation of automated kernel optimization. To keep track of this field, we maintain an open-source GitHub repository at this https URL.

Top-level tags: llm systems model training
Detailed tags: kernel optimization, automated code generation, hardware-aware ai, agentic systems, benchmarking

Towards Automated Kernel Generation in the Era of LLMs


1️⃣ One-sentence summary

This survey systematically reviews how large language models and LLM-based agents can be used to automatically generate and optimize the low-level compute kernels of AI systems, addressing the expert-dependent, time-consuming, and hard-to-scale nature of traditional kernel engineering, and outlines the field's key open challenges and future research directions.

Source: arXiv: 2601.15727