KernelEvolve: Scaling Agentic Kernel Coding for Heterogeneous AI Accelerators at Meta
1️⃣ One-Sentence Summary
This paper proposes an agentic framework named KernelEvolve that automatically generates and optimizes the compute kernels of recommendation models for different kinds of AI hardware (such as GPUs from different vendors and in-house accelerators), cutting development time from weeks to hours while delivering significant performance gains.
Making deep learning recommendation model (DLRM) training and inference fast and efficient is important. However, this presents three key system challenges: model architecture diversity, kernel primitive diversity, and hardware generation and architecture heterogeneity. This paper presents KernelEvolve, an agentic kernel coding framework, to tackle heterogeneity at scale for DLRM. KernelEvolve is designed to take kernel specifications as input and automate the process of kernel generation and optimization for recommendation models across heterogeneous hardware architectures. KernelEvolve does so by operating at multiple programming abstractions, from Triton and CuTe DSL to low-level hardware-agnostic languages, spanning the full hardware-software optimization stack. The kernel optimization process is formulated as a graph-based search with a selection policy, universal operators, a fitness function, and a termination rule, and it dynamically adapts to the runtime execution context through retrieval-augmented prompt synthesis. We designed, implemented, and deployed KernelEvolve to optimize a wide variety of production recommendation models across generations of NVIDIA and AMD GPUs, as well as Meta's AI accelerators. We validate KernelEvolve on the publicly available KernelBench suite, achieving a 100% pass rate on all 250 problems across three difficulty levels, and on 160 PyTorch ATen operators across three heterogeneous hardware platforms, demonstrating 100% correctness. KernelEvolve reduces development time from weeks to hours and achieves substantial performance improvements over PyTorch baselines across diverse production use cases and for heterogeneous AI systems at scale. Beyond performance efficiency improvements, KernelEvolve significantly mitigates the programmability barrier for new AI hardware by enabling automated kernel generation for in-house developed AI hardware.
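The abstract's "graph-based search with a selection policy, universal operators, a fitness function, and a termination rule" can be illustrated with a minimal sketch. All names here (`evolve_kernel`, `mutate`, the greedy selection policy, the toy fitness) are illustrative assumptions for exposition, not KernelEvolve's actual API or algorithm:

```python
# Minimal sketch of a graph-based kernel search loop, assuming a simple
# greedy best-first selection policy. In KernelEvolve the mutation operator
# would be an agentic rewrite of kernel source; here it is an abstract callback.
def evolve_kernel(seed_source, fitness, mutate, max_iters=100, target=1.0):
    """Evolve kernel-source candidates until the termination rule fires."""
    # The search graph: each node is a candidate kernel plus its fitness score.
    graph = [(seed_source, fitness(seed_source))]
    for _ in range(max_iters):
        # Selection policy (greedy best-first): pick the current best candidate.
        parent_src, _ = max(graph, key=lambda node: node[1])
        # Universal operator: transform the parent into a new candidate
        # (e.g. an LLM-proposed kernel rewrite; abstracted as `mutate`).
        child_src = mutate(parent_src)
        score = fitness(child_src)
        graph.append((child_src, score))
        # Termination rule: stop once a candidate meets the target fitness.
        if score >= target:
            break
    return max(graph, key=lambda node: node[1])

# Toy usage: "fitness" rewards longer strings, "mutate" appends a character,
# standing in for real kernel benchmarking and code transformation.
best_src, best_score = evolve_kernel(
    seed_source="k",
    fitness=lambda s: len(s) / 5,
    mutate=lambda s: s + "x",
    target=1.0,
)
print(best_src, best_score)  # prints: kxxxx 1.0
```

In the real system the fitness function would combine compilation success, numerical correctness checks, and measured speedup on the target accelerator, and the termination rule would also bound wall-clock budget.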
Source: arXiv: 2512.23236