arXiv submission date: 2026-04-21
📄 Abstract - Silicon Aware Neural Networks

Recent work in the machine learning literature has demonstrated that deep learning can train neural networks made of discrete logic gate functions to perform simple image classification tasks at very high speeds on CPU, GPU, and FPGA platforms. Because they are formed from discrete logic gates, these Differentiable Logic Gate Networks (DLGNs) lend themselves naturally to implementation in custom silicon. In this work we present a method to map DLGNs one-to-one onto a digital CMOS standard cell library by converting the trained model to a gate-level netlist. We also propose a novel loss function with which the DLGN can optimize the area, and indirectly the power consumption, of the resulting circuit by minimizing the expected area per neuron, based on the areas of the cells in the target standard cell library. Finally, we show for the first time an implementation of a DLGN as a silicon circuit in simulation, performing layout of a DLGN in the SkyWater 130nm process as a custom hard macro using a Cadence standard cell library and carrying out post-layout power analysis. We find that our custom macro can classify MNIST images with 97% accuracy 41.8 million times per second at a power consumption of 83.88 mW.
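The area-aware loss described above can be sketched as follows. In a DLGN, each neuron holds a learnable distribution over the 16 possible two-input Boolean gates, so its expected area is the probability-weighted sum of the corresponding cell areas. This is a minimal illustration, not the paper's implementation: the cell areas below are placeholder values, not the actual SkyWater 130nm standard-cell data.

```python
import numpy as np

# Illustrative areas (um^2) for the 16 two-input gate cells;
# placeholder values, NOT the real SkyWater 130nm cell areas.
CELL_AREA = np.linspace(1.0, 4.0, 16)

def expected_area_loss(gate_logits):
    """Mean expected standard-cell area per DLGN neuron.

    gate_logits: (num_neurons, 16) array of learnable logits, one
    distribution over the 16 candidate two-input gates per neuron.
    """
    z = gate_logits - gate_logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)  # softmax
    return float((probs @ CELL_AREA).mean())

# A uniform gate distribution gives the average cell area (2.5 here);
# concentrating probability on cheap gates lowers the loss.
uniform = np.zeros((8, 16))
print(expected_area_loss(uniform))  # 2.5
```

In training this term would be added to the classification loss with a weighting coefficient, so gradient descent trades accuracy against silicon area.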

Top-level tag: machine learning systems
Detailed tags: neural networks, digital hardware, silicon implementation, logic gates, power optimization

Silicon Aware Neural Networks


1️⃣ One-Sentence Summary

This paper presents a method for mapping Differentiable Logic Gate Networks directly onto standard digital chip cells, introduces a new loss function that optimizes for chip area, and demonstrates for the first time a simulated silicon implementation in a 130nm process that is accurate, low-power, and extremely fast, classifying MNIST images 41.8 million times per second.
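The one-to-one mapping to standard cells can be illustrated with a small sketch: after training, each neuron keeps only its argmax gate and is emitted as a single cell instance in a gate-level netlist. The gate-index-to-cell mapping and the port names below are illustrative assumptions, not the actual Cadence/SkyWater library identifiers used in the paper.

```python
# Partial, illustrative mapping from gate index to a standard-cell name;
# real DLGN flows cover all 16 two-input Boolean functions.
GATE_TO_CELL = {1: "and2", 6: "xor2", 7: "or2", 14: "nand2"}

def neuron_to_instance(idx, gate_id, in_a, in_b):
    """Emit one Verilog-style cell instance for a trained DLGN neuron."""
    cell = GATE_TO_CELL.get(gate_id, "unmapped_cell")
    return f"{cell} u{idx} (.A({in_a}), .B({in_b}), .Y(n{idx}));"

print(neuron_to_instance(0, 1, "n_in0", "n_in1"))
# and2 u0 (.A(n_in0), .B(n_in1), .Y(n0));
```

Because every neuron becomes exactly one cell, circuit area scales directly with neuron count, which is what makes the area-aware loss effective.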

Source: arXiv: 2604.19334