arXiv submission date: 2026-04-13
📄 Abstract - Robust Reasoning and Learning with Brain-Inspired Representations under Hardware-Induced Nonlinearities

Traditional machine learning depends on high-precision arithmetic and near-ideal hardware assumptions, which are increasingly challenged by variability in aggressively scaled semiconductor devices. Compute-in-memory (CIM) architectures alleviate data-movement bottlenecks and improve energy efficiency, yet they introduce nonlinear distortions and reliability concerns. We address these issues with a hardware-aware optimization framework based on Hyperdimensional Computing (HDC) that systematically compensates for non-ideal similarity computations in CIM. Our approach formulates encoding as an optimization problem, minimizing the Frobenius norm between an ideal kernel and its hardware-constrained counterpart, and employs a joint optimization strategy for end-to-end calibration of hypervector representations. Experimental results demonstrate that our method, applied to QuantHD, achieves 84% accuracy under severe hardware-induced perturbations, a 48% increase over naive QuantHD under the same conditions. Our optimization is also vital for graph-based HDC, which relies on precise variable binding for interpretable reasoning: our framework preserves the accuracy of RelHD on the Cora dataset, achieving a 5.4× accuracy improvement over naive RelHD under nonlinear conditions. By preserving HDC's robustness and symbolic properties, our solution enables scalable, energy-efficient intelligent systems capable of classification and reasoning on emerging CIM hardware.
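The core encoding objective described in the abstract — minimizing the Frobenius norm between an ideal similarity kernel and its hardware-distorted counterpart — can be sketched as follows. This is a minimal illustration, not the paper's method: the saturating nonlinearity `hw_sim` is a hypothetical stand-in for the hardware distortion, and a generic L-BFGS-B optimizer substitutes for the paper's joint calibration strategy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, D = 8, 4, 32  # samples, input dim, hypervector dim (toy sizes)
X = rng.standard_normal((n, d))

# Ideal similarity kernel in the input space.
K_ideal = X @ X.T

def hw_sim(H):
    # Hypothetical CIM nonlinearity: dot products saturate (tanh-shaped),
    # standing in for the non-ideal similarity computation.
    return np.tanh(0.1 * (H @ H.T)) / 0.1

def objective(h_flat):
    # Squared Frobenius norm between the ideal kernel and the
    # hardware-distorted kernel of the learned encodings H.
    H = h_flat.reshape(n, D)
    return np.linalg.norm(K_ideal - hw_sim(H), ord="fro") ** 2

H0 = rng.standard_normal((n, D)) * 0.1  # initial hypervector encodings
res = minimize(objective, H0.ravel(), method="L-BFGS-B")
print("initial loss:", objective(H0.ravel()))
print("final loss:  ", res.fun)
```

The optimized encodings compensate for the distortion so that hardware-computed similarities track the ideal kernel; the real framework additionally handles quantized (e.g. QuantHD-style) representations and graph bindings.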

Top-level tags: systems machine learning theory
Detailed tags: hyperdimensional computing compute-in-memory hardware-aware optimization robustness energy efficiency

Robust Reasoning and Learning with Brain-Inspired Representations under Hardware-Induced Nonlinearities


1️⃣ One-sentence summary

This paper proposes a hardware-aware optimization framework that adapts hyperdimensional computing to compensate for nonlinear distortions in compute-in-memory hardware, enabling accurate classification and interpretable reasoning on more energy-efficient emerging hardware.

Source: arXiv:2604.12079