arXiv submission date: 2026-02-08
📄 Abstract - Approximating Matrix Functions with Deep Neural Networks and Transformers

Transformers have revolutionized natural language processing, but their use for numerical computation has received less attention. We study the approximation of matrix functions, which extend scalar functions to matrix arguments, using neural networks including transformers. We focus on functions mapping square matrices to square matrices of the same dimension. Such matrix functions appear throughout scientific computing, e.g., the matrix exponential in continuous-time Markov chains and the matrix sign function in the stability analysis of dynamical systems. In this paper, we make two contributions. First, we prove bounds on the width and depth of ReLU networks needed to approximate the matrix exponential to arbitrary precision. Second, we show experimentally that a transformer encoder-decoder with suitable numerical encodings can approximate certain matrix functions to within 5% relative error with high probability. Our study reveals that the encoding scheme strongly affects performance, with different schemes working better for different functions.
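To make the approximation target concrete, here is a minimal sketch of the two reference matrix functions named in the abstract and the 5% relative-error criterion, using SciPy's `expm` and `signm`. The choice of Frobenius norm and the perturbed `predicted` matrix standing in for a network's output are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from scipy.linalg import expm, signm

# Reference matrix functions from scientific computing:
# the matrix exponential (continuous-time Markov chains) and
# the matrix sign function (stability analysis).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

exp_A = expm(A)    # matrix exponential e^A
sign_A = signm(A)  # matrix sign function sign(A)

def relative_error(approx, exact):
    """Relative error in the Frobenius norm (an assumed choice of norm)."""
    return np.linalg.norm(approx - exact) / np.linalg.norm(exact)

# A trained network's prediction would replace this perturbed stand-in.
predicted = exp_A + 0.01 * rng.standard_normal(exp_A.shape)
print(relative_error(predicted, exp_A) < 0.05)  # the paper's 5% criterion
```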

Top-level tags: machine learning theory, model training
Detailed tags: matrix functions, neural networks, transformers, function approximation, scientific computing

Approximating Matrix Functions with Deep Neural Networks and Transformers


1️⃣ One-Sentence Summary

This paper proves bounds on the width and depth of ReLU networks needed to approximate the matrix exponential, shows experimentally that transformer architectures can approximate matrix functions central to scientific computing, and reveals that the way the input data is encoded has a decisive effect on model performance.
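The summary does not spell out which encoding schemes the paper compares, so the sketch below illustrates just one plausible style: a hypothetical base-10 sign/mantissa/exponent tokenization of matrix entries into a sequence a transformer can consume. The function names and token format are illustrative assumptions, not the paper's encodings.

```python
import math
import numpy as np

def encode_float(x: float, digits: int = 3) -> list[str]:
    """Tokenize a float as [sign, mantissa digits..., exponent] tokens.

    A hypothetical base-10 scheme; the paper's actual encodings are
    not specified in this summary.
    """
    if x == 0.0:
        return ["+", *["0"] * digits, "E0"]
    sign = "+" if x > 0 else "-"
    exp = math.floor(math.log10(abs(x)))
    mant = round(abs(x) / 10.0 ** exp * 10 ** (digits - 1))
    if mant == 10 ** digits:  # rounding overflowed the mantissa, e.g. 9.999
        mant //= 10
        exp += 1
    return [sign, *str(mant).zfill(digits), f"E{exp}"]

def encode_matrix(A: np.ndarray) -> list[str]:
    """Flatten a matrix row by row into one token sequence."""
    return [tok for x in A.ravel() for tok in encode_float(float(x))]

# e.g. encode_float(-3.14159) -> ['-', '3', '1', '4', 'E0']
```

Different tokenizations of the same matrix change sequence length, vocabulary size, and how numerical precision is exposed to attention, which is one way an encoding choice could favor some matrix functions over others.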

Source: arXiv:2602.07800