arXiv submission date: 2026-05-13
📄 Abstract - Hierarchical Transformer Preconditioning for Interactive Physics Simulation

Neural preconditioners for real-time physics simulation offer promising data-driven priors, but they often fail to capture long-range couplings efficiently because they inherit local message passing or sparse-operator access patterns. We introduce the Hierarchical Transformer Preconditioner, a neural preconditioner anchored to a weak-admissibility H-matrix partition. The partition provides a multiscale structural prior (dense diagonal leaves plus coarsening off-diagonal tiles) that enables full-graph approximate-inverse computation with O(N) scaling at fixed block sizes. The network models the inverse through low-rank far-field factors and uses highway connections (axial buffers plus a global summary token) to propagate context across transformer depth. At each PCG iteration, preconditioner application reduces to batched dense GEMMs with regular memory access. The key training contribution is a cosine-Hutchinson probe objective that learns the action of MA on convergence-critical spectral subspaces, optimizing angular alignment of MAz with z rather than forcing eigenvalue clusters to a prescribed location. This removes unnecessary spectral-placement constraints from SAI-style objectives and improves conditioning on irregular spectra. Because both inference and apply are dense, dependency-free tensor programs, the full solve loop is captured as a single CUDA Graph. On stiff multiphase Poisson systems (up to 100:1 density contrast, N = 1,024-16,384), the solver runs from ~143 to ~21 fps. At N = 8,192, it reaches 17.9 ms/frame, with 2.2x speedup over GPU Jacobi, ~28x over GPU IC/DILU (AMGX multicolor_dilu), and 2.7x over neural SPAI retrained per scale on the same benchmark.
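The cosine-Hutchinson probe objective described above can be sketched as follows. This is a minimal illustration, not the paper's code: the function name `cosine_hutchinson_loss`, the use of Rademacher probes, and the callable `apply_MA` standing in for the learned preconditioner composed with the system operator are all assumptions. The point it demonstrates is the abstract's claim that the loss rewards angular alignment of MAz with z (so a perfect inverse gives zero loss) without pinning eigenvalues to any prescribed location.

```python
import numpy as np

def cosine_hutchinson_loss(apply_MA, n, num_probes=16, seed=0):
    """Monte-Carlo estimate of the angular misalignment of M A z with z.

    apply_MA : callable returning the preconditioned operator action (M A) z
               (hypothetical stand-in for the learned preconditioner).
    Loss is 1 - mean cos(M A z, z) over random Rademacher probes: it is
    minimized when M A acts like the identity direction-wise, without
    forcing eigenvalues of M A toward any prescribed cluster.
    """
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(num_probes):
        z = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
        w = apply_MA(z)
        cos = (w @ z) / (np.linalg.norm(w) * np.linalg.norm(z) + 1e-12)
        losses.append(1.0 - cos)
    return float(np.mean(losses))

# Sanity check on a tiny SPD system: with M = A^{-1}, M A = I,
# so every probe is perfectly aligned and the loss vanishes.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
M = np.linalg.inv(A)
loss = cosine_hutchinson_loss(lambda z: M @ (A @ z), n=2)
```

In an actual training loop, `apply_MA` would be the differentiable network apply followed by a sparse matrix-vector product, and the scalar loss would be backpropagated through it; the numpy version here only shows the probe-and-align structure of the objective.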

Top-level tags: machine learning systems physics simulation
Detailed tags: neural preconditioner transformer multigrid pcg cuda graph

Hierarchical Transformer Preconditioning for Interactive Physics Simulation


1️⃣ One-sentence summary

This paper proposes a novel neural-network preconditioner that combines a hierarchical-matrix structure with a Transformer architecture, capturing long-range couplings in physics simulation while preserving computational efficiency, and achieving real-time simulation 2x to 28x faster than conventional GPU solvers on complex physical systems.

Source: arXiv:2605.13343