arXiv submission date: 2026-02-17
📄 Abstract - Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry

Neural networks can accurately forecast complex dynamical systems, yet how they internally represent underlying latent geometry remains poorly understood. We study neural forecasters through the lens of representational alignment, introducing anchor-based, geometry-agnostic relative embeddings that remove rotational and scaling ambiguities in latent spaces. Applying this framework across seven canonical dynamical systems - ranging from periodic to chaotic - we reveal reproducible family-level structure: multilayer perceptrons align with other MLPs, recurrent networks with RNNs, while transformers and echo-state networks achieve strong forecasts despite weaker alignment. Alignment generally correlates with forecasting accuracy, yet high accuracy can coexist with low alignment. Relative geometry thus provides a simple, reproducible foundation for comparing how model families internalize and represent dynamical structure.
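The anchor-based relative embedding described above can be illustrated with a minimal sketch (not the paper's code; all function and variable names here are illustrative assumptions). Each latent vector is re-expressed as its cosine similarities to a fixed set of anchor latents, which makes the resulting coordinates invariant to rotations and uniform rescalings of the latent space:

```python
import numpy as np

def relative_embedding(Z, anchors):
    """Map latent vectors Z (n, d) to anchor-relative coordinates.

    Each row of the output holds the cosine similarities between one
    latent point and the anchor latents; cosine similarity is invariant
    to orthogonal rotations and uniform scalings of the latent space.
    """
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Zn @ An.T  # shape (n, n_anchors)

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 16))   # latents from one hypothetical model
anchors = Z[:8]                  # a shared set of anchor points

# Rotating and rescaling the latent space (a typical ambiguity between
# two trained models) leaves the relative embedding unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
Z_rot = 3.0 * (Z @ Q)
rel_a = relative_embedding(Z, anchors)
rel_b = relative_embedding(Z_rot, 3.0 * (anchors @ Q))
```

Because the rotation and scaling cancel inside the cosine, `rel_a` and `rel_b` agree, so representations from different models can be compared directly in this shared anchor-relative coordinate system.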

Top-level tags: machine learning theory, model evaluation
Detailed tags: neural forecasting, representational alignment, latent geometry, dynamical systems, model comparison

Relative Geometry of Neural Forecasters: Linking Accuracy and Alignment in Learned Latent Geometry


1️⃣ One-sentence summary

Using a new geometric analysis method, this paper finds that different neural network models exhibit clear family-level similarity in their internal representational structure when forecasting complex dynamical systems, but that high forecasting accuracy is not always directly tied to strong alignment of those internal structures.

Source: arXiv: 2602.15676