arXiv submission date: 2026-01-14
📄 Abstract - Geometric Stability: The Missing Axis of Representations

Analysis of learned representations has a blind spot: it focuses on *similarity*, measuring how closely embeddings align with external references, but similarity reveals only what is represented, not whether that structure is robust. We introduce *geometric stability*, a distinct dimension that quantifies how reliably representational geometry holds under perturbation, and present *Shesha*, a framework for measuring it. Across 2,463 configurations in seven domains, we show that stability and similarity are empirically uncorrelated ($\rho \approx 0.01$) and mechanistically distinct: similarity metrics collapse after removing the top principal components, while stability retains sensitivity to fine-grained manifold structure. This distinction yields actionable insights: for safety monitoring, stability acts as a functional geometric canary, detecting structural drift nearly 2$\times$ more sensitively than CKA while filtering out the non-functional noise that triggers false alarms in rigid distance metrics; for controllability, supervised stability predicts linear steerability ($\rho = 0.89$–$0.96$); for model selection, stability dissociates from transferability, revealing a geometric tax that transfer optimization incurs. Beyond machine learning, stability predicts CRISPR perturbation coherence and neural-behavioral coupling. By quantifying *how reliably* systems maintain structure, geometric stability provides a necessary complement to similarity for auditing representations across biological and computational systems.
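The abstract contrasts similarity metrics such as CKA with perturbation-based stability. As a rough illustration of that distinction — not the paper's Shesha framework, whose details are not given here — the sketch below computes linear CKA alongside a toy stability score: the correlation between pairwise distances before and after Gaussian input noise. The function names, the noise model, and the trial count are all assumptions for illustration.

```python
import numpy as np

def pairwise(X):
    # Flattened upper triangle of the Euclidean distance matrix.
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)
    return d[iu]

def linear_cka(X, Y):
    # Linear CKA between two representations (samples x features):
    # a standard *similarity* measure against an external reference Y.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def geometric_stability(X, noise=0.05, trials=20, seed=0):
    # Toy *stability* score (assumed form, not Shesha): how well the
    # pairwise-distance geometry of X survives repeated Gaussian noise.
    rng = np.random.default_rng(seed)
    d0 = pairwise(X)
    scores = []
    for _ in range(trials):
        Xp = X + noise * rng.standard_normal(X.shape)
        scores.append(np.corrcoef(d0, pairwise(Xp))[0, 1])
    return float(np.mean(scores))
```

Note that linear CKA is invariant to orthogonal transforms of the features, so a rotated copy of a representation scores a similarity of 1 regardless of how fragile its geometry is under noise — the two axes measure different things, consistent with the decorrelation the abstract reports.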

Top-level tags: model evaluation · machine learning · theory
Detailed tags: representation analysis · robustness · geometric stability · model auditing · safety monitoring

Geometric Stability: The Missing Axis of Representations


1️⃣ One-sentence summary

This paper introduces the concept of geometric stability, which measures how robust a representation's structure is under perturbation, shows that it is empirically uncorrelated with traditional similarity metrics, and thereby offers a new perspective and practical tools for evaluating and improving representations in machine learning and biological systems.

Source: arXiv 2601.09173