Abstract - Knob: A Physics-Inspired Gating Interface for Interpretable and Controllable Neural Dynamics
Existing neural network calibration methods often treat calibration as a static, post-hoc optimization task, neglecting the dynamic, temporal nature of real-world inference. They also offer no intuitive interface through which human operators can adjust model behavior under shifting conditions. In this work, we propose Knob, a framework that connects deep learning with classical control theory by mapping neural gating dynamics onto a second-order mechanical system. By establishing correspondences between physical parameters -- damping ratio ($\zeta$) and natural frequency ($\omega_n$) -- and neural gating, we create a tunable "safety valve". The core mechanism is a logit-level convex fusion that functions as an input-adaptive temperature scaling, reducing model confidence when model branches produce conflicting predictions. Furthermore, by imposing second-order dynamics (Knob-ODE), we enable \textit{dual-mode} inference: standard i.i.d. processing for static tasks, and state-preserving processing for continuous streams. The framework allows operators to tune "stability" and "sensitivity" through familiar physical analogues. This paper presents an exploratory architectural interface; we focus on demonstrating the concept and validating its control-theoretic properties rather than claiming state-of-the-art calibration performance. Experiments on CIFAR-10-C validate the calibration mechanism and show that, in Continuous Mode, the gate responses exhibit standard second-order control signatures (step settling and low-pass attenuation), paving the way for predictable human-in-the-loop tuning.
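As a minimal sketch of the two mechanisms the abstract names (not the authors' implementation; the function names, the two-branch setup, and the explicit-Euler discretization are our assumptions), the logit-level convex fusion and the second-order gate dynamics $\ddot{g} + 2\zeta\omega_n \dot{g} + \omega_n^2 g = \omega_n^2 u$ could look like:

```python
import numpy as np

def convex_logit_fusion(z_a, z_b, g):
    """Convex blend of two branches' logits with gate g in [0, 1].
    When the branches disagree, the blended logits are flatter, so the
    softmax is less confident -- an input-adaptive temperature effect."""
    return g * z_a + (1.0 - g) * z_b

def knob_ode_step(g, v, u, zeta, omega_n, dt):
    """One explicit-Euler step of the second-order gate dynamics
    g'' + 2*zeta*omega_n*g' + omega_n^2*g = omega_n^2*u,
    where u is the raw (instantaneous) gate target."""
    a = omega_n**2 * (u - g) - 2.0 * zeta * omega_n * v  # acceleration
    v = v + dt * a   # gate velocity
    g = g + dt * v   # filtered gate value
    return g, v

# Step response: the filtered gate settles toward the target u = 1,
# with overshoot/settling time governed by zeta and omega_n.
g, v = 0.0, 0.0
for _ in range(2000):
    g, v = knob_ode_step(g, v, u=1.0, zeta=0.7, omega_n=5.0, dt=0.01)
```

Under this reading, "Static Mode" would reset `(g, v)` per sample, while "Continuous Mode" carries the state across a stream, giving the low-pass, second-order response the paper validates.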
Knob: A Physics-Inspired Gating Interface for Interpretable and Controllable Neural Dynamics
1️⃣ One-Sentence Summary
This paper proposes Knob, a novel framework that treats neural network gating as an analogue of a tunable spring-damper mechanical system. This gives researchers an intuitive physical knob for dynamically adjusting an AI model's stability and responsiveness on continuous tasks, improving the model's controllability and interpretability.