A Note on Non-Composability of Layerwise Approximate Verification for Neural Inference
1️⃣ One-Sentence Summary
This paper gives a simple counterexample showing that even when every layer of a neural network is computed within its allowed error tolerance, an attacker can accumulate carefully crafted per-layer errors to drive the final output to an arbitrarily chosen wrong value, demonstrating that layerwise approximate verification is not sound end to end.
A natural and informal approach to verifiable (or zero-knowledge) ML inference over floating-point data is: "prove that each layer was computed correctly up to tolerance $\delta$; therefore the final output is a reasonable inference result." This short note gives a simple counterexample showing that this inference is false in general: for any neural network, one can construct a functionally equivalent network in which adversarially chosen approximation-magnitude errors in the individual layer computations suffice to steer the final output arbitrarily (within a prescribed bounded range).
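To make the failure mode concrete, here is a minimal numerical sketch. It is not the paper's construction; the scale-down/scale-up pair, the factor `c`, the tolerance `delta`, and `target_shift` are all illustrative assumptions. The idea: inserting a scale-down layer followed by its inverse leaves the network functionally equivalent, yet an error that stays within the per-layer tolerance, injected between the two layers, is amplified by $1/c$ at the output.

```python
import numpy as np

# Hypothetical parameters (not from the paper):
c = 1e-6      # down-scaling factor of the inserted layer pair
delta = 1e-3  # per-layer tolerance accepted by the verifier

def layer_down(x):
    return c * x   # inserted layer 1: scale down by c

def layer_up(x):
    return x / c   # inserted layer 2: scale back up (exact inverse)

x = np.array([1.0])    # some intermediate activation
target_shift = 500.0   # value the adversary wants to add to the output

# Honest evaluation: the pair composes to the identity.
honest = layer_up(layer_down(x))

# Adversarial evaluation: inject an error of magnitude <= delta after
# layer 1. The output of each layer is within delta of a correctly
# computed layer output, so layerwise approximate verification accepts.
err = np.clip(target_shift * c, -delta, delta)   # |err| = 5e-4 <= delta
dishonest = layer_up(layer_down(x) + err)

print(honest)     # [1.]
print(dishonest)  # [501.]  -- output steered by target_shift
```

Under these assumptions, the reachable output shift is bounded by $\delta / c$, which corresponds to the "prescribed bounded range" mentioned in the abstract: shrinking $c$ widens the adversary's range while every individual layer still verifies within $\delta$.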
Source: arXiv: 2602.15756