Implicit Neural Representations: A Signal Processing Perspective
1️⃣ One-Sentence Summary
This paper takes a signal processing perspective on implicit neural representations, a family of methods that use neural networks to represent data such as images and audio as continuous functions, and systematically surveys their core principles, technical evolution, and practical value.
Implicit neural representations (INRs) mark a fundamental shift in signal modeling, moving from discrete sampled data to continuous functional representations. By parameterizing signals as neural networks, INRs provide a unified framework for representing images, audio, video, 3D geometry, and beyond as continuous functions of their coordinates. This functional viewpoint enables signal operations such as differentiation to be carried out analytically through automatic differentiation rather than through discrete approximations. In this article, we examine the evolution of INRs from a signal processing perspective, emphasizing spectral behavior, sampling theory, and multiscale representation. We trace the progression from standard coordinate-based networks, which exhibit a spectral bias toward low-frequency components, to more advanced designs that reshape the approximation space through specialized activations, including periodic, localized, and adaptive functions. We also discuss structured representations, such as hierarchical decompositions and hash-grid encodings, that improve spatial adaptivity and computational efficiency. We further highlight the utility of INRs across a broad range of applications, including inverse problems in medical and radar imaging, compression, and 3D scene representation. By interpreting INRs as learned signal models whose approximation spaces adapt to the underlying data, this article clarifies the field's core conceptual developments and outlines open challenges in theoretical stability, weight-space interpretability, and large-scale generalization.
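To make the functional viewpoint concrete, the sketch below evaluates a tiny coordinate-based network with a periodic (sine) activation, in the spirit of the SIREN-style designs the abstract mentions. All weights and the frequency scale `w0` are illustrative assumptions, not a trained model; the point is that the representation is a smooth function of the coordinate, so its derivative is available in closed form rather than by discrete approximation.

```python
import numpy as np

# Illustrative random weights for a one-hidden-layer sine network
# f(x) = W2 @ sin(w0 * W1 x + b1) + b2 (hypothetical, untrained).
rng = np.random.default_rng(0)
w0 = 30.0                      # frequency scale of the sine layer
W1 = rng.normal(size=(16, 1))  # scalar coordinate -> 16 hidden features
b1 = rng.normal(size=(16,))
W2 = rng.normal(size=(1, 16))  # hidden features -> signal value
b2 = rng.normal(size=(1,))

def inr(x):
    """Evaluate the network at a scalar coordinate x (continuous in x)."""
    h = np.sin(w0 * (W1 @ np.array([x])) + b1)
    return float(W2 @ h + b2)

def inr_grad(x):
    """Exact derivative df/dx, computed analytically from the chain rule."""
    h_prime = np.cos(w0 * (W1 @ np.array([x])) + b1) * (w0 * W1[:, 0])
    return float(W2 @ h_prime)

# The analytic derivative agrees with a central finite-difference estimate.
x, eps = 0.3, 1e-6
fd = (inr(x + eps) - inr(x - eps)) / (2 * eps)
print(abs(inr_grad(x) - fd) < 1e-3)
```

In a real INR the same closed-form differentiability is obtained automatically via autodiff frameworks, which is what lets signal operations like computing gradients or Laplacians be performed analytically on the learned representation.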
Source: arXiv: 2604.15047