arXiv submission date: 2026-04-27
📄 Abstract - PEPS: Positional Encoding Projected Sampling -- Extended

Implicit neural representations (INRs) are increasingly used to map coordinates to signals, with applications spanning neural fields, texture compression, shape representation, and beyond. Most INR methods rely on high-dimensional projections of the input coordinates through encoders such as grids or positional encoding. However, positional encoding alone is often insufficient, and grids, as we show in this paper, require high resolution to learn effectively. In this paper, we demonstrate that positional encoding can serve not only as a high-dimensional embedding but can also be decomposed into a series of meaningful points. We propose Positional Encoding Projected Sampling (PEPS), in which the projection of the original coordinate at each frequency is treated as a point of interest. We characterize the motion of each point with respect to frequency and show that it follows a unique pattern. Finally, we use this unique motion as a basis decomposition for learned positional encoding with grids. We demonstrate, on three competitive applications (image representation, texture compression, and signed distance functions), that the proposed approach outperforms current state-of-the-art methods, often requiring 25% fewer parameters for equivalent reconstruction error or rendering quality.
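The core idea of decomposing positional encoding into per-frequency points can be sketched with standard Fourier positional encoding: at each frequency, the (cos, sin) pair of the projected coordinate is a point on the unit circle. This is a minimal illustrative sketch, not the paper's implementation; the function name and the power-of-two frequency schedule are assumptions.

```python
import numpy as np

def positional_encoding_points(x, num_freqs=8):
    """Fourier positional encoding of a scalar coordinate x, returned
    as one 2D point (cos, sin) per frequency instead of a flat vector.

    Row k is the projection of x at frequency 2^k. The paper treats
    each such projection as a point of interest whose motion across
    frequencies follows a structured pattern. (Illustrative sketch;
    the frequency schedule is an assumption, not the paper's choice.)
    """
    ks = np.arange(num_freqs)
    phases = (2.0 ** ks) * np.pi * x                # phase at each frequency
    # shape (num_freqs, 2): one unit-circle point per frequency
    return np.stack([np.cos(phases), np.sin(phases)], axis=-1)

pts = positional_encoding_points(0.3, num_freqs=4)
# Flattening the per-frequency points recovers the usual
# positional-encoding embedding vector.
embedding = pts.reshape(-1)   # shape (8,)
```

Each row lies on the unit circle (cos² + sin² = 1), which is what makes the per-frequency projections interpretable as moving points rather than just coordinates of an embedding vector.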

Top-level tags: machine learning, computer vision, systems
Detailed tags: implicit neural representations, positional encoding, neural fields, texture compression, grid sampling

PEPS: Positional Encoding Projected Sampling -- Extended


1️⃣ One-sentence summary

This paper proposes a new positional encoding method that treats the coordinate projection at each frequency as an independent point of interest and exploits its unique motion pattern for more efficient grid learning, matching the quality of existing methods on image representation, texture compression, and distance-field tasks with 25% fewer parameters.

Source: arXiv:2604.24167