Seeing Fast and Slow: Learning the Flow of Time in Videos
1️⃣ One-Sentence Summary
This paper proposes a self-supervised learning method that lets AI perceive changes in video playback speed the way humans do, then uses this capability to automatically curate a large-scale slow-motion video dataset, enabling video generation with controllable playback speed and the transformation of blurry, low-frame-rate videos into sharp slow-motion footage.
How can we tell whether a video has been sped up or slowed down? How can we generate videos at different speeds? Although videos have been central to modern computer vision research, little attention has been paid to perceiving and controlling the passage of time. In this paper, we study time as a learnable visual concept and develop models for reasoning about and manipulating the flow of time in videos. We first exploit the multimodal cues and temporal structure naturally present in videos to learn, in a self-supervised manner, to detect speed changes and estimate playback speed. We then show that these learned temporal reasoning models enable us to curate the largest slow-motion video dataset to date from noisy in-the-wild sources. Such slow-motion footage, typically filmed by high-speed cameras, contains substantially richer temporal detail than standard videos. Using this data, we further develop models capable of temporal control, including speed-conditioned video generation, which produces motion at a specified playback speed, and temporal super-resolution, which transforms low-FPS, blurry videos into high-FPS sequences with fine-grained temporal details. Our findings highlight time as a manipulable, perceptual dimension in video learning, opening doors to temporally controllable video generation, temporal forensics detection, and potentially richer world models that understand how events unfold over time.
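The abstract describes learning to estimate playback speed in a self-supervised manner, without labels. A common way to construct such a pretext task (the paper's exact recipe is not given here, so this is a hypothetical sketch) is to subsample frames from raw video at several rates and train a classifier to recover which rate was used — the speed label comes for free from the sampling itself:

```python
import numpy as np

def make_speed_clip(video, speed, clip_len=8, start=0):
    """Simulate playing `video` at `speed`x by taking every `speed`-th frame.
    video: array of shape (T, H, W, C); returns (clip_len, H, W, C)."""
    idx = start + np.arange(clip_len) * speed
    return video[idx]

def make_pretext_batch(video, speeds=(1, 2, 4, 8), clip_len=8, rng=None):
    """Build one training example per speed; the label is just the index
    of the sampling rate, so no human annotation is needed."""
    if rng is None:
        rng = np.random.default_rng(0)
    clips, labels = [], []
    for label, s in enumerate(speeds):
        # Pick a random window that still fits clip_len frames at stride s.
        max_start = video.shape[0] - clip_len * s
        start = int(rng.integers(0, max_start))
        clips.append(make_speed_clip(video, s, clip_len, start))
        labels.append(label)
    return np.stack(clips), np.array(labels)

# Toy video: 100 frames of 4x4 RGB.
video = np.zeros((100, 4, 4, 3), dtype=np.float32)
clips, labels = make_pretext_batch(video)
print(clips.shape)   # (4, 8, 4, 4, 3)
print(labels)        # [0 1 2 3]
```

A video backbone trained to predict `labels` from `clips` must learn how motion magnitude relates to frame rate, which is the kind of temporal reasoning the paper then reuses to mine slow-motion footage from in-the-wild sources.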
Source: arXiv: 2604.21931