arXiv submission date: 2026-04-22
📄 Abstract - Efficient INT8 Single-Image Super-Resolution via Deployment-Aware Quantization and Teacher-Guided Training

Efficient single-image super-resolution (SISR) requires balancing reconstruction fidelity, model compactness, and robustness under low-bit deployment, which is especially challenging for x3 SR. We present a deployment-oriented quantized SISR framework based on an extract-refine-upsample design. The student performs most computation in the low-resolution space and uses a lightweight re-parameterizable backbone with PixelShuffle reconstruction, yielding a compact inference graph. To improve quality without significantly increasing complexity, we adopt a three-stage training pipeline: Stage 1 learns a basic reconstruction mapping with spatial supervision; Stage 2 refines fidelity using Charbonnier loss, DCT-domain supervision, and confidence-weighted output-level distillation from a Mamba-based teacher; and Stage 3 applies quantization-aware training directly on the fused deploy graph. We further use weight clipping and BatchNorm recalibration to improve quantization stability. On the MAI 2026 Quantized 4K Image Super-Resolution Challenge test set, our final AIO MAI submission achieves 29.79 dB PSNR and 0.8634 SSIM, obtaining a final score of 1.8 under the target mobile INT8 deployment setting. Ablation on Stage 3 optimization shows that teacher-guided supervision improves the dynamic INT8 TFLite reconstruction from 29.91 dB/0.853 to 30.0003 dB/0.856, while the fixed-shape deployable INT8 TFLite artifact attains 30.006 dB/0.857.
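The Stage 2 objective combines a Charbonnier fidelity term with confidence-weighted output-level distillation from the teacher. A minimal NumPy sketch of how such a combination could look — the confidence map, the distillation weight `w_distill`, and the omission of the DCT-domain term are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier (smooth L1) loss used as the fidelity term."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def stage2_loss(student, teacher, gt, w_distill=0.5):
    """Hypothetical Stage-2 combination: Charbonnier fidelity plus
    confidence-weighted output-level distillation. The per-pixel
    confidence down-weights teacher pixels that disagree with the
    ground truth (an assumed scheme); the paper's additional
    DCT-domain supervision term is omitted for brevity."""
    fidelity = charbonnier(student, gt)
    conf = np.exp(-np.abs(teacher - gt))            # assumed confidence map in (0, 1]
    distill = np.mean(conf * np.sqrt((student - teacher) ** 2 + 1e-6))
    return fidelity + w_distill * distill
```

The confidence weighting lets the student ignore teacher outputs where the teacher itself is unreliable, which matters when distilling from a large Mamba-based teacher into a compact INT8-bound student.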

Top-level tags: computer vision · model training · model evaluation
Detailed tags: super-resolution · quantization · knowledge distillation · int8 deployment · compact model

Efficient INT8 Single-Image Super-Resolution via Deployment-Aware Quantization and Teacher-Guided Training


1️⃣ One-Sentence Summary

This paper proposes a super-resolution framework optimized for INT8 deployment on mobile devices. By having the student model perform most of its computation in the low-resolution space, adopting a lightweight re-parameterizable network structure, and applying a three-stage training pipeline (basic reconstruction, teacher-guided distillation, and quantization-aware fine-tuning), it substantially reduces computation and model size while pushing 4K super-resolution accuracy to nearly 30 dB PSNR, demonstrating the feasibility of running high-resolution super-resolution on resource-constrained devices.
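The INT8 deployment path relies on weight clipping to stabilize quantization. A minimal NumPy sketch of symmetric per-tensor fake quantization with a percentile clipping threshold — the percentile value and per-tensor granularity are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def quantize_int8(w, clip_pct=99.9):
    """Symmetric per-tensor INT8 quantization with percentile weight
    clipping (an assumed scheme: clipping outlier weights shrinks the
    quantization scale and so reduces rounding error for the bulk of
    the distribution)."""
    t = np.percentile(np.abs(w), clip_pct)          # clipping threshold
    w_clipped = np.clip(w, -t, t)
    scale = t / 127.0 if t > 0 else 1.0
    q = np.round(w_clipped / scale).astype(np.int8)  # INT8 codes in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to float for simulated (fake) quantization."""
    return q.astype(np.float32) * scale
```

In quantization-aware training (Stage 3), a quantize-dequantize pair like this is inserted into the forward pass so the network learns weights that survive the INT8 rounding of the deployed TFLite graph.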

Source: arXiv 2604.20291