arXiv submission date: 2026-03-05
📄 Abstract - A Benchmark Study of Neural Network Compression Methods for Hyperspectral Image Classification

Deep neural networks have achieved strong performance in image classification tasks due to their ability to learn complex patterns from high-dimensional data. However, their large computational and memory requirements often limit deployment on resource-constrained platforms such as remote sensing devices and edge systems. Network compression techniques have therefore been proposed to reduce model size and computational cost while maintaining predictive performance. In this study, we conduct a systematic evaluation of neural network compression methods for a remote sensing application, namely hyperspectral land cover classification. Specifically, we examine three widely used compression strategies for convolutional neural networks: pruning, quantization, and knowledge distillation. Experiments are conducted on two benchmark hyperspectral datasets, considering classification accuracy, memory consumption, and inference efficiency. Our results demonstrate that compressed models can significantly reduce model size and computational cost while maintaining competitive classification performance. These findings provide insights into the trade-offs between compression ratio, efficiency, and accuracy, and highlight the potential of compression techniques for enabling efficient deep learning deployment in remote sensing applications.
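The three compression strategies the abstract names can be illustrated with a minimal NumPy sketch. This is not code from the paper; the function names, the 50% sparsity level, the int8 scheme, and the distillation temperature are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16)).astype(np.float32)  # toy weight matrix

def prune_by_magnitude(w, sparsity=0.5):
    """Magnitude pruning: zero out the `sparsity` fraction of smallest-|w| weights."""
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w), axis=None)[k - 1] if k > 0 else -np.inf
    mask = np.abs(w) > thresh          # keep only weights above the cutoff
    return w * mask, mask

def quantize_int8(w):
    """Uniform symmetric 8-bit quantization with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def soft_targets(teacher_logits, T=4.0):
    """Knowledge distillation: temperature-softened teacher probabilities
    that a smaller student network is trained to match."""
    z = teacher_logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

Wp, mask = prune_by_magnitude(W, sparsity=0.5)   # sparse weights
Wq, scale = quantize_int8(W)                     # int8 weights + scale
Wdeq = Wq.astype(np.float32) * scale             # dequantized approximation
```

The trade-off the study measures shows up even here: `Wp` stores half as many nonzeros and `Wq` uses a quarter of the memory of `W`, at the cost of a small approximation error (bounded by half the quantization step).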

Top-level tags: computer vision, model training, machine learning
Detailed tags: neural network compression, hyperspectral image classification, pruning, quantization, knowledge distillation

A Benchmark Study of Neural Network Compression Methods for Hyperspectral Image Classification


1️⃣ One-sentence summary

This paper systematically evaluates three mainstream neural network compression techniques on hyperspectral image classification and finds that they can substantially shrink model size and improve computational efficiency while maintaining high classification accuracy, offering a practical route to deploying deep learning models on resource-constrained remote sensing devices.

Source: arXiv 2603.04720