KD-OCT: Efficient Knowledge Distillation for Clinical-Grade Retinal OCT Classification
1️⃣ One-Sentence Summary
This paper proposes a new method called KD-OCT, which uses knowledge distillation to compress a large, accurate ophthalmic diagnostic model into a small, fast lightweight model. While preserving high accuracy, this allows the model to be deployed in clinics or on edge devices for real-time screening of eye diseases such as age-related macular degeneration.
Age-related macular degeneration (AMD) and choroidal neovascularization (CNV)-related conditions are leading causes of vision loss worldwide, with optical coherence tomography (OCT) serving as a cornerstone for early detection and management. However, deploying state-of-the-art deep learning models like ConvNeXtV2-Large in clinical settings is hindered by their computational demands. Therefore, it is desirable to develop efficient models that maintain high diagnostic performance while enabling real-time deployment. In this study, a novel knowledge distillation framework, termed KD-OCT, is proposed to compress a high-performance ConvNeXtV2-Large teacher model, enhanced with advanced augmentations, stochastic weight averaging, and focal loss, into a lightweight EfficientNet-B2 student for classifying normal, drusen, and CNV cases. KD-OCT employs real-time distillation with a combined loss balancing soft teacher knowledge transfer and hard ground-truth supervision. The effectiveness of the proposed method is evaluated on the Noor Eye Hospital (NEH) dataset using patient-level cross-validation. Experimental results demonstrate that KD-OCT outperforms comparable multi-scale or feature-fusion OCT classifiers in the efficiency-accuracy balance, achieving near-teacher performance with substantial reductions in model size and inference time. Despite the compression, the student model exceeds most existing frameworks, facilitating edge deployment for AMD screening. Code is available at this https URL.
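The abstract describes a combined distillation loss balancing soft teacher knowledge transfer against hard ground-truth supervision. A minimal sketch of such a loss (standard Hinton-style KD with temperature-scaled softmax; the `alpha` and `temperature` values here are illustrative assumptions, not taken from the paper) might look like:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw logits.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, label, alpha=0.5, temperature=4.0):
    """Combined KD loss: alpha * soft (teacher) term + (1 - alpha) * hard (label) term.

    Note: alpha and temperature are hypothetical defaults for illustration;
    the paper's actual weighting is not specified in this abstract.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as is conventional in KD,
    # so gradient magnitudes stay comparable across temperatures.
    kl = sum(pt * math.log(pt / ps)
             for pt, ps in zip(p_teacher, p_student) if pt > 0)
    soft_term = (temperature ** 2) * kl
    # Hard cross-entropy against the ground-truth class index.
    hard_term = -math.log(softmax(student_logits)[label])
    return alpha * soft_term + (1 - alpha) * hard_term
```

When the student matches the teacher exactly, the soft term vanishes (KL of identical distributions is zero), and the loss reduces to the weighted cross-entropy on the true label.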
Source: arXiv:2512.09069