
arXiv submission date: 2026-04-21
📄 Abstract - Attend what matters: Leveraging vision foundational models for breast cancer classification using mammograms

Vision Transformers $(\texttt{ViT})$ have become the architecture of choice for many computer vision tasks, yet their performance in computer-aided diagnostics remains limited. Focusing on breast cancer detection from mammograms, we identify two main causes for this shortfall. First, medical images are high-resolution with small abnormalities, leading to an excessive number of tokens and making it difficult for the softmax-based attention to localize and attend to relevant regions. Second, medical image classification is inherently fine-grained, with low inter-class and high intra-class variability, where standard cross-entropy training is insufficient. To overcome these challenges, we propose a framework with three key components: (1) Region of interest $(\texttt{RoI})$ based token reduction using an object detection model to guide attention; (2) contrastive learning between selected $\texttt{RoI}$ to enhance fine-grained discrimination through hard-negative based training; and (3) a $\texttt{DINOv2}$ pretrained $\texttt{ViT}$ that captures localization-aware, fine-grained features instead of global $\texttt{CLIP}$ representations. Experiments on public mammography datasets demonstrate that our method achieves superior performance over existing baselines, establishing its effectiveness and potential clinical utility for large-scale breast cancer screening. Our code is available for reproducibility here: this https URL
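The first component, RoI-based token reduction, can be illustrated with a small sketch: given detector boxes in pixel coordinates, keep only the ViT patch tokens whose patches overlap a detected region. This is a minimal illustration of the idea, not the paper's implementation; the function names and the overlap rule are assumptions.

```python
import numpy as np

def roi_token_mask(image_size, patch_size, boxes):
    """Boolean mask over ViT patch tokens (row-major grid): True where
    the patch overlaps any detected RoI box (x0, y0, x1, y1) in pixels.

    Hypothetical helper illustrating RoI-based token reduction; the
    paper's actual selection rule may differ.
    """
    h = w = image_size // patch_size
    mask = np.zeros((h, w), dtype=bool)
    for (x0, y0, x1, y1) in boxes:
        c0 = int(x0) // patch_size
        c1 = (int(np.ceil(x1)) - 1) // patch_size
        r0 = int(y0) // patch_size
        r1 = (int(np.ceil(y1)) - 1) // patch_size
        mask[r0:r1 + 1, c0:c1 + 1] = True
    return mask.reshape(-1)

def reduce_tokens(tokens, mask):
    """Keep only the (num_kept, dim) tokens whose patch lies in an RoI,
    shrinking the sequence the attention layers must process."""
    return tokens[mask]
```

For a 224x224 image with 14-pixel patches (a 16x16 grid of 256 tokens), a small lesion box selects only a handful of tokens, so attention is computed over RoIs rather than the full mammogram.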

Top-level tags: medical computer vision model training
Detailed tags: breast cancer mammogram vision transformer contrastive learning fine-grained classification

Focus: Attend what matters: Leveraging vision foundational models for breast cancer classification using mammograms


1️⃣ One-sentence summary

The paper proposes a framework that combines object detection, contrastive learning, and a self-supervised Vision Transformer (DINOv2). By discarding irrelevant image regions and strengthening the model's ability to discriminate between similar lesions, it improves the accuracy of breast cancer detection in mammograms.
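The contrastive component pulls same-class RoI embeddings together while pushing apart visually similar RoIs of the opposite class, which act as hard negatives. A didactic numpy sketch of a supervised contrastive objective of this kind (not the paper's exact loss; temperature and normalization choices are assumptions):

```python
import numpy as np

def supcon_loss(feats, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized RoI embeddings.

    For each anchor, positives are other RoIs with the same label;
    everything else (including near-identical RoIs of the other class,
    i.e. hard negatives) sits in the softmax denominator. Didactic
    sketch, not the paper's exact objective.
    """
    labels = np.asarray(labels)
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / tau
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # exclude self-similarity, then take a log-softmax over all others
    logits = np.where(eye, -np.inf, sim)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye
    has_pos = pos.any(axis=1)
    # mean log-probability of positives per anchor, averaged over anchors
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1)[has_pos] / pos.sum(axis=1)[has_pos]
    return -per_anchor.mean()
```

Minimizing this loss raises the similarity of same-class RoI pairs relative to the hardest (most similar) cross-class pairs, which is exactly the fine-grained discrimination the framework targets.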

Source: arXiv: 2604.19350