arXiv submission date: 2026-02-14
📄 Abstract - Parameter-Efficient Fine-Tuning of DINOv2 for Large-Scale Font Classification

We present a font classification system capable of identifying 394 font families from rendered text images. Our approach fine-tunes a DINOv2 Vision Transformer using Low-Rank Adaptation (LoRA), achieving approximately 86% top-1 accuracy while training fewer than 1% of the model's 87.2M parameters. We introduce a synthetic dataset generation pipeline that renders Google Fonts at scale with diverse augmentations including randomized colors, alignment, line wrapping, and Gaussian noise, producing training images that generalize to real-world typographic samples. The model incorporates built-in preprocessing to ensure consistency between training and inference, and is deployed as a HuggingFace Inference Endpoint. We release the model, dataset, and full training pipeline as open-source resources.
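As a concrete illustration of the parameter-efficient setup described above, here is a minimal sketch that wires LoRA adapters into a DINOv2 backbone using the Hugging Face transformers and peft libraries. The checkpoint name, target modules, and LoRA hyperparameters (r, lora_alpha, dropout) are assumptions for illustration, not values confirmed by the paper.

```python
# Minimal sketch, assuming the transformers + peft libraries.
# Checkpoint, target modules, and LoRA hyperparameters are illustrative.
import torch
from transformers import Dinov2Model
from peft import LoraConfig, get_peft_model

NUM_FONTS = 394  # font families, per the abstract

# DINOv2 ViT-B backbone (~87M parameters).
backbone = Dinov2Model.from_pretrained("facebook/dinov2-base")

# Inject low-rank adapters into the attention projections; all other
# backbone weights stay frozen.
lora_config = LoraConfig(
    r=8,                                # low-rank dimension (assumed)
    lora_alpha=16,                      # scaling factor (assumed)
    target_modules=["query", "value"],  # attention projections to adapt
    lora_dropout=0.1,                   # regularization (assumed)
)
model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()      # reports well under 1% trainable

# Linear head over the CLS token, trained jointly with the adapters.
classifier = torch.nn.Linear(backbone.config.hidden_size, NUM_FONTS)

pixel_values = torch.randn(1, 3, 224, 224)  # a dummy preprocessed image
cls_token = model(pixel_values).last_hidden_state[:, 0]
logits = classifier(cls_token)              # shape: (1, NUM_FONTS)
```

Because only the low-rank adapter matrices and the classification head receive gradients, the trainable fraction stays well below 1% of the backbone's 87.2M parameters, consistent with the abstract's claim.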

Top-level tags: computer vision, model training, natural language processing
Detailed tags: vision transformer, parameter-efficient fine-tuning, font classification, low-rank adaptation, synthetic dataset

Parameter-Efficient Fine-Tuning of DINOv2 for Large-Scale Font Classification


1️⃣ One-Sentence Summary

This paper presents an efficient font classification system: using the lightweight LoRA fine-tuning technique to train fewer than 1% of the model's parameters, it reaches roughly 86% top-1 accuracy on a classification task spanning 394 font families, and the model, synthetic dataset, and full training pipeline are released as open source.
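The sketch below illustrates the kind of rendering-plus-augmentation step the synthetic dataset pipeline performs: randomized colors, alignment, line wrapping, and additive Gaussian noise, as listed in the abstract. Font paths, parameter ranges, and the naive wrapping heuristic are assumptions for illustration; the released pipeline may differ.

```python
# Minimal sketch of a synthetic font-rendering step, assuming PIL and NumPy.
# All parameter ranges below are illustrative, not the paper's exact values.
import random
import numpy as np
from PIL import Image, ImageDraw, ImageFont

def render_sample(text: str, font_path: str, size: int = 224) -> Image.Image:
    # Randomized background and foreground colors, per the abstract.
    bg = tuple(random.randint(0, 255) for _ in range(3))
    fg = tuple(random.randint(0, 255) for _ in range(3))
    img = Image.new("RGB", (size, size), bg)
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, random.randint(18, 48))

    # Naive line wrapping: break the text into fixed-width chunks.
    wrap = random.randint(12, 24)
    lines = [text[i:i + wrap] for i in range(0, len(text), wrap)]
    # Randomized alignment per sample.
    align = random.choice(["left", "center", "right"])
    draw.multiline_text((size // 8, size // 8), "\n".join(lines),
                        font=font, fill=fg, align=align)

    # Additive Gaussian noise, one of the listed augmentations.
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0.0, 8.0, arr.shape)
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Rendering each of the 394 Google Fonts many times under these randomized settings yields a labeled training set whose variation is meant to generalize to real-world typographic samples.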

Source: arXiv:2602.13889