arXiv submission date: 2026-03-23
📄 Abstract - Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation

Low Rank Adaptation (LoRA) is the de facto fine-tuning strategy to generate personalized images from pre-trained diffusion models. Choosing a good rank is extremely critical, since it trades off performance and memory consumption, but today the decision is often left to the community's consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component is combinatorial, so we opt for practical shortcuts such as fixing the same rank for all components. In this paper, we take a first step to overcome this challenge. Inspired by variational methods that learn an adaptive width of neural networks, we let the ranks of each layer freely adapt during fine-tuning on a subject. We achieve it by imposing an ordering of importance on the rank's positions, effectively encouraging the creation of higher ranks when strictly needed. Qualitatively and quantitatively, our approach, LoRA², achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T across 29 subjects while requiring much less memory and lower rank than high rank LoRA versions. Code: this https URL.
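The "ordering of importance on the rank's positions" can be illustrated with a minimal sketch: gate each rank-1 component of the LoRA update with a cumulative product of sigmoids, so a later rank position can only be active if all earlier ones are. The function names, gate parameterization, and dimensions below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ordered_gates(logits):
    """Cumulative product of per-position sigmoids enforces
    g[0] >= g[1] >= ... >= g[r-1]: a later rank position is
    'on' only if every earlier one is (assumed parameterization)."""
    return np.cumprod(sigmoid(logits))

def lora_delta(A, B, logits):
    """Gated low-rank update B @ diag(g) @ A; each gate scales
    one rank-1 component of the adapter."""
    g = ordered_gates(logits)
    return (B * g) @ A  # broadcasting applies g to B's columns

rng = np.random.default_rng(0)
d_out, d_in, r_max = 8, 6, 4          # illustrative layer dims / rank budget
A = rng.standard_normal((r_max, d_in))
B = rng.standard_normal((d_out, r_max))
logits = np.array([4.0, 4.0, -4.0, -4.0])  # first two rank positions active

g = ordered_gates(logits)
delta = lora_delta(A, B, logits)
# Near-zero gates mark rank positions that can be pruned with little
# change to delta, so each layer's effective rank adapts to the subject.
eff_rank = int((g > 0.5).sum())  # → 2 with the logits above
```

With monotone gates, pruning trailing near-zero positions gives a smaller adapter per layer, which is the memory saving the abstract refers to.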

Top-level tags: model training, computer vision, AIGC
Detailed tags: LoRA, personalized image generation, adaptive fine-tuning, diffusion models, parameter efficiency

Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation


1️⃣ One-sentence summary

This paper proposes a new method called LoRA² that, while fine-tuning an AI model to generate personalized images, automatically assigns each layer an appropriate level of complexity (its "rank"), preserving generation quality while significantly reducing memory consumption and computational cost.

Source: arXiv:2603.21884