arXiv submission date: 2026-03-04
📄 Abstract - LoRA-MME: Multi-Model Ensemble of LoRA-Tuned Encoders for Code Comment Classification

Code comment classification is a critical task for automated software documentation and analysis. In the context of the NLBSE'26 Tool Competition, we present LoRA-MME, a Multi-Model Ensemble architecture utilizing Parameter-Efficient Fine-Tuning (PEFT). Our approach addresses the multi-label classification challenge across Java, Python, and Pharo by combining the strengths of four distinct transformer encoders: UniXcoder, CodeBERT, GraphCodeBERT, and CodeBERTa. By independently fine-tuning these models using Low-Rank Adaptation (LoRA) and aggregating their predictions via a learned weighted ensemble strategy, we maximize classification performance without the memory overhead of full model fine-tuning. Our tool achieved an F1 Weighted score of 0.7906 and a Macro F1 of 0.6867 on the test set. However, the computational cost of the ensemble resulted in a final submission score of 41.20%, highlighting the trade-off between semantic accuracy and inference efficiency.
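The learned weighted ensemble described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the probabilities, weights, and the 0.5 threshold are hypothetical stand-ins for the outputs of the four LoRA-tuned encoders and the weights learned during training.

```python
import numpy as np

def weighted_ensemble(prob_stacks, weights, threshold=0.5):
    """Aggregate per-model multi-label probabilities with learned weights.

    prob_stacks: (n_models, n_samples, n_labels) sigmoid outputs
    weights: (n_models,) ensemble weights; normalized to sum to 1
    Returns binary multi-label predictions of shape (n_samples, n_labels).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the combination stays a probability
    combined = np.tensordot(w, np.asarray(prob_stacks), axes=1)
    return (combined >= threshold).astype(int)

# Toy probabilities for 2 comments x 3 labels from four hypothetical encoders,
# ordered as in the paper: UniXcoder, CodeBERT, GraphCodeBERT, CodeBERTa.
probs = [
    [[0.9, 0.2, 0.6], [0.1, 0.8, 0.4]],
    [[0.8, 0.3, 0.4], [0.2, 0.7, 0.5]],
    [[0.7, 0.1, 0.7], [0.3, 0.9, 0.3]],
    [[0.6, 0.4, 0.5], [0.4, 0.6, 0.6]],
]
weights = [0.4, 0.2, 0.2, 0.2]  # hypothetical learned ensemble weights
print(weighted_ensemble(probs, weights))  # → [[1 0 1] [0 1 0]]
```

Because each comment can carry several labels at once, the ensemble thresholds each label independently rather than taking an argmax.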

Top-level tags: natural language processing, model training, model evaluation
Detailed tags: code comment classification, parameter-efficient fine-tuning, model ensemble, low-rank adaptation, multi-label classification

LoRA-MME: Multi-Model Ensemble of LoRA-Tuned Encoders for Code Comment Classification


1️⃣ One-Sentence Summary

This paper proposes LoRA-MME, a method that combines four different pretrained models, fine-tunes each efficiently with low-rank adaptation, and aggregates them through a learned weighted ensemble to improve code comment classification accuracy, while facing a trade-off between computational efficiency and performance.
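The low-rank adaptation behind the summary above can be sketched in a few lines of NumPy. This is a generic illustration of the LoRA technique, not the paper's code: the dimensions, rank, and scaling factor `alpha` are hypothetical, and in practice libraries such as Hugging Face PEFT apply this to selected attention projections of the encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 8  # hypothetical hidden dims and LoRA rank (r << d, k)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-init

def lora_forward(x, alpha=16):
    # Effective weight is W + (alpha / r) * B @ A; only A and B are trained,
    # so the trainable parameter count drops from d*k to r*(d + k).
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((1, k))
# With B zero-initialized, the adapted model starts identical to the base model.
assert np.allclose(lora_forward(x), x @ W.T)
print(A.size + B.size, "trainable params vs", W.size, "frozen")
```

Here the adapter trains 1,024 parameters instead of the 4,096 in the full weight matrix, which is why fine-tuning four separate encoders stays memory-affordable.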

Source: arXiv:2603.03959