arXiv submission date: 2025-12-24
📄 Abstract - C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling

We present C2LLM (Contrastive Code Large Language Models), a family of code embedding models in 0.5B and 7B sizes. Building upon Qwen-2.5-Coder backbones, C2LLM adopts a Pooling by Multihead Attention (PMA) module for generating sequence embeddings from token embeddings, effectively 1) utilizing the LLM's causal representations acquired during pretraining, 2) aggregating information from all tokens in the sequence, breaking the information bottleneck of EOS-based sequence embeddings, and 3) supporting flexible adaptation of the embedding dimension, serving as an alternative to MRL. Trained on three million publicly available samples, C2LLM models set new records on MTEB-Code among models of similar size, with C2LLM-7B ranking 1st on the overall leaderboard.
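
To make the pooling idea concrete, below is a minimal PyTorch sketch of PMA-style cross-attention pooling: a learnable query attends over all of the backbone's token states, so the sequence embedding is not bottlenecked by a single EOS hidden state, and the final projection layer can be sized (or truncated) to different embedding dimensions. Module names, head counts, and dimensions are illustrative assumptions, not the released C2LLM implementation.

```python
import torch
import torch.nn as nn


class AttentionPooling(nn.Module):
    """Sketch of a PMA-style pooling head over LLM token states.

    A single learnable query cross-attends over every token embedding,
    so the pooled vector can draw on the whole sequence rather than
    only the last (EOS) hidden state. Hyperparameters are hypothetical.
    """

    def __init__(self, hidden_dim: int, embed_dim: int, num_heads: int = 8):
        super().__init__()
        # One learnable "seed" query; PMA in general allows k seeds.
        self.query = nn.Parameter(torch.randn(1, 1, hidden_dim))
        # hidden_dim must be divisible by num_heads.
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # Output projection; choosing embed_dim here (or truncating the
        # output) is how a pooling head can offer flexible embedding
        # dimensions, the role the abstract contrasts with MRL.
        self.proj = nn.Linear(hidden_dim, embed_dim)

    def forward(self, token_states: torch.Tensor,
                padding_mask: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq_len, hidden_dim) from the backbone
        # padding_mask: (batch, seq_len), True where a position is padding
        q = self.query.expand(token_states.size(0), -1, -1)
        pooled, _ = self.attn(q, token_states, token_states,
                              key_padding_mask=padding_mask)
        emb = self.proj(pooled.squeeze(1))
        # L2-normalize for cosine-similarity retrieval.
        return nn.functional.normalize(emb, dim=-1)


# Usage with made-up sizes (batch=2, seq_len=16, hidden=896 -> embed=512):
pool = AttentionPooling(hidden_dim=896, embed_dim=512)
states = torch.randn(2, 16, 896)
mask = torch.zeros(2, 16, dtype=torch.bool)
print(pool(states, mask).shape)  # torch.Size([2, 512])
```

In a contrastive training setup, such a head would sit on top of the frozen or fine-tuned backbone, and query/passage embeddings produced this way would be compared with cosine similarity; those training details follow the abstract's description only at a high level.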

Top-level tags: llm, natural language processing, model training
Detailed tags: code retrieval, embedding models, contrastive learning, cross-attention pooling, benchmark

C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling


1️⃣ One-Sentence Summary

This paper introduces C2LLM, a new family of code embedding models that uses a novel attention-pooling method to aggregate information from the entire code sequence, achieving the best code-retrieval performance among models of similar size.

Source: arXiv:2512.21332