Towards Improved Sentence Representations using Token Graphs
1️⃣ One-sentence summary
This paper proposes GLOT, a lightweight method that builds a graph over the tokens of a sentence and aggregates them with a graph neural network, extracting higher-quality sentence vector representations from large language models while maintaining high accuracy and significantly reducing computational overhead.
Obtaining a single-vector representation from a Large Language Model's (LLM) token-level outputs is a critical step for nearly all sentence-level tasks. However, standard pooling methods such as mean or max aggregation treat tokens as an independent set, discarding the rich relational structure captured by the model's self-attention layers and leaving them susceptible to signal dilution. To address this, we introduce GLOT, a lightweight, structure-aware pooling module that reframes pooling as relational learning followed by aggregation. Operating on the outputs of a frozen LLM, GLOT first constructs a latent token-similarity graph, then refines token representations with a graph neural network, and finally aggregates them using a readout layer. Experimentally, our approach is remarkably robust and efficient: on a diagnostic stress test where 90% of tokens are random distractors, GLOT maintains over 97% accuracy while baseline methods collapse. Furthermore, it is competitive with state-of-the-art techniques on benchmarks such as GLUE and MTEB with 20x fewer trainable parameters, and it trains over 100x faster than parameter-efficient fine-tuning methods. Supported by a theoretical analysis of its expressive power, our work shows that learning over token graphs is a powerful paradigm for the efficient adaptation of frozen LLMs. Our code is published at this https URL.
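The three-stage pipeline the abstract describes (latent similarity graph → GNN refinement → readout) can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name `glot_pool`, the top-k cosine graph, the single residual message-passing step, and the mean readout are all assumptions about one plausible instantiation.

```python
import numpy as np

def glot_pool(token_embs, k=2, tau=0.1):
    """Hedged sketch of GLOT-style pooling (names and parameters are
    assumptions, not the paper's exact design):
      1) build a latent token-similarity graph from cosine similarities,
      2) refine tokens with one round of graph message passing,
      3) aggregate with a mean readout into a single sentence vector.
    """
    X = np.asarray(token_embs, dtype=float)               # (n_tokens, dim)
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    S = Xn @ Xn.T                                          # cosine similarity
    np.fill_diagonal(S, -np.inf)                           # exclude self-edges

    # Sparse latent graph: keep only each token's top-k neighbours,
    # with temperature-scaled, row-normalised edge weights.
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k]
    rows = np.arange(S.shape[0])[:, None]
    A[rows, idx] = np.exp(S[rows, idx] / tau)
    A = A / (A.sum(axis=1, keepdims=True) + 1e-8)

    H = X + A @ X            # one GNN-style update with a residual connection
    return H.mean(axis=0)    # mean readout -> sentence vector
```

Intuitively, the neighbour-aggregation step lets semantically related tokens reinforce one another before pooling, which is one way a method like this could resist the distractor-token dilution that plain mean pooling suffers from.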
Source: arXiv:2603.03389