arXiv submission date: 2026-03-19
📄 Abstract - Secure Linear Alignment of Large Language Models

Language models increasingly appear to learn similar representations, despite differences in training objectives, architectures, and data modalities. This emerging compatibility between independently trained models introduces new opportunities for cross-model alignment to downstream objectives. Moreover, it unlocks new potential application domains, such as settings where security, privacy, or competitive constraints prohibit direct data or model sharing. In this work, we propose a privacy-preserving framework that exploits representational convergence to enable cross-silo inference between independent language models. The framework learns an affine transformation over a shared public dataset and applies homomorphic encryption to protect client queries during inference. By encrypting only the linear alignment and classification operations, the method achieves sub-second inference latency while maintaining strong security guarantees. We support this framework with an empirical investigation into representational convergence, in which we learn linear transformations between the final hidden states of independent models. We evaluate these cross-model mappings on embedding classification and out-of-distribution detection, observing minimal performance degradation across model pairs. Additionally, we show for the first time that linear alignment sometimes enables text generation across independently trained models.
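The abstract's core mechanism is learning an affine transformation between the final hidden states of two independent models over a shared public dataset. A minimal sketch of that fitting step, using synthetic stand-ins for the paired embeddings (the dimensions, data, and `align` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_src, d_tgt = 512, 64, 48

# Paired embeddings of the same public texts from two independent models.
# Here the "target" embeddings are generated by a hidden affine map so the
# fit can be checked; in practice both sides come from real model forward passes.
H_src = rng.normal(size=(n, d_src))          # model A final hidden states
A_true = rng.normal(size=(d_src, d_tgt))     # unknown ground-truth map
b_true = rng.normal(size=d_tgt)
H_tgt = H_src @ A_true + b_true              # model B final hidden states

# Solve min_{W,b} ||[H_src, 1][W; b] - H_tgt||_F via least squares.
X = np.hstack([H_src, np.ones((n, 1))])
sol, *_ = np.linalg.lstsq(X, H_tgt, rcond=None)
W, b = sol[:-1], sol[-1]

def align(h):
    """Map a source-model embedding into the target model's space."""
    return h @ W + b

residual = np.linalg.norm(align(H_src) - H_tgt) / np.linalg.norm(H_tgt)
```

Because the alignment is a single affine map, the encrypted inference path only needs homomorphic matrix-vector products, which is what keeps the reported latency sub-second.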

Top-level tags: llm systems model training
Detailed tags: privacy-preserving model alignment homomorphic encryption representation learning cross-model inference

Secure Linear Alignment of Large Language Models


1️⃣ One-sentence summary

This paper proposes a privacy-preserving framework that exploits the tendency of independently trained large language models to converge on similar internal representations: using a simple linear transformation plus homomorphic encryption, independent models can securely collaborate on inference and text generation without sharing sensitive data or the models themselves.

Source: arXiv:2603.18908