arXiv submission date: 2026-02-10
📄 Abstract - UniARM: Towards a Unified Autoregressive Reward Model for Multi-Objective Test-Time Alignment

Multi-objective alignment aims to align LLM responses with multiple human preference objectives. Among existing methods, guiding the generation of frozen LLMs through autoregressive reward models (ARMs) is a low-cost way to accomplish multi-objective test-time alignment. However, these methods typically rely on independent parameters for each preference objective: either they train ARMs independently across preference dimensions, which neglects interactions among preference features, or they train a single ARM with separate feature-extraction modules for each preference, which can cause feature entanglement. Both strategies can result in misalignment between generated outputs and user preferences. To address this limitation, we propose Preference-Modulated & Shared Low-Rank Adaptation (MoSLoRA) for ARM training, which first extracts shared features via a preference-agnostic module and then applies affine transformations to those shared features via a preference-modulation module conditioned on mixed preference vectors. This design mitigates feature entanglement and enables precise control over preference trade-offs during inference. Building on this, we introduce the Unified Autoregressive Reward Model (UniARM), a novel framework for multi-objective test-time alignment. UniARM jointly models all preference dimensions in a single parameter space, eliminating the need for independent parameters for each preference objective. UniARM also scales to larger LLMs, enhancing its practical usability.
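The core mechanism described above — a shared, preference-agnostic low-rank adapter whose output is then affinely rescaled and shifted conditioned on a mixed preference vector — can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the paper's implementation; all names (`modulated_adapter`, `W_gamma`, `W_beta`) and the FiLM-style scale/shift parameterization are assumptions for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: hidden size, LoRA rank, number of preference objectives.
d_model, rank, n_prefs = 16, 4, 2

# Shared (preference-agnostic) low-rank adapter: delta = x @ A @ B.
A = rng.normal(scale=0.1, size=(d_model, rank))
B = rng.normal(scale=0.1, size=(rank, d_model))

# Hypothetical preference-modulation module: maps a mixed preference
# vector w (nonnegative, summing to 1) to a per-dimension scale and shift.
W_gamma = rng.normal(scale=0.1, size=(n_prefs, d_model))
W_beta = rng.normal(scale=0.1, size=(n_prefs, d_model))

def modulated_adapter(x, w):
    """Shared low-rank update, then a preference-conditioned affine map."""
    shared = x @ A @ B            # shared features, independent of w
    gamma = 1.0 + w @ W_gamma     # scale, near identity at initialization
    beta = w @ W_beta             # shift
    return gamma * shared + beta  # affine modulation of the shared features

x = rng.normal(size=(1, d_model))   # one token's hidden state
w_safety = np.array([1.0, 0.0])     # all weight on objective 1 (e.g. safety)
w_mixed = np.array([0.5, 0.5])      # equal trade-off between both objectives

out_a = modulated_adapter(x, w_safety)
out_b = modulated_adapter(x, w_mixed)
print(out_a.shape)  # (1, 16)
```

Because the low-rank path is shared, all preference dimensions live in one parameter space; only the cheap affine modulation depends on the preference vector, so different trade-offs (`w_safety` vs. `w_mixed`) produce different features without separate per-objective adapters.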

Top-level tags: llm · model training · model evaluation
Detailed tags: autoregressive reward model · multi-objective alignment · low-rank adaptation · test-time alignment · preference modulation

UniARM: Towards a Unified Autoregressive Reward Model for Multi-Objective Test-Time Alignment


1️⃣ One-sentence summary

This paper proposes a new framework called UniARM, which uses a single unified model to jointly optimize a large language model's outputs for multiple objectives (such as safety and helpfulness). It addresses the mutual interference between objectives and the imprecise control of earlier methods, so that generated outputs can more accurately balance and satisfy a user's multiple preferences.

Source: arXiv:2602.09538