AuthorMix: Modular Authorship Style Transfer via Layer-wise Adapter Mixing
1️⃣ One-sentence summary
This paper proposes AuthorMix, a lightweight, modular method that trains small adapters for individual author styles and learns how to combine them layer by layer. With only a handful of texts from a target author, it can efficiently and accurately transfer that author's writing style onto other text while better preserving the original meaning.
The task of authorship style transfer involves rewriting text in the style of a target author while preserving the meaning of the original text. Existing style transfer methods train a single model on large corpora to model all target styles at once; this high-cost approach offers limited flexibility for target-specific adaptation and often sacrifices meaning preservation for style transfer. In this paper, we propose AuthorMix: a lightweight, modular, and interpretable style transfer framework. We train individual, style-specific LoRA adapters on a small set of high-resource authors; a specialized model for each new target is then obtained rapidly via learned, layer-wise adapter mixing, using only a handful of target-style training examples. AuthorMix outperforms existing SoTA style-transfer baselines -- as well as GPT-5.1 -- for low-resource targets, achieving the highest overall score and substantially improving meaning preservation.
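The layer-wise adapter mixing described in the abstract can be sketched as follows. This is an illustrative toy, not the paper's implementation: the dimensions, the tanh nonlinearity, and the softmax-normalized per-layer mixing logits (`mix_logits`) are all assumptions. The core idea it demonstrates is that each layer adds a learned convex combination of frozen, per-author LoRA deltas to the frozen base transformation, so a new target author only needs the small `mix_logits` table to be trained.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_layers, n_adapters = 16, 4, 3, 2  # hidden dim, LoRA rank, layers, source-author adapters

# Frozen base weight per layer, and one frozen LoRA pair (A, B) per (layer, source author).
W = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(n_layers)]
loras = [
    [(rng.normal(size=(d, r)) / np.sqrt(d), rng.normal(size=(r, d)) / np.sqrt(r))
     for _ in range(n_adapters)]
    for _ in range(n_layers)
]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, mix_logits):
    """Run x through the stack; mix_logits[l] holds this layer's adapter-mixing logits.

    Per layer: h <- tanh(h W_l + sum_k alpha_lk * h A_lk B_lk), where alpha_l = softmax(mix_logits[l]).
    Only mix_logits (n_layers x n_adapters values) would be trained for a new target author.
    """
    h = x
    for l in range(n_layers):
        alpha = softmax(mix_logits[l])  # per-layer mixing weights over source-author adapters
        delta = sum(alpha[k] * (h @ A @ B) for k, (A, B) in enumerate(loras[l]))
        h = np.tanh(h @ W[l] + delta)
    return h

x = rng.normal(size=(d,))
mix_logits = np.zeros((n_layers, n_adapters))  # uniform mixing, i.e. before any target-specific training
y = forward(x, mix_logits)
```

The appeal of this parameterization is its size: for a new low-resource author, only the `n_layers * n_adapters` mixing logits need fitting, which is why a handful of target examples can suffice.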
Source: arXiv: 2603.23069