Text Style Transfer with Parameter-efficient LLM Finetuning and Round-trip Translation
1️⃣ One-sentence Summary
This paper proposes a new method that automatically generates training data via round-trip translation and then parameter-efficiently fine-tunes a large language model, addressing the lack of parallel training data in text style transfer and outperforming zero-shot prompting and few-shot learning across multiple domains.
This paper proposes a novel method for Text Style Transfer (TST) based on parameter-efficient fine-tuning of Large Language Models (LLMs). To address the scarcity of parallel corpora that map between styles, the study employs round-trip translation to synthesize such parallel datasets from monolingual corpora. This process produces 'neutralized' text stripped of stylistic attributes, giving the model a shared input style at both training and inference time. Experimental results show that the method consistently outperforms zero-shot prompting and few-shot in-context learning (ICL), as measured by BLEU and style-accuracy scores, across the four investigated domains. Furthermore, integrating retrieval-augmented generation (RAG) for terminology and name knowledge enhances robustness and stylistic consistency.
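The data-synthesis step can be sketched as follows. This is a minimal illustration rather than the paper's exact pipeline: it assumes off-the-shelf MarianMT models from the Hugging Face transformers library as the forward and backward translators, with French as a hypothetical pivot language. Round-tripping a styled sentence tends to strip style markers, and the resulting (neutral, styled) pairs become synthetic parallel training data for fine-tuning.

```python
from transformers import MarianMTModel, MarianTokenizer

def translate(texts, model_name):
    """Translate a batch of sentences with a pretrained MarianMT model."""
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    outputs = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in outputs]

# Styled monolingual corpus (a single illustrative sentence here).
styled = ["Hark! What light through yonder window breaks?"]

# Round trip: English -> French -> English. The back-translated output
# usually loses stylistic attributes, yielding a "neutralized" paraphrase.
pivot = translate(styled, "Helsinki-NLP/opus-mt-en-fr")
neutral = translate(pivot, "Helsinki-NLP/opus-mt-fr-en")

# Synthetic parallel pairs: an LLM is then fine-tuned (e.g. with a
# parameter-efficient method such as LoRA) to map the neutral side
# back to the original styled side.
pairs = list(zip(neutral, styled))
print(pairs)
```

At inference time, an arbitrary input sentence can be neutralized the same way before being passed to the fine-tuned model, so the model always sees the shared neutral input style it was trained on.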
Source: arXiv: 2602.15013