📄 Abstract - PromptBridge: Cross-Model Prompt Transfer for Large Language Models

Large language models (LLMs) underpin applications in code generation, mathematical reasoning, and agent-based workflows. In practice, systems access LLMs via commercial APIs or open-source deployments, and the model landscape (e.g., GPT, Claude, Llama) evolves rapidly. This rapid evolution forces frequent model switches driven by capability, cost, deployment constraints, and privacy. Yet prompts are highly model-sensitive: reusing a prompt engineered for one model on another often yields substantially worse performance than a prompt optimized for the target model. We term this phenomenon Model Drifting. Through extensive empirical analysis across diverse LLM configurations, we show that model drifting is both common and severe. To address this challenge, we introduce PromptBridge, a training-free framework that preserves prompt effectiveness under model switches, enabling cross-model prompt transfer without costly per-task or per-model re-optimization. PromptBridge requires only a small set of alignment tasks for calibration. It first applies Model-Adaptive Reflective Prompt Evolution (MAP-RPE) to obtain task- and model-specific optimal prompts via iterative reflective refinement and quantitative evaluation. Using the resulting calibrated prompt pairs for the source and target models, PromptBridge learns a cross-model prompt mapping. At test time, i.e., for an unseen task, given a source-model prompt, this mapping directly produces an optimized prompt for the target model. Experiments in single-agent and multi-agent settings show that PromptBridge consistently improves downstream accuracy while reducing migration effort. The code will be available soon.
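The abstract outlines a three-step pipeline: calibrate prompt pairs on alignment tasks via MAP-RPE, learn a cross-model mapping from those pairs, then apply the mapping to unseen tasks at test time. The toy sketch below illustrates that control flow only; every function name and the string-rewrite "mapping" are placeholders, not the paper's actual method (which uses LLM-driven reflective refinement and a learned mapping).

```python
def refine(model: str, prompt: str, rounds: int = 3) -> str:
    # Stand-in for MAP-RPE: in the paper, each round an LLM reflects on
    # failures, rewrites the prompt, and a quantitative evaluation keeps
    # the best candidate. Here we simulate the result with a model tag.
    return f"<{model}> {prompt}"

def calibrate(src_model: str, tgt_model: str, alignment_tasks: list[str]):
    # Step 1: obtain task- and model-specific optimal prompts for both
    # the source and target models on a small set of alignment tasks.
    return [(refine(src_model, t), refine(tgt_model, t))
            for t in alignment_tasks]

def learn_mapping(pairs, src_model: str, tgt_model: str):
    # Step 2: learn a cross-model prompt mapping from the calibrated
    # pairs. This toy version is a fixed string rewrite; the framework
    # derives the mapping from the prompt pairs themselves.
    def mapping(source_prompt: str) -> str:
        return source_prompt.replace(f"<{src_model}>", f"<{tgt_model}>")
    return mapping

# Step 3 (test time): given a source-model prompt for an unseen task,
# the mapping directly produces a prompt for the target model.
pairs = calibrate("gpt", "llama", ["add 2+2", "sort a list"])
bridge = learn_mapping(pairs, "gpt", "llama")
print(bridge(refine("gpt", "reverse a string")))
# -> "<llama> reverse a string"
```

The key property this mirrors is that no per-task re-optimization happens at test time: the calibration cost is paid once per model pair.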

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: prompt engineering, model transfer, cross-model adaptation, prompt optimization, evaluation framework

PromptBridge: Cross-Model Prompt Transfer for Large Language Models


1️⃣ One-Sentence Summary

This paper proposes PromptBridge, a training-free framework that tackles the sharp drop in prompt effectiveness caused by differences between large language models. Using only a small set of calibration tasks, it learns a cross-model prompt mapping, so that prompts can be efficiently reused and transferred when switching models, significantly improving task performance on the new model while reducing migration cost.


📄 Open the original PDF