arXiv submission date: 2026-04-06
📄 Abstract - Optimizing LLM Prompt Engineering with DSPy Based Declarative Learning

Large Language Models (LLMs) have shown strong performance across a wide range of natural language processing tasks; however, their effectiveness is highly dependent on prompt design, structure, and embedded reasoning signals. Conventional prompt engineering methods largely rely on heuristic trial-and-error processes, which limits scalability, reproducibility, and generalization across tasks. DSPy, a declarative framework for optimizing text-processing pipelines, offers an alternative approach by enabling automated, modular, and learnable prompt construction for LLM-based systems. This paper presents a systematic study of DSPy-based declarative learning for prompt optimization, with emphasis on prompt synthesis, correction, calibration, and adaptive reasoning control. We introduce a unified DSPy LLM architecture that combines symbolic planning, gradient-free optimization, and automated module rewriting to reduce hallucinations, improve factual grounding, and avoid unnecessary prompt complexity. Experimental evaluations conducted on reasoning tasks, retrieval-augmented generation, and multi-step chain-of-thought benchmarks demonstrate consistent gains in output reliability, efficiency, and generalization across models. The results show improvements of 30 to 45% in factual accuracy and a reduction of approximately 25% in hallucination rates. Finally, we outline key limitations and discuss future research directions for declarative prompt optimization frameworks.
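The "gradient-free optimization" the abstract refers to can be pictured as a search over candidate prompt templates, each scored on a small labeled development set. The sketch below is a minimal conceptual illustration under that assumption; the function names, templates, and toy model are hypothetical and do not reflect DSPy's actual API or the paper's implementation.

```python
# Conceptual sketch of gradient-free prompt optimization:
# search over candidate templates, scored against a tiny dev set.
# All names and data here are hypothetical illustrations.
import random


def build_prompt(template: str, question: str) -> str:
    """Instantiate a declarative template with a concrete input."""
    return template.format(question=question)


def score(template: str, dev_set: list[tuple[str, str]], model) -> float:
    """Fraction of dev examples the model answers exactly right."""
    hits = sum(model(build_prompt(template, q)) == a for q, a in dev_set)
    return hits / len(dev_set)


def optimize(candidates: list[str], dev_set, model,
             trials: int = 20, seed: int = 0) -> str:
    """Random search: sample templates, keep the best-scoring one."""
    rng = random.Random(seed)
    return max((rng.choice(candidates) for _ in range(trials)),
               key=lambda t: score(t, dev_set, model))


# Toy stand-in "model": only answers in the right format when the
# prompt explicitly constrains the output.
def toy_model(prompt: str) -> str:
    return "4" if "Answer with a number only" in prompt else "four"


dev = [("What is 2 + 2?", "4")]
templates = [
    "Q: {question}\nA:",
    "Q: {question}\nAnswer with a number only:",
]
best = optimize(templates, dev, toy_model)
```

Real systems replace random search with smarter gradient-free strategies (e.g. bootstrapped demonstrations), but the core loop (propose, score on held-out data, keep the best) is the same.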

Top-level tags: llm model training systems
Detailed tags: prompt optimization, declarative learning, dspy framework, automated prompting, hallucination reduction

Optimizing LLM Prompt Engineering with DSPy Based Declarative Learning


1️⃣ One-sentence summary

This paper leverages DSPy, a declarative framework, to systematically optimize prompt design for large language models through an automated, learnable, and modular approach, significantly improving the accuracy and reliability of model outputs and reducing hallucinations while avoiding the limitations of traditional manual trial-and-error methods.
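The "declarative" part of the summary means a prompt step is specified by what goes in and what comes out, with the concrete prompt text compiled automatically. The sketch below mimics that style in plain Python; it is inspired by DSPy's signature idea but is not DSPy's real API, and all names are hypothetical.

```python
# Hypothetical mimic of a declarative prompt signature: declare the
# instructions, inputs, and outputs, and let a render step compile
# them into a concrete prompt string automatically.
from dataclasses import dataclass


@dataclass
class Signature:
    instructions: str
    inputs: list[str]
    outputs: list[str]

    def render(self, **kwargs: str) -> str:
        """Compile the declaration into a concrete prompt string."""
        lines = [self.instructions]
        lines += [f"{name}: {kwargs[name]}" for name in self.inputs]
        lines += [f"{name}:" for name in self.outputs]
        return "\n".join(lines)


qa = Signature(
    instructions="Answer the question concisely.",
    inputs=["question"],
    outputs=["answer"],
)
prompt = qa.render(question="What is DSPy?")
```

Because the prompt text is generated rather than hand-written, an optimizer can rewrite the instructions or add demonstrations without touching downstream code, which is what makes the construction "learnable".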

Source: arXiv:2604.04869