arXiv submission date: 2026-02-17
📄 Abstract - Fine-Tuning LLMs to Generate Economical and Reliable Actions for the Power Grid

Public Safety Power Shutoffs (PSPS) force rapid topology changes that can render standard operating points infeasible, requiring operators to quickly identify corrective transmission switching actions that reduce load shedding while maintaining acceptable voltage behavior. We present a verifiable, multi-stage adaptation pipeline that fine-tunes an instruction-tuned large language model (LLM) to generate *open-only* corrective switching plans from compact PSPS scenario summaries under an explicit switching budget. First, supervised fine-tuning distills a DC-OPF MILP oracle into a constrained action grammar that enables reliable parsing and feasibility checks. Second, direct preference optimization refines the policy using AC-evaluated preference pairs ranked by a voltage-penalty metric, injecting voltage-awareness beyond DC imitation. Finally, best-of-*N* selection provides an inference-time improvement by choosing the best feasible candidate under the target metric. On IEEE 118-bus PSPS scenarios, fine-tuning substantially improves DC objective values versus zero-shot generation, reduces the AC power-flow failure rate from 50% to single digits, and improves voltage-penalty outcomes on the common-success set. Code and data-generation scripts are released to support reproducibility.
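The best-of-*N* stage described above can be sketched in a few lines: sample *N* candidate plans, keep only those that pass a feasibility check, and return the feasible one that minimizes the target metric. The sketch below is illustrative only; the generator, feasibility check, and voltage-penalty metric are stand-ins, not the paper's implementation.

```python
def best_of_n(generate, is_feasible, voltage_penalty, n=8):
    """Return the best feasible candidate out of n samples, or None if none pass."""
    feasible = [plan for plan in (generate() for _ in range(n)) if is_feasible(plan)]
    if not feasible:
        return None
    return min(feasible, key=voltage_penalty)

# Toy usage with stand-in components: each "plan" is a list of lines to open.
candidates = [["L12", "L45"], ["L12"], ["L45", "L98", "L3"], ["L98"]]
sampler = iter(candidates * 2)                 # deterministic stand-in for LLM sampling
gen = lambda: next(sampler)
budget = 2                                     # explicit switching budget from the scenario
feasible = lambda plan: len(plan) <= budget    # stand-in feasibility check
penalty = lambda plan: len(plan)               # stand-in voltage-penalty metric

best = best_of_n(gen, feasible, penalty, n=8)  # smallest feasible plan wins here
```

In the paper's setting, `is_feasible` would cover grammar validity and AC power-flow convergence, and `voltage_penalty` would be the AC-evaluated metric used to rank preference pairs.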

Top-level tags: llm systems model training
Detailed tags: power grid optimization supervised fine-tuning direct preference optimization corrective switching verifiable ai

Fine-Tuning LLMs to Generate Economical and Reliable Actions for the Power Grid


1️⃣ One-Sentence Summary

This paper proposes a verifiable fine-tuning approach that teaches a large language model to automatically generate switching plans for public-safety power-shutoff emergencies that are economical, reliable, and compliant with voltage-safety requirements, effectively reducing load-shedding losses and improving system stability.
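The "constrained action grammar" idea in the pipeline is what makes the LLM's output checkable before it ever reaches a power-flow solver: the model may only emit open-only actions in a fixed textual form, which can be parsed and validated against the scenario's switching budget. The grammar and names below are hypothetical illustrations, not the paper's exact format.

```python
import re

# Illustrative open-only action grammar: one "OPEN line_<id>" per line.
ACTION_RE = re.compile(r"^OPEN\s+line_(\d+)$")

def parse_plan(text, budget, valid_lines):
    """Parse a newline-separated plan; return opened line ids, or None if invalid."""
    opened = []
    for raw in text.strip().splitlines():
        m = ACTION_RE.match(raw.strip())
        if m is None:
            return None                      # not in the grammar (e.g. a CLOSE action)
        line_id = int(m.group(1))
        if line_id not in valid_lines or line_id in opened:
            return None                      # unknown or duplicated line
        opened.append(line_id)
    if len(opened) > budget:
        return None                          # exceeds the explicit switching budget
    return opened

plan = parse_plan("OPEN line_12\nOPEN line_45", budget=3, valid_lines={12, 45, 98})
```

Plans rejected here never need an AC power-flow evaluation, which is one way a restricted grammar supports the reliable parsing and feasibility checks the paper emphasizes.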

Source: arXiv 2602.15350