Knowledge Model Prompting Increases LLM Performance on Planning Tasks
1️⃣ One-Sentence Summary
This paper proposes a prompting method based on the Task-Method-Knowledge framework that effectively guides large language models toward structured reasoning and task decomposition, substantially improving their performance on complex symbolic planning tasks.
Large Language Models (LLMs) can struggle with reasoning and planning tasks. Many prompting techniques have been developed to assist LLM reasoning, notably Chain-of-Thought (CoT); however, these techniques have also come under scrutiny as LLMs' ability to reason at all has been called into question. Borrowing from cognitive and educational science, this paper investigates whether the Task-Method-Knowledge (TMK) framework can improve LLM reasoning capabilities beyond its previously demonstrated success in educational applications. The TMK framework's ability to capture causal, teleological, and hierarchical reasoning structures, combined with its explicit task-decomposition mechanisms, makes it well suited to addressing language-model reasoning deficiencies. Unlike other hierarchical frameworks such as HTN and BDI, TMK explicitly represents not just what to do and how to do it, but also why actions are taken. The study evaluates TMK on the PlanBench benchmark, focusing on the Blocksworld domain as a test of reasoning and planning capabilities, and examines whether TMK-structured prompting helps language models decompose complex planning problems into manageable sub-tasks. The results also highlight a significant performance inversion in reasoning models: TMK prompting enables the reasoning model to reach up to 97.3% accuracy on opaque, symbolic tasks (the Random versions of Blocksworld in PlanBench) where it previously failed (31.5%), suggesting the potential to bridge the gap between semantic approximation and symbolic manipulation. Our findings suggest that, in the context of these experiments, TMK functions not merely as context but as a mechanism that steers reasoning models away from their default linguistic modes and toward formal, code-execution pathways.
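To make "TMK-structured prompting" concrete, the sketch below shows one hypothetical way a Task-Method-Knowledge model for Blocksworld might be encoded and rendered into a prompt. The paper's actual prompt format is not given in the abstract, so the field names, method names, and wording here are illustrative assumptions, not the authors' encoding.

```python
# Illustrative sketch only: field names and prompt wording are assumptions,
# not the paper's actual TMK encoding.

# A TMK entry pairs a Task (what to achieve and why) with Methods
# (how: ordered decompositions into sub-tasks) and Knowledge
# (domain facts: action preconditions and effects).
tmk_model = {
    "task": {
        "name": "StackBlocks",
        "goal": "achieve the target tower configuration",              # what
        "purpose": "each move clears or places a block needed by the goal",  # why
    },
    "methods": [
        {
            "name": "UnstackObstructions",
            "applies_when": "a goal-relevant block is buried",
            "sub_tasks": ["pick up the blocking block", "put it down on the table"],
        },
        {
            "name": "BuildTower",
            "applies_when": "all goal-relevant blocks are clear",
            "sub_tasks": ["pick up the next block", "stack it on its goal support"],
        },
    ],
    "knowledge": [
        "pick-up(x) requires clear(x) and an empty hand; effect: holding(x)",
        "stack(x, y) requires holding(x) and clear(y); effect: on(x, y)",
    ],
}


def tmk_prompt(instance: str) -> str:
    """Render the TMK model plus a Blocksworld instance into a single prompt string."""
    lines = [
        f"Task: {tmk_model['task']['name']} -- {tmk_model['task']['goal']}",
        f"Why: {tmk_model['task']['purpose']}",
        "Methods:",
    ]
    for m in tmk_model["methods"]:
        lines.append(f"  - {m['name']} (when {m['applies_when']}): " + " -> ".join(m["sub_tasks"]))
    lines.append("Knowledge:")
    lines.extend(f"  - {k}" for k in tmk_model["knowledge"])
    lines.append(f"Problem instance: {instance}")
    lines.append("Decompose the problem using the methods above, then output a plan.")
    return "\n".join(lines)


# Example usage with a toy instance description.
print(tmk_prompt("(:init (on-table A) (on B A) (clear B)) (:goal (on A B))"))
```

The key design point this is meant to illustrate is that the prompt carries all three layers (what, how, why) plus domain knowledge explicitly, rather than relying on the model to infer the decomposition on its own.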
Source: arXiv: 2602.03900