📄 Abstract - Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information

Recent large language models achieve strong reasoning performance by generating detailed chain-of-thought traces, but this often leads to excessive token use and high inference latency. Existing efficiency approaches typically focus on model-centric interventions, such as reinforcement learning or supervised fine-tuning, to reduce verbosity. In contrast, we propose a training-free, input-centric approach. Inspired by cognitive psychology, we introduce Focused Chain-of-Thought (F-CoT), which separates information extraction from the reasoning process. F-CoT first organizes the essential information from a query into a concise, structured context and then guides the model to reason exclusively over this context. By preventing attention to irrelevant details, F-CoT naturally produces shorter reasoning paths. On arithmetic word problems, F-CoT reduces generated tokens by 2-3x while maintaining accuracy comparable to standard zero-shot CoT. These results highlight structured input as a simple yet effective lever for more efficient LLM reasoning.
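The two-stage scheme described above (extract essential information into a structured context, then reason only over that context) can be sketched as prompt construction. This is a minimal illustration, not the paper's implementation: the exact prompt wording and the `llm` callable are assumptions, since the abstract does not specify them.

```python
# Sketch of the two-stage F-CoT prompting scheme, assuming the exact
# prompt wording; `llm` is any text-completion callable (prompt -> str).

def extraction_prompt(query: str) -> str:
    """Stage 1: ask the model to distill the query into a concise,
    structured context (one fact per line)."""
    return (
        "Extract only the information needed to solve the problem.\n"
        "List each fact on its own line as 'name: value'.\n\n"
        f"Problem: {query}\nFacts:"
    )

def reasoning_prompt(structured_context: str, question: str) -> str:
    """Stage 2: ask the model to reason exclusively over the
    structured context, not the original verbose query."""
    return (
        "Using ONLY the facts below, reason step by step and answer.\n\n"
        f"Facts:\n{structured_context}\n\nQuestion: {question}\nAnswer:"
    )

def focused_cot(llm, query: str, question: str) -> str:
    """Run both stages: extract a structured context, then reason over it."""
    facts = llm(extraction_prompt(query))
    return llm(reasoning_prompt(facts, question))
```

Keeping the two stages as separate model calls is what keeps irrelevant details of the original query out of the reasoning step's context.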

Top-level tags: llm, natural language processing, model evaluation
Detailed tags: chain-of-thought reasoning, efficiency, structured prompting, inference latency, training-free optimization

Focused Chain-of-Thought: Efficient LLM Reasoning via Structured Input Information


1️⃣ One-sentence summary

This paper proposes Focused Chain-of-Thought (F-CoT), a training-free method based on structuring the input: it extracts the key information from a problem and organizes it into a concise context, guiding the large language model toward more focused and efficient reasoning that substantially reduces the amount of generated text while preserving accuracy.

