arXiv submission date: 2026-03-03
📄 Abstract - SEALing the Gap: A Reference Framework for LLM Inference Carbon Estimation via Multi-Benchmark Driven Embodiment

Large Language Models are rapidly gaining traction in software engineering, yet their growing carbon footprint raises pressing sustainability concerns. While training emissions are substantial, inference emissions quickly surpass them due to the sheer volume of prompts processed. This shift underscores the urgent need for accurate, prompt-level carbon measurement during inference to enable informed, sustainability-focused decision-making. To address the limitations of existing approaches, in this paper we outline the guiding principles for a novel reference framework for LLM inference carbon estimation that can guide the design of future tools and provide a systematic foundation for advancing sustainability research in this domain. We also introduce SEAL, an early embodiment of these principles that leverages a multi-benchmark-driven approach for per-prompt carbon estimation. Its initial validation shows promising results, positioning SEAL as a foundation for standardized sustainability assessment across the LLM ecosystem.

Top-level tags: llm systems model evaluation
Detailed tags: carbon footprint, sustainability, inference efficiency, energy measurement, benchmarking

SEALing the Gap: A Reference Framework for LLM Inference Carbon Estimation via Multi-Benchmark Driven Embodiment


1️⃣ One-sentence summary

This paper outlines guiding principles for a reference framework for estimating the carbon emissions of LLM inference and introduces SEAL, an early embodiment of those principles that uses a multi-benchmark-driven approach to estimate the carbon footprint of each user prompt, providing a standardized foundation for sustainability assessment across the AI ecosystem.
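To make "per-prompt carbon estimation" concrete, here is a deliberately naive back-of-the-envelope sketch. It is NOT SEAL's multi-benchmark methodology (the paper's digest does not detail it); every constant below (joules per token, input-token weight, PUE, grid carbon intensity) is a placeholder assumption chosen for illustration only.

```python
# Illustrative per-prompt carbon estimate. All constants are placeholder
# assumptions, not values from the SEAL paper.

def estimate_prompt_carbon_g(
    input_tokens: int,
    output_tokens: int,
    energy_per_token_j: float = 2.0,           # assumed joules per generated token
    pue: float = 1.2,                          # assumed datacenter power usage effectiveness
    grid_intensity_g_per_kwh: float = 400.0,   # assumed grid carbon intensity (gCO2e/kWh)
) -> float:
    """Return an estimated carbon footprint in grams CO2e for one prompt."""
    # Output (generated) tokens dominate inference energy; count input
    # tokens at a reduced weight (assumed 10%).
    effective_tokens = output_tokens + 0.1 * input_tokens
    energy_kwh = effective_tokens * energy_per_token_j * pue / 3.6e6  # J -> kWh
    return energy_kwh * grid_intensity_g_per_kwh

# Example: a 500-token prompt producing a 200-token answer.
print(round(estimate_prompt_carbon_g(500, 200), 4))  # -> 0.0667
```

A benchmark-driven framework like SEAL would presumably replace the fixed `energy_per_token_j` assumption with measurements calibrated per model, hardware, and workload, which is exactly the gap such tools aim to close.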

Source: arXiv:2603.02949