arXiv submission date: 2026-02-17
📄 Abstract - Neural Scaling Laws for Boosted Jet Tagging

The success of Large Language Models (LLMs) has established that scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in modern machine learning. While machine learning has long been an integral component of High Energy Physics (HEP) data analysis workflows, the compute used to train state-of-the-art HEP models remains orders of magnitude below that of industry foundation models. With scaling laws only beginning to be studied in the field, we investigate neural scaling laws for boosted jet classification using the public JetClass dataset. We derive compute-optimal scaling laws and identify an effective performance limit that can be consistently approached through increased compute. We study how data repetition, common in HEP where simulation is expensive, modifies the scaling, yielding a quantifiable effective dataset-size gain. We then study how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity, demonstrating that increased compute reliably drives performance toward an asymptotic limit, and that more expressive, lower-level features can raise the performance limit and improve results at fixed dataset size.
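To illustrate the kind of fit the abstract describes, the sketch below fits a saturating power law, loss(C) = L_inf + a · C^(-b), to a handful of (compute, loss) points. The functional form, the example numbers, and the starting values are illustrative assumptions for demonstration only; the abstract states only that performance approaches an asymptotic limit as compute grows, not this specific parameterization.

```python
# Minimal sketch (not the paper's code): fit a saturating power law
#     loss(C) = L_inf + a * C**(-b)
# to training runs at different total compute C. All numbers below are
# hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(compute, loss_inf, a, b):
    """Loss decays as a power law in compute toward an asymptote loss_inf."""
    # Normalize compute so the fit stays numerically well-behaved.
    return loss_inf + a * (compute / 1e17) ** (-b)

# Hypothetical runs: total training compute (FLOPs) and validation loss.
compute = np.array([1e17, 1e18, 1e19, 1e20, 1e21])
loss = np.array([0.620, 0.550, 0.510, 0.490, 0.485])

params, _ = curve_fit(
    scaling_law, compute, loss,
    p0=[0.45, 0.15, 0.3],   # rough initial guess for (loss_inf, a, b)
    bounds=(0.0, np.inf),   # keep all three parameters non-negative
)
loss_inf, a, b = params
print(f"asymptotic loss L_inf = {loss_inf:.3f}")
print(f"scaling exponent b    = {b:.3f}")
```

In a compute-optimal study, fits of this kind are repeated across model and dataset sizes to trace the performance frontier; the fitted asymptote plays the role of the effective performance limit referred to in the abstract.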

Top tags: machine learning, model training, data
Detailed tags: scaling laws, jet tagging, high energy physics, compute optimal, data efficiency

Neural Scaling Laws for Boosted Jet Tagging


1️⃣ One-sentence summary

This paper studies how performance on boosted jet classification in high energy physics improves as compute, dataset size, and the choice of input features are scaled. It finds that additional compute reliably pushes performance toward an asymptotic limit, and that lower-level, more information-rich features can raise that limit.

Source: arXiv:2602.15781