Beyond a Single Extractor: Re-thinking HTML-to-Text Extraction for LLM Pretraining
1️⃣ One-sentence summary
This paper finds that when building LLM pretraining datasets, applying a single text-extraction method to all webpages wastes a large amount of useful content, whereas combining multiple extractors substantially increases data volume and improves model performance on structured tasks such as tables and code.
One of the first pre-processing steps for constructing web-scale LLM pretraining datasets involves extracting text from HTML. Despite the immense diversity of web content, existing open-source datasets predominantly apply a single fixed extractor to all webpages. In this work, we investigate whether this practice leads to suboptimal coverage and utilization of Internet data. We first show that while different extractors may lead to similar model performance on standard language understanding tasks, the pages surviving a fixed filtering pipeline can differ substantially. This suggests a simple intervention: by taking a Union over different extractors, we can increase the token yield of DCLM-Baseline by up to 71% while maintaining benchmark performance. We further show that for structured content such as tables and code blocks, extractor choice can significantly impact downstream task performance, with differences of up to 10 percentage points (p.p.) on WikiTQ and 3 p.p. on HumanEval.
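The "Union over different extractors" intervention can be sketched as follows: a page is kept if at least one extractor's output survives the quality filter, so pages discarded under one extractor can be recovered by another. This is a minimal illustration only; `extract_a`, `extract_b`, and `passes_filter` are toy stand-ins (assumptions), not the paper's actual extractors or the DCLM filtering pipeline.

```python
import re

def extract_a(html: str) -> str:
    """Toy extractor A: strip all tags, keeping every text node."""
    return re.sub(r"<[^>]+>", " ", html)

def extract_b(html: str) -> str:
    """Toy extractor B: keep only <p> contents (drops non-paragraph text)."""
    return " ".join(re.findall(r"<p>(.*?)</p>", html, flags=re.S))

def passes_filter(text: str, min_words: int = 3) -> bool:
    """Stand-in for a fixed quality-filtering pipeline."""
    return len(text.split()) >= min_words

def union_extract(pages: list[str]) -> list[str]:
    """Keep each page if ANY extractor's output survives filtering."""
    kept = []
    for html in pages:
        for extractor in (extract_a, extract_b):
            text = extractor(html).strip()
            if passes_filter(text):
                kept.append(text)
                break  # one surviving version per page is enough
    return kept
```

With a page whose text lives outside `<p>` tags, extractor B alone yields nothing and the page is lost, but the union still recovers it via extractor A, which is how the union raises token yield without relaxing the filter itself.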
Source: arXiv:2602.19548