arXiv submission date: 2026-03-03
📄 Abstract - GPUTOK: GPU Accelerated Byte Level BPE Tokenization

As large language models move toward million-token context windows, CPU tokenizers become a major bottleneck because they process text one step at a time while powerful GPUs sit unused. We built a GPU-based byte-level BPE tokenizer that follows GPT-2's merge rules. It includes a basic BlockBPE-style kernel and a faster, optimized version that uses a cuCollections static map, CUB reductions, and a pybind11 interface for Python. On WikiText-103 sequences up to 131k tokens, the optimized GPU tokenizer produces the same tokens as a CPU version and, for the longest inputs, is about 1.7x faster than tiktoken and about 7.6x faster than the HuggingFace GPT-2 tokenizer. Nsight profiling shows that 70-80% of CUDA API time goes to memory allocation, so adding memory pooling should give the biggest speed boost next. Tests on generation tasks using WikiText-103 prompts show that our GPU tokenizer's outputs stay within about one percentage point of tiktoken and HuggingFace GPT-2 on similarity and overlap metrics, meaning it keeps output quality while making long-context inference more practical.
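
The abstract names the key kernel ingredients: a BlockBPE-style merge search, a cuCollections static map for pair-rank lookups, and CUB reductions. The sketch below is a minimal, hypothetical illustration of one such merge step, not the paper's implementation: each thread scores one adjacent token pair, and a `cub::BlockReduce` picks the lowest-ranked (highest-priority) pair to merge. The `lookup_rank` function, the `Candidate` struct, and all constants are placeholders invented for this example; in the paper, the pair-to-rank lookup is served by a cuCollections static map instead.

```cuda
// Hypothetical sketch of one byte-level BPE merge step on the GPU,
// in the spirit of the BlockBPE-style kernel described in the abstract.
#include <cstdio>
#include <cstdint>
#include <cub/cub.cuh>

constexpr int BLOCK_THREADS = 256;
constexpr uint32_t NO_RANK = 0xFFFFFFFFu;  // sentinel: pair is not mergeable

// (rank, position) of a candidate merge; lower rank = earlier GPT-2 merge rule.
struct Candidate {
    uint32_t rank;
    int pos;
};

struct MinRank {
    __device__ Candidate operator()(const Candidate& a, const Candidate& b) const {
        return (a.rank < b.rank) ? a : b;
    }
};

// Placeholder for the pair->rank lookup. The paper uses a cuCollections
// static map here; this tiny hard-coded table is for illustration only.
__device__ uint32_t lookup_rank(uint32_t left, uint32_t right) {
    if (left == 32 && right == 116) return 0;   // made-up ranks, not GPT-2's
    if (left == 101 && right == 114) return 1;
    return NO_RANK;
}

// One iteration: every thread scores one adjacent token pair, then a CUB
// block reduction finds the lowest-ranked pair in the block.
__global__ void best_pair_kernel(const uint32_t* tokens, int n, Candidate* out) {
    using BlockReduce = cub::BlockReduce<Candidate, BLOCK_THREADS>;
    __shared__ typename BlockReduce::TempStorage temp;

    int i = threadIdx.x;
    Candidate c{NO_RANK, -1};
    if (i + 1 < n) {
        c.rank = lookup_rank(tokens[i], tokens[i + 1]);
        c.pos = i;
    }
    Candidate best = BlockReduce(temp).Reduce(c, MinRank());
    if (threadIdx.x == 0) *out = best;  // the winning merge is applied afterwards
}

int main() {
    uint32_t h_tokens[] = {32, 116, 101, 114};  // raw bytes of " ter"
    uint32_t* d_tokens; Candidate* d_best;
    cudaMalloc(&d_tokens, sizeof(h_tokens));
    cudaMalloc(&d_best, sizeof(Candidate));
    cudaMemcpy(d_tokens, h_tokens, sizeof(h_tokens), cudaMemcpyHostToDevice);

    best_pair_kernel<<<1, BLOCK_THREADS>>>(d_tokens, 4, d_best);

    Candidate h_best;
    cudaMemcpy(&h_best, d_best, sizeof(Candidate), cudaMemcpyDeviceToHost);
    printf("merge pair at pos %d with rank %u\n", h_best.pos, h_best.rank);
    cudaFree(d_tokens); cudaFree(d_best);
    return 0;
}
```

In a full tokenizer, a kernel like this would run once per merge iteration per text block, with a companion step (or the host, via the pybind11 interface) applying the winning merge and compacting the token buffer; repeated device allocations in that loop are consistent with the profiling result above that memory allocation dominates CUDA API time.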

Top-level tags: llm systems model training
Detailed tags: tokenization gpu acceleration bpe performance optimization inference

GPUTOK: GPU Accelerated Byte Level BPE Tokenization


1️⃣ One-Sentence Summary

This paper builds a fast tokenizer that runs on the GPU, letting large language models that process very long text run faster: it is several times quicker than commonly used CPU tokenizers while keeping output quality essentially unchanged.

Source: arXiv: 2603.02597