ZeroSyl: Simple Zero-Resource Syllable Tokenization for Spoken Language Modeling
1️⃣ One-Sentence Summary
This paper proposes ZeroSyl, a simple training-free method that extracts syllable boundaries and features directly from a pretrained speech model to build more efficient spoken language models, outperforming prior, more complex multi-stage approaches on a range of tasks.
Pure speech language models aim to learn language directly from raw audio without textual resources. A key challenge is that discrete tokens from self-supervised speech encoders result in excessively long sequences, motivating recent work on syllable-like units. However, methods like Sylber and SyllableLM rely on intricate multi-stage training pipelines. We propose ZeroSyl, a simple training-free method to extract syllable boundaries and embeddings directly from a frozen WavLM model. Using L2 norms of features in WavLM's intermediate layers, ZeroSyl achieves competitive syllable segmentation performance. The resulting segments are mean-pooled, discretized using K-means, and used to train a language model. ZeroSyl outperforms prior syllabic tokenizers across lexical, syntactic, and narrative benchmarks. Scaling experiments show that while finer-grained units are beneficial for lexical tasks, our discovered syllabic units exhibit better scaling behavior for syntactic modeling.
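The pipeline described above (per-frame L2 norms → boundary detection → mean-pooling into unit embeddings) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the local-minimum boundary rule, and the `min_gap` parameter are assumptions; the paper's exact segmentation criterion over WavLM layers may differ, and the subsequent K-means discretization step is omitted here.

```python
import numpy as np


def segment_by_l2_norm(features, min_gap=2):
    """Hypothetical sketch: place syllable-like boundaries at local minima
    of the per-frame L2 norm of encoder features (low-norm valleys between
    syllable nuclei). `features` is a (T, D) array of frame embeddings.
    Returns boundary indices including 0 and T.
    """
    norms = np.linalg.norm(features, axis=1)
    boundaries = [0]
    for t in range(1, len(norms) - 1):
        is_local_min = norms[t] < norms[t - 1] and norms[t] <= norms[t + 1]
        if is_local_min and t - boundaries[-1] >= min_gap:
            boundaries.append(t)
    boundaries.append(len(norms))
    return boundaries


def mean_pool_segments(features, boundaries):
    """Mean-pool the frames inside each segment into one embedding per unit.
    The pooled vectors would then be discretized (e.g. with K-means) to form
    the token vocabulary for the language model.
    """
    return np.stack([features[s:e].mean(axis=0)
                     for s, e in zip(boundaries[:-1], boundaries[1:])])
```

For example, a norm curve shaped like two peaks separated by a valley yields one interior boundary at the valley, so an utterance of T frames is compressed to a handful of pooled unit embeddings instead of T frame-level tokens.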
Source: arXiv:2602.15537