arXiv submission date: 2026-04-30
📄 Abstract - Qualitative Evaluation of Language Model Rescoring in Automatic Speech Recognition

Evaluating automatic speech recognition (ASR) systems is a classical but difficult and still open problem, which often boils down to focusing only on the word error rate (WER). However, this metric suffers from many limitations and does not allow an in-depth analysis of automatic transcription errors. In this paper, we propose to study and understand the impact of rescoring using language models in ASR systems by means of several metrics often used in other natural language processing (NLP) tasks in addition to the WER. In particular, we introduce two measures related to morpho-syntactic and semantic aspects of transcribed words: 1) the POSER (Part-of-speech Error Rate), which should highlight the grammatical aspects, and 2) the EmbER (Embedding Error Rate), a measurement that modifies the WER by providing a weighting according to the semantic distance of the wrongly transcribed words. These metrics illustrate the linguistic contributions of the language models that are applied during a posterior rescoring step on transcription hypotheses.
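The EmbER idea described above can be sketched as a WER-style count in which substitution errors are down-weighted when the hypothesized word is semantically close to the reference word. The following is a minimal illustration of that idea, not the paper's exact formulation: the alignment format, the embedding table, and the `threshold` and `weight` parameters are all assumptions made for the example.

```python
# Hypothetical sketch of the EmbER concept: a WER-like error rate where
# substitutions between semantically close words incur a reduced penalty.
# Alignment, embeddings, threshold, and weight values are illustrative
# assumptions, not taken from the paper.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def ember(aligned_pairs, embeddings, threshold=0.4, weight=0.1):
    """aligned_pairs: list of (ref_word, hyp_word); None marks an
    insertion (ref is None) or a deletion (hyp is None)."""
    errors = 0.0
    n_ref = 0
    for ref, hyp in aligned_pairs:
        if ref is not None:
            n_ref += 1
        if ref == hyp:
            continue  # correct word, no error
        if ref is not None and hyp is not None \
                and ref in embeddings and hyp in embeddings:
            # Substitution: penalize less when the words are close
            # in embedding space (e.g. "cat" -> "kitten").
            sim = cosine(embeddings[ref], embeddings[hyp])
            errors += weight if sim >= threshold else 1.0
        else:
            errors += 1.0  # insertion, deletion, or out-of-vocabulary word
    return errors / n_ref if n_ref else 0.0
```

With toy 2-d embeddings, substituting "kitten" for "cat" costs only the reduced weight, whereas a deletion still costs a full error, so EmbER falls below the plain WER on the same alignment.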

Top-level tags: natural language processing, model evaluation, audio
Detailed tags: speech recognition, rescoring, language models, error analysis, semantic metrics

Qualitative Evaluation of Language Model Rescoring in Automatic Speech Recognition


1️⃣ One-sentence summary

This paper proposes two new evaluation metrics, the part-of-speech error rate (POSER) and the embedding error rate (EmbER), which analyze the actual contribution of language model rescoring in ASR post-processing from grammatical and semantic perspectives respectively, addressing the limitation of the traditional word error rate, which counts only substitution, deletion, and insertion errors while ignoring linguistic quality.

Source: arXiv 2604.27533