arXiv submission date: 2026-04-28
📄 Abstract - Bye Bye Perspective API: Lessons for Measurement Infrastructure in NLP, CSS and LLM Evaluation

The closure of Perspective API at the end of 2026 discards what has functioned as the de facto standard for automated toxicity measurement in NLP, CSS, and LLM evaluation research. We document the structural dependence that the communities built on this single proprietary tool and discuss how this dependence caused epistemic problems that have affected - and will likely continue to affect - collective research efforts. Perspective's model was periodically updated without versioning or disclosure, its annotation structure reflected a single corporate operationalisation of a contested concept, and its scores were used simultaneously as an evaluation target and an evaluation standard. Its closure leaves behind non-updatable benchmarks, irreproducible results, and ultimately a field at risk of perpetuating these issues by turning to closed-source LLMs. We use Perspective's announced termination as an opportunity to call for an independent, valid, adaptable, and reproducible toxicity and hate speech measurement infrastructure, with the technical and governance requirements outlined in this paper.

Top-level tags: natural language processing, llm, model evaluation
Detailed tags: toxicity measurement, perspective api, benchmark dependency, reproducibility, measurement infrastructure

Bye Bye Perspective API: Lessons for Measurement Infrastructure in NLP, CSS and LLM Evaluation


1️⃣ One-sentence summary

Taking the shutdown of Perspective API as its occasion, this paper critically analyses the irreproducibility, conceptual ambiguity, and flawed evaluation standards that arose from the NLP, CSS, and LLM evaluation fields' over-reliance on a single commercial toxicity measurement tool, and calls for building a new measurement infrastructure that is independent, valid, adaptable, and reproducible.

Source: arXiv 2604.25580