CIRCLE: A Framework for Evaluating AI from a Real-World Lens
1️⃣ One-Sentence Summary
This paper proposes a six-stage framework called CIRCLE that systematically evaluates AI's real-world effectiveness in actual deployment, rather than only its theoretical performance, by translating the concerns of real-world stakeholders into measurable metrics.
This paper proposes CIRCLE, a six-stage, lifecycle-based framework to bridge the reality gap between model-centric performance metrics and AI's materialized outcomes in deployment. While existing frameworks like MLOps focus on system stability and benchmarks measure abstract capabilities, decision-makers outside the AI stack lack systematic evidence about the behavior of AI technologies under real-world user variability and constraints. CIRCLE operationalizes the Validation phase of TEVV (Test, Evaluation, Verification, and Validation) by formalizing the translation of stakeholder concerns outside the stack into measurable signals. Unlike participatory design, which often remains localized, or algorithmic audits, which are often retrospective, CIRCLE provides a structured, prospective protocol for linking context-sensitive qualitative insights to scalable quantitative metrics. By integrating methods such as field testing, red teaming, and longitudinal studies into a coordinated pipeline, CIRCLE produces systematic knowledge: evidence that is comparable across sites yet sensitive to local context. This can enable governance based on materialized downstream effects rather than theoretical capabilities.
Source: arXiv: 2602.24055