
arXiv submission date: 2026-02-05
📄 Abstract - Tempora: Characterising the Time-Contingent Utility of Online Test-Time Adaptation

Test-time adaptation (TTA) offers a compelling remedy for machine learning (ML) models that degrade under domain shifts, improving generalisation on-the-fly with only unlabelled samples. This flexibility suits real deployments, yet conventional evaluations unrealistically assume unbounded processing time, overlooking the accuracy-latency trade-off. As ML increasingly underpins latency-sensitive and user-facing use-cases, temporal pressure constrains the viability of adaptable inference; predictions arriving too late to act on are futile. We introduce Tempora, a framework for evaluating TTA under this pressure. It consists of temporal scenarios that model deployment constraints, evaluation protocols that operationalise measurement, and time-contingent utility metrics that quantify the accuracy-latency trade-off. We instantiate the framework with three such metrics: (1) discrete utility for asynchronous streams with hard deadlines, (2) continuous utility for interactive settings where value decays with latency, and (3) amortised utility for budget-constrained deployments. Applying Tempora to seven TTA methods on ImageNet-C across 240 temporal evaluations reveals rank instability: conventional rankings do not predict rankings under temporal pressure; ETA, a state-of-the-art method in the conventional setting, falls short in 41.2% of evaluations. The highest-utility method varies with corruption type and temporal pressure, with no clear winner. By enabling systematic evaluation across diverse temporal constraints for the first time, Tempora reveals when and why rankings invert, offering practitioners a lens for method selection and researchers a target for deployable adaptation.
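The three metric families named in the abstract can be sketched as simple scoring functions. This is an illustrative interpretation only: the function names, the step-function deadline, the exponential decay form, and the fixed-budget accumulation below are all assumptions, not the paper's exact definitions.

```python
import math

def discrete_utility(correct: bool, latency: float, deadline: float) -> float:
    """Hard-deadline utility: a prediction earns credit only if it is
    correct AND arrives before the deadline (assumed step function)."""
    return float(correct) if latency <= deadline else 0.0

def continuous_utility(correct: bool, latency: float, half_life: float) -> float:
    """Latency-decayed utility: value of a correct prediction halves
    every `half_life` seconds (exponential decay is an assumption)."""
    return float(correct) * math.exp(-math.log(2) * latency / half_life)

def amortised_utility(corrects: list, latencies: list, budget: float) -> float:
    """Budget-constrained utility: accumulate correct predictions over
    the stream until the total processing-time budget is exhausted,
    then normalise by stream length."""
    spent, gained = 0.0, 0.0
    for correct, latency in zip(corrects, latencies):
        if spent + latency > budget:
            break  # budget exhausted; remaining samples score zero
        spent += latency
        gained += float(correct)
    return gained / max(len(corrects), 1)
```

Under this sketch, a slow-but-accurate TTA method can dominate on `continuous_utility` with a generous half-life yet lose on `discrete_utility` with a tight deadline, which is one way the rank inversions the paper reports could arise.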

Top-level tags: model evaluation, machine learning systems
Detailed tags: test-time adaptation, latency-accuracy trade-off, online adaptation, evaluation framework, temporal utility

Tempora: Characterising the Time-Contingent Utility of Online Test-Time Adaptation


1️⃣ One-sentence summary

This paper proposes a new framework, Tempora, for evaluating the adaptation ability of machine learning models under the time pressure of real-world deployment. It shows that conventional performance rankings shift once latency is taken into account, helping developers and researchers choose adaptation methods better suited to their actual application scenarios.

Source: arXiv: 2602.06136