Visualization of Machine Learning Models through Their Spatial and Temporal Listeners
1️⃣ One-Sentence Summary
This paper proposes a model-centric, two-stage visualization framework that uses abstract listeners to capture a model's spatial and temporal behaviors. A survey of a large body of related literature finds that current research focuses heavily on model results and rarely explores internal model mechanisms, even though the latter have higher impact.
Model visualization (ModelVis) has emerged as a major research direction, yet existing taxonomies are largely organized by data or tasks, making it difficult to treat models as first-class analysis objects. We present a model-centric, two-stage framework that employs abstract listeners to capture spatial and temporal model behaviors, and then connects the resulting model behavior data to the classical InfoVis pipeline. To apply the framework at scale, we build a retrieval-augmented human--large language model (LLM) extraction workflow and curate a corpus of 128 VIS/VAST ModelVis papers with 331 coded figures. Our analysis shows a dominant result-centric emphasis on visualizing model outcomes, quantitative/nominal data types, statistical charts, and performance evaluation. Citation-weighted trends further indicate that the less common model-mechanism-oriented studies have disproportionately high impact, yet have received less attention in recent work. Overall, the framework offers a general approach for comparing existing ModelVis systems and guiding future designs.
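The abstract stays high-level, but the listener idea can be made concrete. The sketch below is a minimal, hypothetical Python illustration (not the authors' implementation): a spatial listener snapshots a toy model's per-layer state, a temporal listener logs a metric over training steps, and the recorded data could then feed a standard InfoVis pipeline. All class, function, and variable names are assumptions for illustration only.

```python
import random
from dataclasses import dataclass, field

# Hypothetical sketch of the two listener roles described in the paper:
# a "spatial" listener records structural state (e.g., per-layer weights),
# a "temporal" listener records behavior over training steps (e.g., loss).

@dataclass
class SpatialListener:
    snapshots: list = field(default_factory=list)

    def observe(self, step, layers):
        # Store a copy of every layer's weights at this training step.
        self.snapshots.append(
            {"step": step, "layers": {name: list(w) for name, w in layers.items()}}
        )

@dataclass
class TemporalListener:
    series: list = field(default_factory=list)

    def observe(self, step, loss):
        # Store one scalar metric per step, forming a time series.
        self.series.append({"step": step, "loss": loss})

# Toy "model": two layers of weights, "trained" by shrinking them each step.
layers = {"fc1": [random.random() for _ in range(4)],
          "fc2": [random.random() for _ in range(2)]}
spatial, temporal = SpatialListener(), TemporalListener()

for step in range(5):
    loss = sum(sum(w) for w in layers.values())   # placeholder loss
    for w in layers.values():                      # placeholder update rule
        for i in range(len(w)):
            w[i] *= 0.9
    spatial.observe(step, layers)
    temporal.observe(step, loss)

# The collected records are plain data and could be handed to an InfoVis
# pipeline (data -> visual mapping -> view), e.g. a line chart of the loss
# series and a heatmap of the weight snapshots.
print(temporal.series[:2])
```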
Source: arXiv: 2603.27527