arXiv submission date: 2026-05-11
📄 Abstract - MulTaBench: Benchmarking Multimodal Tabular Learning with Text and Image

Tabular Foundation Models have recently established the state of the art in supervised tabular learning, by leveraging pretraining to learn generalizable representations of numerical and categorical structured data. However, they lack native support for unstructured modalities such as text and image, and rely on frozen, pretrained embeddings to process them. On established Multimodal Tabular Learning benchmarks, we show that tuning the embeddings to the task improves performance. Existing benchmarks, however, often focus on the mere co-occurrence of modalities; this leads to high variance across datasets and masks the benefits of task-specific tuning. To address this gap, we introduce MulTaBench, a benchmark of 40 datasets, split equally between image-tabular and text-tabular tasks. We focus on predictive tasks where the modalities provide complementary predictive signal, and where generic embeddings lose critical information, necessitating Target-Aware Representations that are aligned with the task. Our experimental results demonstrate that the gains from target-aware representation tuning generalize across both text and image modalities, several tabular learners, encoder scales, and embedding dimensions. MulTaBench constitutes the largest image-tabular benchmarking effort to date, spanning high-impact domains such as healthcare and e-commerce. It is designed to enable research on novel architectures that incorporate joint modeling and target-aware representations, paving the way for the development of novel Multimodal Tabular Foundation Models.
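The abstract contrasts frozen, generic embeddings with embeddings tuned to the prediction target ("Target-Aware Representations"). The sketch below is not code from the paper; it is a minimal, hypothetical PyTorch illustration of that distinction, assuming a simple pipeline that concatenates modality embeddings with tabular features, with small MLPs standing in for both the pretrained encoder and the tabular learner. All class names, dimensions, and data are made up.

```python
import copy

import torch
import torch.nn as nn


class MultimodalTabularModel(nn.Module):
    """Concatenate unstructured-modality embeddings with tabular features and
    feed the result to a small predictive head (a stand-in for a tabular learner)."""

    def __init__(self, encoder: nn.Module, emb_dim: int, num_tabular: int, freeze_encoder: bool):
        super().__init__()
        self.encoder = encoder
        if freeze_encoder:
            # Generic embeddings: the encoder never receives gradients from the task loss.
            for p in self.encoder.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(
            nn.Linear(emb_dim + num_tabular, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, modality_input: torch.Tensor, tabular_features: torch.Tensor) -> torch.Tensor:
        emb = self.encoder(modality_input)
        return self.head(torch.cat([emb, tabular_features], dim=-1))


# Hypothetical stand-in for a pretrained text/image encoder; in practice this role
# would be played by a real pretrained model.
base_encoder = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 16))

frozen_model = MultimodalTabularModel(copy.deepcopy(base_encoder), emb_dim=16, num_tabular=8, freeze_encoder=True)
tuned_model = MultimodalTabularModel(copy.deepcopy(base_encoder), emb_dim=16, num_tabular=8, freeze_encoder=False)

# Only the tuned variant backpropagates the task loss into the encoder, so its
# embeddings become target-aware rather than generic.
x_mod, x_tab, y = torch.randn(4, 128), torch.randn(4, 8), torch.randn(4, 1)
loss = nn.functional.mse_loss(tuned_model(x_mod, x_tab), y)
loss.backward()
```

In the benchmark's setting the predictive head would be a tabular foundation model rather than a toy MLP; the sketch only isolates where the task gradients flow in the frozen versus target-aware configurations.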

Top-level tags: machine learning, multi-modal, benchmark
Detailed tags: tabular learning, multimodal representation learning, target-aware, foundation models

MulTaBench: Benchmarking Multimodal Tabular Learning with Text and Image


1️⃣ One-Sentence Summary

This paper introduces MulTaBench, a multimodal tabular benchmark of 40 datasets that specifically targets predictive tasks in which text or images and structured tabular data contribute complementary signal. The authors find that tuning the embedding representations to the task significantly improves model performance, making MulTaBench an important evaluation tool for developing unified multimodal tabular foundation models.

Source: arXiv 2605.10616