AFMRL:电商中属性增强的细粒度多模态表示学习 / AFMRL: Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning in E-commerce
1️⃣ One-sentence summary
This paper proposes a method called AFMRL, which has a multimodal large language model automatically generate a product's key attributes (e.g., color, material) and uses those attributes to improve contrastive learning and model fine-tuning, substantially boosting accuracy in distinguishing highly similar products in E-commerce (e.g., different colors of the same phone model).
Multimodal representation is crucial for E-commerce tasks such as identical product retrieval. Large representation models (e.g., VLM2Vec) demonstrate strong multimodal understanding capabilities, yet they struggle with fine-grained semantic comprehension, which is essential for distinguishing highly similar items. To address this, we propose Attribute-Enhanced Fine-Grained Multi-Modal Representation Learning (AFMRL), which frames fine-grained product understanding as an attribute generation task. It leverages the generative power of Multimodal Large Language Models (MLLMs) to extract key attributes from product images and text, and enhances representation learning through a two-stage training framework: 1) Attribute-Guided Contrastive Learning (AGCL), where the key attributes generated by the MLLM are used during image-text contrastive learning to identify hard samples and filter out noisy false negatives. 2) Retrieval-aware Attribute Reinforcement (RAR), where the improvement in the representation model's retrieval performance after attribute integration serves as a reward signal to strengthen the MLLM's attribute generation during multimodal fine-tuning. Extensive experiments on large-scale E-commerce datasets demonstrate that our method achieves state-of-the-art performance on multiple downstream retrieval tasks, validating the effectiveness of harnessing generative models to advance fine-grained representation learning.
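The AGCL stage can be sketched as an InfoNCE-style image-text contrastive loss in which in-batch negatives whose MLLM-generated attribute sets match the anchor's are masked out as likely false negatives. The function below is a minimal illustrative sketch of that idea, not the paper's implementation; the names (`agcl_loss`, `attrs`) and the exact-match filtering rule are assumptions for illustration.

```python
import numpy as np

def agcl_loss(img_emb, txt_emb, attrs, temperature=0.07):
    """Illustrative sketch of Attribute-Guided Contrastive Learning (AGCL).

    img_emb, txt_emb: (N, D) paired image/text embeddings (row i is a pair).
    attrs: list of N attribute dicts generated by an MLLM, e.g.
           {"color": "red", "material": "leather"}.
    In-batch negatives with attributes identical to the anchor's are
    treated as false negatives and removed from the InfoNCE denominator.
    The exact-match rule here is a simplifying assumption.
    """
    # L2-normalize embeddings and build a scaled similarity matrix.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    sim = img @ txt.T / temperature  # (N, N) logits

    n = sim.shape[0]
    # keep[i, j] = True -> logit j stays in anchor i's denominator.
    keep = np.ones((n, n), dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and attrs[i] == attrs[j]:
                keep[i, j] = False  # filter out noisy false negative

    masked = np.where(keep, sim, -np.inf)  # exp(-inf) = 0 drops the term
    # Standard InfoNCE with the diagonal as the positive pair.
    log_prob = sim.diagonal() - np.log(np.exp(masked).sum(axis=1))
    return -log_prob.mean()
```

Because masking only removes non-negative terms from the denominator, filtering attribute-identical items can only lower (or preserve) the loss relative to treating them as ordinary negatives, which is exactly the false-negative correction AGCL targets.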
Source: arXiv: 2604.20135