arXiv submission date: 2026-02-11
📄 Abstract - Bi-Level Prompt Optimization for Multimodal LLM-as-a-Judge

Large language models (LLMs) have become widely adopted as automated judges for evaluating AI-generated content. Despite their success, aligning LLM-based evaluations with human judgments remains challenging. While supervised fine-tuning on human-labeled data can improve alignment, it is costly and inflexible, requiring new training for each task or dataset. Recent progress in auto prompt optimization (APO) offers a more efficient alternative by automatically improving the instructions that guide LLM judges. However, existing APO methods primarily target text-only evaluations and remain underexplored in multimodal settings. In this work, we study auto prompt optimization for multimodal LLM-as-a-judge, particularly for evaluating AI-generated images. We identify a key bottleneck: multimodal models can only process a limited number of visual examples due to context window constraints, which hinders effective trial-and-error prompt refinement. To overcome this, we propose BLPO, a bi-level prompt optimization framework that converts images into textual representations while preserving evaluation-relevant visual cues. Our bi-level optimization approach jointly refines the judge prompt and the image-to-text (I2T) prompt to maintain fidelity under limited context budgets. Experiments on four datasets and three LLM judges demonstrate the effectiveness of our method.
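
The abstract describes an alternating refinement of two prompts but gives no code. Below is a minimal, self-contained Python sketch of that bi-level loop under stated assumptions: all helpers (`call_llm`, `propose_revision`, `describe_image`, `judge_score`, `alignment`) are hypothetical placeholders for illustration, not the authors' implementation, and the nested optimization is simplified to plain alternation.

```python
# Conceptual sketch of the bi-level prompt optimization loop from the abstract.
# LLM calls are stubbed out; swap in real model calls to experiment.
from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    image: str          # stand-in for image data
    human_score: float  # human judgment used as the alignment target


def call_llm(prompt: str) -> str:
    """Placeholder for a (multimodal) LLM call."""
    return "0.5"


def propose_revision(prompt: str, feedback: str) -> str:
    """Placeholder: ask an optimizer LLM to rewrite a prompt given feedback."""
    return prompt + f"\n# revised using: {feedback}"


def describe_image(i2t_prompt: str, image: str) -> str:
    """Inner-level step: turn an image into a textual description that should
    keep the evaluation-relevant visual cues."""
    return call_llm(f"{i2t_prompt}\n[image: {image}]")


def judge_score(judge_prompt: str, description: str) -> float:
    """Outer-level step: score the generated image from its description."""
    return float(call_llm(f"{judge_prompt}\n{description}"))


def alignment(scores: List[float], data: List[Example]) -> float:
    """Toy alignment metric: negative mean absolute error vs. human scores."""
    return -sum(abs(s - x.human_score) for s, x in zip(scores, data)) / len(data)


def bilevel_optimize(data: List[Example], judge_prompt: str, i2t_prompt: str,
                     rounds: int = 5):
    """Alternately refine the I2T prompt (so descriptions preserve the cues the
    judge needs) and the judge prompt (so scores track human labels)."""
    for _ in range(rounds):
        descriptions = [describe_image(i2t_prompt, x.image) for x in data]
        scores = [judge_score(judge_prompt, d) for d in descriptions]
        feedback = f"current alignment = {alignment(scores, data):.3f}"
        # Inner level: revise the I2T prompt under the current judge prompt.
        i2t_prompt = propose_revision(i2t_prompt, feedback)
        # Outer level: revise the judge prompt given the text-only descriptions,
        # which fit many more examples into the context window than raw images.
        judge_prompt = propose_revision(judge_prompt, feedback)
    return judge_prompt, i2t_prompt


if __name__ == "__main__":
    toy = [Example(image="img_0", human_score=0.7)]
    print(bilevel_optimize(toy, "Rate the image from 0 to 1.", "Describe the image."))
```

The key design point the sketch is meant to surface is that the judge only ever sees textual descriptions, so the quality of the I2T prompt directly bounds what the judge prompt can achieve.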

Top-level tags: llm, model evaluation, multi-modal
Detailed tags: prompt optimization, multimodal evaluation, ai-generated images, automated judging, bi-level optimization

Bi-Level Prompt Optimization for Multimodal LLM-as-a-Judge


1️⃣ One-Sentence Summary

This paper proposes BLPO, a bi-level prompt optimization framework that converts images into textual representations preserving key visual information. It addresses the difficulty of refining prompts for multimodal LLM judges of AI-generated images under context-window limits, and thereby substantially improves the agreement between automated evaluations and human judgments.

Source: arXiv:2602.11340