Rethinking Patient Education as Multi-turn Multi-modal Interaction
1️⃣ One-sentence summary
This paper introduces a new benchmark, MedImageEdu, for evaluating whether AI systems can act like a physician: combining radiology images with textual explanation across multi-turn dialogue to deliver personalized, safe, and understandable education to patients of varied backgrounds, rather than merely answering questions.
Most medical multimodal benchmarks focus on static tasks such as image question answering, report generation, and plain-language rewriting. Patient education is more demanding: systems must identify relevant evidence across images, show patients where to look, explain findings in accessible language, and handle confusion or distress. Yet most patient education work remains text-only, even though combined image-and-text explanations may better support understanding. We introduce MedImageEdu, a benchmark for multi-turn, evidence-grounded radiology patient education. Each case provides a radiology report with report text and case images. A DoctorAgent interacts with a PatientAgent conditioned on a hidden profile that captures factors such as education level, health literacy, and personality. When a patient question would benefit from visual support, the DoctorAgent can issue drawing instructions, grounded in the report, case images, and the current question, to a benchmark-provided drawing tool. The tool returns image(s), after which the DoctorAgent produces a final multimodal response consisting of the image(s) and a grounded plain-language explanation. MedImageEdu contains 150 cases from three sources and evaluates both the consultation process and the final multimodal response along five dimensions: Consultation, Safety and Scope, Language Quality, Drawing Quality, and Image-Text Response Quality. Across representative open- and closed-source vision-language model agents, we find three consistent gaps: fluent language often outpaces faithful visual grounding, safety is the weakest dimension across disease categories, and emotionally tense interactions are harder than those with low-education or low-health-literacy patients. MedImageEdu provides a controlled testbed for assessing whether multimodal agents can teach from evidence rather than merely answer from text.
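The abstract's DoctorAgent–PatientAgent interaction can be read as a simple control loop: the patient asks, the doctor optionally calls the drawing tool, and the turn ends with an image-plus-explanation response. Below is a minimal sketch of that loop; all class and method names (`PatientProfile`, `ask`, `needs_visual`, `draw_instruction`, `explain`) are illustrative assumptions, not the benchmark's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class PatientProfile:
    # Hidden profile conditioning the PatientAgent (field names are illustrative).
    education_level: str
    health_literacy: str
    personality: str

@dataclass
class Turn:
    question: str
    images: list = field(default_factory=list)
    explanation: str = ""

def run_consultation(report_text, case_images, profile,
                     doctor, patient, draw_tool, max_turns=5):
    """Sketch of a MedImageEdu-style multi-turn loop (interfaces assumed)."""
    transcript = []
    for _ in range(max_turns):
        question = patient.ask(profile, transcript)
        if question is None:  # patient has no further questions
            break
        images = []
        if doctor.needs_visual(question, report_text):
            # Drawing instructions are grounded in the report,
            # the case images, and the current question.
            instruction = doctor.draw_instruction(report_text, case_images, question)
            images = draw_tool(instruction)
        # Final multimodal response: returned image(s) + plain-language explanation.
        explanation = doctor.explain(question, report_text, images, transcript)
        transcript.append(Turn(question, images, explanation))
    return transcript
```

The key design point the benchmark stresses is that the tool call is optional and question-driven: visual support is issued only when it would help, and the explanation must stay grounded in the returned evidence rather than free text alone.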
Source: arXiv: 2604.14656