Brain3D: EEG-to-3D Decoding of Visual Representations via Multimodal Reasoning
1️⃣ One-sentence summary
This paper proposes a new method called Brain3D, which first converts EEG signals into images, then uses a large language model to extract 3D-aware descriptions, and finally generates 3D models — achieving, for the first time, the decoding and reconstruction of 3D visual content from human brain activity.
Decoding visual information from electroencephalography (EEG) has recently achieved promising results, primarily focused on reconstructing two-dimensional (2D) images from brain activity. However, the reconstruction of three-dimensional (3D) representations remains largely unexplored, limiting geometric understanding and reducing the applicability of neural decoding across contexts. To address this gap, we propose Brain3D, a multimodal architecture for EEG-to-3D reconstruction built on EEG-to-image decoding. It progressively transforms neural representations into the 3D domain using geometry-aware generative reasoning. Our pipeline first produces visually grounded images from EEG signals, then employs a multimodal large language model to extract structured, 3D-aware descriptions, which guide a diffusion-based generation stage whose outputs are finally converted into coherent 3D meshes via a single-image-to-3D model. By decomposing the problem into structured stages, the proposed approach avoids learning a direct EEG-to-3D mapping and enables scalable brain-driven 3D generation. We conduct a comprehensive evaluation comparing the reconstructed 3D outputs against the original visual stimuli, assessing both semantic alignment and geometric fidelity. Experimental results demonstrate strong performance of the proposed architecture, achieving up to 85.4% 10-way Top-1 EEG decoding accuracy and a 0.648 CLIPScore, supporting the feasibility of multimodal EEG-driven 3D reconstruction.
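The four-stage pipeline described above can be sketched as a simple composition of functions. This is a minimal illustrative sketch only — all function names and data structures below are assumptions for exposition, not the authors' actual implementation or any real library API:

```python
# Hypothetical sketch of the Brain3D staged pipeline: EEG -> image ->
# 3D-aware description -> guided image -> mesh. All names are illustrative.

def decode_eeg_to_image(eeg_signal):
    # Stage 1: an EEG-to-image decoder produces a visually grounded image.
    return {"kind": "image", "source": eeg_signal}

def extract_3d_description(image):
    # Stage 2: a multimodal LLM extracts a structured, 3D-aware description.
    return {"kind": "3d_description", "from": image["kind"]}

def generate_guided_image(description):
    # Stage 3: a diffusion model generates an image guided by the description.
    return {"kind": "guided_image", "from": description["kind"]}

def lift_image_to_mesh(image):
    # Stage 4: a single-image-to-3D model converts the image into a 3D mesh.
    return {"kind": "mesh", "from": image["kind"]}

def brain3d_pipeline(eeg_signal):
    """Compose the four stages; no direct EEG-to-3D mapping is learned."""
    image = decode_eeg_to_image(eeg_signal)
    description = extract_3d_description(image)
    guided = generate_guided_image(description)
    return lift_image_to_mesh(guided)

mesh = brain3d_pipeline("raw_eeg_epoch")
print(mesh["kind"])  # mesh
```

The key design choice the abstract highlights is that each stage operates in a well-studied modality (image, text, image, mesh), so no model ever has to map raw EEG to 3D geometry directly.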
Source: arXiv: 2604.08068