A Large Language Model for Disaster Structural Reconnaissance Summarization
1️⃣ One-Sentence Summary
This study proposes a new framework that integrates a large language model to automatically analyze image and text data from disaster sites and generate summary reports of structural damage, helping engineers rapidly assess disaster conditions and improve the resilience of the built environment.
Artificial Intelligence (AI)-aided, vision-based Structural Health Monitoring (SHM) has emerged as an effective approach for monitoring and assessing structural condition by analyzing image and video data. By integrating Computer Vision (CV) and Deep Learning (DL), vision-based SHM can automatically identify and localize visual patterns associated with structural damage. However, prior work typically produces only discrete outputs, such as damage class labels and damage region coordinates, requiring engineers to further reorganize and analyze these results before evaluation and decision-making. In late 2022, Large Language Models (LLMs) gained popularity across many fields, offering new possibilities for AI-aided vision-based SHM. In this study, a novel LLM-based Disaster Reconnaissance Summarization (LLM-DRS) framework is proposed. It introduces a standard reconnaissance plan in which vision data and the corresponding metadata are collected through a well-designed on-site investigation process. Text-based metadata and image-based vision data are then processed and integrated into a unified format, from which well-trained Deep Convolutional Neural Networks extract key attributes, including damage state, material type, and damage level. Finally, all data are fed into an LLM with carefully designed prompts, enabling LLM-DRS to generate summary reports for individual structures or affected regions based on the aggregated attributes and metadata. Results show that integrating LLMs into vision-based SHM, particularly for rapid post-disaster reconnaissance, has promising potential to improve the resilience of the built environment through effective reconnaissance.
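The abstract describes a three-stage pipeline: collect vision data and metadata under a standard reconnaissance plan, extract per-structure attributes (damage state, material type, damage level) with trained CNNs, and feed the aggregated results to an LLM with carefully designed prompts. The Python sketch below illustrates only the final aggregation-and-prompting step; every name in it (`StructureRecord`, `build_prompt`, the example attribute values) is a hypothetical stand-in, since the paper's actual data schema and prompt design are not given here.

```python
# A minimal, hypothetical sketch of the final stage of the LLM-DRS pipeline:
# aggregating CNN-extracted attributes with on-site metadata and turning them
# into an LLM prompt. All names and attribute values are illustrative
# assumptions, not the authors' released code or exact prompts.
from dataclasses import dataclass


@dataclass
class StructureRecord:
    """One inspected structure: reconnaissance metadata plus attributes
    extracted by trained Deep Convolutional Neural Networks."""
    structure_id: str
    location: str       # metadata recorded during the on-site investigation
    material_type: str  # e.g. "reinforced concrete" (CNN output, assumed)
    damage_state: str   # e.g. "damaged" / "undamaged" (CNN output, assumed)
    damage_level: str   # e.g. "minor" / "moderate" / "severe" (assumed)


def build_prompt(records: list[StructureRecord], region: str) -> str:
    """Aggregate per-structure attributes and metadata into a single
    summarization prompt for the LLM."""
    lines = [
        f"You are a structural engineer. Write a concise post-disaster "
        f"reconnaissance summary for the region '{region}' based on the "
        f"structure-level findings below.",
        "",
    ]
    for r in records:
        lines.append(
            f"- Structure {r.structure_id} at {r.location}: "
            f"material={r.material_type}, state={r.damage_state}, "
            f"damage level={r.damage_level}"
        )
    return "\n".join(lines)


if __name__ == "__main__":
    records = [
        StructureRecord("B-01", "Block 3, Main St", "reinforced concrete",
                        "damaged", "severe"),
        StructureRecord("B-02", "Block 3, Main St", "masonry",
                        "undamaged", "none"),
    ]
    # The resulting prompt would then be sent to an LLM API for the report.
    print(build_prompt(records, region="Block 3"))
```

Collecting the CNN outputs into a structured record like this keeps the prompt deterministic and auditable, which is presumably why the framework aggregates attributes and metadata into a unified format before invoking the LLM.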
Source: arXiv: 2602.11588