Fire Heat Release Rate Recognition Based on Multi-scale Dilation Attention Mechanism and SAM Large Model

Abstract: Heat release rate (HRR) is one of the most important parameters in fire dynamics, directly reflecting fire intensity and the rate of energy release during combustion. Traditional HRR recognition methods have fixed receptive fields, struggle to adapt to multi-scale flame variations, and lack a mechanism for focusing on critical regions. To address these problems, this study proposes a fire HRR recognition method based on a multi-scale atrous convolution attention fusion module (MSACAF) and the Segment Anything Model (SAM), aiming to improve the accuracy of HRR estimation. The algorithm builds on a ResNet-18 backbone and introduces multi-scale atrous convolutions to adapt to flames of different sizes and shapes and to extract richer features. Channel and spatial attention mechanisms are combined to allocate weights effectively, allowing the model to focus on key flame regions. A total of 123 combustion videos were selected from the NIST Fire Calorimetry Database (FCD); flame images from the video frames were fed into SAM for segmentation, producing a dataset of 48 841 segmented flame images and reducing the influence of background and non-fire features on prediction accuracy. Experimental results show that the proposed model outperforms other deep neural network models; ablation experiments validate the effectiveness of MSACAF, with HRR prediction accuracy improved by 4.4%. The results demonstrate that the proposed method achieves more accurate HRR recognition and offers a new approach to fire risk assessment.
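The multi-scale atrous convolutions described above adapt to flames of different sizes because each dilation rate enlarges the receptive field without adding parameters. A minimal sketch of this effect, assuming illustrative branch settings (the 3×3 kernel and dilation rates 1, 2, 4 are assumptions for demonstration, not values stated in the abstract):

```python
# Effective receptive field of a dilated (atrous) convolution branch.
# A k x k kernel with dilation rate d spans k + (k - 1) * (d - 1) pixels
# along each axis, so parallel branches with different rates observe
# flame regions at different scales with the same parameter count.

def effective_kernel(kernel_size: int, dilation: int) -> int:
    """Pixel span covered by one dilated convolution along one axis."""
    return kernel_size + (kernel_size - 1) * (dilation - 1)

# Hypothetical parallel branches: a 3x3 kernel at dilation rates 1, 2, 4.
for d in (1, 2, 4):
    print(f"dilation {d}: effective kernel {effective_kernel(3, d)}")
    # -> 3, 5, and 9 pixels, respectively
```

A fusion module of this kind would concatenate or weight the branch outputs (here is where the channel and spatial attention would assign importance to each scale), but those details are not specified in the abstract.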

       
