Abstract:
Heat release rate (HRR) is one of the most important parameters in fire dynamics, as it directly reflects fire intensity and the rate of energy release during combustion. Traditional HRR recognition methods rely on fixed receptive fields, struggle with multi-scale flame variations, and often overlook critical regions. This study proposes a fire HRR recognition method based on a multi-scale atrous convolution attention fusion module (MSACAF) and the Segment Anything Model (SAM), aiming to improve the accuracy of HRR estimation. The algorithm builds on a ResNet-18 backbone and introduces multi-scale atrous convolutions to adapt to flames of different sizes and shapes and to extract richer features. By combining channel and spatial attention mechanisms, it allocates weights effectively, allowing the model to focus on key flame regions. In this study, 123 combustion videos were selected from the NIST Fire Calorimetry Database (FCD). Flame images extracted from the videos were segmented by the SAM large model, yielding a dataset of 48,841 segmented flame images and significantly reducing the computational load. Experimental results show that the proposed model outperforms other deep neural network models. Ablation experiments validate the effectiveness of MSACAF, improving HRR prediction accuracy by 4.4%. The results demonstrate that the proposed method achieves higher accuracy in HRR recognition, offering new insights for fire risk assessment.
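To make the core idea behind the atrous convolutions concrete: a dilated kernel samples the input at spaced intervals, so the receptive field grows with the dilation rate while the parameter count stays fixed. The following is a minimal illustrative sketch in one dimension, not the paper's implementation; the function name and kernel are hypothetical.

```python
# Illustrative sketch (not the paper's MSACAF implementation): a 1-D
# dilated ("atrous") convolution showing how the dilation rate enlarges
# the receptive field without adding kernel parameters.

def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution with a dilation (atrous) rate."""
    # Effective receptive field spanned by the dilated kernel.
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for start in range(len(signal) - span + 1):
        # Sample the input at strides of `dilation` under the kernel taps.
        acc = sum(kernel[k] * signal[start + k * dilation]
                  for k in range(len(kernel)))
        out.append(acc)
    return out

x = [1, 2, 3, 4, 5, 6, 7, 8]
k = [1, 0, -1]  # simple difference kernel (hypothetical)

print(dilated_conv1d(x, k, dilation=1))  # receptive field of 3 samples
print(dilated_conv1d(x, k, dilation=2))  # same 3 taps, receptive field of 5
```

A multi-scale module in this spirit runs several such branches with different dilation rates in parallel and fuses their outputs, which is how flames of very different sizes can be captured without deepening the network.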