基于级联稀疏查询机制的轻量化火灾检测算法

张小雪,王雨,吴思远,等. 基于级联稀疏查询机制的轻量化火灾检测算法[J]. 光电工程,2023,50(10): 230216. doi: 10.12086/oee.2023.230216
Zhang X X, Wang Y, Wu S Y, et al. An improved lightweight fire detection algorithm based on cascade sparse query[J]. Opto-Electron Eng, 2023, 50(10): 230216. doi: 10.12086/oee.2023.230216


  • 基金项目:
    国家自然科学基金资助项目(62076199);陕西省重点研发计划项目(2022ZDLGY01-03);瞬态光学与光子技术国家重点实验室开放基金项目(SKLST202214);陕西省教育厅重点科学研究计划项目(23JY063);西安市科技计划项目(22GXFW0088).
    *通讯作者: 孙帮勇,sunbangyong@xaut.edu.cn
  • 中图分类号: TN911.73

An improved lightweight fire detection algorithm based on cascade sparse query

  • Fund Project: Project supported by National Natural Science Foundation of China (62076199), Key Research and Development Program of Shaanxi (2022ZDLGY01-03), Open Fund of the State Key Laboratory of Transient Optics and Photonic Technology (SKLST202214), Key Scientific Research Program of Shaanxi Provincial Department of Education (23JY063), and Scientific Research Program of Xi’an (22GXFW0088)
  • To address the problems that remain in existing fire detection algorithms, namely complex models, slow detection speed, and high false detection rates, a lightweight fire detection network based on a cascade sparse query mechanism, LFNet, is proposed. First, a lightweight image feature extraction module, ECDNet, is built by embedding the lightweight attention module ECA (efficient channel attention) into the YOLOv5s backbone, which tackles the multi-scale difficulty of flame and smoke in fire detection. Second, the deep feature extraction module FPN+PAN is used for deep processing and multi-scale fusion of feature maps at different levels. Finally, a lightweight cascade sparse query (CSQ) module is embedded to improve the detection accuracy for small flames and thin smoke in early fires. Experiments show that the proposed method achieves the best overall performance on objective metrics such as mAP and Precision, while requiring relatively few parameters at a high detection accuracy, and can meet the fire detection requirements of real-world scenarios.
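
The abstract describes building ECDNet by embedding ECA blocks into the YOLOv5s backbone. For reference, below is a minimal PyTorch sketch of an ECA block following the public ECA-Net formulation [14]; the class name and the way it would be attached to backbone stages are illustrative assumptions, not the authors' ECDNet code.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention (ECA-Net [14]): global average pooling
    followed by a 1D convolution across the channel axis; the kernel size k
    is chosen adaptively from the channel count."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1                      # force an odd kernel size
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                              # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                         # squeeze to (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)       # 1D conv over channels
        return x * torch.sigmoid(y)[:, :, None, None]  # channel-wise reweighting
```

Because the block only adds a single small 1D convolution, inserting it after selected backbone stages reweights channel responses at almost no cost in parameters, which is consistent with the lightweight goal stated above.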

  • Overview: Visual fire detection aims to detect flames and smoke in images and videos with vision algorithms so that fire alarms can be raised. In recent years, fire detection algorithms based on convolutional neural networks have greatly improved the detection accuracy of flames and smoke. However, the following problems remain in current methods: 1) the generalization ability of the models still needs to be improved; 2) detection accuracy for small objects is low; 3) a good trade-off between detection accuracy and speed has not been achieved. To overcome these problems, a lightweight fire detection algorithm based on a cascade sparse query mechanism, called LFNet, is proposed. Firstly, a lightweight feature extraction module, ECDNet, is built by embedding the lightweight attention module ECA (efficient channel attention) into the YOLOv5s backbone network; it extracts finer-grained features at different feature levels and addresses the multi-scale problem of flame and smoke in fire detection. Secondly, the deep feature extraction module FPN+PAN is adopted to improve the multi-scale fusion of feature maps at different levels. Finally, a lightweight cascade sparse query (CSQ) module is embedded to enhance the detection accuracy for small flames and thin smoke in early fires. Furthermore, to further reduce the parameters and computation of the model, the Slimming pruning algorithm is applied to compress the model. Experimental results on the three fire datasets SF-dataset, D-fire, and FIRESENSE show that the proposed method achieves the best overall performance on objective indicators such as mAP and Recall. On the SF-dataset, LFNet achieves the best mAP and Recall of 71.76% and 52.98%, respectively. On the D-fire dataset, the mAP of our method reaches 71.15%, higher than that of the other fire detection methods compared. On the FIRESENSE dataset, our method achieves 70.61% mAP. Our method effectively alleviates the main problems of current fire detection algorithms, namely low detection accuracy, a high missed-detection rate for small objects, and the difficulty of balancing speed and accuracy. The network is trained and evaluated on a self-built dataset and other public fire datasets. The experimental results show that, with a suitable model size and relatively fast speed, our method achieves the best detection performance on both the self-built fire dataset and the public fire datasets, which will potentially promote the application of deep-learning-based fire detection methods in industry.
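
The cascade sparse query mechanism follows the QueryDet idea [17]: a cheap query head on a coarse pyramid level flags cells that are likely to contain small objects, and the detection head on the finer level is evaluated only at the corresponding positions. The snippet below is a simplified dense-gather sketch of that idea; names such as SimpleCSQ and threshold are hypothetical, and the actual implementation would use sparse convolution rather than an explicit gather.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCSQ(nn.Module):
    """Illustrative cascade sparse query: a query head on the coarse FPN level
    flags likely small-object cells, and the detection head is then applied
    only to the matching 2x2 blocks of the finer level."""
    def __init__(self, channels=256, num_outputs=7, threshold=0.15):
        super().__init__()
        self.query_head = nn.Conv2d(channels, 1, 1)       # small-object score map
        self.det_head = nn.Linear(channels, num_outputs)  # e.g. 4 box + obj + 2 classes
        self.threshold = threshold

    def forward(self, fine, coarse):
        # fine: (B, C, 2H, 2W), coarse: (B, C, H, W)
        score = torch.sigmoid(self.query_head(coarse))    # (B, 1, H, W)
        keep = (score > self.threshold).float()           # queried coarse cells
        keep_fine = F.interpolate(keep, scale_factor=2,   # map each coarse cell
                                  mode="nearest")         # to its 2x2 fine block
        b, y, x = keep_fine.squeeze(1).bool().nonzero(as_tuple=True)
        feats = fine[b, :, y, x]                          # gather sparse fine features
        preds = self.det_head(feats)                      # predict only where queried
        return preds, (b, y, x), score
```

Only the flagged cells of the high-resolution level are processed, so the cost of the small-object branch scales with the number of queried positions rather than with the full feature map, which is what keeps the module lightweight.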

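The Overview also mentions compressing the model with the Slimming pruning algorithm [18], which trains the network with an L1 penalty on BatchNorm scale factors and then removes channels whose scale factors end up near zero. Below is a minimal sketch of those two steps; the function names and the 50% prune ratio are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model, weight=1e-4):
    """L1 sparsity penalty on BatchNorm scale factors (gamma); added to the
    training loss so that unimportant channels are pushed toward zero."""
    return weight * sum(m.weight.abs().sum()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d))

def select_channels(model, prune_ratio=0.5):
    """Pick a global gamma threshold and, per BN layer, a mask of channels to keep."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules() if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    masks = {name: m.weight.detach().abs() > threshold
             for name, m in model.named_modules() if isinstance(m, nn.BatchNorm2d)}
    return threshold, masks
```

During training the penalty is simply added to the detection loss; after training, the masks are used to build a narrower network that keeps only the selected channels and is then fine-tuned, which matches the parameter and speed changes reported in the ablation table (Table 4).
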
  • 图 1  LFNet网络架构图

    Figure 1.  Network architecture of LFNet

    图 2  ECA注意力模块原理示意图

    Figure 2.  Efficient channel attention module

    图 3  (a)原图;(b)注意力热力图

    Figure 3.  (a) Original; (b) Attention mechanism heat map

    图 4  级联查询机制模块原理示意图

    Figure 4.  Cascade sparse query module

    图 5  头部预测原理示意图

    Figure 5.  Cascade sparse query head module

    图 6  SF-dataset数据集上聚类实验结果

    Figure 6.  Clustering experiment results on SF-dataset

    图 7  聚类结果。(a) SF-dataset; (b) D-fire;(c) FIRESENSE

    Figure 7.  Clustering results. (a) SF-dataset; (b) D-fire; (c) FIRESENSE

    图 8  SF-dataset数据集比较实验检测结果。(a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO;(f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    Figure 8.  Comparison experiment detection results for the SF-dataset. (a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO;(f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    图 9  D-fire数据集比较实验检测结果。(a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO;(f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    Figure 9.  Comparison experiment detection results for the D-fire dataset. (a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO;(f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    图 10  FIRESENSE数据集比较实验检测结果。(a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO; (f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    Figure 10.  Comparison experiment detection results for the FIRESENSE dataset. (a) Images;(b) Ours;(c) EFDNet;(d) Y-Edge;(e) M-YOLO;(f) Fire-YOLO;(g) YOLOX-Tiny;(h) PicoDet;(i) PP-YOLOE;(j) YOLOv7

    图 11  在SF-dataset数据集上训练数据集样本比例、Batch size、模型输入尺寸以及迭代次数epoch的实验结果。(a)训练数据集样本比例;(b) Batch size;(c)模型输入尺寸;(d)迭代次数epoch

    Figure 11.  Experimental results on the SF-dataset for the percentage of training samples, batch size, model input size, and number of epochs. (a) Percentage of training dataset samples; (b) Batch size; (c) Model input size; (d) Epoch

    表 1  三个数据集的训练集,验证集,测试集的详细统计数据

    Table 1.  Numbers of training set, validation set and testing set for the three datasets

    Dataset             Training set              Validation set         Test set               Total
                        Fire    Smoke   None      Fire   Smoke   None    Fire   Smoke   None
    SF-dataset          4859    4859    4859      607    607     607     607    607     607    18219
    D-Fire              4658    4693    7870      582    587     984     582    587     984    21527
    FIRESENSE (video)   9       11      21        1      1       2       1      1       2      49

    表 2  不同数据集上不同方法的精度对比实验结果

    Table 2.  Experimental precision results of different comparative methods on different datasets

    Dataset      Method               Recall/%   Precision/%   Accuracy/%   mAP/%
    SF-dataset   Celik et al.[19]     38.45      69.54         72.12        41.58
                 Demirel et al.[2]    42.28      73.65         78.54        47.15
                 Zhang et al.[20]     40.12      72.77         73.15        45.4
                 Fire-YOLO[21]        49.93      87.05         90.09        69.38
                 EFDNet[22]           44.27      80.45         82.28        59.89
                 Pruned+KD[23]        47.25      83.16         85.64        63.1
                 YOLOX-Tiny[24]       49.95      86.89         89.24        69.08
                 PicoDet[25]          50.1       87.1          90.13        69.42
                 YOLOv7[26]           54.12      88.64         94.88        71.69
                 LFNet                54.98      89.12         98.5         71.76
    D-fire       Celik et al.[19]     35.90      65.78         68.42        39.65
                 Demirel et al.[2]    40.68      73.12         78.27        45.85
                 Zhang et al.[20]     38.67      70.45         70.86        43.94
                 Fire-YOLO[21]        51.18      84.12         88.21        68.88
                 EFDNet[22]           43.68      76.57         77.94        58.77
                 Pruned+KD[23]        46.02      79.85         82.4         62.71
                 YOLOX-Tiny[24]       51.06      83.94         86.14        68.14
                 PicoDet[25]          51.27      84.32         88.26        68.95
                 YOLOv7[26]           53.12      86.44         93.85        70.65
                 LFNet                53.35      87.68         97.92        71.15
    FIRESENSE    Celik et al.[19]     33.7       56.07         60.21        36.48
                 Demirel et al.[2]    38.55      64.28         69.44        42.66
                 Zhang et al.[20]     36.96      61.58         62.38        40.36
                 Fire-YOLO[21]        52.47      79.88         85.19        68.12
                 EFDNet[22]           42.74      68.25         70.22        56.97
                 Pruned+KD[23]        46.80      72.94         75.1         61.35
                 YOLOX-Tiny[24]       53.94      80.44         84.12        68.02
                 PicoDet[25]          52.96      80.92         85.91        68.89
                 YOLOv7[26]           54.53      89.76         96.28        70.15
                 LFNet                53.19      92.44         98.42        70.61
    Note: Bold indicates the best results.

    表 3  SF-dataset上不同方法的速度对比实验结果

    Table 3.  Experimental speed results of different comparative methods on SF-dataset

    Method            Flops/G   Parameter/M   Speed/(f/s)   Infer time/ms   mAP/%
    M-YOLO[27]        7.54      23.8          18            50.6            66.6
    Fire-YOLO[21]     45.12     62            28            32.17           69.38
    EFDNet[22]        1.99      3.6           63            7.21            59.89
    Y-Edge[25]        30.47     53.8          36            27.97           65.95
    Pruned+KD[23]     16.8      26.3          45            17.25           63.1
    PPYOLO-Tiny[28]   4.96      17.84         42            23.8            68.36
    YOLOX-Tiny[24]    5.42      19.19         40            25.21           69.08
    PicoDet[25]       0.73      4             105           6.65            69.42
    PPYOLOE[29]       29.42     52.2          98            25.64           70.85
    YOLOv7[26]        18.42     36.9          88            19.85           71.69
    LFNet             3.85      12.6          98            10.24           71.76

    表 4  消融实验的实验结果

    Table 4.  Experimental results of the ablation experiments

    Model     ECDNet   FPN+PAN   CSQ   Focal-CIOU   Slimming   mAP/%   Recall/%   Parameter/M   Speed/(f/s)
    YOLOv5s   –        –         –     –            –          67.89   49.38      27.6          84
              √        –         –     –            –          68.89   50.96      27.6          84
              √        √         –     –            –          70.15   51.53      27.7          80
              √        √         √     –            –          70.61   52.12      27.7          91
              √        √         √     √            –          71.15   53.35      27.7          91
              √        √         √     √            √          71.76   52.98      12.6          98
    Note: Bold indicates the best results.
  • [1] 肖潇, 孔凡芝, 刘金华. 基于动静态特征的监控视频火灾检测算法[J]. 计算机科学, 2019, 46(S1): 284−286,299.
        Xiao X, Kong F Z, Liu J H. Monitoring video fire detection algorithm based on dynamic characteristics and static characteristics[J]. Comput Sci, 2019, 46(S1): 284−286,299.
    [2] Çelik T, Demirel H. Fire detection in video sequences using a generic color model[J]. Fire Saf J, 2009, 44(2): 147−158. doi: 10.1016/j.firesaf.2008.05.005
    [3] Qiu T, Yan Y, Lu G. An autoadaptive edge-detection algorithm for flame and fire image processing[J]. IEEE Trans Instrum Meas, 2012, 61(5): 1486−1493. doi: 10.1109/TIM.2011.2175833
    [4] 邵婧, 王冠香, 郭蔚. 基于视频动态纹理的火灾检测[J]. 中国图象图形学报, 2013, 18(6): 647−653. doi: 10.11834/jig.20130605
        Shao J, Wang G X, Guo W. Fire detection based on video dynamic texture[J]. J Image Graph, 2013, 18(6): 647−653. doi: 10.11834/jig.20130605
    [5] Surit S, Chatwiriya W. Forest fire smoke detection in video based on digital image processing approach with static and dynamic characteristic analysis[C]//Proceedings of the 2011 First ACIS/JNU International Conference on Computers, Networks, Systems and Industrial Engineering, 2011: 35–39. https://doi.org/10.1109/CNSI.2011.47.
    [6] Zhang Q J, Xu J L, Xu L, et al. Deep convolutional neural networks for forest fire detection[C]//Proceedings of 2016 International Forum on Management, Education and Information Technology Application, 2016: 568–575. https://doi.org/10.2991/ifmeita-16.2016.105.
    [7] Dunnings A J, Breckon T P. Experimentally defined convolutional neural network architecture variants for non-temporal real-time fire detection[C]//Proceedings of 2018 25th IEEE International Conference on Image Processing, 2018: 1558–1562. https://doi.org/10.1109/ICIP.2018.8451657.
    [8] Sharma J, Granmo O C, Goodwin M, et al. Deep convolutional neural networks for fire detection in images[C]//Proceedings of the 18th International Conference on Engineering Applications of Neural Networks, 2017: 183–193. https://doi.org/10.1007/978-3-319-65172-9_16.
    [9] Akhloufi M A, Tokime R B, Elassady H. Wildland fires detection and segmentation using deep learning[J]. Proc SPIE, 2018, 10649: 106490B. doi: 10.1117/12.2304936
    [10] 任嘉锋, 熊卫华, 吴之昊, 等. 基于改进YOLOv3的火灾检测与识别[J]. 计算机系统应用, 2019, 28(12): 171−176. doi: 10.15888/j.cnki.csa.007184
        Ren J F, Xiong W H, Wu Z H, et al. Fire detection and identification based on improved YOLOv3[J]. Comput Syst Appl, 2019, 28(12): 171−176. doi: 10.15888/j.cnki.csa.007184
    [11] 缪伟志, 陆兆纳, 王俊龙, 等. 基于视觉的火灾检测研究[J]. 森林工程, 2022, 38(1): 86−92,100. doi: 10.16270/j.cnki.slgc.2022.01.007
        Miao W Z, Lu Z N, Wang J L, et al. Fire detection research based on vision[J]. For Eng, 2022, 38(1): 86−92,100. doi: 10.16270/j.cnki.slgc.2022.01.007
    [12] 刘凯, 魏艳秀, 许京港, 等. 基于计算机视觉的森林火灾识别算法设计[J]. 森林工程, 2018, 34(4): 89−95. doi: 10.3969/j.issn.1006-8023.2018.04.015
        Liu K, Wei Y X, Xu J G, et al. Design of forest fire identification algorithm based on computer vision[J]. For Eng, 2018, 34(4): 89−95. doi: 10.3969/j.issn.1006-8023.2018.04.015
    [13] 皮骏, 刘宇恒, 李久昊. 基于YOLOv5s的轻量化森林火灾检测算法研究[J]. 图学学报, 2023, 44(1): 26−32. doi: 10.11996/JG.j.2095-302X.2023010026
        Pi J, Liu Y H, Li J H. Research on lightweight forest fire detection algorithm based on YOLOv5s[J]. J Graph, 2023, 44(1): 26−32. doi: 10.11996/JG.j.2095-302X.2023010026
    [14] Wang Q L, Wu B G, Zhu P F, et al. ECA-Net: efficient channel attention for deep convolutional neural networks[C]//Proceedings of 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020: 11534–11542. https://doi.org/10.1109/CVPR42600.2020.01155.
    [15] Lin T Y, Dollár P, Girshick R, et al. Feature pyramid networks for object detection[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017: 2117–2125. https://doi.org/10.1109/CVPR.2017.106.
    [16] Wang W H, Xie E Z, Song X G, et al. Efficient and accurate arbitrary-shaped text detection with pixel aggregation network[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision, 2019: 8440–8449. https://doi.org/10.1109/ICCV.2019.00853.
    [17] Yang C H Y, Huang Z H, Wang N Y. QueryDet: cascaded sparse query for accelerating high-resolution small object detection[C]//Proceedings of 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022: 13668–13677. https://doi.org/10.1109/CVPR52688.2022.01330.
    [18] Liu Z, Li J G, Shen Z Q, et al. Learning efficient convolutional networks through network slimming[C]//Proceedings of 2017 IEEE International Conference on Computer Vision, 2017: 2736–2744. https://doi.org/10.1109/ICCV.2017.298.
    [19] Celik T, Ozkaramanli H, Demirel H. Fire pixel classification using fuzzy logic and statistical color model[C]//Proceedings of 2007 IEEE International Conference on Acoustics, Speech and Signal Processing, 2007: I-1205–I-1208. https://doi.org/10.1109/ICASSP.2007.366130.
    [20] Zhang D Y, Han S Z, Zhao J H, et al. Image based forest fire detection using dynamic characteristics with artificial neural networks[C]//Proceedings of 2009 International Joint Conference on Artificial Intelligence, 2009: 290–293. https://doi.org/10.1109/JCAI.2009.79.
    [21] Zhao L, Zhi L Q, Zhao C, et al. Fire-YOLO: a small target object detection method for fire inspection[J]. Sustainability, 2022, 14(9): 4930. doi: 10.3390/su14094930
    [22] Li S B, Yang Q D, Liu P. An efficient fire detection method based on multiscale feature extraction, implicit deep supervision and channel attention mechanism[J]. IEEE Trans Image Proc, 2020, 29: 8467−8475. doi: 10.1109/TIP.2020.3016431
    [23] Wang S Y, Zhao J, Ta N, et al. A real-time deep learning forest fire monitoring algorithm based on an improved pruned + KD model[J]. J Real-Time Image Proc, 2021, 18(6): 2319−2329. doi: 10.1007/s11554-021-01124-9
    [24] Ge Z, Liu S T, Wang F, et al. YOLOX: exceeding YOLO series in 2021[Z]. arXiv: 2107.08430, 2021. https://arxiv.org/abs/2107.08430.
    [25] Yu G H, Chang Q Y, Lv W Y, et al. PP-PicoDet: a better real-time object detector on mobile devices[Z]. arXiv: 2111.00902, 2021. https://arxiv.org/abs/2111.00902.
    [26] Wang C Y, Bochkovskiy A, Liao H Y M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors[C]//Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023: 7464–7475. https://doi.org/10.1109/CVPR52729.2023.00721.
    [27] Wang S Y, Chen T, Lv X Y, et al. Forest fire detection based on lightweight Yolo[C]//Proceedings of 2021 33rd Chinese Control and Decision Conference, 2021: 1560–1565. https://doi.org/10.1109/CCDC52312.2021.9601362.
    [28] Long X, Deng K P, Wang G Z, et al. PP-YOLO: an effective and efficient implementation of object detector[Z]. arXiv: 2007.12099, 2020. https://arxiv.org/abs/2007.12099.
    [29] Xu S L, Wang X X, Lv W Y, et al. PP-YOLOE: an evolved version of YOLO[Z]. arXiv: 2203.16250, 2022. https://arxiv.org/abs/2203.16250.


Publication history
  • Received: 2023-09-01
  • Revised: 2023-11-15
  • Accepted: 2023-11-15
  • Published: 2023-10-25
