Electronics, Vol. 12, Pages 2492: Fully Synthetic Videos and the Random-Background-Pasting Method for Flame Segmentation
Electronics doi: 10.3390/electronics12112492
Authors: Yang Jia, Zixu Mao, Xinmeng Zhang, Yaxi Kuang, Yanping Chen, Qixing Zhang
Video-based flame detection (VFD) aims to recognize fire events from image features. Flame segmentation is an essential task in VFD, providing suspected regions for feature analysis and object recognition. However, the scarcity of positive flame samples makes it difficult to train deep-learning-based VFD models effectively. In this paper, we propose the hypothesis that a segmentation model can be trained entirely on virtual flame images and design experiments to test it. We collected a large set of virtual flame videos to extend existing flame datasets, providing adequate flame samples for deep-learning-based VFD methods. We also apply a random-background-pasting method to distribute the flame images across different scenarios. The proposed method trains a flame segmentation model with zero real flame images. Moreover, we perform segmentation testing on real flame images, none of which were seen during training, to verify whether a model trained on 'fake' images can segment real objects. We trained four segmentation models based on FCN, U-Net, DeepLabv3, and Mask R-CNN using synthetic flame video frames and obtained a highest mPA of 0.783 and mIoU of 0.515. Experimental results on the FIRE-SMOKE-DATASET and the Fire-Detection-Image-Dataset demonstrate that the 'fake' flame samples generated by the proposed random-background-pasting method substantially improve the performance of existing state-of-the-art flame segmentation methods under cross-dataset evaluation settings.
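The random-background-pasting idea described above can be sketched as a simple masked composite: flame pixels from a synthetic frame are copied onto a randomly chosen background at a random position, and the same mask serves as the segmentation label. The following is a minimal illustrative sketch, not the authors' implementation; the function name, arguments, and array shapes are assumptions.

```python
import numpy as np

def paste_flame_on_background(flame_rgb, flame_mask, background_rgb, top_left):
    """Composite flame pixels onto a background image.

    flame_rgb:      (h, w, 3) uint8 synthetic flame crop (hypothetical input)
    flame_mask:     (h, w) binary mask of flame pixels
    background_rgb: (H, W, 3) uint8 background image
    top_left:       (row, col) paste position, chosen at random in practice
    """
    out = background_rgb.copy()
    h, w = flame_mask.shape
    y, x = top_left
    sel = flame_mask.astype(bool)
    # Copy only the masked flame pixels; background shows through elsewhere.
    out[y:y + h, x:x + w][sel] = flame_rgb[sel]
    return out

# Tiny synthetic example: a 2x2 white "flame" pasted onto a 4x4 black background.
bg = np.zeros((4, 4, 3), dtype=np.uint8)
flame = np.full((2, 2, 3), 255, dtype=np.uint8)
mask = np.array([[1, 0], [1, 1]], dtype=np.uint8)
result = paste_flame_on_background(flame, mask, bg, (1, 1))
```

In a full pipeline, the paste position and the background image would be sampled per training example, and the shifted `flame_mask` would be emitted as the ground-truth segmentation label for the composite.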