1 School of Electronics and Information, Northwestern Polytechnical University, Xi'an 710072, China;
2 School of Information Technology, Northwest University, Xi'an 710072, China
In traditional multi-scale transform methods, the high-frequency components are approximately sparse and can therefore represent the different kinds of detail information. In the low-frequency component, however, few coefficients lie near zero, so the low-frequency image information cannot be represented sparsely. Because the low-frequency component carries the main energy of the image and depicts its overall profile, fusing it directly is not conducive to obtaining a highly accurate fusion result. This paper therefore presents an infrared and visible image fusion method that combines the multi-scale and top-hat transforms. On one hand, the new top-hat transform effectively extracts the salient features of the low-frequency component; on the other hand, the multi-scale transform extracts high-frequency detail at multiple scales and in diverse directions. Combining the two is conducive to capturing more image characteristics and producing more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract the low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detail information. Experimental results show that the proposed algorithm obtains more detailed information and clearer infrared targets than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
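The pipeline outlined in the abstract (multi-scale decomposition; top-hat-based fusion of the low-frequency band; feature-driven fusion of the high-frequency bands) can be illustrated in code. Below is a minimal sketch in Python using PyWavelets and OpenCV, assuming registered grayscale source images of equal size. The file names, function names, and structuring-element size are hypothetical; OpenCV's standard white top-hat stands in for the paper's new top-hat transform, and an absolute-max rule stands in for the product-of-characteristics rule, neither of which is detailed in the abstract.

```python
import cv2
import numpy as np
import pywt

def fuse_lowpass_tophat(low_ir, low_vis, se_size=15):
    # Split each low-frequency band into salient features and background
    # with a white top-hat, then fuse the two parts under different rules.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (se_size, se_size))
    feat_ir = cv2.morphologyEx(low_ir, cv2.MORPH_TOPHAT, se)   # bright structures
    feat_vis = cv2.morphologyEx(low_vis, cv2.MORPH_TOPHAT, se)
    back_ir, back_vis = low_ir - feat_ir, low_vis - feat_vis   # smooth residue
    fused_feat = np.maximum(feat_ir, feat_vis)   # keep the stronger feature
    fused_back = 0.5 * (back_ir + back_vis)      # average the backgrounds
    return fused_feat + fused_back

def fuse_images(ir, vis, wavelet="db2", level=2):
    # Multi-scale (wavelet) decomposition of both registered sources.
    c_ir = pywt.wavedec2(ir, wavelet, level=level)
    c_vis = pywt.wavedec2(vis, wavelet, level=level)
    fused = [fuse_lowpass_tophat(c_ir[0], c_vis[0])]
    # High-frequency sub-bands: absolute-max selection (placeholder for the
    # paper's product-of-characteristics rule).
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)

ir = cv2.imread("ir.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
vis = cv2.imread("vis.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
out = np.clip(fuse_images(ir, vis), 0, 255).astype(np.uint8)
cv2.imwrite("fused.png", out)
```

In this sketch, taking the maximum over the extracted features favors bright infrared targets, while averaging the backgrounds preserves the overall image energy, matching the motivation given above for treating the low-frequency features and background under separate rules.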
(Ultraviolet, visible, and infrared radiation effects (including laser radiation))
Fund:
Project supported by the National Natural Science Foundation of China (Grant No. 61402368), Aerospace Support Fund, China (Grant No. 2017-HT-XGD), and Aerospace Science and Technology Innovation Foundation, China (Grant No. 2017ZD53047).
Gui-Qing He(何贵青), Qi-Qi Zhang(张琪琦), Jia-Qi Ji(纪佳琪), Dan-Dan Dong(董丹丹), Hai-Xi Zhang(张海曦) and Jun Wang(王珺) An infrared and visible image fusion method based upon multi-scale and top-hat transforms 2018 Chin. Phys. B 27 118706