Infrared and Visible Image Fusion (IVF) Using Latent Low-Rank Representation and Deep Feature Extraction Network
Authors: Teku Sandhya Kumari, Gundala Sujatha, Boddeda Sravya, Hari Jyothula
Journal: International Journal of Image, Graphics and Signal Processing @ijigsp
Article in issue: no. 3, vol. 16, 2024.
Free access
The combination of visible and infrared images from different sensors can provide a more detailed and informative image. Visible images capture environmental details and texture, while infrared sensors detect thermal radiation and produce high-contrast grayscale images. Infrared images are useful for distinguishing targets from the background in challenging conditions, such as at night or in inclement weather. Fusing the two modalities therefore yields high-contrast images with rich texture and target details. In this paper, an effective image fusion technique is developed that uses the Latent Low-Rank Representation (LatLRR) method to decompose the source images into latent low-rank and salient parts, capturing their common and unique information respectively. The proposed network incorporates a dense network and the VGG-19 architecture for deep feature extraction from the latent low-rank and salient parts, which minimizes distortion while preserving crucial texture and detail in the output. Weighted-average fusion strategies combine these low-rank and salient features, and the fused features are then reconstructed into a fused low-rank part and a fused salient part. These parts are integrated to yield the final fused image. The proposed approach outperforms existing state-of-the-art methods in both visual quality and objective evaluation metrics.
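To make the fusion step concrete, below is a minimal Python/PyTorch sketch of weighted-average fusion of salient parts guided by VGG-19 features, assuming the LatLRR decomposition (X = XZ + LX + E, giving a low-rank part XZ and a salient part LX) has already been computed for each source image. The function names, the VGG-19 feature depth, and the equal weighting of the low-rank parts are illustrative assumptions, not the authors' released code; in particular, the paper's dense-network feature extraction for the low-rank part is simplified here to a plain average.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

# First eight layers of ImageNet-pretrained VGG-19 as a fixed feature extractor
# (the depth is an illustrative choice, not taken from the paper).
_VGG = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:8].eval()

def vgg_features(part: torch.Tensor) -> torch.Tensor:
    """Extract deep features from a single-channel salient part [B, 1, H, W]."""
    with torch.no_grad():
        return _VGG(part.repeat(1, 3, 1, 1))  # VGG-19 expects 3 input channels

def fuse_salient(s_ir: torch.Tensor, s_vis: torch.Tensor) -> torch.Tensor:
    """Weighted-average fusion of salient parts driven by VGG-19 activity maps."""
    a_ir = vgg_features(s_ir).abs().sum(dim=1, keepdim=True)   # l1-norm over channels
    a_vis = vgg_features(s_vis).abs().sum(dim=1, keepdim=True)
    # Upsample the activity maps back to the image resolution.
    a_ir = F.interpolate(a_ir, size=s_ir.shape[-2:], mode='bilinear')
    a_vis = F.interpolate(a_vis, size=s_vis.shape[-2:], mode='bilinear')
    w_ir = a_ir / (a_ir + a_vis + 1e-8)       # soft weights, w_ir + w_vis = 1
    return w_ir * s_ir + (1.0 - w_ir) * s_vis

def fuse_images(lr_ir, lr_vis, s_ir, s_vis) -> torch.Tensor:
    """Recombine fused low-rank and salient parts into the final image."""
    lr_fused = 0.5 * (lr_ir + lr_vis)         # equal-weight average of low-rank parts
    return lr_fused + fuse_salient(s_ir, s_vis)
```

The low-rank parts carry the background shared by both sensors, so a simple average is a reasonable placeholder there; the salient parts carry modality-specific detail, which is why the sketch spends its weighting effort on them.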
Keywords: Infrared images, Visible images, Image Fusion, Latent Low Rank Representation, VGG-19 network, Dense network
Short address: https://sciup.org/15019451
IDR: 15019451 | DOI: 10.5815/ijigsp.2024.03.03