Defocus spread effect elimination method in multiple multi-focus image fusion for microscopic images
Abstract: In microscopic imaging, an all-in-focus image of the observed object is often required. Because the camera's depth of field is limited and the surface of the observed object is typically uneven, an all-in-focus image is difficult to capture in a single shot. A common alternative is to fuse several images focused at different depths using multi-focus image fusion technology. Multi-focus image fusion is an important branch of computer vision: it uses image processing techniques to combine the in-focus regions of multiple images of the same scene, each focused on a different object, into a single all-in-focus result. With the breakthroughs in machine learning represented by deep learning, convolutional neural networks have been widely adopted for multi-focus image fusion. However, most methods focus only on improving the network structure and rely on simple one-by-one serial fusion, which reduces the efficiency of fusing multiple images. In addition, the defocus spread effect, which causes blurred artifacts in areas near focus-map boundaries, can severely degrade the quality of fusion results. For the application of microscopic imaging analysis, we propose a maximum spatial frequency in the feature map (MSFIFM) fusion strategy. By adding a post-processing module to a convolutional neural network based on unsupervised learning, the redundant feature extraction of one-by-one serial fusion is avoided; experiments demonstrate that this strategy significantly improves the efficiency of multi-focus fusion of multiple images. We also present a correction strategy that effectively alleviates the effect of defocus spread on the fusion result while preserving fusion efficiency.
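The abstract names the MSFIFM strategy but not its mechanics. The following is a minimal NumPy sketch of the core idea as described above: compute a local spatial frequency map for each source, take the per-pixel argmax across all N sources in a single pass, and assemble the fused result from that decision map. Everything here is illustrative, not the paper's implementation: the paper computes spatial frequency on feature maps from an unsupervised CNN encoder, while this sketch substitutes image luminance as a stand-in feature map; the names msfifm_fuse and local_spatial_frequency and the window size win=9 are our own.

import numpy as np
from scipy.ndimage import uniform_filter

def local_spatial_frequency(fm, win=9):
    """Local spatial frequency SF = sqrt(RF^2 + CF^2), where RF/CF are the
    local RMS of horizontal/vertical first differences over a win x win window."""
    fm = fm.astype(np.float64)
    dh = np.zeros_like(fm)
    dh[:, 1:] = fm[:, 1:] - fm[:, :-1]          # horizontal (row-direction) differences
    dv = np.zeros_like(fm)
    dv[1:, :] = fm[1:, :] - fm[:-1, :]          # vertical (column-direction) differences
    rf2 = uniform_filter(dh ** 2, size=win)     # local mean of squared differences
    cf2 = uniform_filter(dv ** 2, size=win)
    return np.sqrt(rf2 + cf2)

def msfifm_fuse(images, feature_maps=None, win=9):
    """Single-pass fusion of N registered sources: each output pixel is copied
    from the source whose feature-map spatial frequency is maximal there."""
    if feature_maps is None:
        # Stand-in only: the paper uses feature maps from an unsupervised CNN
        # encoder; here we substitute the luminance of each image.
        feature_maps = [im if im.ndim == 2 else im.mean(axis=2) for im in images]
    sf = np.stack([local_spatial_frequency(f, win) for f in feature_maps])  # (N, H, W)
    decision = sf.argmax(axis=0)                # per-pixel index of the sharpest source
    stack = np.stack(images)                    # (N, H, W) or (N, H, W, C)
    idx = decision[None, ..., None] if stack.ndim == 4 else decision[None, ...]
    fused = np.take_along_axis(stack, idx, axis=0)[0]
    return fused, decision

# Hypothetical usage with pre-registered source images:
# imgs = [cv2.imread(p) for p in paths]        # N images focused at different depths
# fused, decision_map = msfifm_fuse(imgs)

Because every source contributes to one shared decision map, adding an image costs one extra feature pass and one extra argmax candidate, rather than a full extra fusion round.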
Figure 1. Flow chart of multiple multi-focus image fusion in a microscopic imaging scene (red arrows indicate the defocus spread effect; the yellow dashed box in the fusion result is an enlarged local region, provided for easier viewing)
Table 1. Average time comparison between the MSFIFM strategy and the one-by-one serial fusion strategy
Image size    Average time of MSFIFM strategy/s    Average time of one-by-one fusion strategy/s    Execution efficiency increase/%
900×600       0.1397                               0.2645                                          47.18
600×400       0.0732                               0.1351                                          45.83
300×200       0.0265                               0.0391                                          32.08
Table 2. Average fusion time comparison among CNN Fuse, MS-Lap, and our method
Image name    Average time of MSFIFM + rectification strategy/s    Average time of CNN Fuse/s    Average time of MS-Lap/s
Chip1         3.9248                                               336.3321                      96.2325
Chip2         0.4126                                               72.4707                       1.7137
Chip3         1.5518                                               347.4140                      95.9874
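The gain in Table 1 comes from the fusion schedule rather than the network: one-by-one serial fusion re-encodes the intermediate result at every step (roughly 2(N - 1) encoder passes for N images), whereas the MSFIFM strategy encodes each source exactly once (N passes) and selects in a single post-processing step. The toy benchmark below contrasts the two schedules; encode, fuse_pair, and the max-selection rule are placeholders of our own, not the paper's network or fusion rule.

import time
from functools import reduce
import numpy as np

def encode(im):
    """Placeholder for one CNN feature-extraction pass (hypothetical fixed cost)."""
    time.sleep(0.01)                           # simulate encoder latency
    return im

def fuse_pair(a, b):
    """Pairwise serial fusion: every call pays two encoder passes."""
    return np.maximum(encode(a), encode(b))    # placeholder selection rule

def fuse_once(images):
    """MSFIFM-style schedule: each source is encoded exactly once."""
    return np.maximum.reduce([encode(im) for im in images])  # placeholder rule

imgs = [np.random.rand(64, 64) for _ in range(8)]
t0 = time.perf_counter(); reduce(fuse_pair, imgs); t1 = time.perf_counter()
fuse_once(imgs);                                   t2 = time.perf_counter()
print(f"one-by-one: {t1 - t0:.2f} s, one-pass: {t2 - t1:.2f} s")
# With N = 8 and 10 ms per pass: about 14 encoder passes versus 8.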