  • Indexed in Ei Compendex (EI)
  • Chinese Core Journal
  • Source journal for Chinese Science and Technology Paper Statistics
  • Source journal of the Chinese Science Citation Database (CSCD)


Defocus spread effect elimination method in multiple multi-focus image fusion for microscopic images

YIN Xiang, MA Bo-yuan, BAN Xiao-juan, HUANG Hai-you, WANG Yu, LI Song-yan

Citation: YIN Xiang, MA Bo-yuan, BAN Xiao-juan, HUANG Hai-you, WANG Yu, LI Song-yan. Defocus spread effect elimination method in multiple multi-focus image fusion for microscopic images[J]. Chinese Journal of Engineering, 2021, 43(9): 1174-1181. doi: 10.13374/j.issn2095-9389.2021.01.12.002


doi: 10.13374/j.issn2095-9389.2021.01.12.002
Fund projects: Hainan Provincial Finance Science and Technology Program (ZDYF2019009); National Natural Science Foundation of China (6210020684, 61873299); Fundamental Research Funds for the Central Universities (00007467); Foshan Science and Technology Innovation Special Fund (BK21BF002, BK19AE034, BK20AF001)
Details
    Corresponding author:

    E-mail:hejohejo@126.com

  • CLC number: TP391


More Information
  • Abstract: Multi-focus image fusion is an important branch of computer vision. It aims to use image processing techniques to fuse the respective in-focus regions of multiple images of the same scene, each focused on a different target, into a single all-in-focus image. With the breakthroughs in machine learning theory represented by deep learning, convolutional neural networks have been widely applied to multi-focus image fusion. However, most methods focus only on improving the network structure and rely on a simple one-by-one serial fusion scheme, which lowers the efficiency of fusing multiple images; in addition, the defocus spread effect that arises during fusion severely degrades the quality of the fusion results. To address these problems, for the application scenario of microscopic imaging analysis, this paper proposes a maximum spatial frequency in feature map (MSFIFM) fusion strategy. By adding a post-processing module to a convolutional neural network based on unsupervised learning, the strategy avoids the redundant feature extraction of one-by-one serial fusion. Experiments show that this strategy significantly improves the efficiency of multi-focus fusion of multiple images. Furthermore, a rectification strategy is proposed that effectively alleviates the influence of the defocus spread effect on the fused image quality while maintaining fusion efficiency.
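    As a concrete illustration of the fusion rule sketched in the abstract, the snippet below performs a single-pass, maximum-spatial-frequency selection over N registered source images instead of N−1 pairwise fusions. It is a minimal NumPy sketch, not the paper's implementation: the paper measures spatial frequency on CNN feature maps from an unsupervised encoder, whereas here it is computed directly on the grayscale sources, and all function names are illustrative.

```python
# Minimal sketch of a max-spatial-frequency fusion rule across N images.
# Assumption: spatial frequency is computed on the grayscale sources here,
# not on CNN feature maps as in the paper's MSFIFM strategy.
import numpy as np

def spatial_frequency(img: np.ndarray, ksize: int = 7) -> np.ndarray:
    """Local spatial frequency sqrt(RF^2 + CF^2) over a ksize x ksize window."""
    rf = np.zeros_like(img, dtype=np.float64)
    cf = np.zeros_like(img, dtype=np.float64)
    rf[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2   # row-direction differences
    cf[1:, :] = (img[1:, :] - img[:-1, :]) ** 2   # column-direction differences
    pad = ksize // 2
    acc = np.pad(rf + cf, pad, mode="reflect")    # box-filter the squared diffs
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(ksize):
        for dx in range(ksize):
            out += acc[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return np.sqrt(out / (ksize * ksize))

def fuse_max_sf(images: list) -> np.ndarray:
    """Fuse N registered grayscale images in one pass by picking, per pixel,
    the source with the highest local spatial frequency (a focus proxy)."""
    stack = np.stack(images).astype(np.float64)                 # (N, H, W)
    sf = np.stack([spatial_frequency(im) for im in stack])      # (N, H, W)
    decision = np.argmax(sf, axis=0)                            # (H, W) index map
    return np.take_along_axis(stack, decision[None], axis=0)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fused = fuse_max_sf([rng.random((64, 64)), rng.random((64, 64))])
    print(fused.shape)  # (64, 64)
```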

     

  • Figure 1.  Flow chart of multiple multi-focus image fusion in a microscopic imaging scene (red arrows indicate the defocus spread effect; the yellow dotted boxes in the fusion results are enlarged local regions for easier viewing)

    Figure 2.  Network structure and execution flow of the proposed method: (a) network structure; (b) comparison of two multi-image fusion strategies (left: one-by-one serial fusion strategy; right: MSFIFM strategy)

    Figure 3.  Flow chart of the rectification strategy for the defocus spread effect in the microscopic imaging scene

    Figure 4.  Comparison of fusion results for chip1, chip2, and chip3 under different fusion algorithms

    Table 1.  Average time comparison between the MSFIFM and one-by-one serial fusion strategies

    Image size    MSFIFM strategy average time/s    One-by-one fusion average time/s    Execution efficiency increase/%
    900×600       0.1397                             0.2645                              47.18
    600×400       0.0732                             0.1351                              45.83
    300×200       0.0265                             0.0391                              32.08
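    The last column is consistent with computing the relative time saving as (t_one-by-one − t_MSFIFM) / t_one-by-one × 100; a quick check under that assumption (the exact formula is not stated in this excerpt):

```python
# Re-deriving the "execution efficiency increase" column from the two timing
# columns, assuming increase = (t_serial - t_msfifm) / t_serial * 100.
rows = [("900×600", 0.1397, 0.2645), ("600×400", 0.0732, 0.1351), ("300×200", 0.0265, 0.0391)]
for size, t_msfifm, t_serial in rows:
    print(size, round(100 * (t_serial - t_msfifm) / t_serial, 2))
# Prints 47.18, 45.82, 32.23 -- close to the reported 47.18, 45.83, 32.08;
# the small gaps presumably come from rounding of the underlying timings.
```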

    Table 2.  Average fusion time comparison among CNN Fuse, MS-Lap, and our method (time in seconds)

    Image name    MSFIFM + rectification average time/s    CNN Fuse average time/s    MS-Lap average time/s
    Chip1         3.9248                                     336.3321                   96.2325
    Chip2         0.4126                                     72.4707                    1.7137
    Chip3         1.5518                                     347.4140                   95.9874
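    Read as speedups (simple ratios of the reported averages), Table 2 implies the combined MSFIFM + rectification pipeline runs roughly 86x to 224x faster than CNN Fuse and roughly 4x to 62x faster than MS-Lap:

```python
# Speedup of MSFIFM + rectification over CNN Fuse and MS-Lap, from Table 2.
rows = [("Chip1", 3.9248, 336.3321, 96.2325),
        ("Chip2", 0.4126, 72.4707, 1.7137),
        ("Chip3", 1.5518, 347.4140, 95.9874)]
for name, ours, cnn_fuse, ms_lap in rows:
    print(f"{name}: {cnn_fuse / ours:.1f}x vs CNN Fuse, {ms_lap / ours:.1f}x vs MS-Lap")
# Chip1: 85.7x vs CNN Fuse, 24.5x vs MS-Lap
```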
  • [1] Liu Y, Wang L, Cheng J, et al. Multi-focus image fusion: A survey of the state of the art. Inf Fusion, 2020, 64: 71 doi: 10.1016/j.inffus.2020.06.013
    [2] Szeliski R. Computer Vision: Algorithms and Applications. London: Springer, 2011
    [3] Zhang Y J. Image Engineering. 4th ed. Beijing: Tsinghua University Press, 2018
    [4] Burt P, Adelson E. The Laplacian pyramid as a compact image code. IEEE Trans Commun, 1983, 31(4): 532 doi: 10.1109/TCOM.1983.1095851
    [5] Toet A. Image fusion by a ratio of low-pass pyramid. Pattern Recognit Lett, 1989, 9(4): 245 doi: 10.1016/0167-8655(89)90003-2
    [6] Li H, Manjunath B S, Mitra S K. Multisensor image fusion using the wavelet transform. Graph Models Image Process, 1995, 57(3): 235 doi: 10.1006/gmip.1995.1022
    [7] Li S T, Kwok J T, Wang Y N. Combination of images with diverse focuses using the spatial frequency. Inf Fusion, 2001, 2(3): 169 doi: 10.1016/S1566-2535(01)00038-0
    [8] Li S T, Kang X D, Hu J W. Image fusion with guided filtering. IEEE Trans Image Process, 2013, 22(7): 2864 doi: 10.1109/TIP.2013.2244222
    [9] Zhou Z Q, Li S, Wang B. Multi-scale weighted gradient-based fusion for multi-focus images. Inf Fusion, 2014, 20: 60 doi: 10.1016/j.inffus.2013.11.005
    [10] Liu Y, Liu S P, Wang Z F. Multi-focus image fusion with dense SIFT. Inf Fusion, 2015, 23: 139 doi: 10.1016/j.inffus.2014.05.004
    [11] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521(7553): 436 doi: 10.1038/nature14539
    [12] Liu Y, Chen X, Peng H, et al. Multi-focus image fusion with a deep convolutional neural network. Inf Fusion, 2017, 36: 191 doi: 10.1016/j.inffus.2016.12.001
    [13] Ma B Y, Zhu Y, Yin X, et al. SESF-Fuse: An unsupervised deep model for multi-focus image fusion. Neural Comput Appl, 2021, 33: 5793 doi: 10.1007/s00521-020-05358-9
    [14] Xu H, Ma J Y, Jiang J J, et al. U2Fusion: A unified unsupervised image fusion network. IEEE Trans Pattern Anal Mach Intell, doi: 10.1109/TPAMI.2020.3012548
    [15] Prabhakar K R, Srikar V S, Babu R V. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs// IEEE International Conference on Computer Vision. Venice, 2017: 4724
    [16] Ma B Y, Yin X, Wu D, et al. Gradient Aware Cascade Network for Multi-Focus Image Fusion[J/OL]. ArXiv Preprint (2020-10-01) [2021-01-12]. https://arxiv.org/abs/2010.08751
    [17] Xu H, Fan F, Zhang H, et al. A deep model for multi-focus image fusion based on gradients and connected regions. IEEE Access, 2020, 8: 26316 doi: 10.1109/ACCESS.2020.2971137
    [18] Huang J, Le Z L, Ma Y, et al. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput Appl, 2020, 32(18): 15119 doi: 10.1007/s00521-020-04863-1
    [19] Wang B B. Research on Multi-Focus Image Fusion Algorithm Based on Deep Learning [Dissertation]. Kunming: Yunnan University, 2018
    [20] Ma H, Liao Q, Zhang J, et al. An α-Matte Boundary Defocus Model Based Cascaded Network for Multi-focus Image Fusion[J/OL]. ArXiv Preprint (2019-10-30) [2021-01-12]. https://arxiv.org/abs/1910.13136
    [21] He K, Wei Y, Wang Y, et al. An improved non-rigid image registration approach. Chin J Eng, 2019, 41(7): 955
    [22] Chen S W, Zhang S X, Yang X G, et al. Registration of visual-infrared images based on ellipse symmetrical orientation moment. Chin J Eng, 2017, 39(7): 1107
    [23] Hu J, Shen L, Sun G. Squeeze-and-excitation networks//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, 2018: 7132
    [24] Lin T Y, Maire M, Belongie S, et al. Microsoft COCO: Common Objects in Context. Computer Vision – ECCV 2014. Cham: Springer International Publishing, 2014
    [25] Ma B Y, Yin X. The code of SESF-Fuse for multi-focus image fusion [J/OL]. GitHub (2019-08-21) [2021-01-12]. https://github.com/Keep-Passion/SESF-Fuse
    [26] Kingma D, Ba J. Adam: A method for stochastic optimization[J/OL]. ArXiv Preprint (2017-01-30) [2021-01-12]. https://arxiv.org/abs/1412.6980
    [27] Paszke A, Gross S, Massa F, et al. PyTorch: An imperative style, high-performance deep learning library[J/OL]. ArXiv Preprint (2019-12-03) [2021-01-12]. https://arxiv.org/abs/1912.01703
    [28] Mao X Y. Introduction to OpenCV3 Programming. Beijing: Publishing House of Electronics Industry, 2015
    [29] Xu S, Ji L Z, Wang Z, et al. Towards reducing severe defocus spread effects for multi-focus image fusion via an optimization based strategy. IEEE Trans Comput Imaging, 2020, 6: 1561 doi: 10.1109/TCI.2020.3039564
Publication history
  • Received:  2021-01-12
  • Available online:  2021-03-01
  • Issue date:  2021-09-18
