
Parallel MRI reconstruction by using complex convolution and attention mechanism

DUAN Jizhong, XIAO Chen

Citation: DUAN J Z, XIAO C. Parallel MRI reconstruction by using complex convolution and attention mechanism[J]. Journal of Beijing University of Aeronautics and Astronautics, 2025, 51(1): 85-93 (in Chinese). doi: 10.13700/j.bh.1001-5965.2022.1005


doi: 10.13700/j.bh.1001-5965.2022.1005

    Corresponding author: E-mail: duanjz@kust.edu.cn

  • CLC number: TP391; R319


Funds: 

National Natural Science Foundation of China Regional Science Foundation Project (61861023) 

  • Abstract:

    A deep complex attention network (DCANet) is proposed for parallel magnetic resonance imaging (MRI) reconstruction. Because MRI data are complex-valued, the model replaces conventional real-valued convolutions with complex convolutions. Because the data acquired by each coil in parallel MRI differ, the model also applies a channel-wise attention mechanism to emphasize channels that carry more useful features. A data consistency layer preserves the originally sampled data, and the resulting blocks are cascaded into the final network. Experiments on two MRI sequences with three different undersampling patterns show that DCANet achieves better reconstructions, with higher peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) and lower high-frequency error norm (HFEN) than competing models; its PSNR is on average 4.52 dB, 2.30 dB, and 1.21 dB higher than that of the MRI cascaded channel-wise attention network (MICCAN), Deepcomplex, and dual-octave network (DONet) models, respectively.
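The complex convolution described in the abstract is conventionally realised as four real convolutions, following the construction of deep complex networks [21-22]. A minimal single-channel sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

def complex_conv2d(x, w_re, w_im):
    """Complex 2-D convolution (w_re + i*w_im) * (x_re + i*x_im),
    decomposed into four real convolutions:
        real part: w_re * x_re - w_im * x_im
        imag part: w_re * x_im + w_im * x_re
    """
    re = convolve2d(x.real, w_re, mode="same") - convolve2d(x.imag, w_im, mode="same")
    im = convolve2d(x.imag, w_re, mode="same") + convolve2d(x.real, w_im, mode="same")
    return re + 1j * im
```

The decomposition is exact: the result matches convolving directly with the complex kernel, while letting a real-valued deep-learning stack carry the two components as separate feature maps.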

     

  • Figure 1. Architecture of DCANet model

    Figure 2. Channel attention mechanism

    Figure 3. Relative differences in PSNR between DCANet and the Deepcomplex [22] and DONet [23] models under different undersampling modes of different sequences

    Figure 4. Comparison of reconstruction error maps of coronal proton density-weighted sequences with 3-fold 1D uniform undersampling

    Figure 5. Comparison of reconstruction error maps of coronal proton density-weighted sequences with 5-fold 1D uniform undersampling

    Figure 6. Comparison of reconstruction error maps of sagittal proton density-weighted sequences with 3-fold 1D Cartesian random undersampling

    Figure 7. Comparison of reconstruction error maps of sagittal proton density-weighted sequences with 5-fold 1D Cartesian random undersampling

    Figure 8. Comparison of reconstruction error maps of sagittal proton density-weighted sequences with 3-fold 2D random undersampling

    Figure 9. Comparison of reconstruction error maps of sagittal proton density-weighted sequences with 5-fold 2D random undersampling

    Table 1. DCANet model parameters

    Parameter                                  Value
    Number of CABs                             10
    Number of AGBs                             10
    Batch size                                 1
    Convolution kernel size                    3×3
    K_Up, K_Down kernel size                   1×1
    K_Up, K_Down channel reduction ratio r     8
    Initial learning rate                      0.0003
    Learning-rate decay factor                 0.7
    Epochs                                     300
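The K_Up/K_Down 1×1 convolutions and reduction ratio r in Table 1 suggest a squeeze-and-excitation style channel attention. A sketch under that assumption (the weight shapes and names are illustrative, not the paper's):

```python
import numpy as np

def channel_attention(x, w_down, w_up):
    """Channel-wise attention on a (C, H, W) feature map.

    w_down: (C // r, C) weights of the 1x1 K_Down conv (channel reduction)
    w_up:   (C, C // r) weights of the 1x1 K_Up conv (channel restoration)
    """
    s = x.mean(axis=(1, 2))                 # squeeze: global average pooling -> (C,)
    z = np.maximum(w_down @ s, 0.0)         # K_Down + ReLU -> (C // r,)
    a = 1.0 / (1.0 + np.exp(-(w_up @ z)))   # K_Up + sigmoid -> per-channel gains in (0, 1)
    return x * a[:, None, None]             # reweight each channel
```

Because the gains lie in (0, 1), the block can only attenuate channels, which is how it steers the network toward coils whose features are more informative.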

    Table 2. Reconstruction performance of coronal proton-density weighted sequence

    Undersampling pattern   Model              PSNR/dB (3×)   PSNR/dB (5×)   SSIM (3×)     SSIM (5×)     HFEN (3×)     HFEN (5×)
    1D uniform              MICCAN [25]        31.695±0.899   29.973±0.895   0.831±0.016   0.773±0.020   0.385±0.062   0.485±0.058
                            Deepcomplex [22]   32.708±1.519   29.608±2.111   0.835±0.027   0.754±0.031   0.320±0.085   0.483±0.129
                            DONet [23]         33.308±1.347   30.693±1.427   0.846±0.025   0.776±0.024   0.303±0.073   0.438±0.107
                            DCANet             34.786±1.497   31.880±1.398   0.876±0.024   0.812±0.029   0.273±0.069   0.403±0.092
    1D Cartesian random     MICCAN [25]        34.515±0.982   32.326±0.946   0.872±0.019   0.818±0.027   0.243±0.046   0.354±0.057
                            Deepcomplex [22]   35.586±1.385   33.039±1.243   0.880±0.021   0.818±0.029   0.193±0.041   0.299±0.072
                            DONet [23]         35.799±1.443   33.601±1.254   0.884±0.022   0.828±0.026   0.188±0.040   0.275±0.054
                            DCANet             36.791±1.298   34.687±1.206   0.899±0.019   0.855±0.028   0.181±0.040   0.266±0.062
    2D random               MICCAN [25]        30.446±2.190   25.881±2.826   0.876±0.021   0.783±0.034   0.193±0.043   0.322±0.073
                            Deepcomplex [22]   35.591±2.923   28.795±2.346   0.915±0.025   0.803±0.029   0.122±0.033   0.322±0.049
                            DONet [23]         36.094±2.560   31.847±2.716   0.920±0.026   0.855±0.031   0.113±0.029   0.206±0.057
                            DCANet             36.917±2.849   33.596±2.843   0.935±0.017   0.891±0.022   0.103±0.028   0.176±0.048
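For reference, the PSNR and HFEN columns can be computed as below. This follows a common convention (peak taken from the reference image, HFEN as a relative L2 error after Laplacian-of-Gaussian filtering); the paper's exact filter parameters may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def psnr(ref, rec):
    """Peak signal-to-noise ratio in dB, peak taken from the reference image."""
    mse = np.mean((ref - rec) ** 2)
    return 20.0 * np.log10(ref.max() / np.sqrt(mse))

def hfen(ref, rec, sigma=1.5):
    """High-frequency error norm: relative L2 error after a
    Laplacian-of-Gaussian filter (sigma here is an assumed value)."""
    log_ref = gaussian_laplace(ref, sigma)
    log_rec = gaussian_laplace(rec, sigma)
    return np.linalg.norm(log_rec - log_ref) / np.linalg.norm(log_ref)
```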

    Table 3. Reconstruction performance of sagittal proton-density weighted sequence

    Undersampling pattern   Model              PSNR/dB (3×)   PSNR/dB (5×)   SSIM (3×)     SSIM (5×)     HFEN (3×)     HFEN (5×)
    1D uniform              MICCAN [25]        33.710±2.097   29.559±1.726   0.854±0.033   0.731±0.045   0.353±0.051   0.595±0.069
                            Deepcomplex [22]   35.220±1.868   30.766±1.772   0.870±0.024   0.755±0.037   0.282±0.031   0.502±0.053
                            DONet [23]         35.744±1.758   31.707±1.603   0.876±0.023   0.773±0.030   0.263±0.028   0.463±0.053
                            DCANet             37.046±2.069   32.668±1.713   0.900±0.025   0.809±0.034   0.241±0.025   0.426±0.047
    1D Cartesian random     MICCAN [25]        34.513±1.862   32.496±1.772   0.864±0.027   0.808±0.035   0.294±0.035   0.404±0.049
                            Deepcomplex [22]   36.889±1.590   34.342±1.632   0.893±0.019   0.833±0.027   0.194±0.020   0.288±0.029
                            DONet [23]         37.308±1.541   34.931±1.578   0.896±0.018   0.840±0.026   0.184±0.019   0.267±0.029
                            DCANet             38.660±1.730   36.073±1.849   0.916±0.020   0.871±0.030   0.168±0.019   0.252±0.031
    2D random               MICCAN [25]        27.841±3.032   26.252±0.051   0.819±0.057   0.735±0.061   0.267±0.051   0.405±0.067
                            Deepcomplex [22]   36.025±2.099   29.177±2.295   0.920±0.018   0.802±0.042   0.151±0.018   0.366±0.036
                            DONet [23]         37.149±1.901   32.905±2.804   0.933±0.014   0.877±0.027   0.130±0.015   0.217±0.031
                            DCANet             38.166±2.076   34.007±2.209   0.941±0.013   0.894±0.022   0.111±0.013   0.194±0.024

    Table 4. Results of 3-fold cross-validation experiment

    Model              PSNR/dB        SSIM          HFEN
    MICCAN [25]        35.101±0.846   0.881±0.009   0.230±0.031
    Deepcomplex [22]   36.238±0.879   0.888±0.014   0.177±0.025
    DONet [23]         36.581±0.760   0.894±0.013   0.158±0.017
    DCANet             37.567±0.618   0.908±0.008   0.156±0.018

    Table 5. Results of data consistency layer comparison experiment

    Model          PSNR/dB        SSIM          HFEN
    DCANet_NoDC    33.677±1.337   0.857±0.029   0.271±0.052
    DCANet         36.791±1.298   0.899±0.019   0.181±0.040
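The gap in Table 5 comes from the data-consistency step, which re-inserts the originally sampled k-space values into each intermediate reconstruction. A hard-replacement, single-coil sketch for illustration (the paper's exact formulation, e.g. any noise-weighted soft variant, may differ):

```python
import numpy as np

def data_consistency(x_rec, k_meas, mask):
    """Hard data-consistency layer.

    x_rec : complex image estimate from the network, shape (H, W)
    k_meas: measured (zero-filled) k-space, shape (H, W)
    mask  : boolean sampling mask, shape (H, W)
    """
    k_rec = np.fft.fft2(x_rec)
    k_dc = np.where(mask, k_meas, k_rec)  # sampled locations keep the raw data
    return np.fft.ifft2(k_dc)
```

After this layer the reconstruction agrees exactly with the acquisition at every sampled k-space location, so the cascade can only deviate from the raw data where nothing was measured.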
  • [1] ZHAO X P. Magnetic resonance imaging[M]. Beijing: Science Press, 2004: 64-87 (in Chinese).
    [2] DONOHO D L. Compressed sensing[J]. IEEE Transactions on Information Theory, 2006, 52(4): 1289-1306. doi: 10.1109/TIT.2006.871582
    [3] LUSTIG M, DONOHO D L, SANTOS J M, et al. Compressed sensing MRI[J]. IEEE Signal Processing Magazine, 2008, 25(2): 72-82. doi: 10.1109/MSP.2007.914728
    [4] LAI Z Y, QU X B, LIU Y S, et al. Image reconstruction of compressed sensing MRI using graph-based redundant wavelet transform[J]. Medical Image Analysis, 2016, 27: 93-104. doi: 10.1016/j.media.2015.05.012
    [5] KNOLL F, BREDIES K, POCK T, et al. Second order total generalized variation (TGV) for MRI[J]. Magnetic Resonance in Medicine, 2011, 65(2): 480-491. doi: 10.1002/mrm.22595
    [6] YANG J F, ZHANG Y, YIN W T. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data[J]. IEEE Journal of Selected Topics in Signal Processing, 2010, 4(2): 288-297. doi: 10.1109/JSTSP.2010.2042333
    [7] PRUESSMANN K P. Encoding and reconstruction in parallel MRI[J]. NMR in Biomedicine, 2006, 19(3): 288-299. doi: 10.1002/nbm.1042
    [8] PRUESSMANN K P, WEIGER M, BÖRNERT P, et al. Advances in sensitivity encoding with arbitrary k-space trajectories[J]. Magnetic Resonance in Medicine, 2001, 46(4): 638-651. doi: 10.1002/mrm.1241
    [9] PRUESSMANN K P, WEIGER M, SCHEIDEGGER M B, et al. SENSE: Sensitivity encoding for fast MRI[J]. Magnetic Resonance in Medicine, 1999, 42(5): 952-962. doi: 10.1002/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S
    [10] GRISWOLD M A, JAKOB P M, HEIDEMANN R M, et al. Generalized autocalibrating partially parallel acquisitions (GRAPPA)[J]. Magnetic Resonance in Medicine, 2002, 47(6): 1202-1210. doi: 10.1002/mrm.10171
    [11] LUSTIG M, PAULY J M. SPIRiT: Iterative self-consistent parallel imaging reconstruction from arbitrary k-space[J]. Magnetic Resonance in Medicine, 2010, 64(2): 457-471. doi: 10.1002/mrm.22428
    [12] UECKER M, LAI P, MURPHY M J, et al. ESPIRiT: An eigenvalue approach to autocalibrating parallel MRI: Where SENSE meets GRAPPA[J]. Magnetic Resonance in Medicine, 2014, 71(3): 990-1001. doi: 10.1002/mrm.24751
    [13] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521: 436-444. doi: 10.1038/nature14539
    [14] RONNEBERGER O, FISCHER P, BROX T. U-Net: Convolutional networks for biomedical image segmentation[C]//Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241.
    [15] DONG C, LOY C C, HE K, et al. Learning a deep convolutional network for image super-resolution[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2014: 184-199.
    [16] SHI J, WANG L L, WANG S S, et al. A review of the application of deep learning in medical imaging[J]. Journal of Image and Graphics, 2020, 25(10): 1953-1981 (in Chinese).
    [17] WANG S S, SU Z H, YING L, et al. Accelerating magnetic resonance imaging via deep learning[C]//Proceedings of the IEEE 13th International Symposium on Biomedical Imaging. Piscataway: IEEE Press, 2016: 514-517.
    [18] SCHLEMPER J, CABALLERO J, HAJNAL J V, et al. A deep cascade of convolutional neural networks for dynamic MR image reconstruction[J]. IEEE Transactions on Medical Imaging, 2018, 37(2): 491-503. doi: 10.1109/TMI.2017.2760978
    [19] SRIRAM A, ZBONTAR J, MURRELL T, et al. GrappaNet: Combining parallel imaging with deep learning for multi-coil MRI reconstruction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE Press, 2020: 14303-14310.
    [20] LU T Y, ZHANG X L, HUANG Y H, et al. pFISTA-SENSE-ResNet for parallel MRI reconstruction[J]. Journal of Magnetic Resonance, 2020, 318: 106790. doi: 10.1016/j.jmr.2020.106790
    [21] TRABELSI C, BILANIUK O, ZHANG Y, et al. Deep complex networks[EB/OL]. (2017-03-27)[2022-12-14]. http://arxiv.org/abs/1705.09792.
    [22] WANG S S, CHENG H T, YING L, et al. Deep complexMRI: Exploiting deep residual network for fast parallel MR imaging with complex convolution[J]. Magnetic Resonance Imaging, 2020, 68: 136-147. doi: 10.1016/j.mri.2020.02.002
    [23] FENG C M, YANG Z Y, FU H Z, et al. DONet: Dual-octave network for fast MR image reconstruction[J]. IEEE Transactions on Neural Networks and Learning Systems, 2021: 1-11.
    [24] ZHANG Y L, LI K P, LI K, et al. Image super-resolution using very deep residual channel attention networks[C]//Proceedings of the European Conference on Computer Vision. Berlin: Springer, 2018: 294-310.
    [25] HUANG Q Y, YANG D, WU P X, et al. MRI reconstruction via cascaded channel-wise attention network[C]//Proceedings of the IEEE 16th International Symposium on Biomedical Imaging. Piscataway: IEEE Press, 2019: 1622-1626.
    [26] HAMMERNIK K, KLATZER T, KOBLER E, et al. Learning a variational network for reconstruction of accelerated MRI data[J]. Magnetic Resonance in Medicine, 2018, 79(6): 3055-3071. doi: 10.1002/mrm.26977
Publication history
  • Received: 2022-12-19
  • Accepted: 2023-03-10
  • Published online: 2023-03-22
  • Issue date: 2025-01-31
