Colorectal polyp segmentation via Transformer-based adaptive feature selection

Citation: Liang L M, Kang T, Wang C B, et al. Colorectal polyp segmentation via Transformer-based adaptive feature selection[J]. Opto-Electron Eng, 2025, 52(3): 240279. doi: 10.12086/oee.2025.240279


    *Corresponding author: Kang Ting, 1833075267@qq.com
  • CLC number: TP391.4

  • CSTR: 32245.14.oee.2025.240279

Colorectal polyp segmentation via Transformer-based adaptive feature selection

  • Fund Project: National Natural Science Foundation of China (51365017, 61463018), the Natural Science Foundation of Jiangxi Province (20192BAB205084), and the Youth Project of Science and Technology Research of the Jiangxi Provincial Department of Education (GJJ2200848)
  • Abstract: To address challenges in colorectal polyp segmentation such as regional mis-segmentation and insufficient target localization accuracy, this paper proposes a colorectal polyp segmentation algorithm with Transformer-based adaptive feature selection. First, a Transformer encoder extracts multi-level feature representations, covering multi-scale information from fine-grained details to high-level semantics. Second, a dual-focus attention module is designed that fuses multi-scale information, spatial attention, and local detail features, strengthening feature expression and discrimination and markedly improving lesion localization accuracy. Third, a hierarchical feature fusion module with a layer-wise aggregation strategy reinforces the fusion of local and global features, sharpening the capture of complex regional characteristics and effectively reducing mis-segmentation. Finally, a dynamic feature selection module applies adaptive screening and weighting to optimize multi-resolution feature representations, removing redundant information and focusing on key regions. Experiments on the Kvasir, CVC-ClinicDB, CVC-ColonDB, and ETIS datasets yield Dice coefficients of 0.926, 0.941, 0.814, and 0.797, respectively. The results show that the proposed algorithm delivers superior performance and practical value for colorectal polyp segmentation.

  • Overview: Colorectal cancer ranks among the most common and life-threatening diseases worldwide, with colorectal polyps identified as the primary precursors. Accurate detection and segmentation of polyps are essential for preventing cancer progression and improving patient outcomes. However, existing segmentation methods face persistent challenges, including regional mis-segmentation, low localization accuracy, and difficulties in capturing the complex features of polyps. To overcome these limitations, this study presents a novel colorectal polyp segmentation algorithm that integrates Transformer-based adaptive feature selection to improve segmentation accuracy and robustness.

    The proposed approach utilizes a Transformer encoder to extract multi-level feature representations, capturing information from fine-grained details to high-level semantics. This enables a comprehensive understanding of the morphology of polyps and their surrounding tissues. To further improve feature representation, a dual-focus attention module is introduced, which integrates multi-scale information, spatial attention, and local detail features. This module enhances lesion localization accuracy and reduces errors arising from the complex structures of polyps.
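    The dual-focus attention module is described above only at a high level, and its exact formulation is not reproduced here. As an illustrative sketch of one ingredient it combines, a minimal spatial-attention gate can be written in NumPy: channel-wise average and max pooling yield a per-location score that reweights the feature map. The function names (`spatial_attention`, `sigmoid`) and the pooling recipe are our assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def spatial_attention(feat):
        """Weight each spatial location of a (C, H, W) feature map by a gate
        built from channel-wise average and max pooling (a common
        spatial-attention recipe)."""
        avg = feat.mean(axis=0, keepdims=True)   # (1, H, W) average response
        mx = feat.max(axis=0, keepdims=True)     # (1, H, W) peak response
        gate = sigmoid(avg + mx)                 # per-location weight in (0, 1)
        return feat * gate                       # broadcast gate over channels

    feat = np.random.rand(8, 16, 16)             # C=8 feature map, H=W=16
    out = spatial_attention(feat)
    ```

    Because the gate lies in (0, 1), the module can only suppress responses; the learnable layers of the real module would decide where suppression helps localization.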

    To address regional mis-segmentation, a hierarchical feature fusion module is developed. By employing a hierarchical aggregation strategy, this module strengthens the integration of local and global features, allowing the model to better capture intricate regional characteristics. Additionally, a dynamic feature selection module is incorporated to optimize multi-resolution feature representations. Through adaptive selection and weighting mechanisms, this module eliminates redundant information and focuses on critical regions, improving segmentation precision.
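    As a hedged illustration of the adaptive selection-and-weighting idea (not the authors' implementation), a softmax gate can score multi-resolution branches, already resampled to a common size, and fuse them so that stronger branches contribute more. The helper `dynamic_select` and its global-average scoring rule are assumptions made for this sketch.

    ```python
    import numpy as np

    def softmax(x, axis=0):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def dynamic_select(features):
        """Fuse a list of (C, H, W) branch features with adaptive weights.
        Each branch is scored by its global average response; softmax turns
        the scores into weights that sum to 1, so dominant branches are
        emphasized and weak (redundant) ones are down-weighted."""
        stack = np.stack(features)                  # (B, C, H, W)
        scores = stack.mean(axis=(1, 2, 3))         # one scalar per branch
        weights = softmax(scores)                   # adaptive branch weights
        fused = np.tensordot(weights, stack, axes=1)  # weighted sum -> (C, H, W)
        return fused, weights

    branches = [np.random.rand(4, 8, 8) for _ in range(3)]
    fused, w = dynamic_select(branches)
    ```

    A trained module would replace the fixed scoring rule with learned projections, but the gating structure is the same.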

    In conclusion, this study demonstrates the effectiveness of integrating Transformer-based adaptive feature selection, dual-focus attention, hierarchical feature fusion, and dynamic feature optimization. The proposed algorithm provides a comprehensive and innovative solution to the challenges of colorectal polyp segmentation, offering significant potential for clinical applications in early cancer diagnosis and treatment.
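    The tables below report Dice, MIoU, SE, PC, F2, and MAE. Dice, IoU, and MAE have standard definitions, sketched here in NumPy for binary masks; the helper names are ours, and per-image averaging or thresholding details may differ from the paper's evaluation code.

    ```python
    import numpy as np

    def dice(pred, gt, eps=1e-8):
        """Dice coefficient of two boolean masks: 2|A∩B| / (|A|+|B|)."""
        inter = np.logical_and(pred, gt).sum()
        return (2 * inter + eps) / (pred.sum() + gt.sum() + eps)

    def iou(pred, gt, eps=1e-8):
        """Intersection over union: |A∩B| / |A∪B|."""
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return (inter + eps) / (union + eps)

    def mae(prob, gt):
        """Mean absolute error between a probability map and the mask."""
        return np.abs(prob - gt).mean()

    # Toy example: ground truth is a 4x4 square; prediction covers half of it.
    gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
    pred = np.zeros_like(gt);          pred[2:6, 2:4] = True
    # dice(pred, gt) → 2·8/(8+16) = 2/3;  iou(pred, gt) → 8/16 = 0.5
    ```

    MIoU in the tables averages IoU over images (and, in some conventions, over foreground/background classes), so per-image values are computed with `iou` and then averaged.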

  • Figure 1.  A Transformer-based adaptive feature selection algorithm for colorectal polyp segmentation

    Figure 2.  Dual-focus attention module

    Figure 3.  Hierarchical feature fusion module

    Figure 4.  Dynamic feature selection module

    Figure 5.  Trend of the Dice coefficient

    Figure 6.  Visualization of segmentation results of different networks on the CVC-ClinicDB and Kvasir datasets

    Figure 7.  Visualization of segmentation results of different networks on the CVC-ColonDB and ETIS datasets

    Table 1.  Dataset details and division

    | Dataset      | Image resolution | Train data | Test data | Image type     |
    |--------------|------------------|------------|-----------|----------------|
    | CVC-ClinicDB | 384×288          | 550        | 62        | Image          |
    | Kvasir-SEG   | Size variation   | 900        | 100       | Image and mask |
    | CVC-ColonDB  | 574×500          | 0          | 380       | Image          |
    | ETIS         | 1226×996         | 0          | 196       | Image          |

    Table 2.  Segmentation results of different networks on the Kvasir and CVC-ClinicDB datasets

    | Dataset      | Method          | Dice  | MIoU  | SE    | PC    | F2    | MAE   |
    |--------------|-----------------|-------|-------|-------|-------|-------|-------|
    | Kvasir       | U-Net[4]        | 0.818 | 0.746 | 0.856 | 0.857 | 0.827 | 0.055 |
    |              | DCRNet[6]       | 0.888 | 0.825 | 0.902 | 0.904 | 0.891 | 0.035 |
    |              | CaraNet[7]      | 0.922 | 0.872 | 0.915 | 0.941 | 0.921 | 0.019 |
    |              | MSEG[8]         | 0.899 | 0.842 | 0.900 | 0.923 | 0.896 | 0.028 |
    |              | SSFormer-S[9]   | 0.925 | 0.877 | 0.914 | 0.944 | 0.917 | 0.018 |
    |              | MSRAFormer[10]  | 0.923 | 0.873 | 0.915 | 0.952 | 0.917 | 0.024 |
    |              | Polyp-PVT[21]   | 0.917 | 0.864 | 0.913 | 0.947 | 0.914 | 0.023 |
    |              | Ours            | 0.926 | 0.879 | 0.917 | 0.955 | 0.919 | 0.023 |
    | CVC-ClinicDB | U-Net[4]        | 0.823 | 0.755 | 0.834 | 0.839 | 0.827 | 0.019 |
    |              | DCRNet[6]       | 0.899 | 0.846 | 0.913 | 0.893 | 0.906 | 0.010 |
    |              | CaraNet[7]      | 0.934 | 0.890 | 0.944 | 0.940 | 0.939 | 0.006 |
    |              | MSEG[8]         | 0.912 | 0.866 | 0.924 | 0.935 | 0.918 | 0.007 |
    |              | SSFormer-S[9]   | 0.918 | 0.875 | 0.905 | 0.939 | 0.910 | 0.007 |
    |              | MSRAFormer[10]  | 0.924 | 0.874 | 0.945 | 0.920 | 0.932 | 0.008 |
    |              | Polyp-PVT[21]   | 0.937 | 0.889 | 0.949 | 0.936 | 0.945 | 0.006 |
    |              | Ours            | 0.941 | 0.896 | 0.957 | 0.934 | 0.949 | 0.006 |

    Table 3.  Segmentation results of different networks on the CVC-ColonDB and ETIS datasets

    | Dataset     | Method          | Dice  | MIoU  | SE    | PC    | F2    | MAE   |
    |-------------|-----------------|-------|-------|-------|-------|-------|-------|
    | CVC-ColonDB | U-Net[4]        | 0.512 | 0.444 | 0.523 | 0.621 | 0.510 | 0.061 |
    |             | DCRNet[6]       | 0.707 | 0.632 | 0.776 | 0.719 | 0.723 | 0.052 |
    |             | CaraNet[7]      | 0.748 | 0.683 | 0.753 | 0.893 | 0.746 | 0.035 |
    |             | MSEG[8]         | 0.738 | 0.669 | 0.752 | 0.806 | 0.739 | 0.038 |
    |             | SSFormer-S[9]   | 0.774 | 0.698 | 0.777 | 0.837 | 0.766 | 0.036 |
    |             | MSRAFormer[10]  | 0.782 | 0.707 | 0.803 | 0.874 | 0.181 | 0.028 |
    |             | Polyp-PVT[21]   | 0.808 | 0.727 | 0.821 | 0.849 | 0.809 | 0.031 |
    |             | Ours            | 0.814 | 0.732 | 0.849 | 0.824 | 0.825 | 0.028 |
    | ETIS        | U-Net[4]        | 0.398 | 0.335 | 0.482 | 0.439 | 0.429 | 0.036 |
    |             | DCRNet[6]       | 0.550 | 0.486 | 0.746 | 0.504 | 0.600 | 0.095 |
    |             | CaraNet[7]      | 0.728 | 0.661 | 0.775 | 0.814 | 0.750 | 0.017 |
    |             | MSEG[8]         | 0.703 | 0.632 | 0.739 | 0.710 | 0.720 | 0.015 |
    |             | SSFormer-S[9]   | 0.769 | 0.698 | 0.856 | 0.743 | 0.800 | 0.016 |
    |             | MSRAFormer[10]  | 0.750 | 0.679 | 0.811 | 0.745 | 0.777 | 0.013 |
    |             | Polyp-PVT[21]   | 0.787 | 0.706 | 0.867 | 0.774 | 0.820 | 0.013 |
    |             | Ours            | 0.797 | 0.716 | 0.889 | 0.761 | 0.834 | 0.018 |

    Table 4.  Performance comparison of different networks (CVC-ClinicDB)

    | Method     | Parameters/M | GFLOPs | Train/(round·s⁻¹) |
    |------------|--------------|--------|-------------------|
    | U-Net      | 34.53        | 65.52  | 309               |
    | DCRNet     | 28.70        | 53.00  | 285               |
    | CaraNet    | 44.54        | 11.45  | 256               |
    | SSFormer-S | 29.31        | 10.11  | 220               |
    | MSRAFormer | 68.96        | 21.29  | 199               |
    | Polyp-PVT  | 25.12        | 5.30   | 233               |
    | Ours       | 26.05        | 11.00  | 183               |

    Table 5.  Ablation results of each module on the CVC-ClinicDB dataset

    | Method | DFA | HFF | DFS | Dice  | MIoU  | SE    | PC    | F2    |
    |--------|-----|-----|-----|-------|-------|-------|-------|-------|
    | G1     |     |     |     | 0.929 | 0.882 | 0.935 | 0.941 | 0.931 |
    | G2     |     |     |     | 0.930 | 0.884 | 0.950 | 0.924 | 0.938 |
    | G3     |     |     |     | 0.921 | 0.866 | 0.922 | 0.935 | 0.921 |
    | G4     |     |     |     | 0.941 | 0.896 | 0.957 | 0.934 | 0.949 |

    Table 6.  Ablation results of each module on the ETIS dataset

    | Method | DFA | HFF | DFS | Dice  | MIoU  | SE    | PC    | F2    |
    |--------|-----|-----|-----|-------|-------|-------|-------|-------|
    | G1     |     |     |     | 0.782 | 0.705 | 0.856 | 0.748 | 0.816 |
    | G2     |     |     |     | 0.787 | 0.705 | 0.875 | 0.755 | 0.825 |
    | G3     |     |     |     | 0.780 | 0.698 | 0.839 | 0.771 | 0.806 |
    | G4     |     |     |     | 0.797 | 0.716 | 0.889 | 0.761 | 0.834 |
  • [1] Xie B, Liu Y Q, Li Y L. Colorectal polyp segmentation method combining polarized self-attention and Transformer[J]. Opto-Electron Eng, 2024, 51(10): 240179. doi: 10.12086/oee.2024.240179
    [2] Lin L, Lv G Z, Wang B, et al. Polyp-LVT: polyp segmentation with lightweight vision transformers[J]. Knowledge-Based Syst, 2024, 300: 112181. doi: 10.1016/j.knosys.2024.112181
    [3] Zhang Y, Ma C M, Liu S D, et al. Multi-scale feature enhanced Transformer network for efficient semantic segmentation[J]. Opto-Electron Eng, 2024, 51(12): 240237. doi: 10.12086/oee.2024.240237
    [4] Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation[C]//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, 2015: 234–241. doi: 10.1007/978-3-319-24574-4_28
    [5] Diakogiannis F I, Waldner F, Caccetta P, et al. ResUNet-a: a deep learning framework for semantic segmentation of remotely sensed data[J]. ISPRS J Photogramm Remote Sens, 2020, 162: 94−114. doi: 10.1016/j.isprsjprs.2020.01.013
    [6] Yin Z J, Liang K M, Ma Z Y, et al. Duplex contextual relation network for polyp segmentation[C]//Proceedings of 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 2022: 1–5. doi: 10.1109/ISBI52829.2022.9761402
    [7] Lou A G, Guan S Y, Ko H, et al. CaraNet: context axial reverse attention network for segmentation of small medical objects[J]. Proc SPIE, 2022, 12032: 120320D. doi: 10.1117/12.2611802
    [8] Huang C H, Wu H Y, Lin Y L. HarDNet-MSEG: a simple encoder-decoder polyp segmentation neural network that achieves over 0.9 mean dice and 86 FPS[Z]. arXiv: 2101.07172, 2021. doi: 10.48550/arXiv.2101.07172
    [9] Shi W T, Xu J, Gao P. SSformer: a lightweight transformer for semantic segmentation[C]//Proceedings of 2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), 2022: 1–5. doi: 10.1109/MMSP55362.2022.9949177
    [10] Wu C, Long C, Li S J, et al. MSRAformer: multiscale spatial reverse attention network for polyp segmentation[J]. Comput Biol Med, 2022, 151: 106274. doi: 10.1016/j.compbiomed.2022.106274
    [11] Wang W H, Xie E Z, Li X, et al. PVT v2: improved baselines with pyramid vision transformer[J]. Comput Visual Media, 2022, 8(3): 415−424. doi: 10.1007/s41095-022-0274-8
    [12] Ouyang D L, He S, Zhang G Z, et al. Efficient multi-scale attention module with cross-spatial learning[C]//Proceedings of ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023: 1–5. doi: 10.1109/ICASSP49357.2023.10096516
    [13] Zheng M J, Sun L, Dong J X, et al. SMFANet: a lightweight self-modulation feature aggregation network for efficient image super-resolution[C]//Proceedings of the 18th European Conference on Computer Vision, 2024: 359–375. doi: 10.1007/978-3-031-72973-7_21
    [14] Huo X Z, Sun G, Tian S W, et al. HiFuse: hierarchical multi-scale feature fusion network for medical image classification[J]. Biomed Signal Process Control, 2024, 87: 105534. doi: 10.1016/j.bspc.2023.105534
    [15] Chen X K, Lin K Y, Wang J B, et al. Bi-directional cross-modality feature propagation with separation-and-aggregation gate for RGB-D semantic segmentation[C]//Proceedings of the 16th European Conference on Computer Vision, 2020: 561–577. doi: 10.1007/978-3-030-58621-8_33
    [16] Zhang Q L, Yang Y B. SA-Net: shuffle attention for deep convolutional neural networks[C]//Proceedings of ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021: 2235–2239. doi: 10.1109/ICASSP39728.2021.9414568
    [17] Bernal J, Sánchez F J, Fernández-Esparrach G, et al. WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians[J]. Comput Med Imaging Graphics, 2015, 43: 99−111. doi: 10.1016/j.compmedimag.2015.02.007
    [18] Jha D, Smedsrud P H, Riegler M A, et al. Kvasir-SEG: a segmented polyp dataset[C]//Proceedings of the 26th International Conference on MultiMedia Modeling, 2020: 451–462. doi: 10.1007/978-3-030-37734-2_37
    [19] Tajbakhsh N, Gurudu S R, Liang J M. Automated polyp detection in colonoscopy videos using shape and context information[J]. IEEE Trans Med Imaging, 2016, 35(2): 630−644. doi: 10.1109/TMI.2015.2487997
    [20] Silva J, Histace A, Romain O, et al. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer[J]. Int J Comput Assisted Radiol Surg, 2014, 9(2): 283−293. doi: 10.1007/s11548-013-0926-3
    [21] Dong B, Wang W H, Fan D P, et al. Polyp-PVT: polyp segmentation with pyramid vision transformers[Z]. arXiv: 2108.06932, 2024. doi: 10.48550/arXiv.2108.06932
    [22] Li D X, Li D H, Liu Y, et al. Progressive CNN-transformer semantic compensation network for polyp segmentation[J]. Opt Precis Eng, 2024, 32(16): 2523−2536. doi: 10.37188/OPE.20243216.2523

Publication history
Received: 2024-11-29
Revised: 2025-01-24
Accepted: 2025-02-06
Published: 2025-03-28
