Automatic and accurate tumor segmentation on medical images is in high demand to assist physicians with diagnosis and treatment. However, it is difficult to obtain the massive amounts of annotated training data required by deep-learning models, as the manual delineation process is often tedious and requires expertise. Although self-supervised learning (SSL) schemes have been widely adopted to address this problem, most SSL methods focus only on global structural information, ignoring the key distinguishing features of tumor regions: local intensity variations and a wide size distribution. In this paper, we propose Scale-Aware Restoration (SAR), an SSL method for 3D tumor segmentation. Specifically, a novel proxy task, i.e., scale discrimination, is formulated to pre-train the 3D neural network in combination with the self-restoration task, so that the pre-trained model learns multi-level local representations from multi-scale inputs. Moreover, an adversarial learning module is introduced to learn modality-invariant representations from multiple unlabeled source datasets. We demonstrate the effectiveness of our method on two downstream tasks: i) brain tumor segmentation and ii) pancreas tumor segmentation. Compared with state-of-the-art 3D SSL methods, our approach significantly improves segmentation accuracy. In addition, we analyze its advantages from multiple perspectives, such as data efficiency, performance, and convergence speed.
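As a rough illustration of how the two proxy tasks described above can be combined, the sketch below pairs a toy self-restoration objective with a scale-discrimination head on a shared 3D encoder. Everything here is an assumption made for illustration: the crop scales, the voxel-masking corruption, the tiny encoder/decoder, and the unit loss weighting are not specified by the abstract and do not come from the authors' released code.

```python
# Minimal sketch of SAR-style pre-training: self-restoration + scale
# discrimination on unlabeled 3D sub-volumes. All sizes, transforms, and
# loss weights are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

CROP_SCALES = (32, 64, 128)  # assumed multi-scale sub-volume sizes (voxels)

class SARNet(nn.Module):
    """Shared 3D encoder with a restoration decoder and a scale classifier."""
    def __init__(self, n_scales=len(CROP_SCALES)):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(  # restores the corrupted input
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 2, stride=2),
        )
        # Predicts which crop scale the input came from.
        self.scale_head = nn.Linear(32, n_scales)

    def forward(self, x):
        z = self.encoder(x)
        restored = self.decoder(z)
        logits = self.scale_head(z.mean(dim=(2, 3, 4)))  # global average pool
        return restored, logits

def corrupt(x, mask_prob=0.3):
    """Toy corruption: randomly zero voxels (stand-in for the real transforms)."""
    return x * (torch.rand_like(x) > mask_prob)

net = SARNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for scale_idx, size in enumerate(CROP_SCALES):
    crop = torch.rand(2, 1, size, size, size)      # fake unlabeled sub-volumes
    crop = F.interpolate(crop, size=(64, 64, 64))  # resize to a common input size
    restored, logits = net(corrupt(crop))
    target = torch.full((crop.size(0),), scale_idx, dtype=torch.long)
    loss = F.mse_loss(restored, crop) + F.cross_entropy(logits, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point this sketch tries to capture is that the scale label comes for free from the cropping procedure, so both losses can be computed entirely on unlabeled volumes before fine-tuning on the downstream segmentation tasks.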
Segmentation of colorectal cancerous regions from 3D Magnetic Resonance (MR) images is a crucial procedure for radiotherapy, which conventionally requires accurate delineation of tumour boundaries at the expense of labor, time, and reproducibility. While …
Deep learning has quickly become the weapon of choice for brain lesion segmentation. However, few existing algorithms pre-configure any biological context of their chosen segmentation tissues, and instead rely on the neural network's optimizer to develop …
In this paper, we propose a similarity-aware fusion network (SAFNet) to adaptively fuse 2D images and 3D point clouds for 3D semantic segmentation. Existing fusion-based methods achieve remarkable performance by integrating information from multiple …
We propose a novel, simple, and effective method that integrates a lesion prior with a 3D U-Net to improve brain tumor segmentation. First, we utilize the ground-truth brain tumor lesions from a group of patients to generate heatmaps of different types …
Automated segmentation of brain glioma plays an active role in diagnostic decisions, progression monitoring, and surgery planning. Based on deep neural networks, previous studies have demonstrated promising techniques for brain glioma segmentation. However, …