Journal

Journal of Digital Imaging

Papers (5)

Hybrid Transfer Learning for Classification of Uterine Cervix Images for Cervical Cancer Screening

Transfer learning using deep pre-trained convolutional neural networks is increasingly used to solve a wide range of problems in the medical field. Despite being trained on images from an entirely different domain, these networks are flexible enough to adapt to problems in new domains. Transfer learning involves fine-tuning a pre-trained network with optimal values of hyperparameters such as learning rate, batch size, and number of training epochs. The training process identifies the features relevant to a specific problem, and adapting the pre-trained network to a new problem requires fine-tuning until such features are obtained. This is facilitated by the large number of filters in the convolutional layers of the pre-trained network. Only a few of these filters are useful for solving a problem in a different domain; the rest are irrelevant, and using them may only reduce the efficacy of the network. By minimizing the number of filters required, however, the efficiency of training can be improved. In this study, we identify relevant filters in the pre-trained networks AlexNet and VGG-16 for detecting cervical cancer from cervix images. This paper presents a novel hybrid transfer learning technique in which a CNN is built and trained from scratch, initialized with the weights of only those filters identified as relevant in AlexNet and VGG-16. The study used 2198 cervix images, with 1090 belonging to the negative class and 1108 to the positive class. Our hybrid transfer learning experiment achieved an accuracy of 91.46%.
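The abstract does not specify how filter relevance is measured; one plausible criterion is to rank filters by their mean activation magnitude over the training set and keep only the top fraction. A minimal numpy sketch of that idea (the array shapes and the `keep_ratio` value are illustrative assumptions, not the paper's method):

```python
import numpy as np

def select_relevant_filters(activations, keep_ratio=0.25):
    """Rank convolutional filters by mean absolute activation over a
    dataset and keep the top fraction.
    activations: shape (num_images, num_filters, H, W)."""
    scores = np.abs(activations).mean(axis=(0, 2, 3))  # one score per filter
    k = max(1, int(keep_ratio * len(scores)))
    return np.argsort(scores)[::-1][:k]                # top-k filter indices

rng = np.random.default_rng(0)
acts = rng.normal(size=(8, 64, 5, 5))   # fake activations: 8 images, 64 filters
kept = select_relevant_filters(acts, keep_ratio=0.25)
print(len(kept))  # 16 filters retained
```

The retained indices could then be used to copy the corresponding pre-trained filter weights into a smaller network trained from scratch, as the abstract describes.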

Convolutional Neural Networks for Classifying Cervical Cancer Types Using Histological Images

Cervical cancer is the most common cancer among women worldwide. The diagnosis and classification of cancer are extremely important, as they influence the optimal treatment and length of survival. The objective was to develop and validate a diagnosis system based on convolutional neural networks (CNN) that identifies cervical malignancies and provides diagnostic interpretability. A total of 8496 labeled histology images were extracted from 229 cervical specimens (cervical squamous cell carcinoma, SCC, n = 37; cervical adenocarcinoma, AC, n = 8; nonmalignant cervical tissues, n = 184). AlexNet, VGG-19, Xception, and ResNet-50 with five-fold cross-validation were constructed to distinguish cervical cancer images from nonmalignant images. The performance of the CNNs was quantified in terms of accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC). Six pathologists were recruited for comparison with the performance of the CNNs. Guided Backpropagation and Gradient-weighted Class Activation Mapping (Grad-CAM) were deployed to highlight areas of high malignancy probability. The Xception model had excellent performance in identifying cervical SCC and AC in the test sets. For cervical SCC, AUC was 0.98 (internal validation) and 0.974 (external validation). For cervical AC, AUC was 0.966 (internal validation) and 0.958 (external validation). The performance of the CNNs fell between that of experienced and inexperienced pathologists. Grad-CAM and Guided Grad-CAM ensured diagnostic interpretability by highlighting morphological features of malignant changes. CNNs are efficient for histological image classification tasks of distinguishing cervical malignancies from benign tissues and can highlight the specific areas of concern. All these findings suggest that CNNs could serve as a diagnostic tool to aid pathologic diagnosis.
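The Grad-CAM step mentioned above weights each convolutional feature map by the spatial average of the class-score gradient, sums the weighted maps, and applies a ReLU so that only positive evidence for the class remains. A minimal numpy sketch (array shapes and the toy inputs are assumptions for illustration):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM: weight each feature map by the spatial mean of
    its gradient w.r.t. the class score, sum over channels, apply ReLU.
    feature_maps, gradients: shape (num_channels, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k, one per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalise to [0, 1] for overlay
    return cam

# Toy example: 3 channels of 4x4 feature maps with constant gradients.
maps = np.ones((3, 4, 4))
grads = np.array([1.0, -1.0, 0.5])[:, None, None] * np.ones((3, 4, 4))
cam = grad_cam(maps, grads)
print(cam[0, 0])  # 1.0 after normalisation
```

In practice the gradients come from backpropagating the class score to the last convolutional layer; the resulting map is upsampled and overlaid on the histology image.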

The Accuracy and Radiomics Feature Effects of Multiple U-net-Based Automatic Segmentation Models for Transvaginal Ultrasound Images of Cervical Cancer

Ultrasound (US) imaging has been recognized and widely used as a screening and diagnostic imaging modality for cervical cancer all over the world. However, few studies have investigated U-net-based automatic segmentation models for cervical cancer on US images or the effects of automatic segmentation on radiomics features. A total of 1102 transvaginal US images from 796 cervical cancer patients were collected and randomly divided into training (800), validation (100), and test (202) sets in this study. Four U-net models (U-net, U-net with ResNet, context encoder network (CE-net), and Attention U-net) were adapted to automatically segment the cervical cancer target on these US images. Radiomics features were extracted and evaluated from both manually and automatically segmented areas. The mean Dice similarity coefficients (DSC) of U-net, Attention U-net, CE-net, and U-net with ResNet were 0.88, 0.89, 0.88, and 0.90, respectively. The average Pearson coefficients for the evaluation of the reliability of US image-based radiomics were 0.94, 0.96, 0.94, and 0.95 for U-net, U-net with ResNet, Attention U-net, and CE-net, respectively, in their comparison with manual segmentation. The reproducibility of the radiomics parameters, evaluated by intraclass correlation coefficients (ICC), showed the robustness of automatic segmentation with an average ICC of 0.99. In conclusion, high accuracy of U-net-based automatic segmentation was achieved in delineating the target area of cervical cancer US images. It is feasible and reliable to conduct further radiomics studies with features extracted from automatically segmented target areas.
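The Dice similarity coefficient reported above compares an automatic segmentation with the manual reference as DSC = 2|A ∩ B| / (|A| + |B|). A small numpy sketch with toy binary masks (the masks are made up for illustration):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((8, 8), dtype=int)
auto[2:6, 2:6] = 1      # 16-pixel automatic segmentation
manual = np.zeros((8, 8), dtype=int)
manual[3:7, 3:7] = 1    # 16-pixel manual segmentation, 9 pixels overlap
print(dice(auto, manual))  # 2*9/32 = 0.5625
```

A DSC of 0.90, as achieved by U-net with ResNet, corresponds to a very high overlap between the automatic and manual target areas.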

Contrast-Enhancing Snapshot Narrow-Band Imaging Method for Real-Time Computer-Aided Cervical Cancer Screening

Unlike healthy tissue, cervical precancerous lesions and carcinoma in situ are rich in hemoglobin. In this study, we aimed to exploit this difference to enhance the contrast between healthy and diseased tissues via snapshot narrow-band imaging (SNBI). Four narrow-band images centered at wavelengths of characteristic absorption/reflection peaks of hemoglobin were captured with zero time delay in between by a custom-designed SNBI video camera. These spectral images were then fused in real time into a single combined image to enhance the contrast between normal and abnormal tissues. Finally, a Euclidean distance algorithm was employed to classify the tissue into clinically meaningful tissue types. Two pre-clinical experiments were conducted to validate the proposed method. Experimental results indicate that contrast between different grades of diseased tissue in the SNBI-generated image was indeed enhanced compared to the conventional white light image (WLI). The computer-aided classification accuracy was 100% with SNBI and 50% with the conventional WLI method, as compared to the gold-standard histopathological diagnosis. Further, the boundary contours between healthy tissue, cervical precancerous regions, and carcinoma in situ can be automatically delineated in SNBI. The proposed SNBI method was also fast, generating automatic diagnostic results with clear boundary contours at over 11 fps on a Pentium 1.6-GHz laptop. Hence, the proposed SNBI is of great significance for expanding the worldwide coverage of regular cervical screening programs, and for providing live guidance for procedures such as biopsy sample collection and accurate cervical cancer treatment.
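The Euclidean distance classification step can be sketched as a nearest-reference-spectrum assignment per pixel: each pixel's four narrow-band values are compared against a reference spectrum per tissue class, and the closest class wins. A numpy illustration (the reference spectra below are invented for the example, not the paper's calibration values):

```python
import numpy as np

def classify_pixels(image, references):
    """Assign each pixel to the tissue class whose reference spectrum is
    nearest in Euclidean distance.
    image: (H, W, B) narrow-band cube; references: (num_classes, B)."""
    diff = image[:, :, None, :] - references[None, None, :, :]  # (H, W, C, B)
    dist = np.sqrt((diff ** 2).sum(axis=-1))                    # (H, W, C)
    return dist.argmin(axis=-1)                                 # class-index map

refs = np.array([[0.9, 0.8, 0.7, 0.6],    # hypothetical "healthy" spectrum
                 [0.4, 0.3, 0.5, 0.2]])   # hypothetical "lesion" spectrum
img = np.tile(refs[1], (4, 4, 1))          # a patch of pure "lesion" pixels
labels = classify_pixels(img, refs)
print(labels[0, 0])  # 1 (lesion class)
```

The fully vectorized distance computation is what makes real-time classification at video frame rates plausible even on modest hardware.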

Deep Learning-based Non-rigid Image Registration for High-dose Rate Brachytherapy in Inter-fraction Cervical Cancer

In this study, an inter-fraction organ deformation simulation framework for locally advanced cervical cancer (LACC), which considers anatomical flexibility, rigidity, and motion within an image deformation, was proposed. Data included 57 CT scans (7202 2D slices) of patients with LACC, randomly divided into training (n = 42) and test (n = 15) datasets. In addition to the CT images and the corresponding RT structures (bladder, cervix, and rectum), the bone was segmented and the treatment couches were removed. A correlated stochastic field of the same size as the target image (used for deformation) was simulated to produce a general random deformation. The deformation field was optimized to have maximum amplitude in the rectum region, moderate amplitude in the bladder region, and minimal amplitude within bony structures. DIRNet is a convolutional neural network consisting of convolutional regressors, spatial transformation, and resampling blocks; it was implemented with different parameters. Mean Dice indices of 0.89 ± 0.02, 0.96 ± 0.01, and 0.93 ± 0.02 were obtained for the cervix, bladder, and rectum (defined as organs at risk), respectively. Furthermore, mean average symmetric surface distances of 1.61 ± 0.46 mm for the cervix, 1.17 ± 0.15 mm for the bladder, and 1.06 ± 0.42 mm for the rectum were achieved. In addition, mean Jaccard indices of 0.86 ± 0.04 for the cervix, 0.93 ± 0.01 for the bladder, and 0.88 ± 0.04 for the rectum were observed on the test dataset (15 subjects). Deep learning-based non-rigid image registration is therefore proposed for high-dose-rate brachytherapy in inter-fraction cervical cancer, since it outperformed conventional algorithms.
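One simple way to simulate a spatially correlated stochastic field of the same size as the target image is to low-pass filter white noise in the Fourier domain; the paper's exact construction may differ, so this numpy sketch is only one plausible realization (the `cutoff` parameter is an assumption):

```python
import numpy as np

def correlated_field(shape, cutoff=0.1, seed=0):
    """Simulate a spatially correlated random field by low-pass filtering
    white noise in the Fourier domain. Returns values in [-1, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=shape)
    f = np.fft.fftshift(np.fft.fft2(noise))
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(shape[0])),
                         np.fft.fftshift(np.fft.fftfreq(shape[1])),
                         indexing="ij")
    f[np.sqrt(kx ** 2 + ky ** 2) > cutoff] = 0      # keep only low frequencies
    field = np.real(np.fft.ifft2(np.fft.ifftshift(f)))
    return field / np.abs(field).max()              # normalise amplitude

dx = correlated_field((64, 64))   # one displacement component of the field
print(dx.shape)  # (64, 64)
```

Such a field could then be scaled per region (large in the rectum, moderate in the bladder, near zero in bone) before being applied as a displacement component, matching the amplitude constraints described in the abstract.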

Publisher

Springer Science and Business Media LLC

ISSN

0897-1889