Journal

Computer Methods and Programs in Biomedicine

Papers (10)

Developing a new radiomics-based CT image marker to detect lymph node metastasis among cervical cancer patients

In the diagnosis of cervical cancer patients, lymph node (LN) metastasis is a highly important indicator for subsequent treatment management. Although CT/PET (computed tomography/positron emission tomography) examination is the most effective approach for this detection, it is limited by high cost and low accessibility, especially in rural areas of the U.S.A. and in developing countries. To address this challenge, this investigation aims to develop and test a novel radiomics-based CT image marker to detect lymph node metastasis in cervical cancer patients. A total of 1,763 radiomics features were first computed from the segmented primary cervical tumor depicted on the CT slice with the maximal tumor region. Next, a principal component analysis algorithm was applied to the initial feature pool to determine an optimal feature cluster. Then, based on this optimal cluster, prediction models (logistic regression or support vector machine) were trained and optimized to generate an image marker to detect LN metastasis. A retrospective dataset containing 127 cervical cancer patients was established to build and test the model. The model was trained using a leave-one-case-out (LOCO) cross-validation strategy, and image-marker performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC). The results indicate that the SVM-based imaging marker achieved an AUC value of 0.841 ± 0.035. With an operating threshold of 0.5 on the model-generated prediction scores, the imaging marker yielded positive and negative predictive values (PPV and NPV) of 0.762 and 0.765, respectively, with an overall accuracy of 76.4%. This study provides initial verification of the feasibility of using CT images and radiomics to develop a low-cost image marker that detects LN metastasis and assists stratification of cervical cancer patients.
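The LOCO evaluation loop described above can be sketched as follows. The toy data, sizes, and the nearest-centroid classifier are illustrative stand-ins only (the paper trains logistic regression or an SVM on PCA-selected radiomics features); the AUC is computed with the standard rank-based (Mann-Whitney) formula:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 20 "cases", 6 radiomics features each, binary LN labels.
y = np.array([0] * 10 + [1] * 10)
X = rng.normal(size=(20, 6))
X[y == 1] += 1.0  # make the classes weakly separable

def nearest_centroid_score(X_train, y_train, x_test):
    """Score in [0, 1]: relative closeness to the positive-class centroid."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(x_test - c0)
    d1 = np.linalg.norm(x_test - c1)
    return d0 / (d0 + d1)

# Leave-one-case-out: each case is scored by a model trained on all others.
scores = np.empty(len(y), dtype=float)
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    scores[i] = nearest_centroid_score(X[mask], y[mask], X[i])

# AUC via the rank (Mann-Whitney) formulation.
ranks = scores.argsort().argsort() + 1  # 1-based ranks (no ties expected)
n_pos, n_neg = int(y.sum()), int((1 - y).sum())
auc = (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
print(f"LOCO AUC: {auc:.3f}")
```

Because every case is held out exactly once, the resulting scores are out-of-sample for all 20 cases, which is what makes the single pooled AUC a fair estimate on a small dataset.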

Global context-aware cervical cell detection with soft scale anchor matching

Computer-aided cervical cancer screening based on automated recognition of cervical cells has the potential to significantly reduce error rates and increase productivity compared to manual screening. Traditional methods often rely on accurate cell segmentation and the extraction of discriminative hand-crafted features. Recently, detectors based on convolutional neural networks have been applied to reduce the dependency on hand-crafted features and eliminate the need for segmentation. However, these methods tend to yield too many false-positive predictions. This paper proposes a global context-aware framework to deal with this problem, which integrates global context information through an image-level classification branch and a weighted loss; the prediction of this branch is merged into cell detection to filter false-positive predictions. Furthermore, a new ground-truth assignment strategy in the feature pyramid, called soft scale anchor matching, is proposed, which matches ground truths with anchors across scales softly. This strategy searches for the most appropriate representation of each ground truth in every layer and adds more positive samples at different scales, which facilitates feature learning. Our proposed methods achieve a 5.7% increase in mean average precision and an 18.5% increase in specificity, at the cost of a 2.6% increase in inference time. By avoiding any dependence on segmentation of cervical cells, the proposed methods show great potential to reduce the workload of pathologists in automation-assisted cervical cancer screening.
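The core idea of merging an image-level classification branch into cell detection to suppress false positives can be sketched as below; the scores, thresholds, and the simple gating rule are hypothetical illustrations, not the paper's exact fusion:

```python
import numpy as np

# Hypothetical outputs: per-cell confidences from the detector, and an
# image-level probability (global branch) that the image is abnormal at all.
det_scores = np.array([0.91, 0.78, 0.55, 0.40])
image_level_prob = 0.12  # global branch says: image is probably normal

def filter_detections(det_scores, image_prob, det_thresh=0.5, image_thresh=0.3):
    """Suppress detections on images the global branch deems normal.

    Mirrors the idea (not the exact rule) of merging an image-level
    classification prediction into cell detection to cut false positives.
    """
    if image_prob < image_thresh:
        return np.zeros(0)  # drop all detections: whole image looks normal
    return det_scores[det_scores >= det_thresh]

print(filter_detections(det_scores, image_level_prob))  # -> []
print(filter_detections(det_scores, 0.85))              # keeps 3 detections
```

The benefit of this design is that a single confident image-level "normal" decision can veto many spurious cell-level detections at once, which is exactly where segmentation-free detectors tend to over-fire.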

A fuzzy distance-based ensemble of deep models for cervical cancer detection

Cervical cancer is one of the leading causes of death among women. As with any other disease, early detection and treatment of cervical cancer under the best possible medical advice are paramount to minimizing its after-effects. Pap smear images are one of the most effective ways to detect the presence of this type of cancer. This article proposes a fuzzy distance-based ensemble approach composed of deep learning models for cervical cancer detection in Pap smear images. We employ three transfer-learning models for this task: Inception V3, MobileNet V2, and Inception ResNet V2, with additional layers to learn data-specific features. To aggregate the outcomes of these models, we propose a novel ensemble method based on minimizing the error between the observed values and the ground truth. For samples with multiple predictions, we first take three distance measures, i.e., Euclidean, Manhattan (city-block), and cosine, for each class from its corresponding best possible solution. We then defuzzify these distance measures using the product rule to calculate the final predictions. In the current experiments, we achieved accuracies of 95.30%, 93.92%, and 96.44% when Inception V3, MobileNet V2, and Inception ResNet V2 are run individually. After applying the proposed ensemble technique, the accuracy reaches 96.96%, higher than any of the individual models. Experimental outcomes on three publicly available datasets confirm that the proposed model presents competitive results compared to state-of-the-art methods. The proposed approach provides an end-to-end classification technique to detect cervical cancer from Pap smear images, which may help medical professionals better treat cervical cancer and thus increase the overall efficiency of the testing process. The source code of the proposed work can be found at github.com/rishavpramanik/CervicalFuzzyDistanceEnsemble.
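A minimal sketch of the fuzzy distance-based ensemble described above, assuming softmax outputs from the three models; treating the one-hot vector of each class as its "best possible solution" and fusing the three distances with a plain product is an illustrative reading, and the paper's exact normalization may differ:

```python
import numpy as np

def fuzzy_distance_ensemble(probs_list):
    """Pick the class whose ideal one-hot vector is closest to all models.

    For each class c, measure each model's distance to the one-hot target
    for c (Euclidean, Manhattan, cosine), fuse the three per-model distances
    with the product rule, sum over models, and return the class with the
    smallest fused distance.
    """
    n_classes = probs_list[0].shape[0]
    fused = np.zeros(n_classes)
    for c in range(n_classes):
        target = np.zeros(n_classes)
        target[c] = 1.0
        for p in probs_list:
            eu = np.linalg.norm(p - target)
            mh = np.abs(p - target).sum()
            cs = 1.0 - (p @ target) / (np.linalg.norm(p) * np.linalg.norm(target))
            fused[c] += eu * mh * cs  # product-rule defuzzification
    return int(fused.argmin())

# Three hypothetical model outputs (standing in for Inception V3,
# MobileNet V2, Inception ResNet V2) on a 3-class sample.
m1 = np.array([0.2, 0.7, 0.1])
m2 = np.array([0.1, 0.8, 0.1])
m3 = np.array([0.5, 0.4, 0.1])
print(fuzzy_distance_ensemble([m1, m2, m3]))  # -> 1
```

Note that because the decision minimizes distance to the ideal prediction rather than averaging probabilities, one confidently wrong model (here m3) is down-weighted by the two models that agree.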

Diagnosis of endometrium hyperplasia and screening of endometrial intraepithelial neoplasia in histopathological images using a global-to-local multi-scale convolutional neural network

Endometrial hyperplasia (EH), a uterine pathology characterized by an increased gland-to-stroma ratio compared to normal endometrium (NE), may precede the development of endometrial cancer (EC). In particular, atypical EH, also known as endometrial intraepithelial neoplasia (EIN), has been proven to be a precursor of EC. Thus, diagnosing the different forms of EH (EIN, hyperplasia without atypia (HwA), and NE) and screening EIN from non-EIN are crucial for the health of the female reproductive system. Computer-aided diagnosis (CAD) has been used to diagnose endometrial histological images based on machine learning and deep learning. However, these studies perform single-scale image analysis and thus can characterize only partial endometrial features. Empirically, both global features (cytological changes relative to the background) and local features (gland-to-stroma ratio and lesion dimension) are helpful in identifying endometrial lesions. We propose a global-to-local multi-scale convolutional neural network (G2LNet) to diagnose different forms of EH and to screen EIN in endometrial histological images stained with hematoxylin and eosin (H&E). G2LNet first uses a supervised model in the global part to extract contextual features of endometrial lesions, and simultaneously deploys multi-instance learning in the local part to obtain textural features from multiple image patches. The contextual and textural features are used together to diagnose different endometrial lesions after fusion by a convolutional block attention module. In addition, we visualize the salient regions on both the global image and the local images to investigate the interpretability of the model in endometrial diagnosis. In five-fold cross-validation on 7,812 H&E images from 467 endometrial specimens, G2LNet achieved an accuracy of 97.01% for EH diagnosis and an area under the curve (AUC) of 0.9902 for EIN screening, significantly higher than state-of-the-art methods.
In external validation on 1,631 H&E images from 135 specimens, G2LNet achieved an accuracy of 95.34% for EH diagnosis, comparable to that of a mid-level pathologist (95.71%). Specifically, G2LNet had advantages in diagnosing EIN, while humans performed better in identifying NE and HwA. By integrating both global (contextual) and local (textural) features, G2LNet may help pathologists diagnose endometrial lesions in clinical practice, especially by improving the accuracy and efficiency of screening for precancerous lesions.
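The fusion of contextual and textural features by a convolutional block attention module can be illustrated with a minimal numpy sketch of CBAM-style channel attention. The map sizes, reduction ratio, and random MLP weights are illustrative only: a real model learns these weights and adds a spatial-attention stage.

```python
import numpy as np

rng = np.random.default_rng(1)

def channel_attention(fmap, reduction=2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    Average- and max-pooled channel descriptors pass through a shared
    two-layer MLP; the sigmoid of their sum reweights each channel.
    """
    C = fmap.shape[0]
    avg = fmap.mean(axis=(1, 2))  # (C,) global average pooling
    mx = fmap.max(axis=(1, 2))    # (C,) global max pooling
    W1 = rng.normal(scale=0.1, size=(C // reduction, C))  # illustrative
    W2 = rng.normal(scale=0.1, size=(C, C // reduction))  # random weights
    mlp = lambda v: W2 @ np.maximum(W1 @ v, 0.0)
    att = 1.0 / (1.0 + np.exp(-(mlp(avg) + mlp(mx))))  # sigmoid gate, (C,)
    return fmap * att[:, None, None]

# Fuse hypothetical global (contextual) and local (textural) feature maps by
# concatenating along channels, then reweighting channels with attention.
global_feat = rng.normal(size=(4, 8, 8))
local_feat = rng.normal(size=(4, 8, 8))
fused = channel_attention(np.concatenate([global_feat, local_feat], axis=0))
print(fused.shape)  # -> (8, 8, 8)
```

The attention gate lets the network decide, per channel, how much the contextual versus textural branch should contribute to each diagnosis.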

Light scattering imaging modal expansion cytometry for label-free single-cell analysis with deep learning

Single-cell imaging plays a key role in various fields, including drug development, disease diagnosis, and personalized medicine. To obtain multi-modal information from a single-cell image, especially for label-free cells, this study develops modal expansion cytometry for label-free single-cell analysis. The study uses a deep learning-based architecture to expand single-mode light scattering images into multi-modality images, including bright-field (non-fluorescent) and fluorescence images. By combining an adversarial loss, an L1 distance loss, and a VGG perceptual loss, a new network optimization method is proposed. The effectiveness of this method is verified by experiments on simulated images, standard spheres of different sizes, and multiple cell types (such as cervical cancer and leukemia cells). Additionally, the capability of this method for single-cell analysis is assessed through multi-modal cell classification experiments, such as cervical cancer subtyping, using both cervical cancer cells and leukemia cells. The expanded bright-field and fluorescence images derived from the light scattering images align closely with those obtained through conventional microscopy, showing a contour ratio near 1 for both the whole cell and its nucleus. Using machine learning, the subtyping of cervical cancer cells achieved 92.85% accuracy with the modal-expansion images, an improvement of nearly 20% over single-mode light scattering images. This study demonstrates that light scattering imaging modal expansion cytometry with deep learning can expand a single-mode light scattering image into artificial multimodal images of label-free single cells, which not only provides visualization of the cells but also aids cell classification, showing great potential in single-cell analysis tasks such as cancer cell diagnosis.

An efficient Fusion-Purification Network for Cervical pap-smear image classification

In cervical cell diagnostics, autonomous screening technology constitutes the foundation of automated diagnostic systems. Numerous deep learning-based classification techniques have been successfully applied to the analysis of cervical cell images, yielding favorable outcomes. Nevertheless, efficient discrimination of cervical cells remains challenging due to large intra-class and small inter-class variations. The key to dealing with this problem is to capture localized informative differences in cervical cell images and to represent discriminative features efficiently. Existing methods neglect the importance of global morphological information, resulting in inadequate feature-representation capability. To address this limitation, we propose a novel cervical cell classification model that focuses on purified fusion information. Specifically, we first integrate detailed texture information and morphological structure features, a step we call cervical pathology information fusion. Second, to enhance the discrimination of cervical cell features and address the data redundancy and bias inherent after fusion, we design a cervical purification bottleneck module. The model strikes a balance between leveraging purified features and facilitating high-efficiency discrimination. Furthermore, we release a more intricate cervical cell dataset: the Cervical Cytopathology Image Dataset (CCID). Extensive experiments on two real-world datasets show that our proposed model outperforms state-of-the-art cervical cell classification models. The results show that our method can help pathologists accurately evaluate cervical smears.

An explainable attention model for cervical precancer risk classification using colposcopic images

Cervical cancer remains a major worldwide health issue, with high morbidity and mortality rates when it is diagnosed and treated at a late stage. Early identification and risk assessment are crucial for preventive interventions. This paper presents the Cervix-AID-Net model for classifying cervical precancer risk using still images captured from a DYSIS colposcope. The model comprises a Convolutional Block Attention Module (CBAM) and convolutional layers that extract interpretable and representative features from colposcopic images to distinguish high-risk from low-risk cervical precancer. In addition, Cervix-AID-Net integrates gradient class activation maps, Local Interpretable Model-agnostic Explanations, CartoonX, and pixel rate-distortion techniques to explain model decisions using output feature maps and input features. Evaluation with holdout and ten-fold cross-validation yielded classification accuracies of 99.33% and 99.81%, respectively. The analysis revealed that CartoonX provides particularly meticulous explanations of the Cervix-AID-Net model's decisions, owing to its ability to recover the relevant piecewise-smooth part of the image. A study of the effect of Gaussian noise and blur on the input shows that performance remains unchanged up to 3% Gaussian noise and 10% blur, and decreases thereafter. A comparison of the proposed model's performance with other deep learning approaches highlights Cervix-AID-Net's potential as a supplemental tool for increasing the effectiveness of cervical precancer risk assessment. The proposed method, which incorporates CBAM and explainable artificial intelligence, has the potential to advance the prevention and early detection of cervical cancer, helping improve patient outcomes and reduce the worldwide burden of this preventable disease.

CervixFormer: A Multi-scale swin transformer-Based cervical pap-Smear WSI classification framework

Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Repeated screening for cervical cancer is therefore of the utmost importance, and computer-assisted diagnosis is key to scaling it up. Current recognition algorithms, however, perform poorly on whole-slide image (WSI) analysis, fail to generalize across staining methods and uneven subtype distributions, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer, an end-to-end, multi-scale Swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) for generating synthetic images during patch-level training to address the class imbalance problem; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model that concatenates the ensemble-based results and produces the final outcome. In the evaluation, the proposed method is first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets, namely the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer obtained fairly better performance on two-, three-, four-, and six-class classification of smear- and cell-level datasets. For clinical interpretation, we used GradCAM to visualize a coarse localization map highlighting important regions in the WSI.
Notably, CervixFormer extracts features mostly from the cell nucleus and partially from the cytoplasm. In comparison with existing state-of-the-art benchmark methods, CervixFormer outperforms them in terms of recall, accuracy, and computing time.

Cervical cell classification with graph convolutional network

Cervical cell classification has important clinical significance in early-stage cervical cancer screening. In contrast to conventional classification methods, which depend on hand-crafted or engineered features, a Convolutional Neural Network (CNN) generally classifies cervical cells via learned deep features. However, the latent correlations among images may be ignored during CNN feature learning, which limits the representation ability of CNN features. We propose a novel cervical cell classification method based on a Graph Convolutional Network (GCN). It aims to explore the potential relationships among cervical cell images to improve classification performance. The CNN features of all the cervical cell images are first clustered, so that the intrinsic relationships among images are preliminarily revealed. To further capture the underlying correlations among clusters, a graph structure is constructed. A GCN is then applied to propagate the node dependencies and thus yield a relation-aware feature representation. The GCN features are finally incorporated to enhance the discriminative ability of the CNN features. Experiments on the public cervical cell image dataset SIPaKMeD, from the 2018 International Conference on Image Processing, demonstrate the feasibility and effectiveness of the proposed method. In addition, we introduce a large-scale Motic liquid-based cytology image dataset, which provides a large amount of data, some novel cell types of important clinical significance, and staining differences, and thus presents a great challenge for cervical cell classification. We evaluate the proposed method under two conditions, consistent staining and different staining. Experimental results show that our method outperforms existing state-of-the-art methods on the quantitative metrics (accuracy, sensitivity, specificity, F-measure, and confusion matrices).
The exploration of the intrinsic relationships among cervical cells contributes significant improvements to cervical cell classification, and the relation-aware features generated by the GCN effectively strengthen the representational power of the CNN features. The proposed method achieves better classification performance and could potentially be used in automatic cervical cytology screening systems.
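The GCN propagation step described above follows the standard graph-convolution update H' = relu(D^{-1/2}(A + I)D^{-1/2} H W); a minimal numpy sketch with an illustrative cluster graph (the graph, feature sizes, and weights below are toy stand-ins, and the final concatenation mirrors the paper's idea of enhancing CNN features with GCN features):

```python
import numpy as np

rng = np.random.default_rng(2)

def gcn_layer(A, H, W):
    """One graph-convolution layer: relu(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy ring graph over 5 cluster nodes (standing in for clusters of CNN
# features of cervical cell images).
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(5, 8))  # node features (from the CNN)
W = rng.normal(size=(8, 4))  # learnable layer weights (random here)
H_gcn = gcn_layer(A, H, W)   # relation-aware features

# Enhance the CNN features by concatenating the GCN features.
H_combined = np.concatenate([H, H_gcn], axis=1)
print(H_combined.shape)  # -> (5, 12)
```

Each propagation step mixes a node's features with its neighbors', so the combined representation encodes both the image's own CNN features and its relationships to similar images.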

Improved rank-based recursive feature elimination method based ovarian cancer detection model via customized deep architecture

Ovarian cancer (OC) is often considered the most lethal gynecological cancer because it tends to be diagnosed at an advanced stage, leading to limited treatment options and poorer outcomes. Several factors contribute to the challenges in managing ovarian cancer, namely rapid metastasis, genetic factors, reproductive history, etc. This necessitates prompt and precise diagnosis of ovarian cancer in order to carry out efficient treatment plans and give patients affected by OC the care and support they need. The proposed CCLSTM model comprises four essential stages: preprocessing, feature extraction, feature selection, and detection. Initially, the input data is preprocessed using Improved Two-step Data Normalization. Subsequently, features such as statistical features, modified entropy, raw features, and mutual information are extracted from the normalized data. Next, the obtained features undergo the Improved Rank-based Recursive Feature Elimination (IR-RFE) method to select the most suitable features. Finally, the proposed CCLSTM model takes the selected features as input and provides the final detection outcome. The performance of the proposed CCLSTM technique is examined through a thorough assessment using diverse analyses. The CCLSTM scheme shows a sensitivity of 0.948, whereas the sensitivity ratings for ALO-LSTM + ALOCNN, Bi-GRU, LSTM, RNN, KNN, CNN, and DCNN are 0.808, 0.893, 0.829, 0.851, 0.765, 0.872, and 0.893, respectively. In the end, the development of the CNN and the addition of the LSTM technique have produced an ovarian cancer detection technique that is more accurate and consistent than other existing strategies.
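While the paper's IR-RFE adds an improved ranking criterion, the underlying recursive feature elimination loop can be sketched as follows, using least-squares coefficient magnitudes as a stand-in ranking (the data and feature counts are toy illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)

def rfe(X, y, n_keep):
    """Plain recursive feature elimination (a stand-in for IR-RFE):
    repeatedly fit a least-squares linear model on the surviving features
    and drop the feature whose coefficient has the smallest magnitude."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.abs(coef).argmin()))  # eliminate weakest feature
    return keep

# Toy data: the target depends only on features 0 and 3; the rest are noise.
X = rng.normal(size=(100, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=100)
print(sorted(rfe(X, y, n_keep=2)))  # -> [0, 3]
```

Refitting after every elimination is what distinguishes RFE from one-shot filter ranking: a feature's importance is re-estimated each round in the context of the features that remain.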

Publisher

Elsevier BV

ISSN

0169-2607