Journal

Medical Image Analysis

Papers (9)

A Multi-instance Learning Network with Prototype-instance Adversarial Contrastive for Cervix Pathology Grading

The pathological grading of cervical squamous cell carcinoma (CSCC) is a fundamental and important index in tumor diagnosis. Pathologists tend to focus on single differentiation areas during the grading process. Existing multi-instance learning (MIL) methods divide pathology images into regions, generating multiple differentiated instances (MDIs) that often exhibit ambiguous grading patterns. These ambiguities reduce the model's ability to accurately represent CSCC pathological grading patterns. Motivated by these issues, we propose an end-to-end multi-instance learning network with prototype-instance adversarial contrastive learning, termed PacMIL, which incorporates three key ideas. First, we introduce an end-to-end multi-instance nonequilibrium learning algorithm that addresses the mismatch between MIL feature representations and CSCC pathological grading, and enables nonequilibrium representation. Second, we design a prototype-instance adversarial contrastive (PAC) approach that integrates a priori prototype instances and a probability distribution attention mechanism. This enhances the model's ability to learn representations for single differentiated instances (SDIs). Third, we incorporate an adversarial contrastive learning strategy into the PAC method to overcome the limitation that fixed metrics rarely capture the variability of MDIs and SDIs. In addition, we embed the correct metric distances of the MDIs and SDIs into the optimization objective function to further guide representation learning. Extensive experiments demonstrate that our PacMIL model achieves 93.09% and 0.9802 for the mAcc and AUC metrics, respectively, outperforming other SOTA models. Moreover, the representation ability of PacMIL is superior to that of existing SOTA approaches. Overall, our model offers enhanced practicality in CSCC pathological grading. Our code and dataset will be publicly available at https://github.com/Baron-Huang/PacMIL.

A novel attention-guided convolutional network for the detection of abnormal cervical cells in cervical cancer screening

Early detection of abnormal cervical cells in cervical cancer screening increases the chances of timely treatment. However, manual detection requires experienced pathologists and is time-consuming and error-prone. Several methods have previously been proposed for automated abnormal cervical cell detection, but their performance has remained debatable. Here, we develop an attention feature pyramid network (AttFPN) for automatic abnormal cervical cell detection in cervical cytology images to assist pathologists in making a more accurate diagnosis. Our proposed method consists of two main components. First, an attention module mimics the way pathologists read a cervical cytology image: it learns which features to emphasize or suppress by effectively refining the extracted features. Second, a multi-scale region-based feature fusion network, guided by clinical knowledge, fuses the refined features to detect abnormal cervical cells at different scales. The region proposals in the multi-scale network are designed according to clinical knowledge about the size and shape distribution of real abnormal cervical cells. Our method, trained and validated with 7030 annotated cervical cytology images, performs better than state-of-the-art deep learning-based methods. On an independent testing dataset of 3970 cervical cytology images, the overall sensitivity, specificity, accuracy, and AUC are 95.83%, 94.81%, 95.08%, and 0.991, respectively, which is comparable to an experienced pathologist with 10 years of experience. We further validated our method on an external dataset of 110 cases and 35,013 images from a different organization, where the case-level sensitivity, specificity, accuracy, and AUC are 91.30%, 90.62%, 90.91%, and 0.934, respectively. The average diagnostic time of our method is 0.04 s per image, much quicker than the average time of pathologists (14.83 s per image). Thus, our AttFPN is effective and efficient in cervical cancer screening and can improve clinical workflows for the benefit of potential patients. Our code is available at https://github.com/cl2227619761/TCT_Detection.
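The abstract does not describe the attention module's internals; a squeeze-and-excitation-style channel gate (all names and shapes here are hypothetical, not the authors' implementation) is one common way a network learns which features to emphasize or suppress:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel gating (illustrative sketch).

    features: (C, H, W) feature map; w1: (R, C) and w2: (C, R) are
    hypothetical learned bottleneck weights with R < C.
    """
    c = features.shape[0]
    squeezed = features.reshape(c, -1).mean(axis=1)   # global average pool -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)           # ReLU bottleneck -> (R,)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # sigmoid gates in (0, 1)
    return features * gates[:, None, None]            # emphasize / suppress channels
```

Because each gate lies in (0, 1), the module can only down-weight (suppress) or nearly preserve (emphasize) each channel, never amplify it, which keeps the refinement stable.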

Computer-aided diagnosis tool for cervical cancer screening with weakly supervised localization and detection of abnormalities using adaptable and explainable classifier

While the Pap test is the most common diagnostic method for cervical cancer, its results are highly dependent on the ability of cytotechnicians to detect abnormal cells on smears using brightfield microscopy. In this paper, we propose an explainable region classifier for whole slide images that cyto-pathologists could use to handle these very large images (100,000 x 100,000 pixels) efficiently. We create a dataset that simulates Pap smear regions and use a loss we call classification under regression constraint to train an efficient region classifier (about 66.8% accuracy on severity classification, 95.2% accuracy on normal/abnormal classification, and a 0.870 kappa score). We explain how we benefit from this loss to obtain a model focused on sensitivity, and then we show that it can be used to perform weakly supervised localization (accuracy of 80.4%) of the cell that is most responsible for the malignancy of regions of whole slide images. We extend our method to perform a more general detection of abnormal cells (66.1% accuracy) and ensure that at least one abnormal cell will be detected if malignancy is present. Finally, we evaluate our solution on a small real clinical slide dataset, highlighting the relevance of the proposed solution, adapting it for easy integration into a pathology laboratory workflow, and extending it to make a slide-level prediction.
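The paper's exact "classification under regression constraint" loss is not spelled out in the abstract; one plausible sketch (purely illustrative, with a hypothetical weighting `lam`) combines cross-entropy over severity classes with a regression penalty on the expected severity index, so ordinally distant mistakes cost more than near misses:

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    z = z - z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def ce_with_regression_constraint(logits, label, lam=1.0):
    """Hypothetical sketch: cross-entropy plus a squared penalty tying the
    expected predicted severity index to the true severity index."""
    p = softmax(logits)
    ce = -np.log(p[label] + 1e-12)                   # standard classification term
    expected = float(np.dot(np.arange(len(p)), p))   # expected severity index
    return ce + lam * (expected - label) ** 2        # regression constraint
```

Under such a loss, a prediction one severity grade off is penalized less than one three grades off, which matches the clinical intuition that severity classes are ordered.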

Cell classification with worse-case boosting for intelligent cervical cancer screening

Cell classification underpins intelligent cervical cancer screening, a cytology examination that effectively decreases both the morbidity and mortality of cervical cancer. This task, however, is rather challenging, mainly due to the difficulty of collecting a training dataset that is sufficiently representative of the unseen test data, as cell appearance and shape vary widely across cancerous statuses. As a result, even a properly trained classifier often misclassifies cells that are underrepresented in the training dataset, eventually leading to a wrong screening result. To address this, we propose a new learning algorithm, called worse-case boosting, that enables classifiers to learn effectively from under-representative datasets in cervical cell classification. The key idea is to learn more from worse-case data, i.e., data on which the classifier has a larger gradient norm than on other training data and which are therefore more likely to be underrepresented, by dynamically assigning them more training iterations and larger loss weights to boost the classifier's generalizability on underrepresented data. We realize this idea by sampling worse-case data according to the gradient norm information and then enhancing their loss values to update the classifier. We demonstrate the effectiveness of this new learning algorithm on two publicly available cervical cell classification datasets (to the best of our knowledge, the two largest ones), and the extensive experiments yield positive results (a 4% accuracy improvement). The source codes are available at: https://github.com/YouyiSong/Worse-Case-Boosting.
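The two mechanisms the abstract names, sampling by gradient norm and enhancing loss values, can be sketched as below; the specific weighting scheme here is an assumption for illustration, not the authors' exact formulation:

```python
import numpy as np

def sample_worse_cases(grad_norms, batch_size, rng):
    """Draw indices with probability proportional to gradient norm, so
    likely-underrepresented (worse-case) examples are sampled more often."""
    g = np.asarray(grad_norms, dtype=float)
    probs = g / g.sum()
    return rng.choice(len(g), size=batch_size, replace=True, p=probs)

def boosted_loss(losses, grad_norms):
    """Upweight each example's loss by its normalized gradient norm
    (the `1 + g / mean(g)` weighting is a hypothetical choice)."""
    g = np.asarray(grad_norms, dtype=float)
    weights = 1.0 + g / g.mean()
    return float(np.mean(weights * np.asarray(losses, dtype=float)))
```

Examples with large gradient norms thus receive both more training iterations (via biased sampling) and larger loss weights, which is the combination the abstract describes.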

Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis

Cervical cancer is one of the most lethal cancers threatening women's health. Nevertheless, its incidence can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope is a widely used routine in regular examinations, but it consumes a large amount of cytologists' time and labour. Computerized cytology analysis caters to this imperative need by alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely high image resolution, the existence of tiny lesions, noisy datasets, and intricate clinical definitions of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporating a synergistic grouping loss (SGL), the network can be effectively trained on a noisy dataset with fuzzy inter-class boundaries. Inspired by cytologists' clinical diagnostic criteria, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, which aligns reasonably with the intricate cytological definition of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and a specificity of 0.80, our method manifests the potential to reduce the workload of cytologists in routine practice.

ATEC23 Challenge: Automated prediction of treatment effectiveness in ovarian cancer using histopathological images

Ovarian cancer, predominantly epithelial ovarian cancer (EOC), is a global health concern due to its high mortality rate. Despite the progress made during the last two decades in the surgery and chemotherapy of ovarian cancer, more than 70% of advanced-stage patients experience disease recurrence. Bevacizumab is a humanized monoclonal antibody that blocks VEGF signaling in cancer, inhibits angiogenesis, and causes tumor shrinkage, and it has recently been approved by the FDA for advanced ovarian cancer in combination with chemotherapy. Unfortunately, bevacizumab may also induce harmful adverse effects, such as hypertension, bleeding, arterial thromboembolism, poor wound healing, and gastrointestinal perforation. Given its expensive cost and unwanted toxicities, there is an urgent need for predictive methods to identify who could benefit from bevacizumab. This paper summarizes the methods submitted to the international challenge on automated prediction of treatment effectiveness in ovarian cancer using histopathological images (ATEC23), held at the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) in 2023. Of the 18 approved requests from 5 countries, 6 teams developed fully automated systems trained on 284 whole-section WSIs and submitted predictions on a test set of 180 tissue core images, with the corresponding ground-truth labels kept private; the 5 qualified methods are evaluated in comparison with 5 state-of-the-art deep learning approaches. This study further assesses the effectiveness of the presented prediction models as indicators for patient selection using both Cox proportional hazards analysis and Kaplan-Meier survival analysis. A robust and cost-effective deep learning pipeline for digital histopathology tasks has become a necessity within the medical community.
This challenge highlights the limitations of current MIL methods, particularly in prognosis-based classification tasks, and the importance of DCNNs such as Inception, whose nonlinear convolutional modules at various resolutions facilitate processing the data at multiple scales, a key capability for pathology-related prediction tasks. It further suggests feature reuse at various scales as a direction for improving future models. In particular, this paper releases the labels of the testing set, enabling future research in precision oncology on predicting ovarian cancer treatment effectiveness and facilitating patient selection via histopathological images.

Interpretable multi-scale deep learning to detect malignancy in cell blocks and cytological smears of pleural effusion and identify aggressive endometrial cancer

The pleura is a serous membrane that surrounds the surface of the lungs. The visceral surface secretes fluid into the serous cavity, while the parietal surface ensures that the fluid is properly absorbed. When this balance is disrupted, a pleural effusion forms. Malignant pleural effusion (MPE) is most commonly caused by lung cancer or breast cancer, while benign pleural effusion (BPE) is caused by Mycobacterium tuberculosis infection, heart failure, or infections related to pneumonia. Today, with the rapid advancement of treatment protocols, accurately diagnosing MPE has become increasingly important. Although cytology smear and cell block examinations of pleural effusion are the clinical gold standards for diagnosing MPE, the diagnostic accuracy of these tools can be affected by certain limitations, such as low sensitivity, diagnostic variability across different regions, and significant inter-observer variability, leading to a certain proportion of misdiagnoses. This study presents a deep learning (DL) framework, namely Interpretable Multi-scale Attention DL with Self-Supervised Learning Feature Encoder (IMA-SSL), to identify MPE or BPE using 194 cytological smear whole-slide images (WSIs) and 188 cell block WSIs. Applying DL to WSIs of pleural effusion allows preliminary results to be obtained in a short time, giving patients the opportunity for earlier diagnosis and treatment. The experimental results show that the proposed IMA-SSL consistently obtained superior performance, outperforming five state-of-the-art (SOTA) methods in malignancy prediction on both the cell block and cytological smear datasets, as well as in the identification of aggressive endometrial cancer (EC) using a public TCGA dataset. Fisher's exact test confirmed a highly significant correlation between the outputs of the proposed model and the slide status in the EC and pleural effusion datasets (p < 0.001), substantiating the model's predictive reliability.
The proposed method has the potential for practical clinical application in the foreseeable future. It can directly detect the presence of malignant tumor cells from cost-effective cell blocks and pleural effusion cytology smears and facilitate personalized cancer treatment decisions.

Ensemble transformer-based multiple instance learning to predict pathological subtypes and tumor mutational burden from histopathological whole slide images of endometrial and colorectal cancer

In endometrial cancer (EC) and colorectal cancer (CRC), in addition to microsatellite instability, tumor mutational burden (TMB) has gradually gained attention as a genomic biomarker that can be used clinically to determine which patients may benefit from immune checkpoint inhibitors. High TMB is characterized by a large number of mutated genes, which encode aberrant tumor neoantigens, and implies a better response to immunotherapy. Hence, the subset of EC and CRC patients with high TMB may have higher chances of benefiting from immunotherapy. TMB is mainly measured by whole-exome sequencing or next-generation sequencing, which is costly and difficult to apply widely in clinical practice. Therefore, an effective, efficient, low-cost, and easily accessible tool is urgently needed to distinguish the TMB status of EC and CRC patients. In this study, we present a deep learning framework, namely Ensemble Transformer-based Multiple Instance Learning with Self-Supervised Learning Vision Transformer feature encoder (ETMIL-SSLViT), to predict pathological subtype and TMB status directly from H&E-stained whole slide images (WSIs) of EC and CRC patients, which is helpful for both pathological classification and cancer treatment planning. Our framework was evaluated on two different cancer cohorts from The Cancer Genome Atlas: an EC cohort with 918 histopathology WSIs from 529 patients and a CRC cohort with 1495 WSIs from 594 patients. The experimental results show that the proposed methods achieved excellent performance, outperforming seven state-of-the-art (SOTA) methods in cancer subtype classification and TMB prediction on both cancer datasets. Fisher's exact test further validated that the associations between the predictions of the proposed models and the actual cancer subtype or TMB status are both extremely strong (p < 0.001).
These promising findings show the potential of our proposed methods to guide personalized treatment decisions by accurately predicting the EC and CRC subtype and TMB status for effective immunotherapy planning.
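The abstract does not detail the transformer aggregation step; a minimal attention-based MIL pooling in the style of Ilse et al. (parameter names here are hypothetical) illustrates the general idea of turning a bag of patch embeddings into one slide-level representation:

```python
import numpy as np

def attention_mil_pool(instances, v, w):
    """Attention-based MIL pooling over one bag of patch embeddings.

    instances: (N, D) patch features from one WSI; v: (D, H) and w: (H,)
    are hypothetical learned attention parameters.
    """
    scores = np.tanh(instances @ v) @ w            # one relevance score per patch
    scores = scores - scores.max()                 # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()      # attention weights, sum to 1
    return a @ instances, a                        # slide embedding, patch weights
```

The returned attention weights are also a natural interpretability signal, indicating which patches drove the slide-level prediction.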

Uncertainty-driven hybrid-view adaptive learning for fully automated uterine leiomyosarcoma diagnosis

Uterine leiomyosarcoma (ULMS) is a rare malignant tumor of the smooth muscle of the uterine wall that is aggressive and has a poor prognosis. Accurately and automatically classifying histopathological whole-slide images (WSIs) is critical for clinically diagnosing ULMS. However, few works have investigated automated ULMS diagnosis methods, owing to the tumor's concealed presentation and phenotypic diversity. In this study, we present a novel uncertainty-driven hybrid-view adaptive learning (UHAL) framework that efficiently captures the distinct features of ULMS by mining pivotal biomarkers at the cell level and minimizing redundancy across hybrid views under an uncertainty discrimination mechanism, ultimately ensuring reliable diagnoses of ULMS WSIs. Specifically, hybrid-view adaptive learning incorporates three modules: phenotype-driven patch self-optimization to select salient patch features, unsupervised inter-bag adaptive learning to filter out redundant information effectively, and compensatory inner-level adaptive learning to further refine tumor features. Furthermore, the uncertainty discrimination mechanism achieves enhanced reliability by assigning quantitative confidence coefficients to predictions under a Dirichlet distribution, leveraging uncertainty to update the features and obtain accurate diagnoses. The experimental results obtained on the ULMS dataset indicate the superior performance of the proposed framework over ten state-of-the-art methods. Extensive experimental results on the TCGA-Esca, TCGA-Lung, and spinal infection datasets further validate the robustness and generalizability of the UHAL framework.
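Assigning confidence coefficients under a Dirichlet distribution is the core move of evidential deep learning; a minimal subjective-logic sketch (not the authors' exact mechanism) shows how per-class evidence yields both class beliefs and a single uncertainty mass:

```python
import numpy as np

def dirichlet_confidence(evidence):
    """Subjective-logic-style confidence from non-negative per-class evidence.

    Evidence e defines Dir(alpha) with alpha = e + 1. Beliefs b_k = e_k / S
    and uncertainty u = K / S (S = sum(alpha)) always sum to 1.
    """
    e = np.asarray(evidence, dtype=float)
    k = len(e)
    alpha = e + 1.0
    s = alpha.sum()                # Dirichlet strength
    belief = e / s                 # per-class belief masses
    u = k / s                      # uncertainty; small when evidence is large
    return belief, u
```

With zero evidence the model is maximally uncertain (u = 1), and as class evidence accumulates u shrinks toward 0, giving a calibrated confidence coefficient for each prediction.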

Publisher

Elsevier BV

ISSN

1361-8415