Journal

Artificial Intelligence in Medicine

Papers (2)

Multiple instance convolutional neural network with modality-based attention and contextual multi-instance learning pooling layer for effective differentiation between borderline and malignant epithelial ovarian tumors

Malignant epithelial ovarian tumors (MEOTs) are the most lethal gynecologic malignancies, accounting for 90% of ovarian cancer cases. By contrast, borderline epithelial ovarian tumors (BEOTs) have low malignant potential and are generally associated with a good prognosis. Accurate preoperative differentiation between BEOTs and MEOTs is crucial for determining appropriate surgical strategies and improving postoperative quality of life. Multimodal magnetic resonance imaging (MRI) is an essential diagnostic tool. Although state-of-the-art artificial intelligence technologies such as convolutional neural networks can be used for automated diagnoses, their application has been limited by their high demand for graphics processing unit memory and hardware resources when dealing with large 3D volumetric data. In this study, we used multimodal MRI with a multiple instance learning (MIL) method to differentiate between BEOTs and MEOTs. We proposed the use of MAC-Net, a multiple instance convolutional neural network (MICNN) with modality-based attention (MA) and a contextual MIL pooling layer (C-MPL). The MA module can learn from the decision-making patterns of clinicians to automatically perceive the importance of different MRI modalities and achieve multimodal MRI feature fusion based on their importance. The C-MPL module uses strong prior knowledge of tumor distribution as an important reference and assesses contextual information between adjacent images, thus achieving a more accurate prediction. MAC-Net achieved an area under the receiver operating characteristic curve of 0.878, surpassing several known MICNN approaches. Therefore, it can be used to assist clinical differentiation between BEOTs and MEOTs.
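The importance-weighted fusion performed by an attention module of this kind can be illustrated with a minimal numpy sketch. This is not the paper's MAC-Net implementation; the function name, the use of softmax weights, and the toy feature vectors are all assumptions for illustration.

```python
import numpy as np

def modality_attention_fusion(features, scores):
    """Fuse per-modality feature vectors with softmax attention weights.

    features: (M, D) array, one D-dim feature vector per MRI modality.
    scores:   (M,) relevance scores (learned in a real network; supplied
              directly here for illustration).
    """
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    weights = exp / exp.sum()             # modality importance weights, sum to 1
    fused = weights @ features            # (D,) importance-weighted combination
    return fused, weights

# Toy example: three modalities (e.g. T1, T2, DWI) with 4-dim features
feats = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.]])
fused, w = modality_attention_fusion(feats, np.array([2.0, 1.0, 0.5]))
```

Because the toy feature rows are one-hot, the fused vector directly exposes the attention weights, which makes the weighting easy to inspect; in a real network the scores would be produced by a small learned subnetwork rather than passed in.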

Overlapping cytoplasms segmentation via constrained multi-shape evolution for cervical cancer screening

Segmenting overlapping cytoplasms in cervical smear images is a clinically essential task for quantitatively measuring cell-level features to screen for cervical cancer. This task, however, remains rather challenging, mainly due to the deficiency of intensity (or color) information in the overlapping region. Although shape prior-based models that compensate for the intensity deficiency by introducing prior shape information about the cytoplasm are firmly established, they often yield visually implausible results, as they model shape priors only by limited shape hypotheses about the cytoplasm, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting shape of the cytoplasm. In this paper, we present an effective shape prior-based approach, called constrained multi-shape evolution, that segments all overlapping cytoplasms in the clump simultaneously by jointly evolving each cytoplasm's shape guided by the modeled shape priors. We model local shape priors (cytoplasm-level) by an infinitely large shape hypothesis set which contains all possible shapes of the cytoplasm. In the shape evolution, we compensate for the intensity deficiency by introducing not only the modeled local shape priors but also global shape priors (clump-level) modeled by considering mutual shape constraints of cytoplasms in the clump. We also constrain the resulting shape in each evolution step to lie in the built shape hypothesis set, further reducing implausible segmentation results. We evaluated the proposed method on two typical cervical smear datasets, and the extensive experimental results confirm its effectiveness.
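The core loop of a constrained joint shape evolution can be sketched as alternating gradient updates and a projection back into the admissible shape set. This is a heavily simplified stand-in, not the paper's method: representing each cytoplasm by a coefficient vector, combining all prior terms into one gradient callable, and using a coordinate box as the "shape hypothesis set" are all illustrative assumptions.

```python
import numpy as np

def evolve_shapes(coeffs, grad_fn, bounds, steps=50, lr=0.1):
    """Jointly evolve K shape-coefficient vectors under a set constraint.

    coeffs:  (K, D) shape coefficients, one row per cytoplasm.
    grad_fn: callable returning the (K, D) evolution gradient (data term
             plus local/global shape-prior terms, combined).
    bounds:  (lo, hi) arrays defining a box-shaped stand-in for the
             shape hypothesis set.
    """
    lo, hi = bounds
    for _ in range(steps):
        coeffs = coeffs - lr * grad_fn(coeffs)  # joint evolution step for all shapes
        coeffs = np.clip(coeffs, lo, hi)        # project back into the admissible set
    return coeffs

# Toy example: pull two "shapes" toward targets while constraining
# coefficients to [0, 1]; the second target lies outside the set.
targets = np.array([[0.2, 0.8],
                    [1.5, -0.3]])
grad = lambda c: c - targets                    # gradient of a quadratic data term
out = evolve_shapes(np.zeros((2, 2)), grad, (np.zeros(2), np.ones(2)))
```

The projection step is what distinguishes this from unconstrained evolution: the second shape's coefficients settle on the boundary of the admissible box instead of drifting to an implausible configuration, which mirrors the abstract's point that constraining each evolution step suppresses implausible segmentations.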

Publisher

Elsevier BV

ISSN

0933-3657