Jun Liu
Papers (4)
Multicenter deep lear…, Diagnosis of cervical…, Recognition of Cervic…, The role of BATF2 def…
Collaborators (4)
Rihui Li, Xiaoxue Sun, Yuanxiu Peng, Yu Chang
Institutions (4)
Fourth Peoples Hospit…, Stanford University, Nanchang Hangkong Uni…, Huazhong University o…

Papers

Multicenter deep learning–based automatic delineation of CTV and PTV in uterine malignancy CT imaging

Accurate delineation of the clinical target volume (CTV) and planning target volume (PTV) is essential for effective radiotherapy in uterine malignancies. Manual contouring is laborious, time-consuming, and subjective, and current automatic methods often focus on a single cancer type with limited external validation. To address this, we developed a deep-learning model capable of accurately delineating both CTV and PTV across multiple uterine malignancies using CT imaging. We retrospectively collected 602 contrast-enhanced CT scans, comprising 302 cases (cervical and endometrial cancers) from our institution and an additional 300 cervical cancer scans from external centers. Expert radiation oncologists manually delineated the CTV and PTV on each image. Among the 302 internal cancer cases, 177 cervical cancer cases were used for model training with five-fold cross-validation. Additionally, 41 cervical cancer cases were reserved as an internal testing cohort, while 84 endometrial cancer cases constituted the first external testing cohort to assess the model's generalizability across cancer types. The remaining 300 cervical cancer scans from external centers formed a second external testing cohort to assess model robustness across institutions. We evaluated three segmentation architectures (2D, full-resolution 3D, and cascaded 3D networks) and measured their performance using three standard metrics: Dice Similarity Coefficient (DSC), 95% Hausdorff Distance (HD95), and Average Surface Distance (ASD). The model-generated segmentations demonstrated strong concordance with the expert contours. In the internal testing cohort with the same cancer type, performance metrics (DSC, HD95, ASD) were consistently high. Similarly, the external testing cohort with different cancer types showed robust performance, indicating effective generalizability.
On the internal testing cohort, the model demonstrated strong performance, achieving mean DSCs of 83.42% for PTV and 81.23% for CTV, with low spatial errors (PTV: ASD 2.01 mm, HD95 5.71 mm; CTV: ASD 1.35 mm, HD95 4.75 mm). In the endometrial cancer cohort, PTV segmentation achieved a DSC of 82.88%, while CTV segmentation yielded an HD95 of 5.85 mm and an ASD of 1.34 mm. Additionally, clinical evaluation revealed that approximately 90% of the model-generated contours required no or only minor revision. We present a multicenter-validated, deep-learning-based framework for automatic CTV and PTV delineation across diverse uterine malignancies on CT. Our model offers a scalable, generalizable solution with the potential to reduce the workload in radiation oncology, improve consistency, and streamline clinical workflows.
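The three reported metrics (DSC, HD95, ASD) can be computed directly from binary segmentation masks. Below is a minimal NumPy/SciPy sketch, not the authors' implementation; the mask shapes, voxel spacing, and helper names are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def _surface_distances(a, b, spacing):
    """Distances from each surface voxel of `a` to the surface of `b`."""
    surf_a = a ^ ndimage.binary_erosion(a)
    surf_b = b ^ ndimage.binary_erosion(b)
    # distance of every voxel to the nearest surface voxel of b
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return dt_b[surf_a]

def hd95_and_asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """95% Hausdorff Distance and Average (symmetric) Surface Distance."""
    a, b = a.astype(bool), b.astype(bool)
    d = np.concatenate([_surface_distances(a, b, spacing),
                        _surface_distances(b, a, spacing)])
    return np.percentile(d, 95), d.mean()

if __name__ == "__main__":
    gt = np.zeros((12, 12, 12), dtype=bool)
    gt[3:9, 3:9, 3:9] = True          # synthetic ground-truth contour
    pred = np.roll(gt, 1, axis=0)     # prediction shifted by one voxel
    print(dice(gt, pred))             # 2*180/432 ≈ 0.833
    print(hd95_and_asd(gt, pred))
```

Passing the scanner's voxel spacing via `spacing` is what turns the voxel-grid distances into the millimeter values reported in the abstract.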

Recognition of Cervical Precancerous Lesions Based on Probability Distribution Feature Guidance

Introduction: Cervical cancer has a high incidence among women, and screening for cervical precancerous lesions plays an important role in reducing mortality. Methods: In this study, we proposed a multichannel feature extraction method based on the probability distribution features of the acetowhite (AW) region to identify cervical precancerous lesions, with the overarching goal of improving the accuracy of cervical precancerous screening. A k-means clustering algorithm was first used to extract the cervical region from the original colposcopy images. We then used a deep learning model called DeepLab V3+ to segment the AW region of the cervical image after acetic acid application, from which a probability distribution map of the segmented AW region was obtained. This probability distribution map was fed into a neural network classification model for multichannel feature extraction, which produced the final classification. Results: The experimental evaluation showed that the proposed method achieved an average accuracy of 87.7%, an average sensitivity of 89.3%, and an average specificity of 85.6%. Compared with methods that did not add the segmentation probability features, the proposed method increased the average accuracy, sensitivity, and specificity by 8.3%, 8%, and 8.4%, respectively. Conclusion: Overall, the proposed method holds great promise for enhancing the screening of cervical precancerous lesions in the clinic by providing the physician with more reliable screening results that might reduce their workload.
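Two preprocessing ideas from the pipeline above — k-means extraction of the region of interest, and stacking the segmentation probability map with the image as an extra input channel — can be sketched as follows. This is an illustrative NumPy version, not the paper's implementation; the deterministic center initialization and the 4-channel layout are assumptions:

```python
import numpy as np

def kmeans_pixels(img, k=2, iters=10):
    """Tiny k-means over pixel colors; returns a per-pixel cluster map."""
    pixels = img.reshape(-1, img.shape[-1]).astype(np.float32)
    # deterministic init: centers spread between the min and max color
    centers = np.linspace(pixels.min(axis=0), pixels.max(axis=0), k)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None] - centers[None], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels.reshape(img.shape[:2])

def build_multichannel_input(rgb, prob_map):
    """Stack an RGB colposcopy image (H, W, 3) with the AW-region
    probability map (H, W) into a 4-channel classifier input."""
    assert rgb.shape[:2] == prob_map.shape
    img = rgb.astype(np.float32) / 255.0
    prob = prob_map.astype(np.float32)[..., None]   # (H, W, 1)
    return np.concatenate([img, prob], axis=-1)     # (H, W, 4)

if __name__ == "__main__":
    img = np.zeros((8, 8, 3), dtype=np.uint8)
    img[:, 4:] = 200                       # two synthetic color regions
    roi = kmeans_pixels(img, k=2)          # cluster map, e.g. cervix vs. background
    x = build_multichannel_input(img, (roi == 1).astype(np.float32))
    print(x.shape)                         # (8, 8, 4)
```

Concatenating the probability map as a fourth channel is one common way to let a classifier condition on a segmentation prior; the paper's exact fusion scheme may differ.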
