DeepLabV3+ With Convolutional Triplet Attention and Histopathology‐Guided Voting for Hyperspectral Image Segmentation of Serous Ovarian Cancer
ABSTRACT

Deep learning has been widely applied in medical image analysis, providing healthcare professionals with more efficient and accurate diagnostic information. Among semantic segmentation models, the baseline DeepLabV3+ is well suited to low‐dimensional data such as RGB images, but its performance on high‐dimensional data such as hyperspectral images is suboptimal, limiting its generalization and discriminative capability. We propose a hybrid architecture that integrates a Convolutional Triplet Attention Module (CTAM), which captures cross‐dimensional spectral‐spatial dependencies, with a Histopathology‐Guided Voting Mechanism (HVM), which incorporates WHO diagnostic criteria. Experimental results show that the model accurately differentiates and localizes low‐grade and high‐grade serous ovarian cancer tissues, achieving accuracies of 92.7% and 90.2%, respectively. Its performance exceeds the pathologists' consensus (85.4%) and surpasses state‐of‐the‐art models (e.g., U‐Net, PAN, FPN) by more than 20% in LGSC classification.
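To illustrate the voting idea behind the HVM, here is a minimal pure-Python sketch of one plausible aggregation rule: per-patch class predictions are tallied, and ties are broken in favor of the more severe grade, mirroring the clinical convention of grading a specimen by its worst region. The function name, class labels, and tie-breaking policy are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter

def guided_vote(patch_predictions, grade_priority=("HGSC", "LGSC", "normal")):
    """Aggregate per-patch class predictions into one slide-level label.

    Hypothetical sketch: ties are broken in favor of the higher-grade
    class, mimicking the convention that a specimen is graded by its
    worst region. `grade_priority` orders classes from most to least
    severe (assumed labels).
    """
    counts = Counter(patch_predictions)
    best = max(counts.values())
    tied = [label for label, c in counts.items() if c == best]
    # Prefer the most severe grade among tied labels (assumed ordering).
    for grade in grade_priority:
        if grade in tied:
            return grade
    return tied[0]

print(guided_vote(["LGSC", "HGSC", "HGSC", "normal"]))  # HGSC wins by count
print(guided_vote(["LGSC", "HGSC"]))                    # tie -> higher grade
```

In practice the paper's mechanism would operate on pixel- or region-level segmentation outputs and weight votes by diagnostic criteria; this sketch only shows the aggregation skeleton.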