Enhancing Ovarian Cancer Diagnosis: A Multimodal Deep Learning Approach Integrating Ultrasound and Clinical Data

Ovarian cancer (OC) diagnosis remains a significant challenge, often hampered by the subjective interpretation of ultrasound (US) images. This study presents a novel approach leveraging multimodal deep learning to improve diagnostic accuracy and consistency. A retrospective analysis of 1899 patients (2019-2024) who underwent preoperative US examinations and subsequent surgeries for adnexal masses formed the basis of this research. The core of the study involved developing and validating a multimodal deep learning model that integrates 2D grayscale US images with readily accessible clinical data.
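The abstract does not specify the fusion architecture, but a common design for this kind of model is late fusion: an image encoder produces an embedding, which is concatenated with the clinical variables before a classification head. The sketch below illustrates that pattern with numpy only; the encoder is a stand-in projection (a real system would use a CNN), and the clinical variable names in the comment are illustrative assumptions, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def image_branch(images, w):
    """Stand-in for a CNN encoder: flatten each grayscale US image
    and project it to a fixed-size embedding."""
    flat = images.reshape(images.shape[0], -1)
    return np.tanh(flat @ w)

def fuse_and_classify(img_emb, clinical, w_out, b_out):
    """Late fusion: concatenate the image embedding with clinical
    features, then apply a logistic classification head."""
    x = np.concatenate([img_emb, clinical], axis=1)
    logits = x @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logits))  # malignancy probability per case

# Toy batch: four 16x16 "ultrasound images" plus 3 clinical variables each
# (e.g. age, CA-125 level, menopausal status -- hypothetical choices)
images = rng.standard_normal((4, 16, 16))
clinical = rng.standard_normal((4, 3))

w_img = rng.standard_normal((256, 8)) * 0.1   # 16*16 -> 8-dim embedding
w_out = rng.standard_normal(8 + 3) * 0.1      # fused vector -> scalar logit
b_out = 0.0

probs = fuse_and_classify(image_branch(images, w_img), clinical, w_out, b_out)
print(probs.shape)  # (4,)
```

In this late-fusion design the clinical data can shift the decision boundary even when the image evidence is ambiguous, which is one plausible mechanism for the accuracy gains reported below.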

The model was designed to perform two key functions: OC diagnosis and extraction of US morphological features. Performance was evaluated using established metrics, including receiver operating characteristic (ROC) curves, accuracy, and the F1 score. The multimodal model outperformed diagnosis from US images alone, achieving areas under the curve (AUCs) of 0.9393 (95% CI 0.9139-0.9648) and 0.9317 (95% CI 0.9062-0.9573) in the internal and external test sets, respectively.
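For readers unfamiliar with these metrics, the sketch below computes ROC AUC and F1 from scratch with numpy. AUC is computed via the rank-sum (Mann-Whitney) identity, as the probability that a random positive case is scored above a random negative one; for brevity this version does not average ranks over tied scores. This is a generic illustration of the metrics, not the study's evaluation code.

```python
import numpy as np

def auc_score(y_true, y_score):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation.
    Note: ties in y_score are not rank-averaged here."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_score)
    ranks = np.empty(len(order), dtype=float)
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return 2 * tp / (2 * tp + fp + fn)

y_true = np.array([0, 0, 1, 1])          # 1 = malignant
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # model probabilities
print(auc_score(y_true, y_score))              # 0.75
print(f1_score(y_true, (y_score >= 0.5).astype(int)))  # 0.666...
```

An AUC near 0.94, as reported for both test sets, means the model ranks a randomly chosen malignant case above a randomly chosen benign one roughly 94% of the time.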

Furthermore, the study highlighted the model's positive impact on radiologist performance: integrating clinical data with US image analysis significantly improved the AUCs achieved by radiologists and enhanced inter-reader agreement, addressing a key limitation of current diagnostic practice. The model also extracted US morphological features with accuracies of 86.34% and 85.62% in the internal and external test sets, respectively. This performance suggests the potential to automate the generation of structured ultrasound reports and streamline the diagnostic workflow.

In conclusion, this research demonstrates the potential of multimodal deep learning to enhance the accuracy and consistency of ovarian cancer diagnosis. By combining image analysis with readily available clinical information, this approach offers a promising path toward improved patient outcomes and more efficient healthcare delivery. The model's feature-extraction capability further suggests it could streamline the reporting process associated with ultrasound imaging in ovarian cancer detection.

