Paper 13407-15
Explainable unsupervised TNM category differentiation in PET images with deep texture analysis
18 February 2025 • 11:00 AM - 11:20 AM PST | Town & Country C
Abstract
We propose an integrated approach to TNM category classification in PET images that combines ROI detection via a U-Net with unsupervised manifold learning. First, the U-Net identifies candidate regions of interest (ROIs) for primary tumors, nodes, and metastases. A patch-based convolutional neural network (CNN), pretrained on tumor data, then extracts metabolic deep texture features, and Gradient-weighted Class Activation Mapping (Grad-CAM) highlights salient regions within each patch. These Grad-CAM heatmaps are subsequently embedded into a low-dimensional space using Uniform Manifold Approximation and Projection (UMAP), and clustered into T, N, and M classes with k-means. This unsupervised pipeline eliminates the need for extensive node and metastasis annotations, as only tumor patches are used for the initial supervised training. Our method achieves 89.5% accuracy and a 93.1% F1 score in distinguishing T, N, and M categories, indicating robust performance. By combining the interpretability of Grad-CAM with data-driven manifold learning, this approach shows promise for enhancing esophageal cancer staging and aiding more personalized treatment planning.
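The embedding-and-clustering stage described above (Grad-CAM heatmaps projected to a low-dimensional space, then grouped into T, N, and M clusters) can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the heatmaps are synthetic random arrays, and PCA stands in for UMAP so the example needs only NumPy and scikit-learn (the paper itself uses UMAP, e.g. via the `umap-learn` package).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic stand-in for Grad-CAM heatmaps: 300 patches of 32x32 saliency values.
rng = np.random.default_rng(0)
heatmaps = rng.random((300, 32, 32))

# Flatten each heatmap into a single feature vector.
features = heatmaps.reshape(len(heatmaps), -1)

# Embed into a low-dimensional space. The paper uses UMAP; PCA is used here
# only to keep the sketch dependency-free.
embedding = PCA(n_components=2, random_state=0).fit_transform(features)

# Cluster the embedded points into three groups, one per TNM category.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

print(labels.shape)  # one cluster assignment per heatmap
```

With real Grad-CAM heatmaps, each of the three clusters would then be mapped to a T, N, or M category, avoiding the need for per-category supervised labels beyond the tumor patches used to pretrain the CNN.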
Presenter
Robert John
Univ. of Surrey (United Kingdom)
Rob is a fourth-year PhD student at the Centre for Vision, Speech and Signal Processing at the University of Surrey. His research focuses on the use of AI in PET imaging for oesophageal cancer staging.