Document Type

Article

Publication Date

2026

DOI

10.3389/fcomp.2026.1721892

Publication Title

Frontiers in Computer Science

Volume

8

Pages

1721892

Abstract

An estimated 80,000–90,000 new glioma cases occur each year, highlighting the need for reliable imaging-based decision support. Although deep learning has improved tumor sub-region segmentation, many state-of-the-art models fail to fully capture complementary information across T1, T1Gd, T2, and FLAIR MRI modalities and often operate as “black boxes,” limiting physician trust when precise delineation is critical for surgical planning, radiation targeting, and treatment monitoring. To address these limitations, we propose AIMS, an Adaptive Integrated Multi-Modal Segmentation framework that maintains modality-specific feature streams and employs adaptive self-attention within a hierarchical CNN-Transformer architecture to prioritize and fuse multi-modal MRI features. We evaluated AIMS on the BraTS 2019 adult glioma dataset using five-fold cross-validation and compared it against strong hybrid baselines with paired statistical testing; generalization was assessed on an independent BraTS 2021 cohort without fine-tuning. AIMS achieved high ensemble Dice Similarity Coefficients of 0.936 for enhancing tumor, 0.942 for tumor core, and 0.931 for whole tumor on BraTS 2019, with statistically significant improvements over competing methods, and maintained strong performance on BraTS 2021 despite protocol and scanner variability. Finally, Grad-CAM-based explanations applied to adaptive attention and fusion layers, together with quantitative sanity checks, provided modality-aware and spatially meaningful visualizations that support clinical interpretation. By improving both segmentation accuracy and model transparency relative to strong baselines, AIMS advances multi-modal glioma segmentation and strengthens human–machine teaming by enabling faster, clinician-aligned tumor delineation without sacrificing reliability.
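The abstract reports performance as Dice Similarity Coefficients (DSC) for the three standard BraTS sub-regions. For readers unfamiliar with the metric, below is a minimal sketch of how a per-region DSC is typically computed from binary masks; this is a generic illustration of the standard metric, not the authors' evaluation code, and the function name and array shapes are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary segmentation masks.

    DSC = 2 * |pred ∩ target| / (|pred| + |target|), in [0, 1];
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

For example, a prediction overlapping the ground truth on one of its two foreground voxels, with one ground-truth voxel total, yields DSC = 2·1 / (2+1) ≈ 0.667; the reported 0.93+ scores indicate near-complete overlap for each tumor sub-region.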

Rights

© 2026 Savaria and Sun.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Data Availability

Article states: "The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation."

Original Publication Citation

Savaria, E. P., & Sun, J. (2026). Adaptive self-attention for enhanced segmentation of adult gliomas in multi-modal MRI. Frontiers in Computer Science, 8, Article 1721892. https://doi.org/10.3389/fcomp.2026.1721892

ORCID

0000-0003-0394-6494 (Savaria), 0009-0000-8905-7553 (Sun)
