A Multi-Scale Channel-Spatial Perception Mamba Method for Alzheimer's Disease PET Image Classification

Abstract: To address the limitations of existing medical image diagnosis methods based on convolutional neural networks and Transformers, namely insufficient long-range dependency modeling and quadratic computational complexity, this paper proposes SSHCM (State Space Hybrid Convolutional Model), a 3D Positron Emission Tomography (PET) image classification framework built on a multi-scale channel-spatial perception Mamba architecture. The model integrates a linear state-space model with a multi-scale feature interaction mechanism, using stacked LMamba blocks to dynamically model long-range dependencies over 3D voxel sequences. A layer-wise cross-scale channel attention fusion module is designed to adaptively fuse global contextual semantics, and a channel-spatial perception module combining large-kernel convolutions with an inverted bottleneck structure is constructed to strengthen spatial feature fusion and improve lesion localization accuracy. Experimental results on an Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset of 1187 subjects show that the proposed method clearly outperforms ResNet, ViT, and Mamba variant models in both accuracy and AUC, reaching 97.03% accuracy on the AD classification task and 83.33% on the MCI conversion prediction task.
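The efficiency claim in the abstract, that a state-space model handles long voxel sequences in linear rather than quadratic time, can be illustrated with a toy recurrence. The sketch below is a fixed-parameter diagonal SSM scan over a flattened 3D volume; it is illustrative only and omits the input-dependent (selective) parameterization and hardware-aware scan that Mamba, and presumably the paper's LMamba blocks, actually use.

```python
import torch

def diagonal_ssm_scan(x, a, b, c):
    """x: (L, D) voxel sequence; a, b, c: (D,) per-channel SSM parameters."""
    h = torch.zeros_like(x[0])
    ys = []
    for x_t in x:              # single pass over the sequence: O(L) time
        h = a * h + b * x_t    # state update h_t = A h_{t-1} + B x_t
        ys.append(c * h)       # readout  y_t = C h_t
    return torch.stack(ys)

volume = torch.randn(8, 8, 8, 4)   # toy (D, H, W, channels) PET feature volume
seq = volume.reshape(-1, 4)        # flatten voxels into a length-512 sequence
out = diagonal_ssm_scan(seq,
                        a=torch.full((4,), 0.9),
                        b=torch.ones(4),
                        c=torch.ones(4))
print(out.shape)                   # torch.Size([512, 4])
```

The loop touches each voxel once and carries a fixed-size hidden state, which is the linear-complexity property that distinguishes SSM scans from quadratic self-attention over the same sequence.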
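The abstract also describes a layer-wise cross-scale channel attention fusion module for adaptive fusion of global contextual semantics. One common way to realize such a module is shown below: a coarser-scale feature map is upsampled onto the finer grid, concatenated, and reweighted by a squeeze-and-excitation-style channel gate. The class name and hyperparameters are hypothetical; the paper's exact design may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleChannelFusion3D(nn.Module):
    """Fuse a coarse-scale feature map into a fine-scale one via channel attention."""
    def __init__(self, fine_ch: int, coarse_ch: int, reduction: int = 4):
        super().__init__()
        total = fine_ch + coarse_ch
        # Squeeze-and-excitation-style gate: pool globally, then reweight channels.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(total, total // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(total // reduction, total, kernel_size=1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv3d(total, fine_ch, kernel_size=1)

    def forward(self, fine, coarse):
        # Bring the coarse map onto the fine map's spatial grid.
        coarse = F.interpolate(coarse, size=fine.shape[2:],
                               mode="trilinear", align_corners=False)
        x = torch.cat([fine, coarse], dim=1)
        return self.project(x * self.gate(x))  # attention-weighted fusion

fused = CrossScaleChannelFusion3D(fine_ch=16, coarse_ch=32)(
    torch.randn(1, 16, 32, 32, 32),   # fine-scale features
    torch.randn(1, 32, 16, 16, 16),   # coarse-scale features
)
print(fused.shape)                    # torch.Size([1, 16, 32, 32, 32])
```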
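Finally, the channel-spatial perception module pairs large-kernel convolution with an inverted bottleneck. A minimal sketch of how these two ingredients are typically combined (in the spirit of ConvNeXt-style blocks) follows: a depthwise large-kernel 3D convolution gathers wide spatial context cheaply, and an expand-then-project pointwise pair mixes channels. All names and hyperparameters here are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChannelSpatialPerception3D(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7, expansion: int = 4):
        super().__init__()
        # Depthwise large-kernel conv: wide receptive field at low parameter cost.
        self.spatial = nn.Conv3d(channels, channels, kernel_size,
                                 padding=kernel_size // 2, groups=channels)
        self.norm = nn.GroupNorm(1, channels)  # LayerNorm-like over channels
        # Inverted bottleneck: expand channels, apply nonlinearity, project back.
        self.pw_expand = nn.Conv3d(channels, channels * expansion, kernel_size=1)
        self.act = nn.GELU()
        self.pw_project = nn.Conv3d(channels * expansion, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.spatial(x)
        x = self.norm(x)
        x = self.pw_project(self.act(self.pw_expand(x)))
        return x + residual  # residual connection preserves the input signal

block = ChannelSpatialPerception3D(channels=16)
x = torch.randn(2, 16, 32, 32, 32)    # two toy 3D feature volumes
print(block(x).shape)                 # torch.Size([2, 16, 32, 32, 32])
```

Splitting spatial mixing (depthwise, large kernel) from channel mixing (pointwise, expanded) is what lets such a block enlarge the receptive field for lesion localization without the cost of a dense large-kernel 3D convolution.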

       
