Algorithm Fuses Neuroimaging Data to Improve Alzheimer’s Diagnosis
Deep learning algorithm uses a two-stage process to extract more information from PET and MRI data.
Accurate diagnosis of Alzheimer's disease and mild cognitive impairment is essential for timely treatment. Different neuroimaging modalities, such as magnetic resonance imaging (MRI) and positron emission tomography (PET), provide complementary information that, when combined, can improve diagnostic performance for these conditions.
In a study published Jan. 19 in the IEEE Journal of Biomedical and Health Informatics, researchers applied a deep learning algorithm to MRI and PET neuroimaging data to improve the diagnosis of Alzheimer's disease. In the first stage, two "stacked deep polynomial networks" (SDPNs) separately learned clinically relevant features from the MRI data and from the PET data. In the second stage, all the learned features were fed into another SDPN, which fused the MRI and PET information into a single representation.
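The two-stage pipeline can be sketched in a few lines of Python. This is a toy illustration under loose assumptions, not the authors' implementation: each "polynomial layer" is stood in for by a linear projection followed by an element-wise square, the weights are random rather than learned, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_layer(x, w):
    """Toy stand-in for one polynomial-network layer: a linear
    projection followed by an element-wise square (degree-2 terms)."""
    return (x @ w) ** 2

def stacked_poly_net(x, weights):
    """Stack layers so each layer's output feeds the next
    (layerwise feature extraction)."""
    for w in weights:
        x = poly_layer(x, w)
    return x

# Synthetic data: 10 subjects, 64 MRI features and 64 PET features each.
mri = rng.standard_normal((10, 64))
pet = rng.standard_normal((10, 64))

# Stage 1: one network per modality (random weights here; the study
# learns them from labeled scans).
mri_weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 16))]
pet_weights = [rng.standard_normal((64, 32)), rng.standard_normal((32, 16))]
mri_feats = stacked_poly_net(mri, mri_weights)   # shape (10, 16)
pet_feats = stacked_poly_net(pet, pet_weights)   # shape (10, 16)

# Stage 2: concatenate per-modality features and pass them through a
# fusion network to obtain one joint feature vector per subject.
fused_in = np.concatenate([mri_feats, pet_feats], axis=1)  # shape (10, 32)
fusion_weights = [rng.standard_normal((32, 8))]
joint_feats = stacked_poly_net(fused_in, fusion_weights)   # shape (10, 8)

print(joint_feats.shape)
```

In the study, the fused features would then be passed to a classifier; here the sketch stops at the joint representation to show only the two-stage fusion structure.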
Using this layerwise stacking of feature extraction, the multimodal SDPN algorithm was applied to an Alzheimer's Disease Neuroimaging Initiative dataset consisting of MRI and PET images from 51 patients with Alzheimer's disease, 99 patients with mild cognitive impairment, and 52 normal controls. The algorithm outperformed state-of-the-art multimodal feature learning algorithms on four classification tasks that involved distinguishing among the three subject groups.
According to the authors, the algorithm could be applied not only to neuroimaging data but also to other types of medical data.