Paper 13407-42
Parameter-efficient fine-tuning of transformer-based masked autoencoder enhances resource-constrained neuroimage analysis
20 February 2025 • 10:30 AM - 10:50 AM PST | Town & Country C
Abstract
In the U.S., the Food and Drug Administration (FDA) has recently approved over 100 AI-enabled devices. Research breakthroughs in AI have led to a corresponding sharp rise in patenting activity worldwide. In the future, foundation models will provide a starting point for fine-tuning models for different downstream tasks. Even so, fine-tuning foundation models is challenging due to their large number of parameters, the limited availability of neuroimaging data sets for fine-tuning, and limited compute resources. In this work, we test different parameter-efficient fine-tuning (PEFT) methods to greatly reduce the total number of trainable parameters for multiple neuroimaging tasks. We show that PEFT methods can match or outperform full fine-tuning in test performance across multiple tasks, with a significant reduction in model parameters (0.04% to 32%). In a resource-constrained setting with only 258 MRI scans, PEFT boosted performance for Alzheimer's disease (AD) classification by 3%.
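For readers who want a concrete picture of one common PEFT method, the sketch below illustrates LoRA-style low-rank adapters in PyTorch: the pretrained backbone is frozen and only small rank-limited updates plus the task head are trained. This is a minimal, hypothetical illustration, not the authors' implementation; the tiny stand-in encoder, the module names (qkv, proj, head), the rank, and the two-class head are all assumptions made for the example.

```python
# Minimal LoRA-style PEFT sketch. The tiny "encoder" is only a stand-in for a
# pretrained masked-autoencoder (MAE) ViT backbone; names and sizes are assumptions.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


def apply_lora(model: nn.Module, target_names=("qkv",), rank: int = 8):
    """Replace every nn.Linear whose name contains a target substring with a LoRA wrapper."""
    for name, module in list(model.named_children()):
        if isinstance(module, nn.Linear) and any(t in name for t in target_names):
            setattr(model, name, LoRALinear(module, rank=rank))
        else:
            apply_lora(module, target_names, rank)


class Block(nn.Module):
    """Stand-in encoder block; qkv/proj mirror common ViT naming, attention itself is omitted."""

    def __init__(self, dim=768):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.mlp(self.proj(self.qkv(x)[..., : x.shape[-1]]))


encoder = nn.Sequential(*[Block() for _ in range(12)])  # pretend MAE-pretrained encoder
head = nn.Linear(768, 2)                                # e.g. AD vs. control classifier (assumed)
model = nn.Sequential(encoder, head)

for p in model.parameters():                            # freeze everything first
    p.requires_grad = False
apply_lora(model, target_names=("qkv",), rank=8)        # inject trainable low-rank adapters
for p in head.parameters():                             # keep the task head trainable
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {100 * trainable / total:.2f}%")
```

With these assumed settings the trainable fraction lands well below 1% of the full model, which is the kind of reduction the abstract reports at the low end of its 0.04% to 32% range; other PEFT variants trade off more trainable parameters for accuracy.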
Presenter
Nikhil J. Dhinagar
Keck School of Medicine of USC (United States)
Nikhil J. Dhinagar, PhD, is a research scientist at the Imaging Genetics Center at the University of Southern California. His expertise is in developing novel solutions for neuroimaging problems, including Alzheimer's disease, autism spectrum disorder, and Parkinson's disease, using artificial intelligence, machine learning, and computer vision. Nikhil completed his postdoctoral training at the David Geffen School of Medicine at the University of California, Los Angeles (UCLA).