IISc Researchers Use GPUs to Map Human Brain Connectivity in Recent Study

A new Graphics Processing Unit (GPU)-based machine learning algorithm developed by researchers at the Indian Institute of Science (IISc) can help scientists better understand and predict connectivity between different brain regions.

The algorithm, called Regularized, Accelerated, Linear Fascicle Evaluation, or ReAl-LiFE, can rapidly analyze the enormous amounts of data generated from diffusion Magnetic Resonance Imaging (dMRI) scans of the human brain.

Using ReAl-LiFE, the team could evaluate dMRI data over 150 times faster than existing state-of-the-art algorithms, according to an IISc press release issued on Monday.

“Tasks that previously took hours to days can be completed within seconds to minutes,” said Devarajan Sridharan, Associate Professor at the Centre for Neuroscience (CNS), IISc, and corresponding author of the study, which was published in the journal Nature Computational Science.

Millions of neurons fire in the brain every second, generating electrical pulses that travel across neuronal networks from one point in the brain to another through connecting cables or “axons”. These connections are essential for the computations that the brain performs.

Human Brain Connectivity

“Understanding brain connectivity is critical for uncovering brain-behaviour relationships at scale,” said Varsha Sreenivasan, a Ph.D. student at CNS and the first author of the study. However, conventional approaches to studying brain connectivity typically use animal models and are invasive. dMRI scans, on the other hand, provide a non-invasive method to check human brain connectivity.

The cables (axons) connecting different brain areas are its information highways. Because bundles of axons are shaped like tubes, water molecules move through them, along their length, in a directed manner. dMRI allows scientists to track this movement to create a comprehensive network of fibers across the brain called a connectome.

Unfortunately, it is not straightforward to pinpoint these connectomes. The data obtained from the scans only provide the net flow of water molecules at each point in the brain, the release noted.

“Imagine that the water molecules are cars. The obtained information is the direction and speed of the vehicles at each point in space and time, with no information about the roads. Our task is similar to inferring the networks of roads by observing these traffic patterns,” explains Sridharan.

To identify these networks accurately, conventional algorithms match the dMRI signal predicted from an inferred connectome against the observed dMRI signal.
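The release does not spell out the mathematics, but LiFE-style fascicle evaluation is commonly framed as a non-negative least-squares fit: each candidate fiber predicts a signal, and the fit assigns each fiber a non-negative weight so the combined prediction best matches the measurement. A minimal toy sketch (all sizes and data invented for illustration):

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup (hypothetical): each candidate fiber contributes one column of a
# design matrix M; y is the measured dMRI signal. The fit finds non-negative
# fiber weights w minimizing ||y - M @ w||.
rng = np.random.default_rng(0)
n_measurements, n_fibers = 50, 10
M = rng.random((n_measurements, n_fibers))      # predicted per-fiber signals
w_true = np.array([1.5, 0.0, 2.0] + [0.0] * 7)  # only a few fibers contribute
y = M @ w_true                                  # observed signal (noise-free toy)

w_est, residual = nnls(M, y)                    # non-negative least squares
print(np.round(w_est, 2))                       # recovers the sparse weights
```

In this noise-free toy the fit recovers the generating weights exactly; on real scans the residual stays non-zero and the weights quantify how strongly each connection is supported by the data.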

Scientists had previously developed an algorithm called LiFE (Linear Fascicle Evaluation) to carry out this optimization. However, it ran on traditional Central Processing Units (CPUs), which made the computation time-consuming.

In the new study, Sridharan’s team tweaked their algorithm to cut down the computational effort involved in several ways, including removing redundant connections, thereby improving LiFE’s performance significantly.
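The “Regularized” part of the name suggests how redundant connections can be removed: penalizing the total fiber weight drives the weights of uninformative fibers to zero, so those connections can be pruned. A hypothetical sketch of this idea, using projected gradient descent to keep weights non-negative (not the published implementation):

```python
import numpy as np

# Hypothetical sketch: add a penalty on the summed fiber weights (an L1 /
# regularization term) to the least-squares objective. Redundant fibers'
# weights are driven to zero and can be dropped from the connectome.
rng = np.random.default_rng(1)
M = rng.standard_normal((60, 12))            # toy design matrix of fiber signals
w_true = np.zeros(12)
w_true[[2, 7]] = [1.0, 2.0]                  # only two fibers truly contribute
y = M @ w_true

lam = 0.05                                   # regularization strength (assumed)
step = 1.0 / np.linalg.eigvalsh(M.T @ M).max()
w = np.zeros(12)
for _ in range(5000):
    grad = M.T @ (M @ w - y) + lam           # least-squares gradient + penalty
    w = np.maximum(w - step * grad, 0.0)     # project back onto w >= 0

kept = np.flatnonzero(w > 1e-2)              # fibers that survive pruning
print(kept)
```

Dropping the pruned fibers shrinks the problem, which is one way to cut the computational effort before any hardware acceleration.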

To speed up the algorithm further, the team redesigned it to run on Graphics Processing Units (GPUs) – specialized chips of the kind found in high-end gaming computers – which helped them analyze data 100–150 times faster than previous approaches.
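The reason GPUs help is that each iteration of such a fit is dominated by dense matrix products, which GPUs execute in parallel across thousands of cores. As a hedged illustration (not the authors' code): libraries like CuPy mirror NumPy's API, so the same array code can run on either device, and the toy sizes below stand in for connectomes with millions of fibers.

```python
import numpy as np  # hypothetical: swap for `import cupy as np` to run on a GPU

# Sketch of the GPU-friendly core: one projected-gradient update consists of
# two dense matrix-vector products, both embarrassingly parallel.
rng = np.random.default_rng(2)
M = rng.random((2000, 500))   # design matrix: measurements x candidate fibers
y = rng.random(2000)          # observed dMRI signal (toy)
w = np.zeros(500)

step = 1.0 / np.linalg.eigvalsh(M.T @ M).max()
for _ in range(100):
    w = np.maximum(w - step * (M.T @ (M @ w - y)), 0.0)  # update on CPU or GPU

print(w.shape)
```

Because the update is pure array algebra, porting it to a GPU changes the import line rather than the algorithm, which is consistent with the authors' remark below that the implementation generalizes to other optimization problems.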

This improved algorithm, ReAl-LiFE, could also predict how a human test subject would behave or carry out a specific task.

In other words, the team could explain variations in behavioral and cognitive test scores across 200 participants using the connection strengths estimated by the algorithm for each individual.
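The article does not describe the statistical model used, but explaining test-score variation from per-individual connection strengths is, at its simplest, a regression across participants. A toy sketch under that assumption (all data invented):

```python
import numpy as np

# Hypothetical sketch: rows of X are participants, columns are estimated
# connection strengths; `scores` holds one cognitive test score per person.
# An ordinary least-squares fit shows how much score variation the
# connectome features explain.
rng = np.random.default_rng(3)
n_participants, n_connections = 200, 5
X = rng.standard_normal((n_participants, n_connections))  # connection strengths
beta_true = np.array([0.8, 0.0, -0.5, 0.3, 0.0])          # invented effects
scores = X @ beta_true + 0.1 * rng.standard_normal(n_participants)

beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((scores - pred) ** 2) / np.sum((scores - scores.mean()) ** 2)
print(round(r2, 2))  # fraction of score variance explained
```

A high r² in such a fit is what “explaining variations in behavioral and cognitive test scores” amounts to; real analyses would add cross-validation to guard against overfitting.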

Such analysis can have medical applications too. “Data processing on large scales is becoming increasingly necessary for big-data neuroscience applications, especially for understanding healthy brain function and brain pathology,” says Sreenivasan.

For example, using the obtained connectomes, the team hopes to identify early signs of aging or deterioration of brain function before they manifest behaviourally in Alzheimer’s patients.

“In another study, we found that a previous version of ReAl-LiFE could do better than other competing algorithms for distinguishing patients with Alzheimer’s disease from healthy controls,” says Sridharan.

He adds that their GPU-based implementation is very general and can be used to tackle optimization problems in many other fields.

Bella E. McMahon