AI tool helps scientists decode brain activity and reconstruct what the brain sees

Published: 13 Dec 2023

In the growing landscape of mental health and ageing-related disorders in Singapore and the world, understanding the brain is a key necessity.

With the help of AI to decode data from brain activity, a team of researchers at the Yong Loo Lin School of Medicine, National University of Singapore (NUS Medicine) is able to reconstruct videos that mirror the images which study participants were shown. They say this technology could be crucial for the early detection of brain diseases, customised treatments, assistive devices and learning programmes.

The study was conducted by Associate Professor Helen Zhou from the Centre for Sleep and Cognition and Director of the Centre for Translational MR Research at NUS Medicine; Mr Jiaxin Qing, PhD student at the Department of Information Engineering, The Chinese University of Hong Kong (CUHK IE); and Ms Chen Zijiao, PhD student at the Centre for Translational MR Research at NUS Medicine. The resulting system is named Mind-Video.

Participants were shown videos of various lengths, ranging from two seconds to a few minutes, featuring moving objects, animals, humans and more, while researchers scanned their brains using functional magnetic resonance imaging (fMRI), a non-invasive procedure that visualises which parts of the brain are active during different activities.
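To make the data-collection step concrete, the sketch below shows one way video stimuli could be paired with the fMRI volumes recorded while they were shown. It is illustrative only and not taken from the study: the scan repetition time, the fixed haemodynamic delay, the array shapes and the function name are all assumptions.

```python
# Illustrative sketch only: pairing fMRI volumes with the video clips shown
# during scanning. The repetition time, haemodynamic lag, shapes and names
# below are assumptions for illustration, not details from the study.
import numpy as np

TR_SECONDS = 2.0        # assumed repetition time of the fMRI scan
HEMODYNAMIC_LAG = 4.0   # assumed delay between stimulus and BOLD response

def build_pairs(fmri_volumes, clip_onsets, clip_length_s=2.0):
    """Pair each video clip with the fMRI volumes acquired while (and shortly
    after) it was shown, shifted by an assumed haemodynamic lag.

    fmri_volumes: array of shape (n_timepoints, n_voxels)
    clip_onsets:  list of clip start times in seconds
    """
    pairs = []
    for onset in clip_onsets:
        start = int(round((onset + HEMODYNAMIC_LAG) / TR_SECONDS))
        stop = int(round((onset + HEMODYNAMIC_LAG + clip_length_s) / TR_SECONDS))
        stop = max(stop, start + 1)            # keep at least one volume
        if stop <= len(fmri_volumes):
            pairs.append((fmri_volumes[start:stop].mean(axis=0), onset))
    return pairs

# Toy usage: 300 volumes of 10,000 voxels, with a clip starting every 2 seconds
volumes = np.random.randn(300, 10_000)
pairs = build_pairs(volumes, clip_onsets=[i * 2.0 for i in range(100)])
print(len(pairs), "brain-response/clip pairs")
```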

Upon collecting the data, an advanced AI model, Stable Diffusion, was used to decode the brain activity and translate it into reconstructed videos, each about two to three seconds long, resembling what the participants had viewed. The team achieved an accuracy rate of 85%.
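As a rough illustration of how such decoding can be set up (a minimal sketch, not the authors' implementation), one can train a small network that maps each fMRI response to an embedding which a pretrained generative model, such as Stable Diffusion, could then be conditioned on. The voxel count, embedding size, training targets and placeholder data below are all assumptions for illustration.

```python
# Minimal sketch, assuming a simple mapping from fMRI responses to a
# conditioning-embedding space; not the study's actual pipeline.
import torch
import torch.nn as nn

N_VOXELS = 10_000      # assumed number of voxels per brain response
EMBED_DIM = 768        # assumed size of the conditioning embedding

class BrainToEmbedding(nn.Module):
    """Map a flattened fMRI response to a conditioning embedding."""
    def __init__(self, n_voxels: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 2048),
            nn.GELU(),
            nn.Linear(2048, embed_dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        return self.net(fmri)

# Toy training loop: align brain-derived embeddings with embeddings of the
# video stimuli (in practice these might come from a frozen vision encoder).
model = BrainToEmbedding(N_VOXELS, EMBED_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

brain_responses = torch.randn(64, N_VOXELS)        # placeholder data
stimulus_embeddings = torch.randn(64, EMBED_DIM)   # placeholder targets

for step in range(10):
    pred = model(brain_responses)
    loss = loss_fn(pred, stimulus_embeddings)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The predicted embeddings could then condition a pretrained diffusion model
# to render short reconstructed clips like those described above.
```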

This development serves as a springboard for the early detection of brain diseases, customised treatments, assistive devices and learning programmes. For patients, it offers hope of more independent lives and more effective treatments.

“Our work can help to further our understanding of how the brain processes information with an unprecedented degree of detail, while paving the way for a more advanced communication system via technology and brain stimulation strategies. At the same time, we have plans to develop it further with generalisability and interpretability, to set the foundation for future translational work, including helping those individuals with impaired sensory perception or enhancing human potential,” said Assoc Prof Zhou.

The team will be presenting their work at the 2023 Conference on Neural Information Processing Systems (NeurIPS), which will be held this month in New Orleans, USA.
