AI trained to reconstruct images from MRI brain scans
Researchers at Osaka University in Japan are among the ranks of scientists using AI to make sense of human brain scans. While others have tried using AI with MRI scans to visualize what people are seeing, the Osaka approach is unique because it used Stable Diffusion to generate the images. This greatly simplified their model, which required only a few thousand training parameters instead of millions.
Normally, Stable Diffusion takes a text prompt and runs it through a language model trained against a huge library of captioned images. That training produces a text-to-image latent space which can be queried to generate new, amalgamated images (a gross simplification, admittedly).
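As a rough intuition for that loop, here is a toy sketch in Python. None of this is real Stable Diffusion code; `embed_text`, `denoise`, and `generate` are illustrative stubs standing in for the language model, the iterative denoising process, and the overall pipeline.

```python
import hashlib
import random

def embed_text(prompt):
    """Stand-in for the language model: map a prompt to a fixed vector.

    A real system would use a learned text encoder; here we just hash
    the prompt so the mapping is deterministic."""
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255 for b in digest[:4]]

def denoise(latent, embedding, steps=10):
    """Stand-in diffusion loop: pull a noisy latent toward the text
    embedding a little at each step, mimicking iterative denoising."""
    for _ in range(steps):
        latent = [l + 0.5 * (e - l) for l, e in zip(latent, embedding)]
    return latent

def generate(prompt, seed=0):
    """Start from random noise and denoise it under text conditioning."""
    rng = random.Random(seed)
    latent = [rng.random() for _ in range(4)]  # pure noise to begin with
    return denoise(latent, embed_text(prompt))
```

The point of the sketch is only the data flow: text becomes an embedding, and random noise is gradually steered toward a point in latent space consistent with that embedding.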
The Osaka researchers took this one step further. They used functional MRI (fMRI) scans from an earlier, unrelated study in which four participants looked at 10,000 different images of people, landscapes, and objects while being monitored in an fMRI scanner. The Osaka team then trained a second AI model to link brain activity in the fMRI data with text descriptions of the pictures the study participants had looked at.
Together, these two models allowed Stable Diffusion to turn fMRI data into relatively accurate images that were not part of the AI training set. Based on the brain scans, the first model could recreate the perspective and layout the participant had seen, but its generated images showed only cloudy, nonspecific figures. The second model then kicked in: using the text descriptions from the training images, it could recognize what object a person was looking at. So if it received a brain scan resembling one from its training data marked as a person viewing an airplane, it would put an airplane into the generated image, following the perspective supplied by the first model. The technology achieved roughly 80 percent accuracy.
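The two-model division of labor described above can be sketched as follows. This is not the authors' code: `layout_model`, `caption_model`, and `reconstruct` are hypothetical stubs, and the real system maps fMRI voxel data to Stable Diffusion's image latents and text embeddings with trained regression models.

```python
def layout_model(fmri):
    """Model 1 (stub): predict a coarse image latent capturing
    perspective and layout, but no recognizable objects."""
    return {"layout": sum(fmri) / len(fmri)}

def caption_model(fmri):
    """Model 2 (stub): predict a text description of the viewed object.

    A trained model would map brain activity to text embeddings learned
    from the 10,000 study images; here a threshold stands in for that."""
    return "an airplane" if max(fmri) > 0.5 else "a landscape"

def reconstruct(fmri):
    """Combine both predictions as conditioning for the generator.

    In the real pipeline, Stable Diffusion denoises the layout latent
    while conditioned on the predicted text description."""
    latent = layout_model(fmri)
    prompt = caption_model(fmri)
    return f"image of {prompt} with layout {latent['layout']:.2f}"
```

Calling `reconstruct` on a toy fMRI vector shows the flow: the layout prediction fixes where things go, and the caption prediction fixes what appears there.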
The team shared more details in a new paper, which has not been peer-reviewed, published on the preprint server bioRxiv.