Your dreams: Coming soon to YouTube?


  • Published: Oct 1, 2011
  • Author: David Bradley
  • Channels: MRI Spectroscopy

The B movie in your head

Computational models of functional magnetic resonance imaging (fMRI) data have allowed researchers to reconstruct moving images from blood flow in the visual cortex of volunteers as they watch a video clip. Fancifully, the technique might one day allow one to "record" one's dreams or to visualise what a patient in a chronic vegetative state or coma might be seeing, and so perhaps open up a way to communicate with such patients.

It is the stuff of science fiction: recording a dream as one might record a TV show, tapping into another person's innermost imaginings or scanning the mind of a coma victim for signs of consciousness. Now, researchers at the University of California, Berkeley, have taken the first very tentative steps towards making such fictions a reality by mapping fMRI brain imaging to video clips using a computer simulation.

"This is a major leap toward reconstructing internal imagery," explains UCB neuroscientist Jack Gallant. "We are opening a window into the movies in our minds." Gallant's words certainly hint at a future of mind scanning although the technology has so far only reconstructed distorted and murky movie clips that the experiment's volunteers had viewed. There are no superlatives substantial enough to explain how big a breakthrough this research is. The breakthrough not only paves the way for reproducing the "movies" inside our heads, the dreams, memories, images from our imagination, but could ultimately provide a way to develop a mind-machine interface for people with severe physical disabilities and paralysis. Such an interface might eventually allow them to control any equipment from feeding device to mobile phone, computer, or mechanical exoskeleton.

Previously, Gallant and his team recorded brain activity in the visual cortex while a subject viewed monochrome images. They created a computational model from the scans that allowed them to reproduce the very images the volunteers had seen, with what they called "overwhelming accuracy", at least insofar as they could identify which image a volunteer had been viewing. Now, they have turned to moving pictures.
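That earlier identification step can be pictured as a nearest-match search: an encoding model predicts the voxel activity each candidate image should evoke, and the observed scan is assigned to the candidate whose prediction it most resembles. The following Python snippet is a minimal sketch of that idea only; the function name and the use of correlation as the similarity measure are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def identify_image(observed, predicted):
    """Pick the candidate image whose predicted voxel pattern best
    matches the observed fMRI activity (highest correlation).

    observed  : 1-D array of voxel responses for the viewed image
    predicted : 2-D array, one row of predicted voxel responses per candidate
    """
    # Correlate the observed pattern with each candidate's prediction
    scores = [np.corrcoef(observed, p)[0, 1] for p in predicted]
    return int(np.argmax(scores))  # index of the best-matching candidate

# Toy usage: 3 candidate images, 50 voxels
rng = np.random.default_rng(0)
predicted = rng.standard_normal((3, 50))
observed = predicted[1] + 0.1 * rng.standard_normal(50)  # noisy view of image 1
print(identify_image(observed, predicted))  # -> 1
```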

Moving pictures

"Our natural visual experience is like watching a movie," explains post-doctoral researcher Shinji Nishimoto. "In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences." Nishimoto and two colleagues volunteered to lie in the MRI machine for several hours at a time while the rest of the team presented them with the video clips, Hollywood movie trailers, and carried out the fMRI scanning.

The volunteers viewed two separate sets of trailers while their visual cortex was scanned. The computer divided the brain scans into tiny three-dimensional cubes, volumetric pixels or "voxels", and the team then built a model for each voxel describing how shape and motion information in the movie maps onto the brain activity revealed by the fMRI. Brain activity recorded while the subjects watched the first set of trailers was used to train the computer program, which learned, second by second, to associate specific visual patterns in the movie clips with the corresponding brain activity. Because neural activity is so much faster than the blood flow changes that fMRI measures, the team used a two-stage model that describes the underlying neural population and the blood flow signals separately.
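In outline, each voxel's model is a regression from stimulus features (such as local motion energy) onto that voxel's measured response, with the sluggish blood-flow response handled by convolving the fast feature signal with a haemodynamic response function. The Python sketch below shows one common way such a voxel-wise encoding model can be fitted, assuming a precomputed feature matrix; the canonical-HRF convolution and ridge regression used here are standard stand-ins, not necessarily the exact components of the Berkeley model.

```python
import numpy as np

def canonical_hrf(tr=1.0, duration=24.0):
    """Rough double-gamma haemodynamic response function (illustrative)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / 120.0                   # gamma(6) = 120
    undershoot = t ** 15 * np.exp(-t) / 1.307674368e12   # gamma(16)
    return peak - undershoot / 6.0

def fit_voxel_models(features, bold, tr=1.0, alpha=10.0):
    """Fit one ridge-regression encoding model per voxel.

    features : (time, n_features) stimulus features, e.g. motion energy
    bold     : (time, n_voxels) measured fMRI responses
    """
    hrf = canonical_hrf(tr)
    # Convolve each feature channel with the HRF to account for the
    # slow blood-flow response that fMRI actually measures.
    design = np.column_stack(
        [np.convolve(features[:, j], hrf)[: len(features)]
         for j in range(features.shape[1])]
    )
    # Ridge solution W = (X'X + aI)^-1 X'Y, one weight vector per voxel.
    xtx = design.T @ design + alpha * np.eye(design.shape[1])
    weights = np.linalg.solve(xtx, design.T @ bold)
    return weights, design

# Toy usage: 200 time points, 8 features, 30 voxels
rng = np.random.default_rng(1)
features = rng.standard_normal((200, 8))
bold = rng.standard_normal((200, 30))
weights, design = fit_voxel_models(features, bold)
print(weights.shape)  # (8, 30): one weight per feature per voxel
```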

The team then tested the computer to see whether or not it could reconstruct a moving image from the brain activity produced by a second set of video clips. They fed 5000 hours of random YouTube video clips into the computer program so that it would have a database of moving pictures against which to match the brain activity of a volunteer watching any arbitrary clip. The 100 library clips that the program judged most likely to have evoked the subject's brain activity were then merged to produce a blurry yet continuous moving picture that eerily resembles the actual movie clip the volunteer had watched.
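The reconstruction step runs the encoding models in reverse: the fitted models predict the brain activity each library clip would evoke, those predictions are compared with the activity actually recorded, and the frames of the best-matching clips are blended together. The Python sketch below illustrates that blending step under simplifying assumptions (correlation as the match score and a plain mean over the top clips); it is not the paper's exact Bayesian decoder.

```python
import numpy as np

def reconstruct_frame(observed, predicted_library, clip_frames, top_k=100):
    """Blend the frames of the clips whose predicted activity best matches
    the observed activity.

    observed          : (n_voxels,) measured activity for one moment in time
    predicted_library : (n_clips, n_voxels) model-predicted activity per clip
    clip_frames       : (n_clips, height, width) one frame per library clip
    """
    # Score every library clip by how well its predicted activity matches.
    scores = np.array([np.corrcoef(observed, p)[0, 1] for p in predicted_library])
    top = np.argsort(scores)[-top_k:]      # indices of the best matches
    return clip_frames[top].mean(axis=0)   # blurry average of their frames

# Toy usage: 500 library clips, 40 voxels, 16x16 frames
rng = np.random.default_rng(2)
library = rng.standard_normal((500, 40))
frames = rng.random((500, 16, 16))
observed = library[7] + 0.2 * rng.standard_normal(40)
blended = reconstruct_frame(observed, library, frames, top_k=100)
print(blended.shape)  # (16, 16)
```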

The next step toward this science fiction future of electronic mind reading will be to determine how the brain works in a natural situation as opposed to the video presentation and scanning conditions used in the experiment. Ultimately, the technology might help us understand better what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases.

Sceptical perspective

A sceptical observer might point out that neural activity is not synonymous with blood flow changes; the former is much faster than the latter, and it is blood flow that fMRI actually monitors. However, Gallant told spectroscopyNOW that the team's two-stage model, which describes the underlying neural population and the blood flow signals separately, has been validated. "Our models are validated to much higher standards than most other work in the field," he told us. "Almost all fMRI studies focus on mere statistical significance. We focus on importance, that is, we focus on maximizing the predictive power of the models. I think it is our focus on predictive power that produces such good models."
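The "predictive power" Gallant describes can be quantified quite simply: fit the encoding models on one set of clips, predict each voxel's response to a held-out set, and measure how well prediction and measurement agree. The snippet below sketches such a check in Python; the use of a per-voxel Pearson correlation is an assumption about the metric, chosen only because it is a common choice, not a statement of the team's exact validation procedure.

```python
import numpy as np

def predictive_power(weights, held_out_design, held_out_bold):
    """Correlation between predicted and measured response for each voxel
    on data the model never saw during fitting.

    weights         : (n_features, n_voxels) fitted encoding weights
    held_out_design : (time, n_features) HRF-convolved features of new clips
    held_out_bold   : (time, n_voxels) measured responses to those clips
    """
    predicted = held_out_design @ weights
    # One correlation per voxel: high values mean the model generalises.
    return np.array([
        np.corrcoef(predicted[:, v], held_out_bold[:, v])[0, 1]
        for v in range(held_out_bold.shape[1])
    ])

# Toy usage with random data (real data would come from a second set of trailers)
rng = np.random.default_rng(3)
weights = rng.standard_normal((8, 30))
design = rng.standard_normal((120, 8))
bold = design @ weights + 0.5 * rng.standard_normal((120, 30))
print(predictive_power(weights, design, bold).mean())  # average voxel correlation
```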

Regarding the use of research team members as subjects of the study, Gallant points out: "The model presented in this paper only focuses on the earliest stage of visual processing - primary visual cortex. There is lots of evidence that indicates that the results of studies in this area do not depend much on the subject." He adds that such considerations may be important in future studies that focus on more abstract parts of the brain than the primary visual cortex.

The views represented in this article are solely those of the author and do not necessarily represent those of John Wiley and Sons, Ltd.
