Amazing video shows us the actual movies that play inside our mind

This is about as awesome as neuroscience gets. The video shows some everyday movie clips and, thanks to advanced brain imaging and computational modeling, how those same clips look when reconstructed from the activity inside our brains.


Researchers at UC Berkeley used functional magnetic resonance imaging (fMRI) and some seriously complex computational models to figure out what images our minds create when presented with movie and TV clips. So far, the process can only reconstruct the neural equivalents of things people have already seen, but eventually it might be possible to reconstruct the images people see in dreams and memories.

This could also open up new ways to communicate with those whose speech is severely impaired, such as stroke victims, patients with neurological diseases, and even people in comas. It's probably worth stressing that we're decades away from using this tech to read people's thoughts and intentions, just in case that's something you're worried about.

The researchers developed this technique by showing study participants a series of black-and-white photographs while scanning their brains. By comparing the photographs with the scans, they were able to build a model that could identify which image a person was viewing from the brain's response alone. With that basic principle in place, it was then a question of scaling up to a computer model complex enough to decode moving, color images like those in the video above.
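To get a feel for that basic principle, here's a toy sketch (not the authors' actual model, which is far more sophisticated): assume we have a model-predicted brain response for each candidate photograph, and we identify the viewed image as the one whose predicted response lies nearest to the observed scan.

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_images = 100, 20

# Hypothetical: one model-predicted brain response per candidate
# photograph (an assumption for illustration only).
predicted_responses = rng.normal(size=(n_images, n_voxels))

def identify(observed):
    """Pick the photo whose predicted response best matches the
    observed brain activity (nearest neighbour in voxel space)."""
    dists = np.linalg.norm(predicted_responses - observed, axis=1)
    return int(np.argmin(dists))

# Simulate a subject viewing image 7: the scan is that image's
# predicted response plus measurement noise.
observed = predicted_responses[7] + rng.normal(scale=0.3, size=n_voxels)
print(identify(observed))  # → 7
```

Even this crude nearest-neighbour scheme works when the noise is small relative to the differences between images; the real challenge is learning good predicted responses in the first place.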

Here's more explanation as to how they did it:

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
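The pipeline in that quote can be sketched roughly as follows. Everything here is a stand-in: a hypothetical linear encoding model plays the role of the trained program, a random feature library plays the role of the 18 million seconds of YouTube video, and "merging" is a simple average of the 100 best-matching clips.

```python
import numpy as np

rng = np.random.default_rng(0)

n_voxels, n_features = 200, 50

# Hypothetical linear encoding model: learned weights mapping clip
# features to predicted voxel responses (assumption, not the
# authors' actual model).
weights = rng.normal(size=(n_features, n_voxels))

def predict_activity(clip_features):
    """Predict the brain activity a clip would most likely evoke."""
    return clip_features @ weights

# Library of candidate clips (stand-in for the 18 million seconds
# of random YouTube video used in the study).
library = rng.normal(size=(10_000, n_features))

# Observed brain activity while the subject watched a test clip.
true_clip = library[1234]
observed = predict_activity(true_clip) + rng.normal(scale=0.5, size=n_voxels)

# Rank every library clip by how well its predicted activity
# correlates with the observed activity.
predicted = predict_activity(library)              # (10_000, n_voxels)
pred_c = predicted - predicted.mean(axis=1, keepdims=True)
obs_c = observed - observed.mean()
corr = (pred_c @ obs_c) / (
    np.linalg.norm(pred_c, axis=1) * np.linalg.norm(obs_c)
)

# Merge (here: average) the 100 best-matching clips to form the
# blurry reconstruction. In the study the merge happens in pixel
# space; feature vectors stand in for frames here.
top100 = np.argsort(corr)[-100:]
reconstruction = library[top100].mean(axis=0)
```

Averaging 100 near-matches is exactly why the reconstructions in the video look blurry yet continuous: no single library clip is the original, but their overlap approximates it.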

You can read more about it at the Berkeley website, and check out their paper in Current Biology here.


the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged

Ah, OK, this is cool and everything, but they aren't actually reconstructing images from brain activity; they're just matching up videos that are probably, maybe similar.

For a second I thought neuroscience had jumped 50 years ahead all of a sudden.

The title is quite misleading, too.