Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders
I'll make this as simple as I can for you.
This is the image that the subject looked at.
[image: the presented stimulus]

[image: the reconstructed image]
And this is the image that Yukiyasu Kamitani at ATR Computational Neuroscience Laboratories in Kyoto, Japan, extracted directly from the subject's brain via an fMRI scan.
Yes, ladies and gentlemen. Mind reading has become a reality for 21st Century Earth.
This is the first "mind-reading" technology that can create images from scratch, rather than picking them out of a selection of prepared images.
Earlier this year Jack Gallant and his colleagues at the University of California showed that they could tell which of a set of images someone was looking at from a brain scan. They did this by developing software that matched the subject's brain activity against activity recorded while they viewed a set of training photographs.
But Yukiyasu Kamitani's breakthrough actually puts together an image using the data from the brain scans themselves. Kamitani: "By analysing the brain signals when someone is seeing an image, we can reconstruct that image."
At the moment, the resolution of the output and the sensitivity of the instruments mean that only simple, high-contrast images can be read. The training works by having the subject look at a series of 10×10 grids of squares while their brain is scanned. The new software finds the brain activity that corresponds to each pixel being blacked out.
Then, when a test image is shown, such as the word "neuron" above, the software compares the observed brain activity against the catalogue of data from that subject's training scans. The results, as you see above, are the proof of concept.
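Conceptually, that two-step recipe amounts to training one small decoder per image patch on the catalogued brain activity, then stitching the per-patch predictions back into a picture. Here is a minimal sketch of the idea using simulated data and simple per-pixel least-squares decoders; all dimensions and the linear voxel-response model are illustrative assumptions, and the real study combines decoders at multiple spatial scales.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 500 training scans, 100 voxels, 10x10 binary images.
n_trials, n_voxels, side = 500, 100, 10
n_pixels = side * side

# Simulated training set: random black/white images and a noisy
# linear voxel response to them (a stand-in for real fMRI data).
images = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)
true_weights = rng.normal(size=(n_pixels, n_voxels))
brain = images @ true_weights + 0.1 * rng.normal(size=(n_trials, n_voxels))

# "Training": fit one linear decoder per pixel, mapping voxel
# activity back to that pixel's value, via least squares.
decoders, *_ = np.linalg.lstsq(brain, images, rcond=None)

# "Reconstruction": decode a new image from its brain activity alone,
# thresholding each pixel prediction to black or white.
test_image = rng.integers(0, 2, size=n_pixels).astype(float)
test_brain = test_image @ true_weights
reconstruction = (test_brain @ decoders > 0.5).astype(float)

print("pixels recovered:", np.mean(reconstruction == test_image))
```

With clean simulated data the per-pixel decoders recover the test image almost perfectly; with real fMRI noise and limited voxel resolution the output is far blurrier, which is why the study restricts itself to coarse, high-contrast patterns.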
As fMRI technology improves, the resolution could be increased substantially. The next step is to see if it is possible to image things people are thinking of as well as looking at. Dream videos, anyone?
Not of mine, though, God no. I wouldn't subject you to that. Or myself to the courts.