The race is on in the scientific community to develop AI technologies that can, in effect, “read” minds. In 2018, AI researchers in the US, China, and Japan published studies showing that computers can infer what people are seeing or thinking by using functional magnetic resonance imaging (fMRI), which measures brain activity. The approach relies on deep neural networks, which are loosely modeled on how the human brain processes information.

How Advanced is the Tech?

Media were quick to respond, and headlines claiming AI could now read minds mushroomed the world over. But how accurate is this, really? Only somewhat. AI technology is still unable to anticipate what we think, want, or feel. A more accurate description of what the tech can do, as Anjana Ahuja wrote in the Financial Times, is reconstruct a visual field.

The bulk of research so far has aimed at reconstructing images of what people are looking at or, more rarely, what they are (very generally) thinking about. Earlier studies focused on programs that generated images of letters or shapes they had been trained to recognize from subjects’ brain activity.

Reconstructing “Face Data”

In 2014, a research group led by Alan S. Cowen found that it was possible to reconstruct images of faces from a person’s mind by carefully monitoring brain activity across a series of tests. The data also offered indications of how autistic children respond to faces. More recently, Japan’s ATR Computational Neuroscience Laboratories and Kyoto University published research in which a program not only deciphered images it had been trained to recognize when people looked at them, but also generalized its reconstructions to artificial shapes it had never seen.

The method indicated that the model actually produced, or reconstructed, images from brain activity, rather than simply matching that activity to stored exemplars.
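The decode-then-reconstruct idea can be illustrated with a toy sketch. The code below is entirely hypothetical and uses synthetic data: it assumes, as in the ATR/Kyoto work, that a linear model can map measured voxel responses to a deep-network feature vector, from which an image could then be generated. The sizes, noise level, and the least-squares decoder are illustrative choices, not details of any published pipeline.

```python
import numpy as np

# Synthetic stand-in for an fMRI decoding experiment: "voxels" are a noisy
# linear function of hidden stimulus features, mirroring the assumption that
# brain responses encode deep-network features of what a subject sees.
rng = np.random.default_rng(0)
n_train, n_voxels, n_feat = 200, 50, 10

true_map = rng.normal(size=(n_feat, n_voxels))       # hidden feature->voxel mapping
features = rng.normal(size=(n_train, n_feat))        # "CNN features" of training stimuli
voxels = features @ true_map + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit a linear decoder from voxel responses back to feature space.
decoder, *_ = np.linalg.lstsq(voxels, features, rcond=None)

# Decode the features of a new, unseen stimulus from its voxel pattern.
new_feat = rng.normal(size=(1, n_feat))
new_voxels = new_feat @ true_map
decoded = new_voxels @ decoder

# High correlation between decoded and true features is what makes
# generating a novel image possible, instead of matching to known exemplars.
corr = np.corrcoef(decoded.ravel(), new_feat.ravel())[0, 1]
print(round(corr, 2))
```

In a real system, the decoded feature vector would be fed to an image generator; because the decoder outputs features rather than picking the closest training image, it can in principle handle stimuli (such as artificial shapes) it was never trained on.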