Will mind-reading AI mean the end of privacy as we know it?

While mind-reading AI promises to give voice to those unable to speak and change how we interact with computers, it also raises a host of ethical questions that demand careful consideration.
  • Researchers in Singapore develop an AI system that can visualise your thoughts
  • Meta’s new AI can ‘read’ your mind
  • This ‘mind-reading’ AI turns your thoughts into text
  • Ethical implications of mind-reading AI

In the not-so-distant past, the notion of your thoughts being accessible to anyone but yourself was considered ludicrous. Fast forward to today, and the once far-fetched dream of mind reading is becoming a reality, thanks to recent advances in artificial intelligence that enable algorithms to translate our neural patterns into understandable language.

The implications of such technology are staggering: with mind-reading AI, our private thoughts may soon no longer be private. This breakthrough has the potential to change lives, offering a voice to those who have lost theirs to injury or illness. It could also completely change the way we interact with computers and other electronic devices, eliminating the need for physical interfaces. At the same time, the idea of someone else, perhaps a corporation or a government, having access to our innermost thoughts is extremely unsettling. It’s a stark reminder that every technological breakthrough demands careful consideration of its impact on our lives and society.

In this article, we’ll take a deep dive into the fascinating world of mind-reading AI. We’ll explore how it works, the potential it holds, and the ethical labyrinth that accompanies the power to peek into the human psyche.

“This AI model is kind of a translator. It can understand your brain activities just like ChatGPT understands the natural languages of humans”.

Jiaxin Qing, a researcher at the National University of Singapore

Researchers in Singapore develop an AI system that can visualise your thoughts

At the National University of Singapore, a team of researchers has achieved a technological breakthrough that could have far-reaching implications: an AI system that can interpret and visualise what a person is seeing based only on their brainwave patterns. As part of the research, 58 volunteers underwent MRI scans while being shown an array of images, ranging from animals to architectural wonders. During these sessions, each lasting nine seconds, the participants’ brain activity was monitored to capture the complex neural responses to visual stimuli. This data was then fed into the AI system, named MinD-Vis, which analysed the brain scans and learned to associate specific patterns of brain activity with the corresponding images. Over time, it constructed a tailored AI model for each participant, effectively learning to ‘read’ their unique neural signatures. The result is a computer-generated reconstruction of the images seen by the volunteers, based solely on their brain activity.

“This AI model is kind of a translator. It can understand your brain activities just like ChatGPT understands the natural languages of humans”, explains Jiaxin Qing, one of the lead researchers on the study. “So next time you come in, you will do the scan and in the scan, you will see the visual stimuli like this. And then we’ll record your brain activities at the same time. And your brain activities will go into our AI translator and this translator will translate your brain activities into a special language that a Stable Diffusion can understand, and then it will generate the images you are seeing at that point. So that’s basically how we can read your mind in this sense”. The implications of such a technology are profound and multifaceted. Chen Zijiao from the university’s School of Medicine envisions a future where this system could offer a lifeline for individuals with impaired motor abilities. It could enable them to operate prosthetic limbs through thought alone or provide a new avenue for communication for those unable to speak, using their thoughts to transcend the barriers of physical speech.
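The ‘translator’ analogy can be sketched in code. The toy example below is not the MinD-Vis code; the dimensions, the simulated data, and the ridge-regression mapping are all invented for illustration. It shows the core idea the researchers describe: learning a mapping from brain-activity features into the embedding space that an image generator such as Stable Diffusion could condition on.

```python
import numpy as np

# Illustrative sketch only. All shapes and the linear (ridge-regression)
# "translator" are assumptions; the real system uses deep networks.
rng = np.random.default_rng(0)

n_scans, brain_dim, embed_dim = 200, 64, 16
brain = rng.normal(size=(n_scans, brain_dim))   # fMRI features, one row per scan
true_map = rng.normal(size=(brain_dim, embed_dim))
embeds = brain @ true_map                       # embeddings of the images shown

# Fit a linear "translator" from brain space to embedding space (ridge regression)
lam = 1e-3
W = np.linalg.solve(brain.T @ brain + lam * np.eye(brain_dim), brain.T @ embeds)

# A new scan is translated into an embedding a generator could condition on
new_scan = rng.normal(size=(1, brain_dim))
predicted_embedding = new_scan @ W
print(predicted_embedding.shape)  # (1, 16)
```

In the real pipeline, this predicted embedding would be handed to the generative model, which renders it into an image; the sketch stops at the translation step.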

Beyond medical applications, the technology could also change the way we interact with digital worlds. Chen suggests that integrating this AI into virtual reality headsets could allow users to navigate the metaverse effortlessly with their minds, eliminating the need for handheld controllers and creating an unprecedented level of immersion. However, although this development marks a significant leap forward in the field of neural decoding, the research team says it will take a while before the technology is ready for widespread public use. They recognise significant challenges ahead, including the ethical and privacy concerns associated with using brainwave data. “The privacy concerns are the first important thing, and then people might be worried whether the information we provided here might be accessed or shared without prior consent”, says Juan Helen Zhou, an associate professor at the National University of Singapore. “So the thing to address this is we should have very strict guidelines, ethical and law in terms of how to protect the privacy”.

Meta’s new AI can ‘read’ your mind

Similarly, Facebook’s parent company, Meta, recently published a research paper that outlines a new AI system that decodes visual representations from the human brain in real time. It does so by capturing thousands of intricate brain activity measurements each second and then reconstructing how our brains perceive and process the images we are looking at. The system uses a technique called magnetoencephalography (MEG), which measures the magnetic fields generated by the brain’s neuronal activity. With this non-invasive approach, brain function can be observed with a high temporal resolution, offering researchers a dynamic view of how the brain operates.

The AI works in three steps to understand and recreate images as we see them. First, an image encoder converts a regular image into a representation it can work with. Then, a brain encoder aligns the brain’s responses, captured through the MEG signals, with the AI’s representation of the image, ensuring the AI links what the picture looks like with how the brain reacts to it. Finally, an image decoder uses this information to recreate the image. In short, the AI draws a picture based on how our brain perceives the original image, producing a copy that’s as close as possible to what we experienced.
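The three steps can be sketched as a toy pipeline. Everything in the sketch below (the dimensions, the linear encoders, the simulated MEG responses, and decoding by nearest-neighbour retrieval rather than a generative model) is an illustrative assumption, not Meta’s actual system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_imgs, pixel_dim, meg_dim, embed_dim = 50, 100, 30, 8

images = rng.normal(size=(n_imgs, pixel_dim))

# Step 1: image encoder -- project each image into an embedding space
img_proj = rng.normal(size=(pixel_dim, embed_dim))
img_embeds = images @ img_proj

# Simulated MEG responses carrying a noisy linear trace of each embedding
meg_mix = rng.normal(size=(embed_dim, meg_dim))
meg = img_embeds @ meg_mix + 0.01 * rng.normal(size=(n_imgs, meg_dim))

# Step 2: brain encoder -- learn to map MEG signals onto the image embeddings
B, *_ = np.linalg.lstsq(meg, img_embeds, rcond=None)

# Step 3: "decoder" -- here, simply retrieve the image whose embedding is
# closest to the prediction (the real system generates an image instead)
pred = meg[7] @ B
nearest = int(np.argmin(np.linalg.norm(img_embeds - pred, axis=1)))
print(nearest)  # recovers image 7
```

The retrieval step stands in for the generative decoder: it shows that once MEG signals are aligned with the image-embedding space, the brain response alone identifies what was seen.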

This groundbreaking technology could have a wide range of applications. Among other things, Meta’s AI could enable more immersive VR experiences and help paralysed patients communicate through thought alone. However, the technology remains imperfect in its current form. While it can identify object categories quite accurately, it struggles to produce fine details, as noted by the researchers. This suggests more refinement is needed before the AI can reach its full potential. Moreover, the ability to extract, analyse, and reproduce people’s private thoughts also raises pressing ethical questions about transparency, security, and consent. So, while this technology does indeed hold huge promise, we must ensure it is not misused or abused.

This ‘mind-reading’ AI turns your thoughts into text

A team of researchers from the University of Technology Sydney (UTS) recently unveiled the world’s first non-invasive AI system called DeWave, which can translate your thoughts into written words. To use the system, you wear a close-fitting cap that records your brain activity via an electroencephalogram (EEG). During a reading session, the captured data is converted into text. “This research represents a pioneering effort in translating raw EEG waves directly into language, marking a significant breakthrough in the field”, explains computer scientist Chin-Teng Lin. At this stage, DeWave has achieved an accuracy of just over 40 per cent in turning thoughts into text, beating previous records by 3 per cent. Researchers aim to reach 90 per cent accuracy in the future, making it as reliable as current language translators and speech recognition software.

DeWave makes translating brain signals into language easier and less invasive. Instead of surgery or big, expensive MRI machines, users simply wear an EEG cap, which makes the approach far more practical for everyday use. Impressively, DeWave turns EEG data directly into words without tracking eye movements. Earlier methods relied on eye tracking to break brain signals down into clear, word-like pieces, assuming that the brain pauses briefly as our eyes move from word to word. DeWave, by contrast, uses an advanced encoder to turn raw EEG waves into a coded format. This code is then compared against a ‘codebook’, with the closest matches identified as the corresponding words. “It is the first to incorporate discrete encoding techniques in the brain-to-text translation process, introducing an innovative approach to neural decoding”, adds Lin. “The integration with large language models is also opening new frontiers in neuroscience and AI”.
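The codebook-matching step can be illustrated with a minimal sketch. The vocabulary, the random codebook, and the simulated encoder output below are all invented for illustration; this is not the DeWave model, whose codebook and encoder are learned from data.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny invented vocabulary with one learned code vector per word
vocab = ["the", "man", "reads", "a", "book"]
codebook = rng.normal(size=(len(vocab), 4))

def decode(eeg_vector: np.ndarray) -> str:
    """Match an encoded EEG vector to the word of its nearest codebook entry."""
    dists = np.linalg.norm(codebook - eeg_vector, axis=1)
    return vocab[int(np.argmin(dists))]

# Suppose the EEG encoder emits a vector near the code for "reads"
encoded = codebook[2] + 0.05 * rng.normal(size=4)
print(decode(encoded))  # "reads"
```

Discrete matching like this also explains the error pattern the researchers report: a word whose brain response lands near a semantically similar code gets decoded as that neighbour instead.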

To train the AI, the researchers used a combination of existing language models, including BERT and GPT, together with EEG recordings of people reading. This data helped the system match brain signals to specific words, linking neural activity to language elements. DeWave was then enhanced by integrating it with a large, open-source language model, enabling it to construct coherent sentences from individual words. The system performed well with verbs, since they create distinctive brain wave patterns. But for nouns, it often found similar words instead of exact ones, such as ‘the man’ instead of ‘the author’. “We think this is because when the brain processes these words, semantically similar words might produce similar brain wave patterns”, says Yiqun Duan, a computer scientist at UTS and the first author of the study. “Despite the challenges, our model yields meaningful results, aligning keywords and forming similar sentence structures”.

“If a government or institution can read people’s minds, it’s a very sensitive issue. There needs to be high-level discussions to make sure this can’t happen”.

Yu Takagi, a neuroscientist and an assistant professor at Osaka University

Ethical implications of mind-reading AI

While the idea of AI understanding human thoughts is thrilling, it also brings up important ethical issues. These include how the technology works, how we handle the deep insights it gathers from our minds, and who gets to access this sensitive information. At the heart of these issues is the need to guard personal privacy against intrusions. “For us, privacy issues are the most important thing”, says Yu Takagi, a neuroscientist and an assistant professor at Osaka University. “If a government or institution can read people’s minds, it’s a very sensitive issue. There needs to be high-level discussions to make sure this can’t happen”.

The AI’s potential to understand human thoughts could significantly influence consumer habits. There’s a concern that social media platforms and marketers might use this technology to push highly personalised content that sways users into overspending or adopting lifestyles beyond their means. While users would presumably be able to opt out of having their thoughts read, doing so could leave them feeling excluded in a world where personalised digital interactions are increasingly the norm. Or, even worse, it could put them at a disadvantage compared to those willing to sacrifice their privacy for greater personalisation. Knowing that our deepest thoughts might no longer be private could profoundly change how we behave, potentially limiting our freedom of thought and self-expression.

Closing thoughts

This article highlights significant progress in mind-reading AI, marking a major breakthrough that could change society. Yet, it raises important ethical questions as well. Who should be allowed to access our thoughts, and under which circumstances? To prevent misuse or control by a few, what steps are necessary to protect our freedoms and privacy from excessive monitoring? How will the possibility of someone accessing our inner thoughts impact our thinking and identity? How would society function if your deepest feelings could be exposed without consent?

Navigating the future of mind-reading AI requires careful ethical consideration, emphasising human dignity, consent, and privacy. The ability to access neural signals of consciousness presents a thrilling yet risky prospect. Handled responsibly, it promises empowerment; left unregulated, it could introduce a subtle form of control. As we move forward, our shared values and moral judgments must lead the way, ensuring technology serves humanity’s best interests rather than merely extending its capabilities.

