MIT’s new AI uses radio signals to see through walls

Richard van Hooijdonk
  • This tech can help sick people live more independent lives
  • RF-Pose estimates people’s movements through radio signals that bounce off their bodies
  • But this system could potentially be used to spy on people

Imagine having the ability to see through walls. That sounds impossible, doesn’t it? But in fact, it’s much closer to reality than you might think. A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), led by professor Dina Katabi, has developed an innovative neural network that uses radio signals to track people’s movements through walls. RF-Pose, as the neural network is called, analyses radio signals that it bounces off people’s bodies, using this information to create a dynamic stick figure that mimics their movements in real time. Think of it as radar for people.

This tech can help sick people live more independent lives

According to Katabi’s team, the system’s primary application would be in healthcare, where it could be used to monitor patients afflicted by diseases like Parkinson’s, multiple sclerosis (MS), and muscular dystrophy. In addition to helping doctors better understand how these diseases progress, allowing them to devise more effective medication schedules, this tech could also help these patients lead more independent lives by allowing their healthcare providers to keep track of them throughout the day. “We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi. “A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.” The system could have applications beyond healthcare as well, such as gaming (think Kinect, but much better) or locating survivors in search-and-rescue missions.

Neural networks usually learn to identify specific objects by analysing vast datasets of images, each of which needs to be assessed and labelled individually, a laborious process that slows machine learning to a crawl. This standard technique doesn’t work with radio signals, though, because a person looking at a raw radio signal can’t tell whether it represents a human or something else, so there’s no way to label the data by hand. For this tech to work, the researchers had to devise another way to label the signals, and they came up with an innovative solution.

RF-Pose estimates people’s movements through radio signals that bounce off their bodies

First, they used a camera to capture thousands of images of people performing all kinds of activities, including walking, sitting, talking, opening doors, and waiting for elevators. At the same time, they used a wireless device to bounce radio signals off them while they were performing those activities. The camera images were converted into stick figures representing the people’s poses, and the two data sets were then combined, allowing each stick figure to be paired with its corresponding radio signal. This linked data was then fed into the neural network, enabling it to assign a distinct radio signature to a particular type of human movement. In the end, the system learned to associate the actions depicted by the stick figures with the appropriate radio signals and to accurately estimate people’s movements and postures, and the remarkable thing about this technique is that it works even if there’s a wall in the way. Furthermore, RF-Pose’s performance isn’t affected even if there are multiple people in the room crossing paths, allowing a level of accuracy that’s critical in the context of healthcare.
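The training procedure described above is a form of cross-modal supervision: a camera-based “teacher” labels each frame with a pose, and a “student” model learns to predict that pose from the synchronised radio data alone. The following toy Python sketch illustrates the idea with entirely synthetic data and a simple linear least-squares student; all the names, dimensions, and the linear model are illustrative assumptions, not the team’s actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the two synchronised modalities:
# each "frame" has a radio-feature vector and a teacher-provided pose
# (4 keypoints, i.e. 8 (x, y) coordinates) derived from the camera.
n_frames, rf_dim, n_keypoints = 500, 16, 4

rf_features = rng.normal(size=(n_frames, rf_dim))       # radio-signal features
true_map = rng.normal(size=(rf_dim, 2 * n_keypoints))   # hidden RF -> pose relation
teacher_poses = rf_features @ true_map + 0.01 * rng.normal(
    size=(n_frames, 2 * n_keypoints)                    # teacher labels, slightly noisy
)

# Student: fit pose predictions from radio features alone.
# (RF-Pose uses a deep network; least squares keeps the sketch minimal.)
weights, *_ = np.linalg.lstsq(rf_features, teacher_poses, rcond=None)

# Once trained, the student needs only radio input, so unlike the camera
# teacher it keeps working when a wall blocks the line of sight.
pred = rf_features @ weights
mse = float(np.mean((pred - teacher_poses) ** 2))
```

The key design point this mirrors is that the camera is needed only during training; at inference time the student maps radio features straight to a stick figure.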

The researchers say that the system is also capable of identifying individual people out of a line-up of 100, which it does with 83 per cent accuracy. In practice, this means it’s pretty good at deciphering who it’s watching and can tell a patient from a guest. That gives people with significant motor impairment the chance to live more independently, an outcome many of them desperately want. “By using this combination of visual data and AI to see through walls, we can enable better scene understanding and smarter environments to live safer, more productive lives,” says Mingmin Zhao, a Ph.D. student on Katabi’s team. Right now, the system only works with 2D stick figures, but the team is working on incorporating 3D models, which would allow them to capture even more detail.

But this system could potentially be used to spy on people

This technology comes with some rather obvious privacy and security issues, though. Who can guarantee that the system won’t be used to spy on people, rather than just monitor their health? Could it be adopted by police or security agencies to watch suspects? According to Katabi, the people who took part in the study gave consent to have their movements tracked, and the team took further steps to protect their privacy by anonymising and encrypting the data collected by the system. Katabi also suggested that the final product may include some sort of consent mechanism that would prevent anyone from monitoring people without their knowledge. However, she hasn’t disclosed any details about what this mechanism would look like or how it would work, and now that the system has been proven effective, there’s no reason it can’t be replicated by others with less benevolent intentions. As legitimate as these concerns may be, in this case, the benefits outweigh any potential risks.
