Urban Warfare Operations are complicated by a three-dimensional environment, limited fields of view because of buildings, enhanced concealment and cover for defenders, and the ease of placement of booby traps and snipers. Unknown individuals hidden from view can slow emergency efforts and can increase the inherent dangers of tactical operations.
Security forces require effective ground ISR technologies that can overcome these challenges and provide effective situational awareness. One such technology is through-wall imaging, which applies radio frequency (RF) and other sensing modalities to penetrate wall materials and estimate the locations and activities of adversaries hiding in other rooms and buildings.
Through-wall imaging is also of particular interest for scenarios such as disaster management, surveillance, and search and rescue, where assessing the situation before entering an area can be crucial. The technology may improve situational awareness during emergencies and law enforcement activities, for example after a building collapse, during fires in large buildings, and during tactical operations (e.g., building clearance and hostage situations).
US security forces have long used see-through-the-wall systems such as the RANGE-R, which can register motion inside closed spaces. The radar system’s high sensitivity even enables it to detect the breathing of a person hiding deep inside a building, behind several walls. These systems have been used in FBI hostage-rescue missions, by firefighters during collapsed-building search-and-rescue operations, and in U.S. Marshals Service operations to catch fugitives.
However, through-wall imaging using RF signals is in general a very challenging problem, and it has hence been a topic of research in a number of communities, such as electromagnetics, signal processing, and networking. Detectability of human beings, wall modeling, and target differentiation have been identified as the main open issues in the field of through-the-wall radar imaging (TWRI).
See-through-wall technology
Through-the-wall sensors (TTWS) operate by sending an electromagnetic signal (i.e., radio waves) toward the wall. Part of the signal passes through the wall and is reflected by the target. Part of the reflected signal then travels back to a receiver that detects it. The reflected signal indicates the presence of an object; its strength may indicate the object’s proximity and how reflective the object is (a function of its size and material composition). Stepped-frequency and pulse-compression radars are the most frequently used types in through-the-wall radar imaging (TWRI). In addition, a preference has been shown for frequencies in the range of 1–3 GHz.
A pulsed system can determine the distance to an object by measuring the time difference between transmission and reception of a pulse. Ultra-wide band (UWB) radar has several advantages over conventional techniques, including better penetration properties, better range determination and decreased signal detection by second/third parties. The use of Doppler techniques allows for the detection of small amounts of movement, such as the movement of a chest cavity during breathing. With advanced signal formation (e.g., pulses) and signal processing methodologies, the operator can deduce other qualities of the target such as the range to the target and whether the target is stationary or moving.
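As a minimal sketch (not tied to any particular device), the range measurement of a pulsed system is just the round-trip time of flight halved:

```python
# Range from a pulsed radar's round-trip time of flight: the pulse travels
# to the target and back, so the one-way distance is c * t / 2.
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Distance to the reflector given the transmit-to-receive delay."""
    return C * t_seconds / 2.0

# A 100 ns round trip corresponds to a target roughly 15 m away.
print(round(range_from_round_trip(100e-9), 2))  # 14.99
```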
The time needed for a radio signal to travel from a transmitting antenna to a reflective object (such as a person) and back to a receiving antenna can be used to constrain the position of that object. Knowing the travel time between one such pair of antennas, you can determine that the reflector is located somewhere on an ellipse that has the two antennas at its foci (an ellipse being the set of points for which the distances to the two foci sum to a constant value). With two such pairs of antennas, you can better pin down the location of the reflector, which must lie at the intersection of the two ellipses. With even more antenna pairs, it’s possible to work out where two or more reflecting objects are located.
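The ellipse construction can be sketched numerically. The example below (all antenna coordinates and measured distances are hypothetical) brute-force searches a grid for the point whose distance sums to each transmit–receive pair best match the measurements:

```python
import math

def locate_reflector(pairs, step=0.05, extent=5.0):
    """Brute-force the point whose distances to each antenna pair sum to the
    measured round-trip distance (i.e., the intersection of the ellipses)."""
    best, best_err = None, float("inf")
    n = round(extent / step)
    for i in range(n + 1):
        for j in range(n + 1):
            p = (i * step, j * step)
            err = sum((math.dist(p, tx) + math.dist(p, rx) - d) ** 2
                      for (tx, rx), d in pairs)
            if err < best_err:
                best, best_err = p, err
    return best

# Hypothetical scene: a reflector at (2, 3) and two transmit/receive pairs.
target = (2.0, 3.0)
antenna_pairs = [((0.0, 0.0), (1.0, 0.0)), ((4.0, 0.0), (5.0, 0.0))]
pairs = [(ab, math.dist(target, ab[0]) + math.dist(target, ab[1]))
         for ab in antenna_pairs]
estimate = locate_reflector(pairs)
print(estimate)  # close to the true (2.0, 3.0)
```

A real system solves the ellipse intersection in closed form, but the grid search makes the geometry easy to see.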
Increasing the size and number of antennas can increase directionality and reception, make triangulation calculations more accurate, and improve the sensitivity of identification algorithms. However more and larger antennas require the device itself to be larger and more difficult to carry, manage, and operate.
Sensing with Radio Frequency (RF) signals has been a topic of interest to the research community for many years. More recently, sensing with everyday RF signals, such as WiFi, has become of particular interest for applications such as imaging, localization, tracking, occupancy estimation, and gesture recognition.
Another technique developed by University of Utah engineers uses a wireless network of radio transmitters to track people moving behind solid walls. The engineers’ system uses radio tomographic imaging (RTI) to “see”, locate and track people or objects in an area surrounded by inexpensive radio transceivers that send and receive signals.
Unlike radar, which reads the reflections of radio signals bounced off targets, RTI measures the “shadows” in radio waves created when they pass through a moving person or object. By measuring the radio signal strengths on numerous paths as the radio waves pass through a person or object a computer image can be constructed.
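A toy version of this shadow-based reconstruction (with made-up node positions and link losses) might look like the following: each link’s extra attenuation is spread over the pixels near its straight-line path, and the accumulated image peaks where the shadowed links cross:

```python
import math

def point_seg_dist(p, a, b):
    """Distance from point p to the segment from a to b (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def rti_backproject(links, n=20, size=10.0, width=0.5):
    """Spread each link's measured attenuation over the pixels lying within
    `width` of its transmitter-receiver path and accumulate the result."""
    cell = size / n
    img = [[0.0] * n for _ in range(n)]
    for (a, b), atten in links:
        for i in range(n):
            for j in range(n):
                center = ((i + 0.5) * cell, (j + 0.5) * cell)
                if point_seg_dist(center, a, b) < width:
                    img[i][j] += atten
    return img

# Made-up scene: a person near (5, 5); links whose path passes close to the
# person show 10 dB of extra loss, all other links show none.
person = (5.0, 5.0)
nodes = [(0, 2), (0, 8), (10, 2), (10, 8), (2, 0), (8, 0), (2, 10), (8, 10)]
links = [((a, b), 10.0 if point_seg_dist(person, a, b) < 0.5 else 0.0)
         for a in nodes for b in nodes if a < b]
img = rti_backproject(links)
peak = max((v, (i, j)) for i, row in enumerate(img) for j, v in enumerate(row))
print(peak[1])  # pixel whose center lies near the person
```

Production RTI systems solve a regularized inverse problem rather than simple back-projection, but the shadow-accumulation intuition is the same.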
Because TTWS rely on the emission and reception of radio waves for detection, these devices have an inherent limitation: solid metal surfaces, such as aluminum siding, block radar signals. Thus, detecting a person inside the closed body of a car or in a building encased in aluminum siding is impossible. Water has similar properties to metal: wet porous concrete is also quite an effective barrier against TTWS radio waves.
A central challenge of through-wall imaging is the large signal loss, known as attenuation, incurred as the signal passes through the wall in both directions; it depends on the properties and thickness of the barrier materials. Lower frequencies (i.e., longer wavelengths) tend to penetrate barriers better than higher frequencies (i.e., shorter wavelengths); however, detecting moving objects using Doppler is harder at lower frequencies, and shorter wavelengths provide better location data than longer ones.
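To make the trade-off concrete, here is a rough round-trip loss budget; the wall-loss figures are assumptions for illustration only, since real losses vary widely with material, moisture, and thickness:

```python
import math

def fspl_db(distance_m, freq_hz):
    """One-way free-space path loss in dB."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def round_trip_loss_db(distance_m, freq_hz, wall_loss_db):
    # The signal crosses the wall twice (in and out), on top of
    # two-way free-space spreading loss.
    return 2 * fspl_db(distance_m, freq_hz) + 2 * wall_loss_db

# Assumed one-way wall losses: ~5 dB at 1 GHz vs ~15 dB at 5 GHz.
low = round_trip_loss_db(10, 1e9, 5.0)
high = round_trip_loss_db(10, 5e9, 15.0)
print(f"1 GHz: {low:.1f} dB   5 GHz: {high:.1f} dB")
```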
Other challenges come from the wall itself, whose reflections can be 10,000 to 100,000 times as strong as any reflection coming from beyond it. Another challenge is that wireless signals bounce off not only the wall and the human body but also other objects, including tables, chairs, ceilings, and floors. The system must therefore cancel out all these other reflections and keep only those from someone behind the wall.
TTWS vary in their capabilities, the information they provide, and their complexity. Compact and easily transportable devices tend to provide a minimal amount of information, but this is balanced with their ease of use and transportability. Larger devices may provide additional information, but at the expense of being more cumbersome and therefore more difficult to manage and position during operational use. The utility of any one device is dependent on the capabilities of that device and the requirements of the situations where the device is utilized. The detection range of most devices is 50-65 feet (about 15-20 meters), while devices with bigger antennae and stronger power supplies can ‘knock through’ approximately 230 feet (about 70 meters).
MIT’s Wi-Vi system uses Wi-Fi to see through walls
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed “Wi-Vi,” a system that uses reflected Wi-Fi signals to track the movement of people behind walls and closed doors. They also developed sophisticated signal processing that cancels out the arbitrary reflections and keeps only those reflecting from moving human bodies.
Dina Katabi, a professor in MIT’s Department of Electrical Engineering and Computer Science, and her graduate student Fadel Adib have tuned a system that uses two transmission antennas and a single receiver. The two antennas transmit almost identical signals, except the second antenna’s signal is the inverse of the first, resulting in interference.
This interference causes the signals to cancel each other out. Since any static objects that the signals hit create identical reflections, they are also cancelled out by this effect. Only the reflections that change between the two signals, like moving bodies on the other side of the wall, arrive back at the receiver, allowing the system to track the moving people. Adib says, “So, if the person moves behind the wall, all reflections from static objects are cancelled out, and the only thing registered by the device is the moving human.”
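A toy complex-baseband simulation illustrates why this works; the amplitudes and phases below are made up, not Wi-Vi’s actual parameters:

```python
import cmath

STATIC = 0.8 * cmath.exp(1j * 1.0)  # strong echo from walls and furniture

def received(t):
    # The moving person's echo reaches the receiver via slightly different,
    # time-varying paths from the two transmit antennas.
    m1 = 0.1 * cmath.exp(1j * 0.5 * t)          # via antenna 1
    m2 = 0.1 * cmath.exp(1j * (0.5 * t + 0.3))  # via antenna 2
    # Antenna 1 sends x = 1, antenna 2 sends -x; the static paths are
    # identical for both, so the strong static echo cancels exactly.
    return STATIC * 1 + STATIC * (-1) + m1 * 1 + m2 * (-1)

residuals = [abs(received(t)) for t in range(10)]
# The 0.8-amplitude static echo is gone; only a weak residual from the
# moving person remains, which is what the tracker locks onto.
print(max(residuals) < 0.1, min(residuals) > 0.0)
```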
One of the techniques used to detect a person as he or she moved around the adjacent room is inverse synthetic aperture radar, which is sometimes used for maritime surveillance and radar astronomy. The system was based on frequency-modulated continuous-wave (FMCW) radar, which can measure distance by comparing the frequencies of the transmitted and reflected waves. As Adib describes it, the system operated between about 5 and 7 gigahertz, transmitted signals that were just 0.1 percent the strength of Wi-Fi, and could determine the distance to an object to within a few centimeters.
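For FMCW radar in general (the sweep parameters below are assumptions, apart from the roughly 2 GHz of bandwidth between 5 and 7 GHz), range follows from the beat frequency produced by mixing the echo with the transmitted chirp:

```python
# FMCW range from beat frequency: the chirp sweeps bandwidth B over time T,
# so an echo delayed by tau beats against the transmit chirp at
# f_b = B * tau / T, and tau = 2R / c gives R = c * f_b * T / (2 * B).
C = 299_792_458.0

def fmcw_range(f_beat_hz, bandwidth_hz, sweep_s):
    return C * f_beat_hz * sweep_s / (2 * bandwidth_hz)

# Assumed parameters: a 2 GHz sweep (e.g., 5-7 GHz) over 1 ms.
B, T = 2e9, 1e-3
f_b = 2 * B * 1.0 / (C * T)  # beat frequency for a target 1 m away (~13 kHz)
print(round(fmcw_range(f_b, B, T), 3))  # 1.0
```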
“Using one transmitting antenna and multiple receiving antennas mounted at different positions allowed us to measure radio reflections for each transmit-receive antenna pair. With two receiving antennas, we could map out two such ellipses, which intersected at the person’s location. With more than two receiving antennas, it was possible to locate the person in 3D—you could tell whether a person was standing up or lying on the floor, for example. Things get trickier if you want to locate multiple people this way, but as we later showed, with enough antennas, it’s doable,” writes Adib.
It’s easy to think of applications for such a system. Smart homes could track the location of their occupants and adjust the heating and cooling of different rooms. You could monitor elderly people, to be sure they hadn’t fallen or otherwise become immobilized, without requiring these seniors to wear a radio transmitter. We even developed a way for our system to track someone’s arm gestures, enabling the user to control lights or appliances by pointing at them.
The natural next step for our research team—which by this time also included graduate students Chen-Yu Hsu and Hongzi Mao, and Professor Frédo Durand—was to capture a human silhouette through the wall. The fundamental challenge here was that at Wi-Fi frequencies, the reflections from some body parts would bounce back at the receiving antenna, while other reflections would go off in other directions. So our wireless imaging device would capture some body parts but not others, and we didn’t know which body parts they were.
Our solution was quite simple: We aggregated the measurements over time. That works because as a person moves, different body parts as well as different perspectives of the same body part get exposed to the radio waves. We designed an algorithm that uses a model of the human body to stitch together a series of reflection snapshots. Our device was then able to reconstruct a coarse human silhouette, showing the location of a person’s head, chest, arms, and feet.
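The aggregation idea can be illustrated with a toy example: each frame reveals only some cells of a (hypothetical) silhouette grid, and the union over frames recovers the whole figure:

```python
# Each snapshot reveals only the body parts oriented to reflect back toward
# the antennas; accumulating aligned snapshots over time fills in the rest.
body = {(0, 1), (1, 0), (1, 1), (1, 2), (2, 1), (3, 1)}  # toy stick figure
snapshots = [
    {(0, 1), (1, 1)},   # head and chest visible this frame
    {(1, 0), (1, 2)},   # arms visible this frame
    {(2, 1), (3, 1)},   # torso and feet visible this frame
]
accumulated = set()
for snap in snapshots:
    accumulated |= snap
print(accumulated == body)  # True
```

The real algorithm additionally uses a human-body model to align snapshots taken while the person moves, which a simple union omits.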
While this isn’t the X-ray vision of Superman fame, the low resolution might be considered a feature rather than a bug, given the concerns people have about privacy. And we later showed that the ghostly images were of sufficient resolution to identify different people with the help of a machine-learning classifier. We also showed that the system could be used to track the palm of a user to within a couple of centimeters, which means we might someday be able to detect hand gestures.
Because the device can detect action behind a wall, the system could be used as a gesture-based interface for controlling appliances or lighting. Venkat Padmanabhan, a principal researcher at Microsoft Research, says the possibility of using Wi-Vi as a gesture-based interface that does not require a line of sight between the user and the device itself is perhaps its most interesting application of all.
About three years ago, we decided to try sensing human emotions with wireless signals. And why not? When a person is excited, his or her heart rate increases; when blissful, the heart rate declines. But we quickly realized that breathing and heart rate alone would not be sufficient. After all, our heart rates are also high when we’re angry and low when we’re sad.
Looking at past research in affective computing—the field of study that tries to recognize human emotions from such things as video feeds, images, voice, electroencephalography (EEG), and electrocardiography (ECG)—we learned that the most important vital sign for recognizing human emotions is the millisecond variation in the intervals between heartbeats. That’s a lot harder to measure than average heart rate. And in contrast to ECG signals, which have very sharp peaks, the shape of a heartbeat signal on our wireless device isn’t known ahead of time, and the signal is quite noisy. To overcome these challenges, we designed a system that learns the shape of the heartbeat signal from the pattern of wireless reflections and then uses that shape to recover the length of each individual beat.
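The quantity being recovered can be illustrated with hypothetical beat timestamps: the average interval says little, while the beat-to-beat jitter of tens of milliseconds carries the signal of interest:

```python
import statistics

def interbeat_intervals_ms(beat_times_s):
    """Successive differences between beat timestamps, in milliseconds."""
    return [(b - a) * 1000.0 for a, b in zip(beat_times_s, beat_times_s[1:])]

# Hypothetical beat timestamps: the average rate is a steady ~60 bpm,
# but the individual intervals jitter by tens of milliseconds.
beats = [0.000, 1.012, 1.998, 3.010, 3.991, 5.003]
ibis = interbeat_intervals_ms(beats)
print(round(statistics.mean(ibis)), round(statistics.stdev(ibis)))
```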
Using features from these heartbeat signals as well as from the person’s breathing patterns, we trained a machine-learning system to classify them into one of four fundamental emotional states: sadness, anger, pleasure, and joy. Sadness and anger are both negative emotions, but sadness is a calm emotion, whereas anger is associated with excitement. Pleasure and joy, both positive emotions, are similarly associated with calm and excited states.
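These four states correspond to a simple valence-by-arousal grid, which is essentially what the classifier’s features must separate (a schematic mapping, not the actual trained model):

```python
# Schematic valence-by-arousal grid for the four emotional states.
def emotion(positive_valence, excited_arousal):
    table = {
        (False, False): "sadness",   # negative and calm
        (False, True):  "anger",     # negative and excited
        (True,  False): "pleasure",  # positive and calm
        (True,  True):  "joy",       # positive and excited
    }
    return table[(positive_valence, excited_arousal)]

print(emotion(True, True))  # joy
```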
3D through wall imaging using drones and Wi-Fi
Chitra Karanam, a PhD student, and Yasamin Mostofi, a professor in the Department of Electrical and Computer Engineering at the University of California, Santa Barbara, have demonstrated 3D through-wall imaging of a completely unknown area using WiFi RSSI and unmanned aerial vehicles (UAVs) that move outside the area of interest to collect WiFi measurements.
RSSI stands for received signal strength indicator. A drone beams radio waves at 2.4GHz, and another drone on the other side of the wall measures the power of the signals. The researchers used two 3D Robotics RTF X8+ drones, measuring 40 x 40cm (15.7 x 15.7in) and weighing 3.5kg (7.7lb) each.
Just like pixels make up a two-dimensional image, voxels describe a patch of three-dimensional space. The final step is to use the voxels to build a model of the interior hiding behind the walls. Both drones are always parallel with one another and travel in a zig-zag motion. They hover outside the 10-cm (4-in) brick walls, sweeping the area at different angles to try to get a good picture inside.
“Each voxel looks at its neighbors’ image decisions. For instance, if a voxel’s own decision is that this voxel should be empty but then the neighbors’ decisions are all full (non-empty), then the voxel may want to revise its decision since there should be a spatial correlation.
“So the belief propagation method is a way of doing this iterative update. At some point, the image will converge to something and will not change any more. That is the final imaging result we produce, and its quality has improved a lot beyond that initial processing phase,” said Mostofi.
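Belief propagation proper passes probabilistic messages between voxels; the toy below substitutes a much simpler majority-of-neighbors update to show the flavor of the iterative, converging cleanup Mostofi describes:

```python
# Simplified stand-in for the neighbor-consistency update: each voxel flips
# to agree with the majority of its neighbors, iterated until nothing changes.
def smooth(grid):
    n = len(grid)
    while True:
        nxt = [row[:] for row in grid]
        for i in range(n):
            for j in range(n):
                nbrs = [grid[a][b] for a, b in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < n and 0 <= b < n]
                full = sum(nbrs)
                if full > len(nbrs) - full:      # majority full
                    nxt[i][j] = 1
                elif full < len(nbrs) - full:    # majority empty
                    nxt[i][j] = 0                # ties keep the old decision
        if nxt == grid:
            return grid
        grid = nxt

# A solid 3x3 block with one noisy hole and one stray voxel.
noisy = [
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],   # hole inside the block
    [1, 1, 1, 0, 1],   # stray voxel far from the block
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
clean = smooth([row[:] for row in noisy])
print(clean[1][1], clean[2][4])  # hole filled, stray removed
```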
“3D imaging is harder than 2D, since there are a lot more unknown areas. We approximate the wave model and solve Maxwell’s equations that describe the propagation of the Wi-Fi waves. Next, the signals are compressed to form an image,” she said. Some of the clear structure, like edges, is lost, but the drones manage to estimate the length of objects to good accuracy – a wall that is actually 1.48 meters long is measured as 1.5 meters.