Urban warfare operations are complicated by a three-dimensional environment, fields of view and fire limited by buildings, enhanced concealment and cover for defenders, below-ground infrastructure, and the ease with which booby traps can be placed and snipers positioned. DARPA is investing in several novel camera technologies. In the event of an urban siege, such cameras can help provide the situational awareness that fast-response teams need to intercept their targets.
Stanford engineers have developed a 4D camera with an extra-wide field of view that they believe could outperform current options for close-up robotic vision and augmented reality.
The U.S. Defense Advanced Research Projects Agency (DARPA) has awarded a $4.4 million grant to Morgridge Institute for Research and University of Wisconsin-Madison researchers to fast-track research and development of a camera that can take pictures around corners. While regular cameras rely on the initial burst of light reflected off the subject, this project focuses on the indirect light that arrives later, after scattering and bouncing through the scene. “We are interested in capturing exactly what a conventional camera doesn’t capture,” Velten says.
Another DARPA project is the Virtual Eye system, a two-camera setup developed in collaboration with Nvidia. It enables troops to digitally map a building or room and virtually walk through the space to see exactly what they could encounter before they physically enter it.
New camera designed by Stanford researchers could improve robot vision and virtual reality
Stanford researchers have developed a new camera that combines a wide field of view, detailed depth information, and a compact size, allowing it to be incorporated into wearables, robotics, autonomous vehicles, and augmented- and virtual-reality systems.
Light field (LF) cameras measure a rich 4D representation of light that encodes color, depth, and higher-order behaviors such as specularity, transparency, refraction, and occlusion. Post-capture capabilities such as perspective shift, depth estimation, and refocus are well known from consumer-grade LF cameras, and LF imaging also simplifies an expanding range of computer-vision tasks.
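To make the post-capture refocus capability concrete, here is a minimal shift-and-add refocusing sketch in Python. This is the textbook synthetic-aperture technique for 4D light fields, not necessarily the Stanford camera’s own pipeline; the array shapes and the `alpha` parameter are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    """Synthetic refocusing of a 4D light field by shift-and-add.

    light_field: array of shape (U, V, S, T) -- sub-aperture images
                 indexed by angular coords (u, v) and spatial coords (s, t).
    alpha: focal-plane parameter; 1.0 keeps the original focus, while
           values below/above 1.0 refocus nearer/farther.
    """
    U, V, S, T = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0   # optical axis at the array centre
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture view is translated in proportion to its
            # angular offset, then all views are averaged.
            du = (u - uc) * (1.0 - 1.0 / alpha)
            dv = (v - vc) * (1.0 - 1.0 / alpha)
            out += shift(light_field[u, v], (du, dv), order=1, mode="nearest")
    return out / (U * V)

# Hypothetical usage on a toy 9x9-view grayscale light field:
lf = np.random.rand(9, 9, 128, 128)
near = refocus(lf, alpha=0.8)   # focus nearer the camera
far  = refocus(lf, alpha=1.2)   # focus farther away
```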
The capability for wide-field-of-view (FOV) LF capture would greatly benefit a wide range of applications, from navigation in autonomous vehicles to recognition, tracking, and object segmentation and detection. Optics designers have recently developed wide-FOV 2D imaging techniques employing monocentric optics. Monocentric lenses are concentric glass spheres of differing refractive index, i.e., multi-shell spherical lenses. They offer rotational symmetry, diffraction-limited resolution, and a wide FOV in a small form factor.
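As a rough illustration of how a multi-shell spherical lens focuses light, the following paraxial ray-transfer-matrix sketch computes the effective focal length of a hypothetical two-shell monocentric lens. All radii and refractive indices here are made-up example values, not the published design.

```python
import numpy as np

def refract(n1, n2, R):
    """Paraxial refraction at a spherical surface of radius R (sign
    convention: R > 0 when the centre of curvature lies to the right)."""
    return np.array([[1.0, 0.0],
                     [-(n2 - n1) / (n2 * R), n1 / n2]])

def propagate(d):
    """Paraxial free propagation over an axial distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

# Assumed example geometry: outer shell radius 12 mm, index 1.52;
# inner core radius 6 mm, index 1.62; lens immersed in air.
R1, R2 = 12.0, 6.0
n_air, n_shell, n_core = 1.0, 1.52, 1.62

# Multiply surface and gap matrices in reverse of the order a ray meets them.
M = (refract(n_shell, n_air, -R1) @ propagate(R1 - R2)
     @ refract(n_core, n_shell, -R2) @ propagate(2 * R2)
     @ refract(n_shell, n_core, R2) @ propagate(R1 - R2)
     @ refract(n_air, n_shell, R1))

A, C = M[0, 0], M[1, 0]
efl = -1.0 / C          # effective focal length
bfd = -A / C            # back focal distance from the rear surface
print(f"EFL = {efl:.2f} mm, BFD = {bfd:.2f} mm")
```

Because every surface is concentric, the same matrix holds for a chief ray arriving from any direction, which is why monocentric designs achieve such a wide FOV.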
Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones, and self-driving cars. As part of an augmented- or virtual-reality system, its depth information could enable more seamless renderings of real scenes and better integration between those scenes and virtual components.
“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” said Wetzstein. “This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”
This research was funded by the NSF/Intel Partnership on Visual and Experiential Computing and DARPA.
Camera that sees around corners
The camera technology, pioneered by Morgridge imaging specialist Andreas Velten, works by shining rapid pulses of laser light into a room; when the light hits the walls, ceiling, and other surfaces and objects, it scatters. Finely tuned sensors connected to the camera recapture the scattered photons, and information from this scattered light lets the researchers digitally rebuild a 3D environment that is hidden or obstructed from view.
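To illustrate how the timing of returning photons can be turned into geometry, here is a minimal back-projection sketch, a classic baseline for non-line-of-sight reconstruction under a confocal (co-located laser and sensor spot) assumption. It is not Velten’s actual reconstruction pipeline, and the sensor time-bin width is an assumed value.

```python
import numpy as np

C = 3e8            # speed of light, m/s
DT = 4e-12         # assumed time-bin width of the single-photon sensor, s

def backproject(transients, wall_pts, voxels):
    """Naive confocal NLOS back-projection.

    transients: (W, T) photon-count histograms, one per illuminated wall spot
    wall_pts:   (W, 3) positions of the laser/sensor spots on the relay wall
    voxels:     (N, 3) candidate positions in the hidden volume
    Returns an (N,) score; bright voxels likely lie on hidden surfaces.
    """
    W, T = transients.shape
    score = np.zeros(len(voxels))
    for w in range(W):
        # The round trip wall spot -> hidden point -> wall spot fixes the
        # time bin in which photons returning from each voxel land.
        dist = np.linalg.norm(voxels - wall_pts[w], axis=1)
        bins = np.round(2.0 * dist / C / DT).astype(int)
        valid = bins < T
        score[valid] += transients[w, bins[valid]]
    return score
```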
Velten and Gupta have developed a theoretical framework to determine how complex a scene they can recapture. They are creating models where they bounce light a half-dozen or more times through a space to capture objects that are either hidden or outside the field of view. “The more times you can bounce this light within a scene, the more possible data you can collect,” Velten said. “Since the first light is the strongest, and each succeeding bounce gets weaker and weaker, the sensor has to be sensitive enough to capture even a few photons of light.”
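A back-of-the-envelope photon budget shows why that sensitivity matters. Assuming, purely for illustration, 10^15 photons per pulse, a wall albedo of 0.5, and that a 10^-4 fraction of each diffuse bounce reaches the next surface patch of interest:

```python
# Rough photon-budget illustration of why each extra bounce is so much
# weaker (all numbers are assumptions chosen for illustration only).
pulse_photons = 1e15      # photons in one laser pulse
albedo = 0.5              # fraction of light a diffuse wall re-emits
solid_angle_frac = 1e-4   # fraction of re-emitted light reaching the
                          # next surface patch of interest

photons = pulse_photons
for bounce in range(1, 7):
    photons *= albedo * solid_angle_frac
    print(f"after bounce {bounce}: ~{photons:.0e} photons")
```

Under these assumptions only about a hundred photons of the original 10^15 survive three bounces, and by the fourth bounce fewer than one photon per pulse returns on average, which is why the sensor must register individual photons.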
While still in its infancy, the technology has generated excitement about potential applications in medical imaging, disaster relief, navigation, robotic surgery, and national security. Velten currently has a NASA project examining whether the technology can be used to probe the dimensions of moon caves.
The DARPA grant will fund four years of research, with the first two dedicated to investigating the full potential of the technology. The final two years will be spent developing the hardware and making it viable for production and implementation. UW-Madison is one of eight universities to receive 2016 DARPA grants to investigate different forms and applications of non-line-of-sight imaging.
DARPA’s Virtual Eye Program
DARPA, in cooperation with Nvidia, has developed a way to capture an environment that may preview how VR cameras of the future could work. DARPA’s “Virtual Eye” uses two cameras that each capture not only light but also depth information. By combining the data from the two cameras, Virtual Eye can reconstruct a 3D model of the environment. In a true VR image, the perspective adjusts as the user moves up, down, forward, backward, left, or right.
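A minimal sketch of the underlying idea, fusing two depth cameras’ views into one 3D point cloud, is shown below. The pinhole-deprojection approach and the calibrated camera poses are assumptions for illustration, not details of DARPA’s system.

```python
import numpy as np

def deproject(depth, fx, fy, cx, cy):
    """Turn a depth image (metres) into camera-space 3D points using the
    pinhole camera model with focal lengths fx, fy and centre cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # discard pixels with no depth reading

def fuse(depth_a, pose_a, depth_b, pose_b, intrinsics):
    """Merge two depth views into one world-space point cloud.
    pose_*: 4x4 camera-to-world transforms, assumed already calibrated."""
    clouds = []
    for depth, pose in [(depth_a, pose_a), (depth_b, pose_b)]:
        pts = deproject(depth, *intrinsics)
        pts_h = np.c_[pts, np.ones(len(pts))]       # homogeneous coordinates
        clouds.append((pts_h @ pose.T)[:, :3])      # into the world frame
    return np.vstack(clouds)
```

Because each camera sees the scene from a different side, the merged cloud fills in occlusions that either view alone would miss, which is what lets a user virtually move through the reconstructed room.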
Virtual Eye enables soldiers or police to throw a couple of cameras into a building to “digitally map” a room before they enter. They can check a room’s interior for the number of people inside, the weapons they are carrying, where they are hiding, and their activities before barging in, and before they are even detected by the people in the room.
Trung Tran, a DARPA program manager, says, “I can do all this without having a soldier endanger himself. Especially when you have adversaries like ISIS who are trying to set booby traps to, in fact, harm the soldiers when they come in just to do the room clearing.”
What is interesting about the Virtual Eye technology is that it does not require exotic cameras. Indeed, the demo appears to use two Xbox 360 Kinect cameras (which use infrared to sense depth).
US Special Operations Command’s Intelligence, Surveillance and Reconnaissance (ISR) requirements
The U.S. Special Operations Command has issued a Request for Information to identify effective technologies for urban and unconventional warfare and get them into the field as quickly as possible. This includes ground ISR for urban environments made up of sensors, video, and tags, command and control, and a low probability of detection (LPD). It also seeks technologies for detecting hidden chambers in buildings, seeing through walls, and instantly mapping a room, as well as situational-awareness tools that let operators in a tactical environment use mission-planning data, GPS data, handheld radios, and other intelligence products on one device.
References and Resources also include:
https://www.yahoo.com/tech/darpa-wants-camera-see-around-234214050.html
https://morgridge.org/newsarticle/plumbing-possibilities-camera-sees-around-corners/
http://www.computationalimaging.org/wp-content/uploads/2017/04/LFMonocentric.pdf