Neural and tensor holography enables realistic 3D holographic images in virtual and augmented reality headsets

Virtual reality headsets have gained tremendous popularity, but they share a common problem: they can make users feel sick. Users experience nausea because their eyes are tricked into seeing a 3D scene when they are in fact staring at a fixed-distance 2D display. One possible solution for a more comfortable 3D viewing experience is to have these headsets display 3D computer-generated holograms (CGHs).

A typical lens-based photograph encodes the brightness of each light wave — a photo can faithfully reproduce a scene’s colors, but it ultimately yields a flat image. In contrast, a hologram encodes both the brightness and phase of each light wave. But despite their realism, holograms are a challenge to make and share.
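
The distinction can be made concrete with a few lines of numpy. The sketch below, using illustrative values only, builds a complex optical field u = A·exp(iφ): a photograph records only |u|², discarding the phase, while a hologram records the interference of u with a reference wave, which preserves both amplitude and phase.

```python
import numpy as np

# A monochromatic optical field at the recording plane: amplitude and phase per pixel.
rng = np.random.default_rng(0)
amplitude = rng.uniform(0.0, 1.0, size=(4, 4))
phase = rng.uniform(0.0, 2.0 * np.pi, size=(4, 4))
field = amplitude * np.exp(1j * phase)     # complex wave u = A * exp(i*phi)

photo = np.abs(field) ** 2                 # a photograph: brightness only, phase lost

reference = np.ones_like(field)            # plane reference wave (unit amplitude, zero phase)
hologram = np.abs(field + reference) ** 2  # interference fringes encode both A and phi
# Expanding |u + r|^2 = A^2 + 1 + 2*A*cos(phi): the cosine term carries the phase.
```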
First developed in the mid-1900s, early holograms were recorded optically. That required splitting a laser beam, with half the beam used to illuminate the subject and the other half used as a reference for the light waves’ phase. This reference generates a hologram’s unique sense of depth. The resulting images were static, so they couldn’t capture motion. And they were hard copy only, making them difficult to reproduce and share.
However, creating these 3D CGHs has been no easy feat. CGH algorithms based on physical simulation are very time-consuming, taking minutes to complete even on current high-end graphics processing units (GPUs). This rules out real-time use of CGHs, particularly in virtual and augmented reality. Existing CGH algorithms also handle occlusion poorly and struggle to achieve photorealistic results.

Researchers at MIT led by Wojciech Matusik have developed “tensor holography,” a method that uses deep learning to generate real-time, photorealistic 3D holograms.

“People previously thought that with existing consumer-grade hardware, it was impossible to do real-time 3D holography computations,” says Liang Shi, the study’s lead author and a PhD student in MIT’s Department of Electrical Engineering and Computer Science (EECS). “It’s often been said that commercially available holographic displays will be around in 10 years, yet this statement has been around for decades.”

Shi believes the new approach, which the team calls “tensor holography,” will finally bring that elusive 10-year goal within reach. The advance could fuel a spillover of holography into fields like VR and 3D printing. Shi worked on the study, published in Nature, with his advisor and co-author Wojciech Matusik.

They used deep learning to accelerate computer-generated holography, allowing for real-time hologram generation. The team designed a convolutional neural network — a processing technique that uses a chain of trainable tensors to roughly mimic how humans process visual information. Training a neural network typically requires a large, high-quality dataset, which didn’t previously exist for 3D holograms.
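
A minimal PyTorch sketch of such a network is below. The layer counts, widths, and the choice of a single amplitude channel and a single phase channel as output are illustrative assumptions for this article, not the published architecture, whose details are in the Nature paper.

```python
import math

import torch
import torch.nn as nn

class TensorHoloNet(nn.Module):
    """Sketch of a fully convolutional CGH network: RGB-D in, hologram out.
    Layer counts and widths are illustrative, not the published architecture."""
    def __init__(self, width=24, depth=6):
        super().__init__()
        layers = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        layers.append(nn.Conv2d(width, 2, 3, padding=1))  # channels: amplitude, phase
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd):                               # rgbd: (batch, 4, H, W)
        out = self.net(rgbd)
        amplitude = torch.sigmoid(out[:, :1])              # amplitude in [0, 1]
        phase = math.pi * torch.tanh(out[:, 1:])           # phase in [-pi, pi]
        return amplitude, phase

model = TensorHoloNet()
amp, phi = model(torch.rand(1, 4, 192, 192))               # one 192x192 RGB-D input
```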
The team built a custom database of 4,000 pairs of computer-generated images. Each pair matched a picture — including color and depth information for each pixel — with its corresponding hologram. To create the holograms in the new database, the researchers used scenes with complex and variable shapes and colors, with the depth of pixels distributed evenly from the background to the foreground, and with a new set of physics-based calculations to handle occlusion. That approach resulted in photorealistic training data. Next, the algorithm got to work.
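
A supervised dataset of this kind reduces to simple (input, target) tensor pairs. The loader below is a stand-in sketch that fabricates random tensors in place of the 4,000 rendered pairs; a real implementation would read the RGB-D images and their physics-based target holograms from disk.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class HologramPairs(Dataset):
    """Sketch of a supervised CGH dataset: each sample pairs an RGB-D image
    with its precomputed target hologram. Random stand-in data here; the
    real database holds 4,000 rendered pairs."""
    def __init__(self, n_pairs=32, size=192):
        self.rgbd = torch.rand(n_pairs, 4, size, size)     # RGB + per-pixel depth
        self.target = torch.rand(n_pairs, 2, size, size)   # amplitude + phase

    def __len__(self):
        return len(self.rgbd)

    def __getitem__(self, i):
        return self.rgbd[i], self.target[i]

loader = DataLoader(HologramPairs(), batch_size=8, shuffle=True)
```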
By learning from each image pair, the tensor network tweaked the parameters of its own calculations, successively enhancing its ability to create holograms. The fully optimized network operated orders of magnitude faster than physics-based calculations, an efficiency that surprised the team itself. The CNN performs this task in tens of milliseconds on a consumer-grade graphics processing unit (GPU). It is also very memory-efficient (under 1 MB) and can run interactively (under 1 second per frame) on mobile devices.
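
That learning step is a conventional supervised training loop. The sketch below reuses the hypothetical TensorHoloNet and loader from the earlier sketches and substitutes a plain L1 loss for the paper's more elaborate image-quality objective; the epoch count and the timing snippet are illustrative.

```python
import time

import torch

# Assumes TensorHoloNet and loader from the sketches above.
model = TensorHoloNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(3):                       # illustrative; real training runs far longer
    for rgbd, target in loader:
        amp, phi = model(rgbd)
        pred = torch.cat([amp, phi], dim=1)
        loss = torch.nn.functional.l1_loss(pred, target)  # stand-in loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Rough inference timing on one RGB-D frame (CPU here; the reported tens of
# milliseconds are on a consumer-grade GPU).
with torch.no_grad():
    x = torch.rand(1, 4, 192, 192)
    t0 = time.perf_counter()
    model(x)
    print(f"inference: {(time.perf_counter() - t0) * 1e3:.1f} ms")
```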
“We are amazed at how well it performs,” says Matusik. In mere milliseconds, tensor holography can craft holograms from images with depth information — which is provided by typical computer-generated images and can be calculated from a multicamera setup or LiDAR sensor (both are standard on some new smartphones). This advance paves the way for real-time 3D holography. What’s more, the compact tensor network requires less than 1 MB of memory. “It’s negligible, considering the tens and hundreds of gigabytes available on the latest cell phone,” he says.
Real-time 3D holography would enhance a slew of systems, from VR to 3D printing. The team says the new system could help immerse VR viewers in more realistic scenery, while eliminating eye strain and other side effects of long-term VR use. The technology could be easily deployed on displays that modulate the phase of light waves. Currently, most affordable consumer-grade displays modulate only brightness, though the cost of phase-modulating displays would fall if widely adopted.
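
For context, a phase-only display can still present a hologram with both amplitude and phase through double-phase encoding, in which each complex value is split across two neighboring phase-only pixels. The MIT work uses an anti-aliased variant; the sketch below is the basic textbook form.

```python
import numpy as np

def double_phase(amplitude, phase):
    """Encode a complex field A*exp(i*phi), A in [0, 1], on a phase-only SLM.
    Uses A*exp(i*phi) = 0.5 * (exp(i*(phi + d)) + exp(i*(phi - d))) with
    d = arccos(A); the two phase values are interleaved in a checkerboard."""
    delta = np.arccos(np.clip(amplitude, 0.0, 1.0))
    theta1, theta2 = phase + delta, phase - delta
    rows, cols = np.indices(amplitude.shape)
    checker = (rows + cols) % 2 == 1
    return np.where(checker, theta2, theta1)  # one phase value per SLM pixel

rng = np.random.default_rng(1)
slm_phase = double_phase(rng.random((8, 8)), 2 * np.pi * rng.random((8, 8)))
```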
Tensor holography has a wide range of applications, especially in consumer electronics, where it applies most readily to virtual and augmented reality. The technology could also find uses in 3D printing, microscopy, and healthcare.

Three-dimensional holography could also boost the development of volumetric 3D printing, the researchers say. This technology could prove faster and more precise than traditional layer-by-layer 3D printing, since volumetric 3D printing allows for the simultaneous projection of the entire 3D pattern. Other applications include microscopy, visualization of medical data, and the design of surfaces with unique optical properties.
“It’s a considerable leap that could completely change people’s attitudes toward holography,” says Matusik. “We feel like neural networks were born for this task.” The work was supported, in part, by Sony.
Neural Holography Delivers Perceptual Realism to Holographic Displays

Research at Stanford University may lead to more realistic displays in virtual and augmented reality headsets. A team at the school has developed a technique to reduce a speckling distortion often seen in regular laser-based holographic displays, as well as a technique to more realistically represent the physics that would apply to the 3D scene if it existed in the real world.
The research confronts the fact that current augmented and virtual reality headsets only show 2D images to each of the viewer’s eyes, rather than 3D or holographic images. “They are not perceptually realistic,” said Gordon Wetzstein, associate professor of electrical engineering and leader of the Stanford Computational Imaging Lab.

Image quality for existing holographic displays has been limited. Creating a holographic display on par with an LCD has been a challenge, Wetzstein said: it is difficult to control the shape of light waves at the resolution of a hologram. The gap between what happens in a simulation and what the same scene looks like in a real optical setup has also hindered advances.

Scientists have tried to create algorithms to address both problems. Wetzstein and his colleagues previously developed algorithms using neural networks — an approach called neural holography.

“Artificial intelligence has revolutionized pretty much all aspects of engineering and beyond,” Wetzstein said. “But in this specific area of holographic displays or computer-generated holography, people have only just started to explore AI techniques.”

In the current work, postdoctoral research fellow Yifan Peng, co-lead author of the research papers, helped design an optical engine to go into the holographic displays. The team’s neural holographic display involved training a neural network to mimic the real-world physics of what was happening in the display, and it was able to generate images in real time. The team then paired it with an AI-inspired algorithm to provide an improved system for holographic displays that use partially coherent light sources: LEDs and SLEDs (superluminescent LEDs). These sources are favored for their cost, size, and energy requirements, and they also have the potential to avoid the speckled appearance of images produced by systems that rely on coherent light sources such as lasers.
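
The underlying idea, optimizing a hologram against a model of the display's physics, can be sketched as follows: propagate a candidate phase pattern through a free-space model and refine it by gradient descent until the simulated image matches a target. Everything below (the idealized angular-spectrum propagator, its parameters, the random target) is a simplified stand-in; the Stanford approach replaces or augments such a model with a neural network calibrated against the physical display.

```python
import math

import torch

def asm_propagate(phase, distance=0.1, wavelength=520e-9, pitch=8e-6):
    """Angular-spectrum propagation of a unit-amplitude, phase-only field
    (square grid). An idealized model; neural holography learns corrections
    so that the simulation matches the physical display."""
    field = torch.exp(1j * phase)
    n = phase.shape[-1]
    fx = torch.fft.fftfreq(n, d=pitch)
    f2 = fx[None, :] ** 2 + fx[:, None] ** 2
    kz = torch.sqrt(torch.clamp(1.0 / wavelength ** 2 - f2, min=0.0))
    H = torch.exp(2j * math.pi * distance * kz)        # transfer function
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

target = torch.rand(256, 256)                          # stand-in target intensity
phase = torch.zeros(256, 256, requires_grad=True)
opt = torch.optim.Adam([phase], lr=0.05)
for _ in range(200):                                   # iterative hologram optimization
    intensity = asm_propagate(phase).abs() ** 2
    loss = torch.nn.functional.mse_loss(intensity, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```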

However, the same characteristics that help partially coherent light sources avoid speckling also tend to produce blurred images that lack contrast. By building an algorithm specific to the physics of partially coherent light sources, the researchers produced the first high-quality, speckle-free holographic 2D and 3D images using LEDs and SLEDs.
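
To first approximation, modeling that physics means treating the broadband source as many mutually incoherent wavelengths: each is propagated coherently and their intensities, not their fields, are summed, which is exactly what washes out speckle and, untreated, contrast as well. A sketch reusing the hypothetical asm_propagate from above, with an assumed ~30 nm Gaussian spectrum:

```python
import math

import torch

# Assumes asm_propagate from the sketch above. A partially coherent source
# (LED/SLED) is approximated by several mutually incoherent wavelengths.
wavelengths = torch.linspace(505e-9, 535e-9, steps=7)   # illustrative LED bandwidth
weights = torch.exp(-((wavelengths - 520e-9) / 10e-9) ** 2)
weights = weights / weights.sum()                        # normalized spectral weights

phase = torch.rand(256, 256) * 2 * math.pi               # some SLM phase pattern
intensity = sum(
    w.item() * asm_propagate(phase, wavelength=wl.item()).abs() ** 2
    for w, wl in zip(weights, wavelengths)
)  # incoherent sum over wavelengths: smooths speckle, lowers contrast
```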

The research was published in Science Advances (https://doi.org/10.1126/sciadv.abg5040).
References and Resources also include:

https://tlo.mit.edu/technologies/real-time-photorealistic-3d-holographic-display-using-deep-neural-rendering

https://www.photonics.com/Articles/Algorithm_Improves_Holographic_Displays/a67532

https://www.sciencedaily.com/releases/2021/03/210310121953.htm
