We see things because our eyes are sophisticated light detectors: they constantly capture the light rays bouncing off nearby objects so our brain can construct an ever-changing impression of the world around us. When we look at a three-dimensional object such as an apple, light reflects off its surface into our two eyes, and the brain merges the two pictures into a single stereoscopic (three-dimensional) image. If you move your head slightly, the rays of light reflected off the apple travel along slightly different paths to reach your eyes, and parts of the apple may now look lighter, darker, or a different color.
Photography, as this became known, has revolutionized the way people see and engage with the world, but no matter how realistic or artistic a photograph appears, there is no mistaking it for the real thing. A typical lens-based photograph encodes only the brightness of each light wave: a photo can faithfully reproduce a scene's colors, but it ultimately yields a flat image. All the light traveling from the apple arrives from a single direction and enters a single lens before it hits the light-sensitive image sensor (the CCD or CMOS chip in a digital camera), so the camera can record only a two-dimensional pattern of light, dark, and color. We also look at a photo and instantly see that the image is dead history: the light that captured the objects in a photograph vanished long ago and can never be recaptured. Holograms, by contrast, are a bit like photographs that never die.
A hologram, on the other hand, looks real and three-dimensional and shifts as you look around it, just like a real object. Holograms are sort of "photographic ghosts": they look like three-dimensional photos that have somehow been trapped inside glass, plastic, or metal. When you tilt a credit-card hologram, you see an image of something like a bird moving "inside" the card. The word hologram combines two Greek roots, "holos" meaning "whole" and "gramma" meaning "message": a whole message, or in other words, a complete picture.
That happens because of the unique way in which holograms are made. Photography measures only how much light of each color hits the film. But light is also a wave, and a wave is characterized by its phase as well as its amplitude. A hologram encodes both the brightness and the phase of each light wave. Phase specifies the position of a point within the wave cycle and carries depth information, so recording the phase of light scattered by an object makes it possible to retrieve the object's full 3D shape, which a simple photograph cannot capture. That combination delivers a truer depiction of a scene's parallax and depth.
First developed in the mid-1900s, early holograms were recorded optically: you make one by reflecting a laser beam off the object you want to capture. This requires splitting the laser beam, with one half used to illuminate the subject and the other half serving as a reference for the phase of the light waves. In practice, the beam is split by shining it through a half-mirror (a piece of glass coated with a thin layer of silver so that half the laser light is reflected and half passes through, sometimes called a semi-silvered mirror).
One half of the beam, called the object beam, bounces off a mirror, hits the object, and reflects onto the photographic plate inside which the hologram will be created. The other half, called the reference beam, bounces off another mirror and hits the same photographic plate directly. This reference beam is what gives a hologram its unique sense of depth. The hologram forms where the two beams meet at the plate.
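The recording step can be sketched numerically. In the 1-D NumPy toy below (the wavelength, beam angles, and amplitudes are illustrative assumptions, not values from any real setup), the plate stores only intensity, yet the interference cross term folds the object wave's phase into the fringe pattern:

```python
import numpy as np

# 1-D sketch of hologram recording: the plate records only intensity,
# but interference with the reference beam encodes the object phase in it.
x = np.linspace(0, 1e-3, 2048)                      # 1 mm strip of the plate
lam = 633e-9                                        # He-Ne wavelength (assumed)
k = 2 * np.pi / lam

ref = np.exp(1j * k * x * np.sin(np.radians(5)))    # reference beam, 5 deg tilt
obj = 0.5 * np.exp(1j * (k * x * np.sin(np.radians(-3)) + 1.0))  # toy object wave

intensity = np.abs(ref + obj) ** 2                  # what the plate actually stores
# Expanding |R + O|^2 = |R|^2 + |O|^2 + 2*Re(R* O): the cross term
# carries the object wave's phase as a fringe pattern.
cross = 2 * np.real(np.conj(ref) * obj)
assert np.allclose(intensity, np.abs(ref) ** 2 + np.abs(obj) ** 2 + cross)
```

The fringes swing between bright and dark as the relative phase of the two beams changes along the plate, which is exactly the depth-carrying information a plain photograph throws away.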
With the invention of intense coherent light sources (lasers) and their most recent technological advancements, optical holography has become a popular technique for three-dimensional (3D) imaging of macroscopic objects, security applications, and microscopic imaging. Thanks to its noninvasive and label-free nature, holography has also been applied to biological imaging, air and water quality monitoring, and quantitative surface characterization.
Holography is also used to detect stress in materials. A stressed material deforms, sometimes so minutely that the change is invisible to the eye. A hologram can amplify this change, since light reflected off the deformed material now travels at a slightly different angle than it did initially; comparing before-and-after holograms reveals where the stress is greatest. In Europe, prepaid telephone cards use holograms to record the amount of remaining credit. Fighter pilots use holographic head-up displays of their instruments so they can keep looking straight ahead. Museums keep archival records in holograms.
Holographic Technology
Holography is a technique used to record a wavefront diffracted from an object. Both amplitude and phase information of an object wave are recorded by utilizing the interference of light. A medium containing the information is called a ‘hologram’. A three-dimensional (3D) image can be reconstructed from a hologram by utilizing the theory of diffraction of light.
A hologram is a recording in a two- or three-dimensional medium of the interference pattern formed when a point source of light (the reference beam) of fixed wavelength encounters light of the same fixed wavelength arriving from an object (the object beam). When the hologram is illuminated by the reference beam alone, the diffraction pattern recreates the wave fronts of light from the original object. Thus, the viewer sees an image indistinguishable from the original object.
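This recording-and-playback relationship can be checked numerically. The sketch below (wavelength and beam angles are illustrative assumptions) records a 1-D hologram as an intensity pattern and then multiplies it by the reference beam: because the reference has unit amplitude, one of the resulting diffraction terms is exactly the original object wave, alongside the zero-order and twin-image terms.

```python
import numpy as np

# Illuminating the recorded interference pattern with the reference beam
# alone regenerates the object wave (plus zero-order and twin-image terms).
x = np.linspace(0, 1e-3, 2048)
k = 2 * np.pi / 633e-9                                   # assumed wavelength
ref = np.exp(1j * k * x * np.sin(np.radians(5)))         # reference beam
obj = 0.5 * np.exp(1j * (k * x * np.sin(np.radians(-3)) + 1.0))

hologram = np.abs(ref + obj) ** 2                        # recorded intensity
reconstructed = hologram * ref                           # re-illumination

# Expand (|R|^2 + |O|^2 + R O* + R* O) * R with |R| = 1:
# the |R|^2 O term is exactly the original object wave.
terms = (np.abs(ref) ** 2 + np.abs(obj) ** 2) * ref + ref ** 2 * np.conj(obj) + obj
assert np.allclose(reconstructed, terms)
```

In a physical setup the unwanted terms diffract in different directions, which is why the viewer sees the object wave cleanly.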
There are many types of holograms, and there are varying ways of classifying them. For our purpose, we can divide them into two types: reflection holograms and transmission holograms.
The reflection hologram
The reflection hologram, in which a truly three-dimensional image is seen near its surface, is the most common type shown in galleries. The hologram is illuminated by a “spot” of white incandescent light, held at a specific angle and distance and located on the viewer’s side of the hologram. Thus, the image consists of light reflected by the hologram. Recently, these holograms have been made and displayed in color—their images optically indistinguishable from the original objects. If a mirror is the object, the holographic image of the mirror reflects white light; if a diamond is the object, the holographic image of the diamond is seen to “sparkle.”
Transmission holograms
The typical transmission hologram is viewed with laser light, usually of the same type used to make the recording. This light is directed from behind the hologram and the image is transmitted to the observer’s side. The virtual image can be very sharp and deep. For example, through a small hologram, a full-size room with people in it can be seen as if the hologram were a window. If this hologram is broken into small pieces (to be less wasteful, the hologram can be covered by a piece of paper with a hole in it), one can still see the entire scene through each piece. Depending on the location of the piece (hole), a different perspective is observed. Furthermore, if an undiverged laser beam is directed backward (relative to the direction of the reference beam) through the hologram, a real image can be projected onto a screen located at the original position of the object.
A hologram is essentially a 2D window looking onto a 3D scene: the pixels of the hologram scatter the light waves falling onto them, making those waves interact in ways that generate an illusion of depth. Because holograms create 3D images by scattering light from a 2D surface, however, a viewer typically has to look directly at that surface to see the effect. Producing 3D images that people can see from any angle, even when someone walks all the way around the projection, is much trickier. "To see the light, it needs to scatter off of something and enter your eye," Daniel Smalley of Brigham Young University told NBC News MACH in an email. "Getting that scattering to happen in thin air is difficult."
Finally, however, it looks like we are approaching the day when that dream of real holograms may become a reality. Scientists around the world have devised new and inventive ways to use lasers, modern digital processors, and motion-sensing technologies to create several different types of holograms that could change the way we consume and interact with media in the near future. In 2018, researchers at Brigham Young University (BYU), in Provo, Utah, reported turning make-believe into reality: as part of an initiative nicknamed the "Princess Leia Project," they developed a way to project 3D images that appear to float in thin air. Using lasers to trap and manipulate tiny particles in free space, the engineers created so-called volumetric displays of a butterfly, a prism, and the BYU logo. The projections are tiny for now, but study leader Smalley, an assistant professor of electrical and computer engineering, said the technology could one day be useful for medical imaging, for instance to generate 3D images that serve as road maps guiding surgeons through challenging procedures. True 3D holographic projection, however, remained unrealized until a team of scientists from Bilkent University, Turkey, demonstrated the first realistic 3D holograms that can be viewed from any angle.
Holography has been utilized as both a natural 3D display and a lensless 3D image recording technique. One of the most remarkable features of holography is that 3D motion-picture recording of any ultrafast physical phenomenon can be achieved with light-in-flight recording, even for light pulse propagation in 3D space.
Digital holography
A significant step forward from analog holography is to record the interference pattern digitally with an electronic sensor and to reconstruct the object numerically, including its amplitude and phase information, with a computer. Digital holography (DH) is a technique in which a digital hologram containing an object wavefront is recorded, and both 3D and quantitative phase images of the object are reconstructed using a computer.
Digital holography refers to the acquisition and processing of holograms with a digital sensor array, typically a CCD camera or a similar device. Image rendering, or reconstruction of the object data, is performed numerically from the digitized interferograms. Digital holography offers a means of measuring optical phase data and typically delivers three-dimensional surface or optical-thickness images. Several recording and processing schemes have been developed to assess optical wave characteristics such as amplitude, phase, and polarization state, which make digital holography a very powerful method for metrology applications.
In recent years, there have been rapid improvements in electronic devices such as image sensors, spatial light modulators (SLM) and computers. An SLM with high pixel density enables natural, colorful and high-quality lensless 3D motion-picture image formation on a holographic display. A lensless image sensor with high pixel density and a large number of pixels can capture an image of fine interference fringes digitally, and a computer with high computational performance can then reconstruct a holographic image numerically with high throughput from a digitally recorded hologram.
Numerical reconstruction in DH is commonly based on the Fresnel–Kirchhoff integral, which, however, cannot be directly implemented due to its complexity. Simplifying it results in several numerical algorithms, such as the Fresnel approach, paraxial transfer function approach (also called convolution method, or CONV), and nonparaxial transfer function approach (also called angular spectrum method, ASM for short). More recently, compressed sensing has also been studied for holographic reconstruction.
Many of these methods share the need for detailed knowledge of the experimental setup, such as the laser wavelength, the camera's pixel pitch, and the object distance. The object distance is normally estimated with autofocusing algorithms, many of which are computationally intensive and time-consuming. Additional steps, such as phase shifting and filtering in the frequency domain, are also needed to suppress the zero-order and twin images before Fresnel propagation or a Fourier transform is used to reconstruct the wavefront.
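As an illustration, the angular spectrum method mentioned above amounts to one FFT, a multiplication by a free-space transfer function, and an inverse FFT. A minimal NumPy sketch follows; the wavelength, pixel pitch, and propagation distance are assumed values, not tied to any particular experiment:

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z via the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)                # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)                          # free-space transfer function
    H[arg < 0] = 0.0                                 # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy check with assumed camera parameters: propagation of a Gaussian spot.
y = (np.arange(256) - 128.0) ** 2
field = np.exp(-(y[:, None] + y[None, :]) / 400.0).astype(complex)
out = angular_spectrum(field, wavelength=633e-9, pitch=5e-6, z=0.02)
# |H| = 1 for propagating waves, so energy is conserved.
assert np.isclose(np.sum(np.abs(out) ** 2), np.sum(np.abs(field) ** 2), rtol=1e-3)
```

Because the transfer function has unit magnitude for propagating frequencies, the method is energy-conserving, which the final check confirms; a real reconstruction would apply the same propagation to a recorded hologram rather than a synthetic Gaussian.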
This technique has potential applications in microscopy, quantitative phase imaging (QPI), particle and flow measurement in 3D space, 3D imaging of biological specimens, multiple 3D image encryption, 3D object recognition, 3D tomographic imaging of amplitude and phase distributions, 3D surface-shape measurement with nanometer accuracy in the depth direction, ultrafast 3D optical imaging with an ultrashort pulsed laser, and holographic 3D imaging with a single photodetector.
Physicists create Star Trek-style holograms
Now, a team at Bilkent University, Turkey, has devised a way to project holograms depicting complex 3-D images. Their method is highlighted in Nature Photonics. “We achieved this feat by going to the fundamentals of holography, creating hundreds of image slices, which can later be used to re-synthesize the original complex scene,” says Dr. Ghaith Makey, the first author of the paper.
“So far, it has not been possible to simultaneously project a fully 3-D object, with its back, middle and front parts. Our approach solves this issue with a conceptual change in the way we prepare the holograms. We exploit a simple connection between the equations that define light propagation, the same equations that were invented by Jean-Baptiste Joseph Fourier and Augustin-Jean Fresnel in the early days of the field,” says Prof. Onur Tokel, one of the lead authors of the paper.
However, in order to reach their goal, the researchers had to introduce another critical ingredient. The 3-D projection would suffer from interference between the constituent layers, which had to be efficiently suppressed. “A technological breakthrough can rarely be traced to a fundamental mathematical result,” says Prof. Fatih Ömer Ilday, the other lead author of the paper. “Realistic 3-D projections could not be formed before, mainly because it requires back-to-back projection of a very large number of 2-D images to look realistic, with potential crosstalk between images. We use a corollary of the celebrated central limit theorem and the law of large numbers to successfully eliminate this fundamental limitation.”
"Our holograms already surpass all previous digitally synthesized 3-D holograms in every quality metric, and our method is universally applicable to all types of holographic media," says Prof. Tokel. "The immediate applications may be in 3-D displays, medical visualization, air traffic control, but also in laser-material interactions and microscopy," says Prof. Serim Ilday of the Bilkent team. "The most important concept associated with holography has always been the third dimension. We believe future challenges will be exciting, considering the vision set by the holodeck, or the holovision of Isaac Asimov in the Foundation novels. Even Jules Verne touched upon this idea in Carpathian Castle, published in 1892. Clearly, the ensuing decades left us craving for more. We are now closer to the goal of realistic 3-D holograms."
3D Holographic Display Achieves Wide Viewing Angle, Large Images, reported in July 2022
A Beihang University research team created a holographic 3D display system that widens its viewing angle and enlarges image size through the simultaneous implementation of two different hologram generation methods. The system features a tunable liquid crystal grating with an adjustable period to widen the viewing angle, and it provides a secondary diffraction of the reconstructed image to increase the image size.
The images produced by holographic 3D displays circumvent the uncomfortable side effects of traditional 3D viewing systems and present images that are nearly the same as what humans see in their actual surroundings. In traditional holographic 3D displays, however, the pixel pitch and size of the spatial light modulator (SLM) limit the viewing angle and the size of the holographic image: the viewing angle of holographic reproduction based on a single SLM is usually less than 9°, and the image size is less than 2 cm.
The holographic 3D display system is composed of a laser, a beam expander, a beamsplitter, an SLM, a 4f system with two lenses, a filter, a polarized light valve, and a signal controller, in addition to the tunable liquid crystal grating. The response time of the grating is 29.2 ms, which meets the requirements for synchronous control.
To achieve a wide viewing angle, the researchers apply voltage to the liquid crystal grating, causing the liquid crystal molecules to assume a periodic order and the diffraction image to undergo a secondary diffraction. They generate M secondary diffraction images by tuning the period of the grating, and they adjust the state of the polarized light valve to display the secondary diffraction images with uniform intensity.
To enlarge the size of the display, the researchers generate a hologram of the 3D object and divide it into two subholograms of equal size. The first subhologram is loaded on the SLM before voltage is applied to the grating; the second is loaded while voltage is being applied, to generate the zero-order primary maximum and ±1-order secondary maxima on the spectral plane.
The signal controller developed for the system controls the switching speed of the holograms and the tuning of the liquid crystal grating, and adjustments to the polarized light valve ensure that only positive first-order diffracted light can pass through. When the switching time becomes fast enough, the reconstructed images of subhologram 1 and subhologram 2 are spliced seamlessly in space, creating a large holographic 3D display that exploits the visual persistence of the human eye.
In experiments, the holographic 3D display system demonstrated a viewing angle of 57.4°, about 7× that of a conventional system using a single SLM. When the team tested the system's ability to reproduce large holographic images, it could magnify the size of an image by 4.2×.
According to the researchers, the new holographic 3D display system has a simple structure and is easy to operate. It reconstructs the details of the recorded object completely and ensures that the intensity distribution is uniform. In addition to 3D holographic displays, the system can be used for augmented reality (AR) displays. The team expects the display system to have broad applicability, with applications including medical diagnostics, advertising, entertainment, and education.
The research was published in Light: Science & Applications (www.doi.org/10.1038/s41377-022-00880-y).
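As a rough illustration of how a tunable grating steers light to wider angles, the deflection of the ±1 diffraction orders follows the grating equation sin(θ) = mλ/d, so shrinking the period d widens θ. The numbers below are assumptions for illustration, not values from the Beihang paper:

```python
import math

# Grating equation sin(theta) = m * lam / d for the +1 order.
# Wavelength and grating periods are illustrative assumptions.
lam = 532e-9                      # green laser wavelength (assumed)

for d in (4.0e-6, 2.0e-6):        # two assumed grating periods
    theta1 = math.degrees(math.asin(lam / d))
    print(f"period {d*1e6:.1f} um -> +1-order deflection {theta1:.1f} deg")
```

Halving the period roughly doubles the extra deflection in this small-angle regime, which is the lever a tunable liquid crystal grating offers for enlarging the viewing zone.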
High-Fidelity Mobile Hologram
With the unprecedented rate of advances in high-resolution rendering, wearable displays, and wireless networks, mobile devices will one day be able to render media for 3D holographic displays. The hologram is a next-generation media technology that can present gestures and facial expressions by means of a holographic display, with content obtained through real-time capture, transmission, and 3D rendering techniques. Providing hologram display as part of a real-time service will demand extremely high data rates, hundreds of times greater than current 5G systems can deliver.
For example, a 19.1-gigapixel hologram requires about 1 terabit per second (Tbps). A hologram display in a mobile-device form factor (a one-micrometer pixel size on a 6.7-inch display, i.e., 11.1 gigapixels) requires at least 0.58 Tbps, and a human-sized hologram requires a significantly larger number of pixels, demanding several Tbps. The peak data rate of 5G is 20 Gbps, so 5G cannot possibly support the extremely large data volumes required for real-time hologram media. To reduce the magnitude of data communication required for hologram displays and realize them in the 6G era, AI can be leveraged for efficient compression, extraction, and rendering of hologram data. The market for hologram displays is expected to reach $7.6 billion by 2023.
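These figures can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes a 16:9 aspect ratio and infers the per-pixel bit rate from the quoted numbers; neither assumption comes from the source:

```python
import math

# Pixel count of a 6.7-inch display at 1-micrometre pixel pitch,
# assuming a 16:9 aspect ratio (an assumption, not from the source).
diag_m = 6.7 * 0.0254
w = diag_m * 16 / math.sqrt(16 ** 2 + 9 ** 2)
h = diag_m * 9 / math.sqrt(16 ** 2 + 9 ** 2)
pixels = (w / 1e-6) * (h / 1e-6)
print(f"pixels: {pixels / 1e9:.1f} Gpx")   # roughly 12 Gpx, same order as the quoted 11.1

# Per-pixel throughput implied by 0.58 Tbps for 11.1 Gpx:
bits_per_px_s = 0.58e12 / 11.1e9
print(f"~{bits_per_px_s:.0f} bit/s per pixel")
```

The implied rate of roughly 52 bit/s per pixel is consistent between the two quoted examples (1 Tbps / 19.1 Gpx gives about the same figure), so the source's numbers hang together.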
“All-in-One” Method Reconstructs Holograms Using Deep Neural Network, reported in Jan 2019
Digital holography is a widely used imaging technique that can record the entire wavefront information, including amplitude and phase, of a 3D object. With an interferometer and an image sensor, a 2D hologram can be acquired and stored in a computer.
After capturing a digital hologram, appropriate algorithms are used to reconstruct the object numerically. Conventional approaches require prior knowledge and cumbersome operations for an in-focus and successful reconstruction. For a 3D object, an all-in-focus image and a depth map are particularly desired for many applications, but conventional reconstruction methods tend to be computationally demanding.
Recently, researchers at the University of Hong Kong demonstrated an automated “all-in-one” method that can tackle holographic reconstruction problems through a deep neural network trained with appropriate data. After appropriate training, the network can holographically reconstruct the amplitude, quantitative phase, extended focused image, and depth map. The cumbersome operations involved in conventional reconstruction approaches are avoided and system parameters become unnecessary. The intensive computational demand is also significantly alleviated by total automation. Qualitative visualization and quantitative measurements confirm the superior performance of the learning-based method over conventional ones.
Through this data-driven approach, it is possible to reconstruct a noise-free image that does not require any prior knowledge and can handle diverse reconstruction modalities simultaneously. To advance this work, the researchers plan to apply this technique to high-speed and high-resolution temporal holographic reconstruction of 3D scenarios. They note that this method is universal to various digital holographic configurations and is potentially applicable to biological and industrial applications.
In recent years, deep learning has emerged as a rapidly developing technique that benefits various application areas such as image processing, computer vision, and natural language processing. This powerful tool has also been shown to be useful to holography.
VividQ, a UK-based deeptech startup with technology for rendering holograms on legacy screens, reported in July 2021
The startup is aiming its technology at automotive head-up displays (HUDs), head-mounted displays (HMDs), and smart glasses, with computer-generated holography that projects "actual 3D images with true depth of field, making displays more natural and immersive for users." It also says it has discovered a way to turn normal LCD screens into holographic displays.
"Scenes we know from films, from Iron Man to Star Trek, are becoming closer to reality than ever," said Darran Milne, co-founder and CEO of VividQ. "At VividQ, we are on a mission to bring holographic displays to the world for the first time. Our solutions help bring innovative display products to the automotive industry, improve AR experiences, and soon will change how we interact with personal devices, such as laptops and mobiles."
"When we say holograms, what we mean is a hologram is essentially an instruction set that tells light how to behave," Milne explained. "We compute that effect algorithmically and then present that to the eye, so it's indistinguishable from a real object. It's entirely natural as well. Your brain and your visual system are unable to distinguish it from something real because you're literally giving your eyes the same information that reality does, so there's no trickery in the normal sense."
Deep Learning Enables Real-Time 3D Holograms On a Smartphone, reported in March 2021
Holographic video displays create 3D images that people can view without feeling eye strain, unlike conventional 3D displays that produce the illusion of depth using 2D images. However, although companies such as Samsung have recently made strides toward developing hardware that can display holographic video, it remains a major challenge actually generating the holographic data for such devices to display.
Each hologram encodes an extraordinary amount of data in order to create the illusion of depth throughout an image, so generating holographic video has often required a supercomputer's worth of computing power. To bring holographic video to the masses, scientists have tried a number of strategies to cut down the amount of computation needed, for example replacing complex physics simulations with simple lookup tables, but these often come at the cost of image quality. Now researchers at MIT have developed a way to produce holograms nearly instantly: a deep-learning-based method so efficient it can generate holograms on a laptop in the blink of an eye. They detailed their findings, which were funded in part by Sony, in the journal Nature.
The images resulting from optical holograms were static, so they couldn't capture motion, and they were hard copies only, difficult to reproduce and share. Computer-generated holography sidesteps these challenges by simulating the optical setup, but the process can be a computational slog. "Because each point in the scene has a different depth, you can't apply the same operations for all of them," says lead author Liang Shi. "That increases the complexity significantly." Directing a clustered supercomputer to run these physics-based simulations could take seconds or minutes for a single holographic image, and existing algorithms don't model occlusion with photorealistic precision. So Shi's team took a different approach: letting the computer teach physics to itself.
Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram, Shi notes. Using lookup tables is like memorizing a set of frequently used chunks of hologram, but this sacrifices accuracy and still requires the combination step, he says. In a way, computer-generated holography is a bit like figuring out how to cut a cake, Shi says. Using physics simulations to calculate the appearance of each point in space is a time-consuming process that resembles using eight precise cuts to produce eight slices of cake. Using lookup tables for computer-generated holography is like marking the boundary of each slice before cutting. Although this saves a bit of time by eliminating the step of calculating where to cut, carrying out all eight cuts still takes up a lot of time.
In contrast, the new technique uses deep learning to essentially figure out how to cut a cake into eight slices using just three cuts, Shi says. The convolutional neural network—a system that roughly mimics how the human brain processes visual data—learns shortcuts to generate a complete hologram without needing to separately calculate how each chunk of it appears, “which will reduce total operations by orders of magnitude,” he says.
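For contrast, the physics baseline the network replaces can be sketched as a brute-force point-source sum: every scene point contributes a spherical wave at every hologram pixel, so the work grows as points × pixels. The toy NumPy version below uses assumed parameters and is a generic point-cloud CGH sketch, not MIT's actual method:

```python
import numpy as np

# Brute-force point-source CGH: O(points * pixels) spherical-wave sums.
# All parameters are illustrative assumptions.
N = 128                                     # hologram is N x N pixels
pitch = 8e-6                                # SLM pixel pitch (assumed)
lam = 532e-9                                # laser wavelength (assumed)
k = 2 * np.pi / lam
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Three toy scene points (x, y, depth), each at a different depth.
points = [(0.0, 0.0, 0.05), (2e-4, -1e-4, 0.06), (-1e-4, 1e-4, 0.07)]

field = np.zeros((N, N), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r         # spherical wave from each point

phase_hologram = np.angle(field)            # phase-only pattern for an SLM
```

Every added scene point repeats the full N×N wave calculation, which is exactly the per-point cost that a trained network amortizes into a fixed number of convolution passes.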
The researchers first built a custom database of 4,000 computer-generated images, which each included color and depth information for each pixel. This database also included a 3D hologram corresponding to each image. Using this data, the convolutional neural network learned how to calculate how best to generate holograms from the images. It could then produce new holograms from images with depth information, which is provided with typical computer-generated images and can be calculated from a multi-camera setup or from lidar sensors, both of which are standard on some new iPhones.
The new system requires less than 620 kilobytes of memory and can generate 60 color 3D holograms per second at a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The researchers ran it on an iPhone 11 Pro at 1.1 holograms per second and on a Google Edge TPU at 2 holograms per second, suggesting it could one day generate holograms in real time on future virtual-reality (VR) and augmented-reality (AR) mobile headsets.
Real-time 3D holography might also help enhance so-called volumetric 3D printing techniques, which create 3D objects by projecting images onto vats of liquid and can generate complex hollow structures. The scientists note their technique could also find use in optical and acoustic tweezers useful for manipulating matter on a microscopic level, as well as holographic microscopes that can analyze cells and conventional static holograms for use in art, security, data storage and other applications.
Future research might add eye-tracking technology to speed up the system by creating holograms that are high-resolution only where the eyes are looking, Shi says. Another direction is to generate holograms with a person’s visual acuity in mind, so users with eyeglasses don’t need special VR headsets matching their eye prescription, he adds.
References and Resources also include:
http://www.spie.org/news/hologram-reconstruction
https://phys.org/news/2019-03-physicists-star-trek-style-holograms.html
https://interestingengineering.com/10-best-real-world-applications-of-hologram-technology
https://spectrum.ieee.org/tech-talk/computing/software/realtime-hologram
https://www.photonics.com/Articles/3D_Holographic_Display_Achieves_Wide_Viewing/a68191