A head-up display, also known as a HUD, is any transparent display that presents data without requiring users to look away from their usual viewpoints. One example is a system mounted in a car's dash that projects information through the windshield onto the road ahead. The projected information can be anything from which radio station is playing to the current speed limit.
Traditionally these systems were used in aircraft to project information that would be seen on the instrument panel. The origin of the name stems from a pilot being able to view information with the head positioned “up” and looking forward, instead of angled down looking at lower instruments. All the information they need to complete a mission is displayed – and tailored to what they need to know at a certain time. Typically, this might be the altitude, a horizon line, or information about navigation or take-off and landing. A HUD also has the advantage that the pilot’s eyes do not need to refocus to view the outside after looking at the optically nearer instruments.
Although they were initially developed for military aviation, HUDs are now used in commercial aircraft, automobiles, and other professional applications. Most carmakers are looking at using head-up display technology to show speed, obstacle detection, mileage, navigation, incoming calls, fuel warnings, and other information in a more convenient way for drivers than traditional dashboard dials and gauges. Because the information is projected on a transparent screen, drivers do not have to look away from the road ahead; they can observe both the road and the data simultaneously. In this way, HUDs can help to reduce road crashes.
Head-up display (HUD) technology
The head-up display (HUD) is a multimedia system projecting real-time data that is critical for the driver in the context of the current driving conditions. This information can be displayed on the windshield, a combiner glass, or a projector screen to mitigate driver distraction and offer safety and convenience.
A head-mounted display (or helmet-mounted display, for aviation applications), both abbreviated HMD, is a display device, worn on the head or as part of a helmet, that has a small display optic in front of one (monocular HMD) or each eye (binocular HMD). A typical HMD has one or two small displays, with lenses and semi-transparent mirrors embedded in eyeglasses (also termed data glasses), a visor, or a helmet. The display units are miniaturised and may include cathode ray tubes (CRT), liquid crystal displays (LCD), liquid crystal on silicon (LCoS), or organic light-emitting diodes (OLED). Some vendors employ multiple micro-displays to increase total resolution and field of view.
HMDs differ in whether they can display only computer-generated imagery (CGI), only live imagery from the physical world, or a combination of the two. Most HMDs can display only a computer-generated image, sometimes referred to as a virtual image. Some HMDs allow CGI to be superimposed on a real-world view, which is sometimes referred to as augmented reality or mixed reality. Combining a real-world view with CGI can be done by projecting the CGI through a partially reflective mirror while viewing the real world directly, a method often called optical see-through. It can also be done electronically by accepting video from a camera and mixing it with CGI, a method often called video see-through.
The original head-up displays are being replaced with a newer technology: augmented reality (AR). These AR systems are the next generation of head-up displays, but are far more advanced. They can integrate with GPS systems, infrared cameras, LiDAR-based sensors, the Internet and mobile apps to turn the car's windshield into an on-board information screen. While the cameras and sensors detect and monitor the surroundings as well as driver and pedestrian movements, WiFi and GPS connectivity ensures that the car stays connected with other vehicles and road infrastructure.
The technology behind a HUD varies with the system. Some cars use transparent phosphors on the windshield that react when a laser shines on them: when the laser is off, you don't see anything, but when the light is on, the information appears on the glass. Others use a similar system but incorporate mirrors to project the images onto the windshield. Compact projectors with high-resolution displays are typically used for augmented reality glasses.
For AR to be successful and effective, advanced projection technology and displays are must-haves.
Today’s automotive HUDs have small displays with basic graphic functionality. The projected HUD graphics are typically located 2-3 m in front of the driver, which places the image near the car’s front bumper. This location is referred to as the virtual image distance (VID). A horizontal and vertical field of view (FOV) specified in degrees defines the display size. The eyebox of the HUD is the area in which the driver is able to view the entire display, and can be limited in today’s HUDs. The graphics are mostly static and do not interact with the real world as seen from the driver’s point of view.
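The relationship between these quantities is simple trigonometry: the horizontal FOV and the VID together fix the width of the virtual image. A minimal sketch, where the VID and FOV values are illustrative assumptions rather than figures from any particular product:

```python
import math

def virtual_image_width(vid_m: float, fov_deg: float) -> float:
    """Width (in metres) of the virtual image subtended by a
    horizontal FOV of fov_deg at a virtual image distance vid_m."""
    return 2.0 * vid_m * math.tan(math.radians(fov_deg) / 2.0)

# Illustrative numbers: ~10 degrees horizontal FOV at a 2.5 m VID,
# roughly in the range described for today's automotive HUDs.
width = virtual_image_width(2.5, 10.0)
print(f"Virtual image width: {width:.2f} m")  # about 0.44 m
```

The same formula shows why longer VIDs demand larger optics: doubling the VID at a fixed FOV doubles the virtual image width.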
Instead of showing secondary static information, AR HUDs can display graphics that interact with the driver’s FOV, overlaying critical information directly onto the real world. This, of course, requires the integration of a vast amount of real-time vehicle sensor data, which is no easy task.
Today, ADAS alerts are primarily indicated via a blinking symbol or an audible alarm. But an AR HUD can identify threats by directly marking them within the driver's FOV. AR graphics are overlaid onto real-world objects in such a way that the driver can immediately recognize the threat and quickly take appropriate action, such as braking for a road obstacle. Presenting ADAS alerts in this manner could significantly increase driver situational awareness, especially when driving at night or in low visibility conditions.
One of the central requirements for an AR HUD is the ability to project images at least 7 m in front of the driver, with 10 to 20 m preferable. Projecting images at this distance creates the illusion that the images are fused with the real world; the images look like a natural extension of the object being highlighted. Creating images that fuse with the real world is only one advantage of a longer VID. The other advantage is the reduction in eye accommodation time, which becomes more significant with age. When displaying ADAS information on an AR HUD with a long VID, the driver can more quickly react to the threat and take the appropriate action.
Since an AR-HMD must show the real-world scene as well as virtual information, it requires an optical combiner to merge the two. The beam splitter (BS) is widely used in AR applications. An AR-HMD using a BS as the optical combiner can provide a large FOV, but the form factor of the system is bulky due to the BS. To make a compact system with a BS, there are several approaches, such as a convex half mirror and free-form optics.
Early head-up displays used cathode-ray tubes to generate images. Subsequent innovations such as night-vision systems, computer-generated holographic technology and optical waveguides gave pilots greater and clearer information. Recently, researchers have developed a heads-up display that uses holographic technology to make the display more visible for pilots and free up space in the cockpit.
HUDs are split into four generations reflecting the technology used to generate the images.
First Generation—Use a CRT to generate an image on a phosphor screen, with the disadvantage that the phosphor screen coating degrades over time.
Second Generation—Use a solid-state light source, for example an LED, which is modulated by an LCD screen to display an image. These systems do not fade or require the high voltages of first-generation systems, and are used on commercial aircraft.
Third Generation—Use optical waveguides to produce images directly in the combiner rather than use a projection system.
Fourth Generation—Use a scanning laser to display images and even video imagery on a clear transparent medium.
Recent advances in display technology have improved the design of HUD systems, with many now utilizing liquid crystal display (LCD) and light-emitting diode (LED) technology, which convey brighter images and are less expensive to manufacture. The HUD technology currently dominating the market is the LCD-based virtual-image type, which has not been widely used in mainstream cars due to its high cost ($1000–1500) and bulky size (about 4 L).
Newer micro-display imaging technologies are being introduced, including liquid crystal display (LCD), liquid crystal on silicon (LCoS), digital micro-mirrors (DMD), and organic light-emitting diode (OLED).
Newer projection technologies, including micromirror devices based on microelectromechanical systems (MEMS), are coming onto the market, creating brighter images and more saturated colors. Microsystem technology and MEMS micromirrors have been intensively developed over the past two decades and have found various applications due to their high integration, small size, and suitability for low-cost batch production.
The micromirror automotive HUD reduces the cost and size of a conventional design by more than 10 times. Its disadvantage is that the distance between the image and the driver is shorter than in a conventional HUD, so the driver needs to adjust focus slightly from the road to see the HUD image. This should be acceptable considering the large benefits in size and cost reduction.
One key challenge in AR HUD design is processing and displaying graphics based on the vehicle's sensor data, commonly referred to as sensor fusion. Integrating real-time vehicle sensor data with the HUD HMI software to accurately overlay symbols on a rapidly changing environment presents a significant design challenge.
Another design challenge found in AR HUDs is managing solar load, or solar irradiance. AR HUDs have larger FOVs and associated “openings” that let in more solar irradiance than traditional HUDs. This, coupled with the longer VIDs and associated higher magnification, creates significant thermal design challenges. The solar energy concentrates onto a very small unit area, significantly increasing the solar load on that area. Depending on the surface's absorption characteristics, the temperature can rise quite high, resulting in thermal damage.
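The concentration effect can be sketched with a back-of-the-envelope calculation; the aperture and spot sizes below are illustrative assumptions, not measured values for any real HUD:

```python
def concentrated_irradiance(solar_irradiance_w_m2: float,
                            aperture_area_cm2: float,
                            focal_spot_area_cm2: float) -> float:
    """Irradiance at the focal spot when sunlight entering the HUD
    aperture is concentrated onto a smaller area (losses ignored)."""
    concentration = aperture_area_cm2 / focal_spot_area_cm2
    return solar_irradiance_w_m2 * concentration

# Illustrative assumptions: ~1000 W/m^2 direct sunlight, a 100 cm^2
# HUD opening concentrated onto a 2 cm^2 spot inside the optics,
# i.e. a 50x concentration factor.
print(concentrated_irradiance(1000.0, 100.0, 2.0))  # 50000.0 W/m^2
```

Even this idealized estimate shows why absorptive surfaces near the focal region need careful thermal design.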
According to Saumya Sharma, IEEE member and advisory engineer at IBM Research, garnering greater adoption of AR and MR systems will require improving the realism of optics and their displays. “Not only are they benefitting [from nanotechnology], but AR and MR would be impossible without device scaling and manufacturing capabilities for shrinking processors and displays.” To Sharma, nanotechnology has a central role in this task: “The science to accurately pattern surfaces and use new materials for display technologies in a repeatable way that is robust for manufacturing will require nanotechnology.”
When it comes to AR HUDs, latency is a major safety concern. On the road, the vehicle's sensors must recognize an object, send the signal to the computer, compute the outcome, and then alert the driver, all within milliseconds. The amount of electronics and the speed of communication in a car are expected to grow, and high-speed Ethernet connections are increasingly used for internal communication to achieve these data transfer speeds.
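The safety impact of latency is easy to quantify: at speed, every millisecond of pipeline delay translates into distance the vehicle covers before the overlay updates. A small sketch with illustrative numbers:

```python
def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Distance (in metres) a vehicle covers during an end-to-end
    sensor-to-display latency."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

# Illustrative: at 120 km/h, a 50 ms sensing-to-display pipeline
# means the highlighted object has moved relative to the car
# before the AR overlay is redrawn.
print(f"{distance_during_latency(120.0, 50.0):.2f} m")  # 1.67 m
```

A misregistration of over a metre would make an AR threat marker point at the wrong spot on the road, which is why the whole chain must complete in milliseconds.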
Pico-projector and augmented reality screens
Pico-projectors equipped with MEMS scanners are promising candidates as the core of next-generation stereoscopic display systems capable of augmented reality. High-performance MEMS scanners combined with red, green and blue lasers form the basis of a color laser pico-projector. The intensity levels of the lasers are adjusted individually for every pixel to be displayed, and the combined laser beam is scanned across the field, writing the 2D image pixel by pixel. Although only a single pixel is displayed at any given time, persistence of vision allows the viewer to see a flicker-free image if the refresh rate is 60 Hz or above.
As no imaging optics (i.e., lenses) are used in the projector, the projected image is in focus at all distances. Thus, the image can be projected onto any surface at any distance without the need to adjust focus. The use of RGB lasers provides a wide color gamut that enables projecting images with a greater range of colors. The MEMS scanner and the lasers can be packed into a very small volume, making it possible for pico-projectors to be fitted into small devices, such as cell phones. As pico-projectors are hand-held consumer devices, the laser power is restricted to meet laser safety regulations. The low lumen output of the projector (typically less than 20 lumens) limits its use in bright conditions and the size of the projected image.
Performance of MEMS laser scanners for displays has improved greatly over the last decade thanks to the excellent mechanical and optical properties offered by silicon. They have been used in various display and imaging products. The performance of high resolution and high frequency MEMS laser scanners is close to meeting the demands of full HD displays (~120 million pixels per second).
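That pixel-rate figure follows directly from the frame size and refresh rate, since a scanning projector writes one pixel at a time. Checking the arithmetic for a full-HD frame at the 60 Hz flicker-free threshold mentioned above (blanking intervals ignored for simplicity):

```python
def pixel_rate(h_pixels: int, v_pixels: int, refresh_hz: int) -> int:
    """Pixels per second a beam-scanning projector must write to
    refresh the full frame at the given rate (blanking ignored)."""
    return h_pixels * v_pixels * refresh_hz

# Full HD (1920x1080) at 60 Hz:
rate = pixel_rate(1920, 1080, 60)
print(f"{rate / 1e6:.1f} Mpixels/s")  # 124.4 Mpixels/s
```

The result, roughly 124 million pixels per second, is consistent with the ~120 million pixels per second demand quoted for full HD displays.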
Optical waveguide technology replaces the bulky lenses of conventional head-up displays with microscopic light-manipulating materials, encapsulated in between two flat glass surfaces. This allows engineers to build an extremely compact design, while pilots benefit from clear and enlarged symbols and information.
Holography improves heads-up displays for aircraft pilots
In the Optical Society journal Applied Optics, scientists from the University of Arizona demonstrate a functional prototype heads-up display that uses holographic optical elements to achieve a 3D eye box substantially larger than what is available without the holographic element. The researchers say that their approach could be turned into a commercial product in as little as a few years and might also be used to increase the size of the displayed area.
“Increasing the size of either the eye box or the displayed image in a traditional heads-up display requires increasing the size of the projection optics, relay lenses and all the associated optics, which takes up too much space in the dashboard,” said first author Colton Bigler. “Instead of relying on conventional optics, we use holography to create a thin optical element that can ultimately be applied onto a windshield directly.”
In the new heads-up display, holographic optical elements redirect light from a small image into a piece of glass, where it is confined until it reaches another holographic optical element that extracts the light. The extraction hologram then presents a viewable image with a larger eye box size than the original image. “The only limitation is the size of the glass displaying the image,” said research team leader Pierre-Alexandre Blanche.
The same laser light interactions used to create the holograms that protect credit cards from forgery can also be used to fabricate optical elements such as lenses and filters in light-sensitive materials. These holographic elements are not only smaller than traditional optical components but can be mass produced because they are easily fabricated.
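The confinement of light inside the glass relies on total internal reflection: rays striking the glass/air boundary at more than the critical angle cannot escape until an extraction element redirects them. A minimal illustration using Snell's law; the refractive index below is a typical value for crown glass, not a figure from the paper:

```python
import math

def critical_angle_deg(n_glass: float, n_outside: float = 1.0) -> float:
    """Critical angle for total internal reflection at a glass/air
    boundary, from Snell's law: sin(theta_c) = n_outside / n_glass."""
    return math.degrees(math.asin(n_outside / n_glass))

# Typical crown glass, n ~ 1.52: rays hitting the boundary at more
# than ~41 degrees from the surface normal stay trapped in the guide.
print(f"{critical_angle_deg(1.52):.1f} degrees")  # 41.1 degrees
```

This is the same mechanism that lets both waveguide HUDs and fiber optics carry light with very low loss.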
While the researchers demonstrated the technology using just one colour, they say it could be expanded to create full-colour heads-up displays. They also hope to use holographic technology to increase the size, or field of view, of the display. The researchers are working with multinational conglomerate Honeywell to develop the display for aircraft.
VitreaLab: Quantum technology for integrated optics solutions
VitreaLab is an Austrian company using quantum technology to develop integrated optics solutions and next-generation display products. VitreaLab's technology is based on the ultra-fine control of light within a thin piece of glass. Light from an RGB laser is emitted through the side of a glass chip containing waveguides manufactured by femtosecond-laser micromachining (so-called direct laser writing) of glass. Combinations of various waveguide structures can split, combine, steer and spread the light inside to produce millions of beams that can be used to create a variety of novel displays, explained Jose Pozo, EPIC's CTO.
This approach has several advantages over competing technologies, such as wedges or slab waveguides. First, it enables true 3D and other geometries inconceivable with standard fabrication systems. Second, waveguides can be produced in any type of transparent material or crystal. Third, it allows the low-loss guiding of high-power laser light from ultraviolet to infrared. Fourth, it allows the manufacture of different kinds of microstructures without modifying the fabrication equipment, a feature crucial for rapid prototyping and low-cost mass-manufacturing.
With these features, VitreaLab's innovative laser-lit chips could unlock the world's first full holographic display and enable standard LCD technology to achieve up to five times higher energy efficiency, higher contrast ratios and better colour rendering. This would apply to all types of displays, particularly mobile devices: since the screen is the largest drain on the battery, VitreaLab's solution could nearly double battery lifetime.
One of VitreaLab's main challenges is to ensure their designs are compatible with low-cost mass manufacturing, so they have to be stringent in the design of their chips. A positive factor here is the downward trend in the price of femtosecond lasers, which has fallen considerably in the last 10 years and is likely to decrease further in the near future. This will enable the company to reduce the cost of laser writing, particularly for smartphone displays.
HUDs have allowed fighter and bomber pilots to keep their attention on the actual horizon and targets rather than the gauges or handheld maps inside the cockpit. The Navy, Air Force, and Marine Corps have widely adopted the Joint Helmet Mounted Cueing System (JHMCS) as their HMD solution. The system uses a visor attached to the crown of the pilot's helmet that projects critical flight data, weapons cueing, and sensor symbology, as well as mission information generated from the aircraft's mission sub-systems and data-links, in front of the pilot's right eye. The projection is reflected, like a heads-up display, onto a piece of sapphire glass that is specially tailored to the contours of each pilot's facial structure. The unit isn't cheap, with each one costing hundreds of thousands of dollars.
Every JHMCS-capable cockpit is magnetically mapped before the sensor tracking gear is installed, with magnetics being the system's primary mode of spatial tracking. JHMCS, and the modern fighter-aircraft HMD concept in general, serves a number of functions. These include elevating the pilot's overall situational awareness, enhancing their ability to keep their eyes “out of the cockpit,” and, most importantly, giving the pilot the ability to target weapons and sensors simply by looking at an object or locale.
HUDs also enable military augmented reality systems, which generate a robust, multi-faceted picture of the operational environment, including the location, nature, and activity of both threats and allied forces, essential for mission success. Augmented reality technology is making this kind of rich, real-time situational awareness increasingly available for aircraft, submarines, tanks and other vehicle-assigned forces, along with a capacity to deploy precision armaments more safely, quickly and effectively. An augmented reality system consists of a computer, a tracking system, and a see-through head-mounted display. The system tracks the position and orientation of the user's head and superimposes graphics and annotations that are aligned with real objects in the user's field of view.
The Navy's prototype Divers Augmented Visual Display (DAVD) is a high-resolution, see-through head-up display (HUD) embedded directly inside a diving helmet. The system gives divers a real-time visual display of everything from sonar images showing their location to text messages, diagrams, photographs and even augmented reality videos. Having real-time operational data enables them to be more effective and safe in their missions, providing expanded situational awareness and increased accuracy in navigating to a target such as a ship, downed aircraft, or other object of interest.
Naval Sea Systems Command (00C3) is in the process of developing enhanced sensors — such as miniaturized high resolution sonar and enhanced underwater video systems — to enable divers to ‘see’ in higher resolution up close, even when water visibility is near zero. These enhanced underwater vision systems would be fed directly into the DAVD HUD. The DAVD HUD system can be used for various diving missions, including ship husbandry, underwater construction, and salvage operations. The same system can eventually be used by first responders and the commercial diving community.
According to Verified Market Research, the Global Head Up Display Market was valued at USD 1.26 Billion in 2020 and is projected to reach USD 8.03 Billion by 2028, growing at a CAGR of 26.05% from 2021 to 2028.
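The quoted growth rate can be sanity-checked against the start and end values, treating 2020 to 2028 as eight compounding years:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate, as a percentage."""
    return ((end_value / start_value) ** (1.0 / years) - 1.0) * 100.0

# Quoted figures: USD 1.26 B in 2020 growing to USD 8.03 B in 2028.
print(f"{cagr(1.26, 8.03, 8):.2f}%")  # 26.05%
```

The computed value matches the reported 26.05% CAGR, so the projection is internally consistent.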
The foremost growth driver for the head-up display market is the rise in demand for connected vehicles. Connected cars offer features such as audio and visual entertainment from infotainment units and improve the driving experience with convenience and safety features such as navigation, real-time traffic, and parking space updates. A Bluetooth connection to a HUD is one example of a connected feature, letting drivers see incoming calls, messages, and music selection with a minimal amount of distraction. Advanced HUD systems also provide safety features, such as operating with an infrared camera to enable visibility in fog and projecting lane lines onto the highway. Such features, which help a driver navigate an automobile seamlessly, are the prime reason for the growth in demand for connected vehicles.
Growth in worldwide car registrations and rising awareness of road safety have also been significant drivers of the market, and of demand for head-up displays globally. This can be attributed largely to the rising worldwide population and increasing disposable income across many economies. As of 2018, there were 1.4 billion cars on the road worldwide, to which 2019 added 63.73 million passenger cars. Such a mammoth number of cars increases traffic congestion and, consequently, the risk of road accidents, which further drives the demand for head-up displays.
There are various types of head-up displays. The most common HUD has an image generator mounted on the dashboard and a specially coated windshield to reflect the images. Recently, various luxury car manufacturers have started to incorporate such displays in their premium models, and manufacturers are now focusing on mid-segment cars. The HUD market has large scope for research and development: during the forecast period, new technologies such as voice-controlled HUDs, 3D HUDs and laser-based displays are expected to attract customers. These are the major factors that will fuel this market during the forecast period.
In 2021, Volkswagen launched an augmented reality head-up display, introducing the technology in the compact segment.
The major players in the market are Panasonic Corporation (Japan), Nippon Seiki Co., Ltd. (Japan), Elbit Systems Ltd. (Israel), Texas Instruments Incorporated (U.S.), Denso Corporation (Japan), BAE Systems (U.K.), Visteon Corporation (U.S.), Continental AG (Germany), Esterline Technologies (U.S.), Pioneer Corporation (Japan) among others.