Display system technology improvements are vital to AR/VR headset adoption
Virtual reality (VR), augmented reality (AR), and mixed reality (MR) headsets and smart glasses have been the subject of much anticipation for the past decade, with promises of being the “next big thing.” Despite their potential, these devices have not seen widespread consumer adoption, in part due to the lack of universal use cases, as well as comfort issues (wearability, visual, and social).
For now, these headsets, goggles, and glasses have found success in enterprise, industrial, and defense markets, where they cater to specialized use cases and niche applications. Technology developments—many enabled by optics and photonics—may help bridge the gap.
Optics and photonics are at the core of all AR, VR, and MR systems, serving as building blocks for display, imaging, and sensor subsystems. The display subsystem assembly (DSA), imaging subsystem assembly (ISA), and sensor subsystem assembly (SSA) are intimately linked together and cross-calibrated to provide the best immersive experience to the user.
While early AR headsets used existing platform components, there is now a need to develop specific technological building blocks for each. Here, I will focus on the display subsystem.
The DSA usually comprises three parts: the display engine, where the image is generated; the optical combiner, which merges the displayed image with the see-through scene; and the see-through stack, which includes all additional optics, vision prescription lenses, and protection visors, as well as sensors (eye tracking, iris recognition, face tracking, gesture sensing) and dimming technologies (global or pixelated).
Conventional display systems for VR headsets use LCD or AMOLED panels, while AR headsets and smart glasses use microdisplay panels like LCOS, DLP, micro-OLED, and more recently, microLED. MEMS laser beam scanner (LBS) systems, initially developed for telecom applications, have found extensive use in smart glasses and MR headsets.
However, LBS display systems face challenges of their own, such as the number and size of driving chips, which can increase the final display engine's size and cost, especially as the scanning system grows more intricate. In addition, exit-pupil size and color uniformity can suffer when the engine is mated to a diffractive waveguide combiner, requiring complex compensation algorithms.
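One common family of compensation approaches pre-scales each color channel by the inverse of a calibrated brightness map measured through the combiner, so that the dimmest field point sets the ceiling and the rest of the field is attenuated to match. The sketch below is an illustrative assumption of that idea, not any vendor's actual algorithm; the map values and image sizes are made up.

```python
# Minimal sketch of one uniformity-compensation approach (illustrative
# assumption, not a specific vendor's algorithm): pre-scale each color
# channel by the inverse of a calibrated brightness map measured through
# the waveguide, then renormalize so no pixel is driven above full scale.
import numpy as np

def compensate(image, uniformity_map, floor=0.05):
    """image: HxWx3 floats in [0,1]; uniformity_map: HxWx3 relative gains."""
    gain = 1.0 / np.clip(uniformity_map, floor, None)
    gain /= gain.max(axis=(0, 1), keepdims=True)  # keep output within range
    return np.clip(image * gain, 0.0, 1.0)

# Example: green channel 20 percent dimmer on the right half of the field.
h, w = 4, 4
umap = np.ones((h, w, 3))
umap[:, w // 2:, 1] = 0.8
out = compensate(np.ones((h, w, 3)), umap)
# The bright left half of the green channel is dimmed to match the right.
```

The trade-off is visible in the example: flattening the field costs peak brightness, which is one reason such compensation interacts badly with already-inefficient combiners.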
For compact AR systems, microLED panels offer compelling advantages as they are emissive, requiring no illumination engine, and can deliver the brightest images (>1M nits at the panel). However, implementing a full RGB (red, green, blue) microLED display engine necessitates aligning three single-color panels onto a miniature X-cube, which presents significant challenges and requires further demonstration of high yields.
Polarized light output from display panels is desirable, especially when the combiner is more effective for one polarization than the other, as seen with holographic waveguides or surface relief grating (SRG) waveguides. Reflective waveguides, on the other hand, offer similar efficiency for both polarizations, making them better suited for non-polarized display engines.
Power consumption is a crucial concern for all-day-use smart glasses. For such glasses, where a fully immersive display like that of MR headsets may not be necessary, the average picture level (APL) in typical use cases can be reduced below 10 percent or even 5 percent (simple contextual display or ambient AR display). This is especially beneficial when using emissive displays like microLEDs and LBS (and, to a lesser extent, micro-OLED). MicroLED and LBS display systems typically require little power at low APL but consume much more power at high APL when compared to their nonemissive counterparts (LCOS and DLP). This defines their best fit for ambient AR and contextual display in smart glasses (<0.25 W for the DSA). For headsets with larger batteries capable of providing more power to the system (up to 6-8 W for the entire goggle system), the higher APL suited to MR use cases can be displayed, as seen in HoloLens and Magic Leap headsets.
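The crossover can be made concrete with a toy power model: an emissive panel's power scales roughly linearly with APL, while an LCOS/DLP engine keeps its illumination on regardless of image content. All coefficients below are illustrative assumptions, not measured values for any product.

```python
# Toy model of display-engine power versus average picture level (APL).
# All coefficients are illustrative assumptions, not measured values.

def emissive_power_mw(apl, full_on_mw=2000.0, idle_mw=20.0):
    """Emissive panel (microLED/LBS): power scales ~linearly with lit pixels."""
    return idle_mw + full_on_mw * apl

def nonemissive_power_mw(apl, illumination_mw=600.0, idle_mw=50.0):
    """LCOS/DLP engine: illumination stays on regardless of content."""
    return idle_mw + illumination_mw  # nearly flat in APL

for apl in (0.05, 0.10, 0.50, 1.00):
    e = emissive_power_mw(apl)
    n = nonemissive_power_mw(apl)
    print(f"APL {apl:4.0%}: emissive {e:6.0f} mW, nonemissive {n:6.0f} mW")
```

Under these assumed numbers the emissive engine wins decisively at 5-10 percent APL and loses at full-field content, which mirrors the ambient-AR-versus-MR split described above.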
Reducing the LED active area to pixel sizes and pitches similar to those of LCOS, DLP, or micro-OLED reduces efficiency because of edge recombination and other parasitic effects. Various techniques, such as pick and place, have been investigated but found nonscalable for AR displays. Many now regard monolithic RGB microLEDs as the ultimate architecture. However, red emission remains challenging in monolithic GaN/InGaN, even though conventional GaN yields efficient blue and green pixels and AlInGaP yields efficient red ones. Some companies have explored vertical nanowires to increase the effective pixel area, while others employ quantum-dot conversion for green and red while maintaining native blue emission from GaN. Monolithic RGB integration in a single display panel remains a holy grail for many microLED start-ups.
MicroLED panels are ideal for new thin “all-in-one” AR display systems, making them desirable image sources for new DSA architectures like those by LusoVU and NewSight Reality. These architectures combine separate single-color micropanels using nonuniform transmissive or reflective microlens arrays (MLAs) not only to generate the image, but also to synthetically build up a field of view (FOV), eye box, and color, and even achieve foveated and light-field displays. Such a display system can be dynamically tuned when linked to a pupil/gaze tracker to reduce overall power consumption by generating the best-fit eye box and resolution at refresh rates similar to the display's (pupil steering, foveated display, brightness modulation, color, FOV aspect ratio, etc.). Through such systems, the brightness at the eye can be made higher than the brightness at the display, making them suitable for micro-OLED panels as well.
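The brightness gain from pupil steering follows from a simple area argument: concentrating the same optical power into a smaller, gaze-tracked eye box raises the brightness at the eye by the ratio of eye-box areas. The eye-box dimensions below are assumptions chosen for illustration, and the model neglects steering losses.

```python
# Illustrative etendue argument for pupil steering: concentrating display
# light into a smaller, gaze-tracked eye box raises brightness at the eye.
# Eye-box dimensions are assumptions; steering losses are neglected.

def steering_gain(full_eyebox_mm2, steered_eyebox_mm2):
    """Brightness gain ~ ratio of eye-box areas, assuming lossless steering."""
    return full_eyebox_mm2 / steered_eyebox_mm2

full_box = 12.0 * 10.0   # static 12 x 10 mm eye box (assumed)
steered_box = 4.0 * 4.0  # steered 4 x 4 mm box around the pupil (assumed)
gain = steering_gain(full_box, steered_box)
print(f"brightness gain at the eye: {gain:.1f}x")  # → 7.5x
```

Even a modest gain of this kind is what lets a lower-luminance micro-OLED panel reach eye-side brightness levels otherwise reserved for microLEDs.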
In conventional DSA architectures, efficiency depends on both the display engine and the combiner, with waveguide efficiency often expressed in nits per lumen. Etendue matching is crucial to display-engine efficiency: emissive displays (micro-OLED or microLED) emit a Lambertian profile that is difficult to etendue-match, whereas nonemissive displays (LCOS and DLP) allow the illumination numerical aperture to be matched precisely to the exit pupil and FOV, yielding efficient display engines.
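The nits-per-lumen figure of merit makes the system budget a one-line calculation: brightness at the eye is the engine's output in lumens times the waveguide's nits-per-lumen efficiency. The efficiency and target values below are assumptions for illustration, not measurements of any particular waveguide.

```python
# Nits-per-lumen budget: brightness at the eye per lumen injected by the
# display engine. Efficiency and target values are illustrative assumptions.

def nits_at_eye(engine_lumens, waveguide_nits_per_lumen):
    """Brightness delivered to the eye for a given engine output."""
    return engine_lumens * waveguide_nits_per_lumen

def lumens_needed(target_nits, waveguide_nits_per_lumen):
    """Engine output required to hit a target eye-side brightness."""
    return target_nits / waveguide_nits_per_lumen

efficiency = 300.0  # nits per lumen (assumed waveguide efficiency)
print(nits_at_eye(1.0, efficiency))       # 1 lm engine → 300 nits at the eye
print(lumens_needed(3000.0, efficiency))  # 3,000-nit target → 10 lm engine
```

Run backwards, the same arithmetic shows why outdoor-bright targets quickly push engine requirements beyond what dimmer panel technologies can supply.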
The optical combiner in AR systems increasingly uses waveguides, paired with LCOS, DLP, or the anticipated microLED RGB microdisplay panels. Three waveguide platforms exist today: holographic, surface relief grating, and reflective architectures.
Waveguide combiners are preferred for smart glasses because their 2D pupil-expansion scheme provides a large eye box in a small form factor. However, their limited efficiency demands high-brightness display engines; micro-OLED panels are therefore better suited to visor-based and birdbath architectures, while microLED panels seem best suited to waveguide combiners.
Holographic waveguide combiners were first implemented in the 1990s, while surface relief grating waveguide combiners were explored for various applications before becoming common in smart glasses. Reflective waveguides can carry a larger FOV than diffractive or holographic waveguides, and modern reflective glass waveguides offer a strong alternative. Several companies are developing plastic reflective waveguide solutions as well.
The see-through stack in AR display systems adds essential functions such as focus location, prescription-lens integration, eye-tracking systems, binocular display-alignment sensors, and more. The waveguide, occupying the largest real estate in smart glasses, is used to implement many of these functions while preserving an unaltered see-through experience.
Advancements in waveguides, microLEDs/OLEDs, and laser beam scanning will drive the next generation of display systems, but realizing their full potential will require clever optical engineering teamed with materials science and packaging technologies. And while the consumer market for AR/VR/MR devices has been slow to mature, primarily due to the lack of “killer apps” to drive mainstream adoption, technical improvements in display technologies can solve the issues surrounding comfort.
Bernard Kress is the 2023 SPIE President, and Director, XR Hardware at Google.