The virtuous circle of AI and photonics

01 January 2024
By Benjamin Skuse

While the general population has been exploring ChatGPT’s ability to boost their productivity (or to help them shirk work), scientists have been wielding other forms of artificial intelligence to help discover new materials, reveal trends hidden in big data, and optimize the design of a host of technologies.

In fact, in very short order, artificial intelligence (AI)—any computer system that mimics a human cognitive function, like learning or problem-solving—has gone from an interesting subfield of computer science to an indispensable tool for trailblazing scientists, be they biologists, astronomers, engineers, or chemists, working on highly complex, multidimensional problems.

AI has enriched, and will continue to enrich, all areas of science and technology. But one discipline stands out as having the potential to pay back the favor in spades: photonics. “It’s a very exciting frontier—a kind of fruitful reciprocity between photonics and AI,” says Yuebing Zheng, an associate professor at the University of Texas at Austin. “AI helps photonics design, but at the same time, photonics platforms enable hardware for AI in general.”

Zheng focuses primarily on what AI can do for nanophotonics, the study of light and its interactions with matter at the nanoscale. Traditionally, innovators in this space would take a ‘trial-and-error’ approach, leaning heavily on intuition and expert knowledge to come up with an initial device design or material that broadly meets requirements, then optimizing it through repeated simulations and experiments until it approaches the desired performance.

“AI provides a different way: inverse design,” Zheng explains. Instead of starting with an initial design and refining it, inverse design uses AI algorithms to scan the full design parameter space in search of solutions that might be nonintuitive but offer optimal performance. This technique has been used to build specific components like optical filters, routers, and switches, as well as advanced materials for enhanced optical sensing, spectroscopy, and fluorescence.

Zheng’s group specifically homes in on multilayer nanophotonic structures, where inverse design assists in selecting different materials and layer thicknesses to achieve desired properties. In 2021, the group presented a method combining two artificial neural networks (a form of AI that mimics the way biological neurons signal to one another) working in tandem and capable of rapid inverse design of multilayer thin-film structures known as high reflectors. Multilayer high-reflector coatings find use in high-quality optics, such as mirrors for astronomical telescopes and lasers. They maximize Fresnel reflections through constructive interference, achieved by alternating layers of high- and low-refractive-index materials with thicknesses chosen to maximize reflectivity over a given wavelength range.
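
The underlying physics is compact enough to check directly. The sketch below, a minimal illustration with assumed refractive indices roughly matching TiO2 and SiO2 and a 550-nm design wavelength, uses the standard transfer-matrix method to compute the normal-incidence reflectance of a 20-layer quarter-wave stack:

```python
import numpy as np

def stack_reflectance(n_layers, d_layers, wavelengths, n_in=1.0, n_sub=1.52):
    """Normal-incidence reflectance of a multilayer stack via the
    transfer-matrix method. n_layers/d_layers: per-layer refractive
    indices and thicknesses (same length units as wavelengths)."""
    R = np.empty_like(wavelengths, dtype=float)
    for i, lam in enumerate(wavelengths):
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam        # phase thickness of the layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])          # stack plus substrate
        r = (n_in * B - C) / (n_in * B + C)        # amplitude reflection coefficient
        R[i] = abs(r) ** 2
    return R

# 20 alternating high/low-index layers, each a quarter-wave thick at 550 nm
lam0, n_hi, n_lo = 550.0, 2.4, 1.46
n_layers = [n_hi, n_lo] * 10
d_layers = [lam0 / (4 * n) for n in n_layers]
wl = np.linspace(400.0, 700.0, 301)
print(f"Peak reflectance: {stack_reflectance(n_layers, d_layers, wl).max():.4f}")
```

Sweeping across wavelength reveals the broad high-reflectance stop band centered on the design wavelength, the feature the inverse-design work then extends.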

Such designs are relatively simple to derive for a few layers, but practical thin-film devices can have tens or hundreds of layers, making design a challenge using traditional methods. In their work, Zheng’s team applied the tandem neural network method to inverse design 20-layer thin-film high reflectors. Not only did the technique successfully replicate a series of known high-reflector designs derived from physics-based methods (without prior knowledge of them), but it also generated designs with extended high-reflectance zones, previously accessible only through first-principles physics or other complicated optimization techniques.
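
The tandem idea itself fits in a few lines. Below is a minimal PyTorch sketch, with layer counts, network sizes, and training data that are illustrative assumptions rather than the group’s actual setup: a forward network is first trained to emulate the electromagnetic simulator, and an inverse network is then trained through the frozen forward model, so the loss compares spectra rather than thicknesses, sidestepping the one-to-many ambiguity of inverse design.

```python
import torch
import torch.nn as nn

N_LAYERS, N_SPECTRUM = 20, 100   # 20 thicknesses -> 100-point spectrum (illustrative)

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                         nn.Linear(256, 256), nn.ReLU(),
                         nn.Linear(256, n_out))

forward_net = mlp(N_LAYERS, N_SPECTRUM)   # thicknesses -> spectrum (trained first)
inverse_net = mlp(N_SPECTRUM, N_LAYERS)   # target spectrum -> thicknesses

# Step 1: train forward_net on (thickness, simulated-spectrum) pairs so it
# emulates the electromagnetic solver (standard regression, omitted here).

# Step 2: train inverse_net THROUGH the frozen forward model. Because the
# loss compares spectra, any stack producing the target spectrum is valid,
# even though many different stacks can yield the same response.
for p in forward_net.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(inverse_net.parameters(), lr=1e-3)
target = torch.rand(64, N_SPECTRUM)       # stand-in batch of target spectra
for step in range(1000):
    designs = inverse_net(target)         # proposed layer thicknesses
    predicted = forward_net(designs)      # spectra those designs would produce
    loss = nn.functional.mse_loss(predicted, target)
    opt.zero_grad(); loss.backward(); opt.step()
```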

“Our main innovation is that we develop better AI algorithms to achieve these designs,” Zheng says. Essentially, the tandem approach presents all possible designs that meet the given requirements so that the team can then choose the most practical, in terms of simplicity, ease of fabrication, material consumption, etc., rather than finding one design and neglecting the rest of the parameter space.

Other researchers working in this field also focus on improving the AI methods used for photonics inverse design. Associate Professor Francesco Da Ros and his Machine Learning in Photonic Systems group at the Technical University of Denmark have focused on optical amplifiers in recent years. These devices, commonly used as optical repeaters in long-distance fiber-optic cables, amplify optical signals without having to convert them to electrical signals. Optimizing an optical amplifier’s response is a task fraught with difficulty: traditional approaches to predicting amplifier behavior often involve complicated differential equations and hard-to-characterize physical parameters.

Initially, the team used black-box-type neural network models to provide amplifier parameters from a given target amplifier response. However, this approach required a large training dataset to generate sensible designs. “And to ignore your prior knowledge, and then just throw something at a problem, leaves out information that you effectively have and that could help you address it,” adds Da Ros.

Going back to the drawing board, the team reviewed the physics of the problem and identified pain points in the existing theoretical model. The model contained a whole system of equations, but one term couldn’t be solved, spoiling every attempt to match the model to reality precisely. To get around this, the team replaced the pernicious term with a neural network, making the term tractable and, with it, the whole system of equations solvable.
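
As a hedged illustration of this “replace the unsolvable term with a network” move (a toy model, not the DTU team’s actual equations), the sketch below integrates a simple power-evolution equation, dP/dz = g(z, P) * P, along the fiber, with the hard-to-characterize gain term g learned from measured input/output pairs:

```python
import torch
import torch.nn as nn

# Toy grey-box model: the analytically intractable gain term g(z, P)
# is replaced by a small trainable network; the rest stays physics.
gain_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def propagate(P_in, length=1.0, steps=100):
    """Euler-integrate dP/dz = g(z, P) * P along the fiber."""
    dz = length / steps
    P = P_in
    for k in range(steps):
        z = torch.full_like(P, k * dz)
        g = gain_net(torch.stack([z, P], dim=-1)).squeeze(-1)
        P = P + dz * g * P
    return P

# Fit gain_net so the integrated model reproduces measured input/output
# power pairs (random stand-ins here; the real data comes from the lab).
P_in = torch.rand(256)
P_measured = 2.0 * P_in                   # stand-in "measurements"
opt = torch.optim.Adam(gain_net.parameters(), lr=1e-2)
for step in range(500):
    loss = nn.functional.mse_loss(propagate(P_in), P_measured)
    opt.zero_grad(); loss.backward(); opt.step()
```

Because gradients flow through the numerical integration, the learned term stays embedded in the physical model rather than replacing it wholesale, which is what keeps the approach interpretable.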

The technique, which was developed in collaboration with the Coding and Visual Communications group at the university, speeds up optical amplifier design and optimization, while, at the same time, allowing the team to understand the underlying physics. Da Ros has since adopted this “grey-box” approach to characterize and optimize the design of photonic devices, other optical subsystems, and even complete end-to-end optical systems.

As these examples show, AI is already having a huge and wide-ranging impact on the development of photonics. But how does photonics enrich AI?

The rapid growth in popularity of AI tools like ChatGPT, Dall-E, and Bard has brought into focus just how hungry these and other AI systems are for computing power. As they become ever more ubiquitous, the computing power needed for their training and deployment is increasing far faster than humanity’s ability to manufacture the underpinning hardware. Testifying before the US Senate in May 2023, Sam Altman, CEO of OpenAI, the company behind ChatGPT, highlighted just how acute this hardware bottleneck is: “We’re so short on GPUs, the less people that use the tool, the better.”

A quick fix would be simply to build more electronic components and data centers to accommodate AI’s growth. But this would be doomed to failure eventually for three interrelated reasons: computing speed, memory bandwidth, and power consumption. Electronic transistors are becoming so densely packed and so small that they are approaching physical limits, with computing speed, in turn, approaching its ceiling. As computing speed has risen dramatically, bandwidth has become a related issue, with computers spending more time waiting for data to be fetched from memory than performing computations. And data storage and computation each consume a terrifying amount of energy—already, data centers use more than two percent of the world’s electrical energy.

Integrated photonics, where photonic devices are incorporated on a microchip, could form a large part of a more sensible solution. “Compared with CPUs and GPUs, the main advantages of implementing AI on photonic platforms are high power efficiency, high bandwidth, low delay, compatibility with existing technologies, and the potential for high-speed parallel processing with ultra-low energy consumption,” explains Jason Png, electronics and photonics department director at Singapore’s Agency for Science, Technology and Research (A*STAR) Institute of High-Performance Computing.

A photonic integrated circuit (PIC) carrying several neural networks, including the optical neurons used to mitigate signal distortions in fibers, shown alongside biological neurons. Both are based on trainable neural circuits that learn from experience. At the University of Trento, research is ongoing to develop hybrid biological/photonic circuits. Note that the length scales of the two photos differ: the PIC measures a few millimeters, while a neuron body spans a few tens of microns. Photo credit: Clara Zaccaria, University of Trento

Using integrated photonics, input information can be encoded in the intensity or phase of an input laser, and computations can be parallelized by exploiting multiple degrees of freedom, such as different colors of light. The computations themselves are performed by splitting and recombining the laser beams in complicated ways through miniaturized photonic devices—all contained on a chip.

Lightmatter in Silicon Valley has developed Envise Beta, a photonic compute product that does exactly this for AI applications. Envise Beta is the world’s first photonic AI accelerator capable of executing state-of-the-art AI workloads. Each chip includes identical arrays of vector Mach–Zehnder interferometers (MZIs) along with lasers to drive the photonic processors.

A vector MZI mixes two laser beams and sends them down the arms of the interferometer, where the light is modulated to control its division between two outputs. With electronic input data converted into optical form, an array of vector MZIs is well-suited to performing one of the most complex and time-consuming operations in AI: matrix multiplication. 
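
In matrix terms, a lossless MZI applies a 2x2 unitary to its two inputs, with an internal phase shifter setting the power split. The NumPy sketch below is an idealized textbook model, not Lightmatter’s device: it shows the cross, 50:50, and bar settings, and cascading meshes of such 2x2 blocks builds up larger unitaries and, via singular-value decomposition, arbitrary weight matrices.

```python
import numpy as np

def mzi(theta, phi=0.0):
    """2x2 transfer matrix of an ideal lossless MZI: a 50:50 coupler,
    a phase shift theta on one arm, another 50:50 coupler, and an
    input phase phi. theta controls the power split between outputs."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 coupler
    phase_in = np.diag([np.exp(1j * phi), 1])        # external phase shifter
    phase_arm = np.diag([np.exp(1j * theta), 1])     # internal phase shifter
    return bs @ phase_arm @ bs @ phase_in

# Launch light into the top port and watch theta steer the split
x = np.array([1.0, 0.0])
for theta in (0.0, np.pi / 2, np.pi):
    out = mzi(theta) @ x
    print(f"theta={theta:4.2f}  output powers: {np.abs(out)**2}")
# theta=0: all light crosses; theta=pi/2: 50:50; theta=pi: bar state
```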

Lightmatter CEO Nick Harris uses the analogy of neurons in the brain to explain matrix multiplication and illustrate why it is central to AI processing. “In the brain, it is believed that information (or knowledge) is encoded in the connections between neurons. If you draw a set of neurons, write a number above each, and create a table of all the connections, this table of values is encoding what the connections are between the neurons. So, when information flows through the brain, it’s like a vector being multiplied by that matrix of values.” This is exactly the operation Envise performs.
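
Harris’s analogy translates directly into code: the table of connection strengths is a weight matrix, and information flowing through is a vector multiplied by it. A toy NumPy rendering with made-up numbers:

```python
import numpy as np

signals = np.array([0.2, 0.9, 0.5])           # activity of three "neurons"
connections = np.array([[0.1, 0.8, 0.0],      # table of connection strengths:
                        [0.4, 0.0, 0.6],      # entry [i, j] links neuron j
                        [0.7, 0.3, 0.2]])     # to downstream neuron i
downstream = connections @ signals            # information flowing through
print(downstream)   # the vector-matrix product the photonic chip performs
```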

With Envise and Lightmatter’s Passage, a wafer-scale programmable photonic interconnect, receiving orders from the world’s major cloud companies and data centers, Harris believes the company could solve all three AI bottlenecks—computing speed, memory bandwidth, and power consumption—in one fell swoop.

But Lightmatter’s isn’t the only approach to clearing the AI bottlenecks. In 2020, Png and A*STAR colleagues presented an innovative photonic AI architecture to perform the convolution operation for convolutional neural networks (CNNs)—neural networks inspired by the animal visual cortex—that relied instead on a pair of star couplers, diffractive optical elements that evenly distribute the input signal among many receivers and allow their interconnection.
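
The convolution being accelerated is itself easy to state: a small kernel of weights slides across the input, producing a weighted sum at each position. A minimal NumPy version (CNN libraries actually compute this cross-correlation form; kernel values are illustrative):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution as used in CNN layers."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1, 0, -1],    # a simple vertical-edge detector
                        [2, 0, -2],
                        [1, 0, -1]])
print(conv2d(image, edge_kernel).shape)   # (6, 6) feature map
```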

“In contrast to the conventional MZI method, our simulation results indicate that the star coupler offers a reduction in footprint by an order of magnitude,” says Png. “This significant size reduction not only conserves chip real estate but also curtails propagation loss since light traverses a reduced distance.”

In other work, Stefano Biasi and Lorenzo Pavesi—a researcher and a professor, respectively, from the University of Trento, Italy—have constructed neural networks using microring resonators (MRRs) instead of MZIs. MRRs are tiny waveguide rings that can couple light through subwavelength gaps into other optical waveguides. Resonances occur at wavelengths where a whole number of waves fits exactly around the circumference of the ring.
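
That resonance condition fits in one line: m * lam = n_eff * L for integer m, where L is the ring circumference and n_eff the effective index of the guided mode. A quick NumPy check with illustrative numbers:

```python
import numpy as np

n_eff = 2.4                  # effective index of the guided mode (illustrative)
radius_um = 10.0             # ring radius in micrometers
L = 2 * np.pi * radius_um    # circumference: the round-trip path length

# Resonance: an integer number m of wavelengths fits around the ring,
# i.e. m * lam = n_eff * L, so lam_m = n_eff * L / m
m = np.arange(95, 100)
lam_m = n_eff * L / m
print(np.round(lam_m, 4))    # resonances near telecom wavelengths (~1.5-1.6 um)
```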

In their lab, Biasi, Pavesi, and their collaborators have shown how a photonic neural network based on MRRs can outperform electronics in certain applications. “We have demonstrated that it is possible to use a very small and simple neural network to mitigate for signal deformation when a signal is propagating in optical fibers; what is called chromatic dispersion,” Pavesi explains. “We were able to recover the information by using a neural network at very high speed, and dramatically lower power consumption and cost with respect to what people are doing with electronics.”
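
A hedged sketch of the idea, not the Trento group’s implementation: a tapped-delay-line equalizer in which a tiny network reads a sliding window of dispersion-smeared samples and reconstructs the symbol at the window’s center.

```python
import torch
import torch.nn as nn

WINDOW = 7   # received samples the equalizer sees at once (illustrative)

# A deliberately tiny network, in the spirit of the result: one hidden
# layer mapping a window of distorted samples to the central symbol.
equalizer = nn.Sequential(nn.Linear(WINDOW, 8), nn.Tanh(), nn.Linear(8, 1))

def windows(received):
    """Sliding windows over the received sample stream."""
    return received.unfold(0, WINDOW, 1)     # shape: (N - WINDOW + 1, WINDOW)

# Stand-in data: dispersion smears each symbol into its neighbors
tx = torch.randint(0, 2, (1000,)).float()    # transmitted bits
kernel = torch.tensor([0.1, 0.25, 0.3, 0.25, 0.1])
rx = torch.conv1d(tx.view(1, 1, -1), kernel.view(1, 1, -1),
                  padding=2).view(-1)        # toy chromatic-dispersion model

X = windows(rx)
y = tx[WINDOW // 2 : tx.numel() - WINDOW // 2]   # center symbol per window
opt = torch.optim.Adam(equalizer.parameters(), lr=1e-2)
for step in range(300):
    loss = nn.functional.mse_loss(equalizer(X).squeeze(-1), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

The photonic version performs the equivalent weighting and summing optically, which is where the speed and power advantages over electronic equalizers come from.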

Though the dizzying array of architectures and optical devices being employed to realize integrated photonics for AI is fascinating, perhaps even more interesting is how these technologies are being created—using AI. “In the lab, we use AI to train our neural networks,” says Biasi. “And I see that the wider photonics community is using AI even to design proper optical neural networks or some of their optical components.”

This approach is not restricted to academic research labs, Harris says. “What we end up doing is building algorithms that try to learn what the optimal shape of the photonic component is to reduce how much light is lost, or to make it more broadband so that it can handle more colors at the same time. Our devices really don’t look any more like the first principles thing you would draw—they’re very much modulated by these optimization algorithms and things that look like backpropagation from deep learning.”

Such cross-fertilization represents the first steps toward a truly symbiotic relationship between AI and photonics, though challenges remain on both sides. Generating real-world training data and developing less narrow AI algorithms that can be applied to multiple problems, while at the same time shrinking photonic devices and reducing losses, will all be key.

But solutions to these challenges will inevitably emerge as the integration between photonics and AI deepens, as Zheng neatly summarizes: “If you have a strong photonic neural network, you’re going to run really great AI algorithms, and those will help you design better photonic devices and materials—it’s a positive feedback loop that will keep improving.”

Benjamin Skuse is a science and technology writer with a passion for physics and mathematics whose work has appeared in major popular science outlets.
