The rise of brain-like computers and the role of photonics

01 September 2024
By Hank Hogan
SpiNNcloud cardframe with 48 node boards running. This is part of a neuromorphic computer that has 10 billion neurons. Photo credit: ©SpiNNcloud Systems

Data centers will consume the world if present trends continue. A key reason is large language models (LLMs) such as ChatGPT, which require enormous amounts of computing power. These models are a form of artificial intelligence that currently run on standard computers but could operate on other hardware. LLMs imitate human cognition: they answer trivia questions, suggest what to do when driving up the California coast, edit technical and scientific papers, and handle other tasks. Running LLMs comes on top of everything else that data centers do.

Moreover, model complexity is growing 10-fold every 18 months. In three years, the models will need 100 times the computing, networking, energy, and other resources required today. In a decade, the demands will be on the order of a million times greater.
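That trajectory is simple compounding; a back-of-the-envelope sketch in Python, assuming the 10-fold-per-18-months rate holds steady:

```python
# Compounding of model demands, assuming growth holds at 10x every 18 months.
def growth_factor(months, tenfold_period_months=18):
    """Multiple of today's resource demand after the given number of months."""
    return 10 ** (months / tenfold_period_months)

print(f"{growth_factor(36):,.0f}x")    # 3 years: 100x
print(f"{growth_factor(120):,.0f}x")   # 10 years: several million-fold
```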

One issue is that, from the ground up, standard computers just are not built for the job, says Harish Bhaskaran. As a professor of applied nanomaterials at the University of Oxford, he researches nontraditional computing.

“You’re effectively trying to go across continents using a car. There’s nothing wrong with a car. But you need an aeroplane,” he says. Takeoff may come soon with the arrival of neuromorphic computing. This approach uses hardware and software to mimic the neurons and synapses found in the brains of people and animals. The goal is to create systems that are more adaptable and energy efficient than traditional computers.

“Let’s try to understand neural computing to figure out what are the architectural principles that are lower power, potentially more efficient, potentially more computationally powerful than traditional approaches,” says Craig Vineyard, of Sandia National Laboratories’ (SNL) neural exploration research lab.

After years of research and development, large-scale neuromorphic computers are arriving. Some have billions of artificial neurons, potentially enough to tackle significant computing chores. These artificial neurons are the basic processing nodes of neuromorphic technology, and a single neuromorphic chip can support a great many of them.

Intel’s neuromorphic hardware was found to be 16 times as energy efficient as conventional chips in performing deep learning tasks. Photo credit: Intel.

Systems consisting of many such chips make up entire computers or plug-in boards, and they are coming from established companies like Intel and Hewlett Packard Enterprise (HPE) as well as from startups like SpiNNcloud Systems. Photonics could play a key role in the new technology's communications, its computing, or both.

As part of their efforts to develop the market for this emerging technology, hardware companies like those mentioned are providing chips, boards made up of dozens of chips, and complete neuromorphic computers composed of thousands of chips. They are collaborating with research labs to help develop the software and algorithms the new hardware needs. A similar process of hardware and software development played out over decades for standard digital computers.

In traditional computers, the memory and the central processing unit (CPU) are separate, and during computation data flows repeatedly back and forth between the two. The advantages of this setup are straightforward program execution in user-defined steps and certainty in the outcome. The disadvantage is that all that data shuffling limits processing speed and wastes energy.

Another factor hobbling general-purpose standard computers is that LLMs repeatedly perform matrix-vector multiplication, a mathematical operation that multiplies a grid of numbers (a matrix) by a list of numbers (a vector) to produce a new list. Such a calculation is a poor fit for a CPU. Graphics processing units, or GPUs, handle the task better, though they still shuttle data back and forth to memory.
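For illustration, a minimal NumPy sketch of the operation (not any particular chip's implementation): every output element is a weighted sum over the whole input vector, the multiply-accumulate pattern that GPUs and neuromorphic accelerators both target.

```python
import numpy as np

# A weight matrix (say, one layer of an LLM) and an input vector.
W = np.random.rand(4, 3)   # 4 outputs, 3 inputs
x = np.random.rand(3)      # input activations

# Matrix-vector multiplication: each output is a weighted sum of all inputs.
y = W @ x                  # result is a new vector of length 4

# The equivalent explicit loop shows the repeated multiply-accumulate work,
# all of which requires fetching W and x from memory on a standard computer.
y_loop = np.array([sum(W[i, j] * x[j] for j in range(3)) for i in range(4)])
assert np.allclose(y, y_loop)
```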

In contrast to a traditional computer, the brain consumes energy only when required, is highly parallel, and does in-memory computation, meaning the structure that stores information also performs the calculation. The synapses that connect neurons exhibit plasticity, and this ability to change in response to inputs creates an in-memory system for energy-efficient data processing and learning.

According to the US National Institute of Standards and Technology (NIST), the 85 billion neurons in the human brain perform the equivalent of an exaflop—a billion billion mathematical operations per second—with just 20W of power. In comparison, a supercomputer needs one million times the energy to do the same computation.

One reason for this efficiency is the brain's use of spiking neurons, which fire only after enough input has accumulated. This approach discards useless information and so improves efficiency.
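As a rough illustration, and not the neuron model of any particular chip, a leaky integrate-and-fire neuron in Python captures the principle: input accumulates on a membrane potential, the neuron fires only when a threshold is crossed, and it stays silent, generating no traffic, the rest of the time.

```python
import numpy as np

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input, fire at threshold, then reset."""
    v = 0.0                      # membrane potential
    spikes = []
    for i in inputs:
        v = leak * v + i         # integrate new input; old input leaks away
        if v >= threshold:       # enough accumulated input: fire
            spikes.append(1)
            v = 0.0              # reset after the spike
        else:
            spikes.append(0)     # silent: no message sent, little energy used
    return spikes

# Weak, noisy input produces only occasional spikes; most time steps are silent.
print(lif_neuron(np.random.uniform(0, 0.4, size=20)))
```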

As for the latest neuromorphic computers that emulate spiking neurons and other aspects of the brain's architecture and operation, the first example comes from Intel. The chip maker has been working on the technology for years, and its second-generation neuromorphic chip, Loihi 2, debuted in 2021. Like the brain's neurons, its artificial neurons spike, and they can do so with multiple output levels instead of a simple on-off signal. This range of outputs supports event-based messaging, which means the chip's neurons only send a signal when something happens, like movement in a scene. Each chip supports roughly a million neurons.

A 2022 paper by researchers at Austria's Graz University of Technology, in work funded in part by Intel, found the chip maker's neuromorphic hardware to be 16 times as energy efficient as conventional chips at deep learning tasks, an indication of the technology's promise. Evaluations of neuromorphic technology from Intel and other vendors, at SNL and elsewhere, have demonstrated that it delivers greater speed and accuracy with lower energy costs.

In 2024, SNL received a 1.15-billion-neuron neuromorphic system, built by Intel from 1,152 Loihi 2 chips. Researchers at SNL have been figuring out how best to use this system, which has about as many neurons as an owl's brain. They are investigating which algorithms to run and which problems are the best candidates for hardware that has hundreds of times the neurons of an earlier system Intel built years before using its first-generation neuromorphic chip.

“How can we scale up to take advantage of this unparalleled large-scale system?” Vineyard says of the SNL group’s work.

One widespread application might be what is known as a random walk, a computational technique used to determine how heat spreads through a material, how one gas diffuses into another, or how fusion reactions and other phenomena unfold. A better understanding of the process that makes sunshine could help turn it into a power source here on Earth.
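A hedged sketch of the technique, here estimating one-dimensional heat diffusion with NumPy: many independent walkers take random steps, and their spread approximates how an initial pulse of heat disperses. Because each walker needs only simple, local updates, the method is a natural fit for hardware built from many small spiking neurons.

```python
import numpy as np

# Monte Carlo random walk: estimate how heat spreads along a 1D rod.
# Each walker is a packet of heat starting at the center and taking
# random left/right steps; the spread of walkers approximates diffusion.
rng = np.random.default_rng(0)
n_walkers, n_steps = 10_000, 500

steps = rng.choice([-1, 1], size=(n_walkers, n_steps))
positions = steps.sum(axis=1)               # final position of each walker

# For an unbiased walk the spread grows as the square root of the step count.
print("spread (std):", positions.std())     # ~ sqrt(500) ≈ 22.4
hist, _ = np.histogram(positions, bins=21)
print("heat profile (walkers per bin):", hist)
```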

The scientists are not starting from scratch with their new neuromorphic computing system. Previous work at SNL, running algorithms on a much smaller neuromorphic computer, showed promise in random walk calculations as well as in medical imaging and finance applications. The foundational work with the new system will pave the way for other researchers and commercial uses.

Another commercial neuromorphic entrant arrived this year. In May, startup SpiNNcloud Systems announced SpiNNaker2, which has more than 69,000 interconnected microchips. Each chip is a low-power mesh of 152 ARM-based processor cores, and each of these cores can support at least 1,000 neurons, enabling parallel processing. The mass of microchips translates to more than 10 billion neurons, says co-CEO Hector Gonzalez.
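Those per-chip and per-core figures multiply out to the headline neuron count; a quick check using only the numbers quoted above:

```python
chips = 69_000              # "more than 69,000 interconnected microchips"
cores_per_chip = 152        # ARM-based processor cores per chip
neurons_per_core = 1_000    # "at least 1,000 neurons" per core

total_neurons = chips * cores_per_chip * neurons_per_core
print(f"{total_neurons:,} neurons")   # 10,488,000,000, i.e., more than 10 billion
```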

He adds that the company’s goal is to construct computers with a building block approach, like children’s toy blocks that snap together. In theory this makes it possible to build a computer with as many neurons as the human brain, about an order of magnitude increase, by simply adding more blocks.

However, in practice, this modular approach requires fast and energy-efficient long-distance connectivity between boards and systems. Today, these connections are electrical, but as the connection span increases, signaling at a fast enough rate electrically becomes challenging. So, another transport technology could come into play.

“Future generations can also have optics-based interconnects,” Gonzalez says.

The limitations of electrical interconnects are the same ones that confront today's standard computing technology as it strives to support increasingly complex LLMs. In response, Intel has announced an optical compute interconnect, a chiplet that co-packages a photonic integrated circuit with an electrical one. Putting the two types of chips close together brings benefits.

“We can deliver much higher bandwidth density. We can deliver much better power efficiency,” says Intel’s Thomas Liljeberg, a senior director for the company’s Integrated Photonics Solutions group.

He adds that once a signal is in optical fiber, it can travel hundreds of meters, a difficult feat for electrical signals. Intel plans to increase the number of wavelengths used to carry information from eight to 16, raise the line rate from 32 to 128 Gb per second, and double the fiber optic pairs from eight to 16. Consequently, the bandwidth will rise from today's 4 Tb per second to 64 Tb per second over the next decade.
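Those upgrades compound multiplicatively, which is where the 16-fold jump from 4 to 64 Tb per second comes from. A quick sketch using the figures quoted above:

```python
# How the planned upgrades compound, using the figures quoted above.
wavelength_gain = 16 / 8     # twice as many wavelengths per fiber
line_rate_gain  = 128 / 32   # four times the line rate
fiber_pair_gain = 16 / 8     # twice as many fiber optic pairs

scale = wavelength_gain * line_rate_gain * fiber_pair_gain   # 2 * 4 * 2 = 16x
print(f"{4 * scale:.0f} Tb per second")                      # 4 Tb/s today -> 64 Tb/s
```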

The photonics chip incorporates a speck of indium phosphide for the laser gain material, with silicon photonics forming the rest of the chip. Photodetectors are built using silicon, according to Liljeberg.

This connection technology is not designed with neuromorphic systems in mind, but it, or something like it, could help neuromorphic systems expand their reach and increase the number of neurons they contain. For his part, Gonzalez is interested in improving performance when a computer's components are distributed across an array of boards and racks, which may involve photonic interconnects along with 3D packaging of chips and boards. To visualize 3D packaging, picture a standard printed circuit board chopped up and its pieces stacked into a cube, with the goal of shortening the distance between components, improving speed, cutting power, and saving space.

Intel’s optical compute interconnect, a chiplet that co-packages a photonic integrated circuit with an electrical IC. It can transmit and receive data at 4 Tb per second. Photo credit: Intel.

A third example of a recent neuromorphic computing contender comes from HPE, where senior research scientist Bassem Tossoun is part of a team working on a neuromorphic accelerator that builds upon earlier research. “We were able to integrate this device called a memristor, which is very commonly used in the electronics world for neuromorphic computing, onto the photonics domain,” he says.

The group did this by creating a memristor (memory resistor) laser. They could electronically shift the laser’s emission wavelength, and the device would hold that adjustment even with the power turned off. Like an electronic memristor, whose resistance can be changed and then retained, the memristor laser is an artificial functional equivalent of a synapse between neurons in the human brain.

A synapse, a memristor, and a memristor laser each connect processing units much as a pipe whose diameter can be adjusted on demand, and which remembers its last setting, connects two vessels: the pipe governs the flow of water and so can encode information in the flow rate while providing the link. A memristor laser does something similar, connecting processing nodes while storing information in the wavelength of light.

A memristor laser enables matrix-vector multiplication, in-memory computing, and other elements of neuromorphic technology. Such a photonic platform would move data at the speed of light, be highly energy efficient, and be inherently parallel, Tossoun notes.

HPE’s project stores data via wavelength. The accelerator uses the company’s compound-semiconductor-on-silicon photonics platform. A compound semiconductor, like indium phosphide or gallium arsenide, is an optical gain medium that enables lasing, something silicon cannot do efficiently. Combining a compound semiconductor with silicon allows lasers, photodetectors, amplifiers, modulators, and more to be integrated into a monolithic circuit.

Tossoun points out that doing everything—calculations and communications—in the photonics domain reduces latency and saves power. These benefits come from the inherent characteristics of light and the elimination of conversions between electronics and photonics.

Tossoun says that the neuromorphic accelerator prototype is only the beginning. The device’s fabrication process is compatible with standard microchip manufacturing, so HPE is engaging with foundries to move the idea toward high-volume manufacturing. The group plans to deliver a prototype this year to the US Defense Advanced Research Projects Agency (DARPA) and then to pave the way for commercial neuromorphic computers based on the technology within five years.

Potential applications include supporting LLMs as well as optimization problems. Regarding the latter, neuromorphic computers, whether photonic or electronic, might be well suited to solving what are known in computational complexity theory as NP-hard problems. One example is airline scheduling: Suppose a flight is delayed or canceled because a plane is out of commission due to mechanical problems or a sick crew. How should other planes and crews be rescheduled to minimize travel disruptions?

While the technology is promising, the technical community must further refine the hardware and accompanying software before neuromorphic computing enters widespread use. Over the last decade, technologists have created more powerful chips and built computing systems out of ever more of those chips. As a result, neuromorphic computing has seen orders-of-magnitude growth in the number of neurons per system. Going forward, that progress should continue, although the pace may change.

However, merely having neuromorphic computers with billions of neurons isn’t enough. There must be algorithms that take the best possible advantage of the hardware. Software development is an area that will also see significant effort over the coming decades.

“There’s going to be a lot of innovation, a lot of breakthroughs,” Vineyard says.

Sandia computing researchers William Chapman, Brad Theilman, Craig Vineyard, and Mark Plagge (left to right) check out a new 1.15-billion-neuron neuromorphic system consisting of 1,152 Intel Loihi 2 chips. Photo credit: Courtesy of Craig Fritz, Sandia National Labs.

There also could be changes in neuromorphic computing fundamentals, because this branch of computing takes its cues from the brain, and scientists do not yet fully understand how the organ works at all levels.

“There are still a lot of mysteries about how the brain is really encoding information,” Gonzalez points out.

As for the relationship between neuromorphic and standard computing, Tossoun doesn’t see the new technology taking over every task handled today by general-purpose traditional computers like the laptops and servers used by everyone daily. Standard digital computers are exceptional at many applications, and the performance of the CPUs, GPUs, memory, and other chips within them increases every year.

Instead, he and other experts foresee a heterogeneous future, one in which a computing system might have a variety of accelerators. Each would solve specific classes of problems, with computation divided among the accelerators to yield the highest performance.

Bhaskaran adds that big tech players are investing heavily in standard computing’s underlying technology, which continues to improve in price and performance. Still, he thinks the future of computing is one of fundamental change. “We’re at the very cusp of a revolution in hardware.”

Light-based communication could be part of that revolution because interconnects at all levels present major bottlenecks, and photonic connections seem poised to overcome such problems. Computing with light, neuromorphic or otherwise, has further to go but is promising.

Considering all this, Bhaskaran predicts, “Photonics will have a role to play.”

Hank Hogan is a freelance science and technology writer who has covered computer chips and photonics extensively.

 
