www.sciencenews.org
The tiny worm Caenorhabditis elegans has a brain just about the width of a human hair. Yet this animal's itty-bitty organ coordinates and computes complex movements as the worm forages for food. "When I look at [C. elegans] and consider its brain, I'm really struck by the profound elegance and efficiency," says Daniela Rus, a computer scientist at MIT. Rus is so enamored with the worm's brain that she cofounded a company, Liquid AI, to build a new type of artificial intelligence inspired by it.

Rus is part of a wave of researchers who think that making traditional AI more brainlike could create leaner, nimbler and perhaps smarter technology. "To improve AI truly, we need to incorporate insights from neuroscience," says Kanaka Rajan, a computational neuroscientist at Harvard University.

Such neuromorphic technology probably won't completely replace regular computers or traditional AI models, says Mike Davies, who directs the Neuromorphic Computing Lab at Intel in Santa Clara, Calif. Rather, he sees a future in which many types of systems coexist.

The tiny worm C. elegans is inspiration for a new type of artificial intelligence. (Hakan Kvarnstrom/Science Source)

Imitating brains isn't a new idea. In the 1950s, neurobiologist Frank Rosenblatt devised the perceptron. The machine was a highly simplified model of the way a brain's nerve cells communicate, with a single layer of interconnected artificial neurons, each performing a single mathematical function.

Decades later, the perceptron's basic design helped inspire deep learning, a computing technique that recognizes complex patterns in data using layer upon layer of nested artificial neurons. These neurons pass input data along, manipulating it to produce an output. But this approach can't match a brain's ability to adapt nimbly to new situations or learn from a single experience. Instead, most of today's AI models devour massive amounts of data and energy to learn to perform impressive tasks, such as guiding a self-driving car.

"It's just bigger, bigger, bigger," says Subutai Ahmad, chief technology officer of Numenta, a company looking to human brain networks for efficiency. "Traditional AI models are so brute force and inefficient."

In January, the Trump administration announced Stargate, a plan to funnel $500 billion into new data centers to support energy-hungry AI models. But a model released by the Chinese company DeepSeek is bucking that trend, duplicating chatbots' capabilities with less data and energy. Whether brute force or efficiency will win out is unclear.

Meanwhile, neuromorphic computing experts have been making hardware, architecture and algorithms ever more brainlike. "People are bringing out new concepts and new hardware implementations all the time," says computer scientist Catherine Schuman of the University of Tennessee, Knoxville. These advances mainly help with biological brain research and sensor development and haven't been a part of mainstream AI. At least, not yet.

Here are four neuromorphic systems that hold potential for improving AI.

Making artificial neurons more lifelike

Real neurons are complex living cells with many parts. They are constantly receiving signals from the environment, their electric charge fluctuating until it crosses a specific threshold and the neuron fires. This activity sends an electrical impulse across the cell and on to neighboring neurons. Neuromorphic computing engineers have managed to mimic this pattern in artificial neurons.
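That threshold-and-fire behavior is simple enough to sketch in a few lines of code. Below is a minimal, illustrative model of a leaky integrate-and-fire neuron, one of the simplest artificial spiking neurons; the threshold, leak and input values are invented for the example, and real neuromorphic chips implement far richer neuron models.

```python
import numpy as np

def simulate_lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative values only).

    The membrane potential integrates incoming current, slowly leaks away,
    and the neuron emits a discrete spike whenever the potential crosses
    the threshold. Most time steps produce no spike, so the output is sparse.
    """
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # integrate input, with leak
        if potential >= threshold:               # threshold crossed: fire
            spikes.append(1)
            potential = reset                    # reset after the spike
        else:
            spikes.append(0)                     # stay silent
    return spikes

# A weak, noisy input drives the neuron to fire only occasionally,
# so downstream neurons receive sparse, event-like signals.
rng = np.random.default_rng(0)
print(simulate_lif_neuron(rng.uniform(0.0, 0.4, size=40)))
```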
These neurons, part of spiking neural networks, simulate the signals of an actual brain, creating discrete spikes that carry information through the network. Such a network may be modeled in software or built in hardware.

Spikes are not modeled in traditional AI's deep learning networks. Instead, in those models, each artificial neuron is "a little ball with one type of information processing," says Mihai Petrovici, a neuromorphic computing researcher at the University of Bern in Switzerland. Each of these little balls links to the others through connections called parameters. Usually, every input into the network triggers every parameter to activate at once, which is inefficient. DeepSeek divides traditional AI's deep learning network into smaller sections that can activate separately, which is more efficient.

But real brains and artificial spiking networks achieve efficiency a bit differently. Each neuron is not connected to every other one. Also, a neuron fires and sends information to its connections only if incoming electrical signals reach a specific threshold. The network activates sparsely rather than all at once.

Comparing networks: Typical deep learning networks are dense, with interconnections among all their identical neurons. Brain networks are sparse, and their neurons can take on different roles. Neuroscientists are still working out how complex brain networks are actually organized. (J.D. Monaco, K. Rajan and G.M. Hwang)

Importantly, brains and spiking networks combine memory and processing. "The connections that represent the memory are also the elements that do the computation," Petrovici says. Mainstream computer hardware, which runs most AI, separates memory and processing. AI processing usually happens in a graphics processing unit, or GPU. A different hardware component, such as random access memory, or RAM, handles storage. This makes for simpler computer architecture. But zipping data back and forth among these components eats up energy and slows down computation.

The neuromorphic computer chip BrainScaleS-2 combines these efficient features. It contains sparsely connected spiking neurons physically built into hardware, and the neural connections store memories and perform computation.

BrainScaleS-2 was developed as part of the Human Brain Project, a 10-year effort to understand the human brain by modeling it in a computer. But some researchers looked at how the tech developed from the project might make AI more efficient. For example, Petrovici trained different AIs to play the video game Pong. A spiking network running on the BrainScaleS-2 hardware used a thousandth of the energy of a simulation of the same network running on a CPU. But the real test was to compare the neuromorphic setup with a deep learning network running on a GPU. Training the spiking system to recognize handwriting used a hundredth the energy of the typical system, the team found.

For spiking neural network hardware to be a real player in the AI realm, it has to be scaled up and distributed. Then, "it could be useful to computation more broadly," Schuman says.

Connecting billions of spiking neurons

The academic teams working on BrainScaleS-2 currently have no plans to scale up the chip, but some of the world's biggest tech companies, like Intel and IBM, do.

In 2023, IBM introduced its NorthPole neuromorphic chip, which combines memory and processing to save energy.
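As a loose software analogy for that memory-meets-processing idea (not a description of how NorthPole or BrainScaleS-2 are actually built), imagine the stored synaptic weights doing the arithmetic right where they live, instead of being shuttled to a separate processor:

```python
import numpy as np

class SynapseArray:
    """Toy 'compute-in-memory' analogy: the stored weights are also the
    elements that perform the computation, so no data shuttling is needed."""

    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)  # the memory

    def propagate(self, spikes):
        # Each incoming spike drives the connections stored at this location;
        # the weighted sum happens "in place" in the array holding the weights.
        return self.weights.T @ spikes

array = SynapseArray([[0.2, 0.8, 0.0],
                      [0.5, 0.1, 0.9]])
spikes = np.array([1.0, 0.0])        # sparse input: only the first neuron fired
print(array.propagate(spikes))       # only that neuron's connections contribute
```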
In 2024, Intel announced the launch of Hala Point, "the largest neuromorphic system in the world right now," says computer scientist Craig Vineyard of Sandia National Laboratories in New Mexico. Despite that impressive superlative, "there's nothing about the system that visually stands out," Vineyard says. Hala Point fits into a luggage-sized box. Yet it contains 1,152 of Intel's Loihi 2 neuromorphic chips for a record-setting total of 1.15 billion electronic neurons, roughly the same number as in an owl's brain.

Like BrainScaleS-2, each Loihi 2 chip contains a hardware version of a spiking neural network. The physical spiking network also uses sparsity and combines memory and processing. "This neuromorphic computer has fundamentally different computational characteristics than a regular digital machine," Schuman says.

This BrainScaleS-2 computer chip was built to work like a brain. It contains 512 simulated neurons connected with up to 212,000 synapses. (Heidelberg Univ.)

These features improve Hala Point's efficiency compared with that of typical computer hardware. "The realized efficiency we get is definitely significantly beyond what you can achieve with GPU technology," Davies says.

In 2024, Davies and a team of researchers showed that the Loihi 2 hardware can save energy even while running typical deep learning algorithms. The researchers took several audio and video processing tasks and modified their deep learning algorithms so they could run on the new spiking hardware. "This process introduces sparsity in the activity of the network," Davies says.

A deep learning network running on a regular digital computer processes every single frame of audio or video as something completely new. But spiking hardware "maintains some knowledge of what it saw before," Davies says. When part of the audio or video stream stays the same from one frame to the next, the system doesn't have to start over from scratch. It can keep the network idle as much as possible when nothing interesting is changing. On one video task the team tested, a Loihi 2 chip running a sparsified version of a deep learning algorithm used 1/150th the energy of a GPU running the regular version of the algorithm.

The audio and video test showed that one type of architecture can do a good job running a deep learning algorithm. But developers can reconfigure the spiking neural networks within Loihi 2 and BrainScaleS-2 in numerous ways, coming up with new architectures that use the hardware differently. They can also implement different kinds of algorithms using these architectures.

It's not yet clear what algorithms and architectures would make the best use of this hardware or offer the highest energy savings. But researchers are making headway. A January 2025 paper introduced a new way to model neurons in a spiking network, including both the shape of a spike and its timing. This approach makes it possible for an energy-efficient spiking system to use one of the learning techniques that has made mainstream AI so successful.

Neuromorphic hardware may be best suited to algorithms that haven't even been invented yet. "That's actually the most exciting thing," says neuroscientist James Aimone, also of Sandia National Labs. The technology has a lot of potential, he says. "It could make the future of computing energy efficient and more capable."
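The frame-to-frame idea Davies describes can be sketched in ordinary code: instead of reprocessing an entire image each time, the system looks only at what changed since the previous frame. This is a rough, illustrative sketch, not how Loihi 2 actually implements it; the frames and change threshold are made up.

```python
import numpy as np

def changed_pixels(prev_frame, new_frame, threshold=0.1):
    """Return the locations and values of pixels that changed meaningfully."""
    diff = np.abs(new_frame - prev_frame)
    idx = np.flatnonzero(diff > threshold)   # unchanged pixels are simply skipped
    return idx, new_frame.flat[idx]

# Two nearly identical frames: only the small "moving object" needs processing.
rng = np.random.default_rng(1)
frame_a = rng.random((64, 64))
frame_b = frame_a.copy()
frame_b[10, 10:14] += 0.5                    # a few pixels change between frames

idx, values = changed_pixels(frame_a, frame_b)
print(f"pixels to process: {idx.size} of {frame_a.size}")   # 4 of 4096
```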
Designing an adaptable brain

Neuroscientists agree that one of the most important features of a living brain is the ability to learn on the go. And it doesn't take a large brain to do this. C. elegans, one of the first animals to have its brain completely mapped, has 302 neurons and around 7,000 synapses that allow it to learn continuously and efficiently as it explores its world.

Ramin Hasani studied how C. elegans learns as part of his graduate work in 2017 and was working to model what scientists knew about the worm's brain in computer software. Rus found out about this work while out for a run with Hasani's adviser at an academic conference. At the time, she was training AI models with hundreds of thousands of artificial neurons and half a million parameters to operate self-driving cars.

A C. elegans brain (its neurons are colored by type in this reconstruction) learns constantly and is a model for building more efficient AI. (D. Witvliet et al/bioRxiv.org 2020)

If a worm doesn't need a huge network to learn, Rus realized, maybe AI models could make do with smaller ones, too.

She invited Hasani and one of his colleagues to move to MIT. Together, the researchers worked on a series of projects to give self-driving cars and drones more wormlike brains, ones that are small and adaptable. The end result was an AI algorithm that the team calls a liquid neural network.

"You can think of this like a new flavor of AI," says Rajan, the Harvard neuroscientist.

Standard deep learning networks, despite their impressive size, learn only during a training phase of development. When training is complete, the network's parameters can't change. "The model stays frozen," Rus says. Liquid neural networks, as the name suggests, are more fluid. Though they incorporate many of the same techniques as standard deep learning, these new networks can shift and change their parameters over time. Rus says that they learn and adapt based on the inputs they see, much like biological systems.

To design this new algorithm, Hasani and his team wrote mathematical equations that mimic how a worm's neurons activate in response to information that changes over time. These equations govern the liquid neural network's behavior. Such equations are notoriously difficult to solve, but the team found a way to approximate a solution, making it possible to run the network in real time. "This solution is remarkable," Rajan says.

In 2023, Rus, Hasani and their colleagues showed that liquid neural networks could adapt to new situations better than much larger typical AI models. The team trained two types of liquid neural networks and four types of typical deep learning networks to pilot a drone toward different objects in the woods. When training was complete, they put one of the training objects, a red chair, into completely different environments, including a patio and a lawn beside a building. The smallest liquid network, containing just 34 artificial neurons and around 12,000 parameters, outperformed the largest standard AI network they tested, which contained around 250,000 parameters.

The team started the company Liquid AI around the same time and has worked with the U.S. military's Defense Advanced Research Projects Agency to test their model flying an actual aircraft. The company has also scaled up its models to compete directly with regular deep learning. In January, it announced LFM-7B, a 7-billion-parameter liquid neural network that generates answers to prompts. The team reports that the network outperforms typical language models of the same size.

"I'm excited about Liquid AI because I believe it could transform the future of AI and computing," Rus says.

This approach won't necessarily use less energy than mainstream AI. Its constant adaptation makes it computationally intensive, Rajan says. But the approach represents a significant step toward more realistic AI that more closely mimics the brain.
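The equations behind liquid neural networks describe neurons whose state evolves continuously in time and whose responsiveness depends on the input itself. The sketch below is a heavily simplified, single-neuron version of that liquid time-constant idea, solved with a crude numerical approximation; it is not Liquid AI's actual model, and all parameter values are invented.

```python
import numpy as np

def step_liquid_neuron(x, u, w=2.0, tau=1.0, a=1.0, dt=0.1):
    """One numerical step of a continuous-time, 'liquid' neuron (simplified).

    The state x follows a differential equation whose effective time constant
    depends on the current input u, so the unit keeps adapting its behavior
    as the input stream changes.
    """
    gate = np.tanh(w * u)                 # input-dependent coupling strength
    dx_dt = -x / tau + gate * (a - x)     # decay toward rest, pull toward a
    return x + dt * dx_dt                 # simple Euler approximation of the ODE

# Drive the neuron with a slowly varying signal and watch the state adapt.
x = 0.0
for t in range(50):
    u = np.sin(0.2 * t)                   # time-varying input
    x = step_liquid_neuron(x, u)
print(round(float(x), 3))
```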
Building on human brain structure

While Rus is working off the blueprint of the worm brain, others are taking inspiration from a very specific region of the human brain: the neocortex, a wrinkly sheet of tissue that covers the brain's surface.

"The neocortex is the brain's powerhouse for higher-order thinking," Rajan says. "It's where sensory information, decision-making and abstract reasoning converge."

This part of the brain contains six thin horizontal layers of cells, organized into tens of thousands of vertical structures called cortical columns. Each column contains around 50,000 to 100,000 neurons arranged in several hundred vertical minicolumns.

These minicolumns are the primary drivers of intelligence, neuroscientist and computer scientist Jeff Hawkins argues. In other parts of the brain, grid and place cells help an animal sense its position in space. Hawkins theorizes that these cells exist in minicolumns, where they track and model all our sensations and ideas. For example, as a fingertip moves, he says, these columns make a model of what it's touching. It's the same with our eyes and what we see, Hawkins explains in his 2021 book A Thousand Brains.

"It's a bold idea," Rajan says. Current neuroscience holds that intelligence involves the interaction of many different brain systems, not just these mapping cells, she says.

Though Hawkins' theory hasn't reached widespread acceptance in the neuroscience community, it's generating a lot of interest, she says. That includes excitement about its potential uses for neuromorphic computing.

Hawkins developed his theory at Numenta, a company he cofounded in 2005. The company's Thousand Brains Project, announced in 2024, is a plan for pairing computing architecture with new algorithms.

In some early testing for the project a few years ago, the team described an architecture that included seven cortical columns and hundreds of minicolumns but spanned just three layers, rather than the six in the human neocortex. The team also developed a new AI algorithm that uses the column structure to analyze input data. Simulations showed that each column could learn to recognize hundreds of complex objects.

The practical effectiveness of this system still needs to be tested. But the idea is that it will be capable of learning about the world in real time, similar to the algorithms of Liquid AI.

For now, Numenta, based in Redwood City, Calif., is using regular digital computer hardware to test these ideas. But in the future, custom hardware could implement physical versions of spiking neurons organized into cortical columns, Ahmad says.

Using hardware designed for this architecture could make the whole system more efficient and effective. "How the hardware works is going to influence how your algorithm works," Schuman says. "It requires this codesign process."

A new idea in computing can take off only with the right combination of algorithm, architecture and hardware. For example, DeepSeek's engineers noted that they achieved their gains in efficiency by codesigning algorithms, frameworks and hardware. When one of these isn't ready or isn't available, a good idea could languish, notes Sara Hooker, a computer scientist at the research lab Cohere in San Francisco and author of an influential 2021 paper titled "The Hardware Lottery."
This already happened with deep learning: the algorithms to do it were developed back in the 1980s, but the technology didn't find success until computer scientists began using GPU hardware for AI processing in the early 2010s. "Too often success depends on luck," Hooker said in a 2021 Association for Computing Machinery video. But if researchers spend more time considering new combinations of neuromorphic hardware, architectures and algorithms, they could open up new and intriguing possibilities for both AI and computing.